I ran into the same 404 error last week and it turned out to be a simple client-vs-server version mismatch.
Why it’s happening
MLflow 2.8 (and anything newer) added a “logged-model” API. When you pass a name= argument to mlflow.pytorch.log_model() (or any log_model call), the client tries to hit /api/2.0/mlflow/logged-models/*. Azure ML workspaces are still running an MLflow server ≤ 2.7, which only understands the older /registered-models/* routes. The server doesn’t recognise the new endpoint and sends back a 404.
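To see which side of that cutoff a given client falls on, here is a minimal sketch; the `uses_logged_model_api` helper is hypothetical, written only to illustrate the version boundary described above (in a real script you would feed it `mlflow.__version__`):

```python
# Hypothetical helper: does this client version try the newer
# /api/2.0/mlflow/logged-models/* routes when name= is passed?
def uses_logged_model_api(mlflow_version: str) -> bool:
    major, minor = (int(p) for p in mlflow_version.split(".")[:2])
    return (major, minor) >= (2, 8)

print(uses_logged_model_api("2.7.1"))  # False -> classic routes only
print(uses_logged_model_api("2.9.2"))  # True  -> can 404 on an older server
```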
Two quick ways to unblock your run
Downgrade the client

```bash
pip install "mlflow<2.8"
# e.g. mlflow==2.7.1
```

Your script keeps the one-step "log + register" flow and the server responds with 200.
Keep the new client, skip the new API
Drop the name= parameter, then register the model in a second step:

```python
mlflow.pytorch.log_model(
    pytorch_model=model,
    artifact_path="simple_pytorch_model",
)
run_uri = f"runs:/{mlflow.active_run().info.run_id}/simple_pytorch_model"
mlflow.register_model(run_uri, "simple_pytorch_model")
```
With no name=, the client falls back to the classic workflow the server already supports.
Tip: Inside an Azure ML job you don’t need to set MLFLOW_TRACKING_URI; the runtime injects the correct azureml://… value automatically. Using the HTTPS URI from the portal can also trigger 404s.
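You can verify this from inside the job by printing `mlflow.get_tracking_uri()` and checking the scheme. A minimal sketch of that check, assuming the injected URI; the `is_azureml_tracking_uri` helper name is made up for illustration:

```python
# Hypothetical sanity check: Azure ML injects an azureml:// tracking URI.
# An https:// URL copied from the portal is a red flag for 404s.
def is_azureml_tracking_uri(uri: str) -> bool:
    return uri.startswith("azureml://")

print(is_azureml_tracking_uri("azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions/xyz"))
print(is_azureml_tracking_uri("https://ml.azure.com/workspaces/xyz"))
```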
Quick checklist if it still fails
python -m pip show mlflow — confirm the version you expect.
az ml workspace show -n <name> — check the region; rollouts happen region by region.
az ml job create ... in a fresh venv after pinning MLflow < 2.8.
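To make the checklist fail fast, you can assert the pin actually took effect at the top of your training script. A small sketch, assuming you pass in `mlflow.__version__`; the `require_mlflow_below` helper is hypothetical:

```python
# Hypothetical pre-flight guard: raise early if the installed client is
# too new for the server, instead of failing mid-run with a 404.
def require_mlflow_below(installed: str, ceiling: str = "2.8") -> None:
    as_tuple = lambda v: tuple(int(p) for p in v.split(".")[:2])
    if as_tuple(installed) >= as_tuple(ceiling):
        raise RuntimeError(
            f"mlflow {installed} uses the logged-models API; "
            f"pin mlflow<{ceiling} until the workspace backend upgrades."
        )

# In your script: require_mlflow_below(mlflow.__version__)
require_mlflow_below("2.7.1")  # passes silently
```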
Still 404? Grab the request ID from the error and open a support ticket through Q&A support here, attaching a minimal repro notebook; the team can confirm whether your workspace has the newer server build yet.
Once the backend upgrades past MLflow 2.8 you can go back to the latest client and the single-line log_model(name=…) call will work again.
Hope that helps—shout if you get stuck!
Best Regards,
Jerald Felix