Feedback model (deprecated)

Important

Deprecation notice: The feedback model has been deprecated as of December 4, 2025 and is no longer supported in the latest version of databricks-agents.

Action required: Use MLflow 3 to log your model instead. Then collect feedback with the log_feedback API and the MLflow 3 Assessments API.

The feedback model allows you to programmatically collect feedback on agent responses. When you deploy an agent using agents.deploy(), Databricks automatically creates a feedback model endpoint alongside your agent.

This endpoint accepts structured feedback (ratings, comments, assessments) and logs it to inference tables. However, this approach has been replaced by MLflow 3's more robust feedback capabilities.
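
For reference, the deprecated flow was triggered by a standard deployment call like the following sketch (the Unity Catalog model name and version are placeholders):

    from databricks import agents

    # "catalog.schema.my_agent" and the version number are placeholders.
    # In the deprecated flow, agents.deploy() created the feedback model
    # endpoint automatically alongside the agent serving endpoint.
    deployment = agents.deploy(
        model_name="catalog.schema.my_agent",
        model_version=1,
    )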

Migrate to MLflow 3

Instead of using the deprecated feedback model, migrate to MLflow 3 for comprehensive feedback and assessment capabilities:

  • First-class assessment logging with robust validation and error handling
  • Real-time tracing integration for immediate feedback visibility
  • Review App integration with enhanced stakeholder collaboration features
  • Production monitoring support with automated quality assessment
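
As a rough illustration of the tracing integration (a sketch assuming mlflow>=3.1.3; the experiment path and agent function are placeholders), decorating your agent code captures each call as a trace that later assessments can attach to:

    import mlflow

    mlflow.set_experiment("/Shared/agent-feedback-demo")  # illustrative path

    @mlflow.trace  # each call is captured as a trace (inputs, outputs, timing)
    def answer_question(question: str) -> str:
        # Placeholder for your agent logic (retrieval, tool calls, generation).
        return f"Echo: {question}"

    answer_question("How do pipelines work?")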

To migrate existing workloads to MLflow 3:

  1. Upgrade to MLflow 3.1.3 or above in your development environment:

    %pip install "mlflow>=3.1.3"
    dbutils.library.restartPython()
    
  2. Enable the Review App for stakeholder feedback collection.

  3. Replace feedback API calls with MLflow 3 assessment logging (see the sketch after this list).

  4. Deploy your agent with MLflow 3:

    • Real-time tracing automatically captures all interactions
    • Assessments attach directly to traces for unified visibility
  5. Set up production monitoring (optional) for automated quality assessment of production traffic.
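
A minimal sketch of step 3, assuming mlflow>=3.1.3 and that the agent call was traced as in step 4; the trace lookup, rating name, and reviewer email are illustrative:

    import mlflow
    from mlflow.entities import AssessmentSource, AssessmentSourceType

    # Assumes the agent call was traced (for example with @mlflow.trace),
    # so the ID of the most recent trace can be looked up.
    trace_id = mlflow.get_last_active_trace_id()

    # Replaces the deprecated feedback-endpoint POST: the assessment is
    # attached directly to the trace instead of an inference table row.
    mlflow.log_feedback(
        trace_id=trace_id,
        name="answer_correct",              # rating name is illustrative
        value=True,
        source=AssessmentSource(
            source_type=AssessmentSourceType.HUMAN,
            source_id="user@company.com",   # illustrative reviewer identity
        ),
        rationale="The answer used the provided context to talk about pipelines",
    )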

How the feedback API works (deprecated)

The feedback model exposed a REST endpoint that accepted structured feedback about agent responses. You would send feedback via a POST request to the feedback endpoint after your agent processed a request.

Example feedback request:

curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '
      {
          "dataframe_records": [
              {
                  "source": {
                      "id": "user@company.com",
                      "type": "human"
                  },
                  "request_id": "573d4a61-4adb-41bd-96db-0ec8cebc3744",
                  "text_assessments": [
                      {
                          "ratings": {
                              "answer_correct": {
                                  "value": "positive"
                              },
                              "accurate": {
                                  "value": "positive"
                              }
                          },
                          "free_text_comment": "The answer used the provided context to talk about pipelines"
                      }
                  ],
                  "retrieval_assessments": [
                      {
                          "ratings": {
                              "groundedness": {
                                  "value": "positive"
                              }
                          }
                      }
                  ]
              }
          ]
      }' \
https://<workspace-host>.databricks.com/serving-endpoints/<your-agent-endpoint-name>/served-models/feedback/invocations

You can pass additional or different key-value pairs in the text_assessments.ratings and retrieval_assessments.ratings fields to provide different types of feedback. In the example, the feedback payload indicates that the agent's response to the request with ID 573d4a61-4adb-41bd-96db-0ec8cebc3744 is correct, accurate, and grounded in context fetched by a retriever tool.

Feedback API limitations

The experimental feedback API has several limitations:

  • No input validation: The API always responds successfully, even with invalid input
  • Required Databricks request ID: You need to pass the databricks_request_id from the original agent request
  • Inference table dependency: Feedback is collected using inference tables with their inherent limitations
  • Limited error handling: No meaningful error messages for troubleshooting

To get the required databricks_request_id, you must include {"databricks_options": {"return_trace": True}} in your original request to the agent serving endpoint.
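
Putting the deprecated flow together, a rough Python sketch using requests (the workspace host, endpoint name, and token variable are placeholders, and the exact location of the request ID in the response is an assumption; check your endpoint's actual response shape):

    import os
    import requests

    host = "https://<workspace-host>.databricks.com"   # placeholder
    endpoint = "<your-agent-endpoint-name>"            # placeholder
    headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

    # 1. Query the agent with return_trace enabled so the response carries
    #    the databricks_request_id needed for feedback.
    query = {
        "messages": [{"role": "user", "content": "How do pipelines work?"}],
        "databricks_options": {"return_trace": True},
    }
    resp = requests.post(
        f"{host}/serving-endpoints/{endpoint}/invocations",
        headers=headers,
        json=query,
    ).json()
    request_id = resp["databricks_output"]["databricks_request_id"]  # assumed field path

    # 2. POST feedback about that request to the (deprecated) feedback model.
    feedback = {
        "dataframe_records": [{
            "source": {"id": "user@company.com", "type": "human"},
            "request_id": request_id,
            "text_assessments": [{
                "ratings": {"answer_correct": {"value": "positive"}},
                "free_text_comment": "The answer used the provided context to talk about pipelines",
            }],
        }]
    }
    requests.post(
        f"{host}/serving-endpoints/{endpoint}/served-models/feedback/invocations",
        headers=headers,
        json=feedback,
    )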

Next steps