User feedback is one of the most valuable signals for evaluating AI quality. Adaline lets you attach feedback data — thumbs up/down, ratings, comments, and custom signals — to traces and spans so you can correlate user satisfaction with specific prompts and model outputs.

How it works

User feedback is captured as attributes on traces or spans. When a user provides feedback in your application, you update the corresponding trace or span with the feedback data. This feedback then appears in the Monitor pillar alongside the trace details, and can be used to filter logs and build datasets from cases with low satisfaction.

Capture feedback via SDK

After a user provides feedback, use the trace or span’s update method to attach it:
// When the user gives a thumbs up/down
async function handleUserFeedback(
  traceId: string,
  feedback: "positive" | "negative",
  comment?: string
) {
  // Retrieve or store the trace reference from your application
  const trace = getStoredTrace(traceId);

  trace.update({
    attributes: {
      "user_feedback": feedback,
      "user_feedback_comment": comment || "",
      "user_feedback_timestamp": new Date().toISOString(),
    },
    tags: [feedback === "positive" ? "thumbs-up" : "thumbs-down"],
  });
}
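The snippet above assumes a `getStoredTrace` helper that resolves a trace ID back to the SDK trace object. A minimal sketch of such a registry, assuming an in-memory store (in a real application you might scope this per session or use a TTL cache instead):

```typescript
// Hypothetical in-memory registry mapping trace IDs to trace objects.
// Store the trace when you create it; look it up when feedback arrives.
const traceStore = new Map<string, unknown>();

function storeTrace(traceId: string, trace: unknown): void {
  traceStore.set(traceId, trace);
}

function getStoredTrace(traceId: string): unknown {
  const trace = traceStore.get(traceId);
  if (!trace) {
    throw new Error(`No trace stored for id ${traceId}`);
  }
  return trace;
}
```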

Capture feedback via API

If you are using the REST API directly, update the trace with a PATCH request:
PATCH /v2/logs/trace
Authorization: Bearer your-api-key
Content-Type: application/json

{
  "traceId": "your-trace-id",
  "projectId": "your-project-id",
  "attributes": {
    "user_feedback": "negative",
    "user_feedback_comment": "Response was too vague",
    "user_feedback_rating": 2
  },
  "tags": ["thumbs-down", "needs-improvement"]
}
See the Update Log Trace API reference for complete endpoint documentation.
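As a sketch, the request above could be sent from TypeScript like this. The payload shape mirrors the body shown; the base URL is a placeholder and the helper names are illustrative, not part of the SDK:

```typescript
interface FeedbackPatch {
  traceId: string;
  projectId: string;
  attributes: Record<string, string | number>;
  tags: string[];
}

// Build the PATCH body shown above from user feedback.
function buildFeedbackPatch(
  traceId: string,
  projectId: string,
  feedback: "positive" | "negative",
  rating?: number,
  comment?: string
): FeedbackPatch {
  const attributes: Record<string, string | number> = {
    user_feedback: feedback,
  };
  if (rating !== undefined) attributes["user_feedback_rating"] = rating;
  if (comment) attributes["user_feedback_comment"] = comment;
  return {
    traceId,
    projectId,
    attributes,
    tags: [feedback === "positive" ? "thumbs-up" : "thumbs-down"],
  };
}

// Send the patch with fetch (Node 18+). Replace the placeholder base URL
// with the endpoint from your API reference.
async function sendFeedbackPatch(apiKey: string, patch: FeedbackPatch): Promise<void> {
  const res = await fetch("https://your-adaline-host/v2/logs/trace", {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(patch),
  });
  if (!res.ok) throw new Error(`Feedback update failed: ${res.status}`);
}
```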

Capture feedback via Proxy

When using the Proxy, you can attach feedback attributes on subsequent requests within the same trace:
# Use the same trace reference ID as the original request
headers["adaline-trace-reference-id"] = original_trace_id
headers["adaline-trace-attributes"] = json.dumps([
    {"operation": "create", "key": "user_feedback", "value": "negative"},
    {"operation": "create", "key": "user_feedback_comment", "value": "Too vague"},
])
headers["adaline-trace-tags"] = json.dumps([
    {"operation": "create", "tag": "thumbs-down"},
])

Feedback data patterns

Design your feedback attributes to be consistent and filterable:
Attribute | Type | Example values | Purpose
user_feedback | String | "positive", "negative" | Binary satisfaction signal.
user_feedback_rating | Number | 1–5 | Numeric rating scale.
user_feedback_comment | String | "Response was helpful" | Free-text feedback from the user.
user_feedback_category | String | "incorrect", "incomplete", "off-topic" | Categorized issue type.
user_feedback_timestamp | String | ISO 8601 timestamp | When the feedback was provided.
For more structured triage workflows, consider adding attributes that support an annotation queue — a feedback_reason with a controlled taxonomy (e.g., "wrong_policy", "hallucination", "missing_context") and an annotation_status ("empty" / "filled") to track which cases still need human review.
Use consistent attribute names across your application. This makes it easy to create filters in the Monitor like “show all traces with negative feedback” or “show traces rated below 3”.
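One way to keep names consistent is to route all feedback through a single helper that owns the naming scheme and validates inputs. A sketch, using the attribute names from the table above (the helper itself is illustrative, not part of the SDK):

```typescript
type FeedbackCategory = "incorrect" | "incomplete" | "off-topic";

// Central place that produces feedback attributes, so every code path
// uses the same keys and the rating stays within the 1-5 scale.
function feedbackAttributes(opts: {
  feedback: "positive" | "negative";
  rating?: number; // 1-5
  comment?: string;
  category?: FeedbackCategory;
}): Record<string, string | number> {
  if (opts.rating !== undefined && (opts.rating < 1 || opts.rating > 5)) {
    throw new Error("rating must be between 1 and 5");
  }
  const attrs: Record<string, string | number> = {
    user_feedback: opts.feedback,
    user_feedback_timestamp: new Date().toISOString(),
  };
  if (opts.rating !== undefined) attrs["user_feedback_rating"] = opts.rating;
  if (opts.comment) attrs["user_feedback_comment"] = opts.comment;
  if (opts.category) attrs["user_feedback_category"] = opts.category;
  return attrs;
}
```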

Use feedback for improvement

Once feedback is captured, it becomes part of your improvement workflow:
  1. Filter in Monitor — Filter traces by feedback attributes to find cases where users were dissatisfied.
  2. Build datasets — Add negative feedback cases to evaluation datasets so you can test fixes against real user complaints. In Monitor, open a trace, click Add to Dataset, and map feedback attributes into dataset columns.
  3. Annotation queues — Use an annotation_status column in your dataset to track which rows need human review. Reviewers fill in corrective annotations, then mark rows as filled. This gives you a structured backlog with clear ownership.
  4. Run evaluations — Attach evaluators to the dataset to verify that prompt fixes actually resolve the failure class. Run on annotated rows first, then the full dataset. Previously failed rows stay in the dataset permanently as regression guardrails.
  5. Track trends — Use charts with custom attributes to monitor satisfaction trends over time.
  6. Correlate with eval scores — Compare user feedback against continuous evaluation scores to validate your evaluators.
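For step 5, the kind of trend a custom-attribute chart surfaces can be sketched offline as well. Assuming you export trace records with the attributes above, a daily negative-feedback rate is a simple aggregation (the record shape here is illustrative):

```typescript
interface FeedbackRecord {
  timestamp: string; // ISO 8601, from user_feedback_timestamp
  feedback: "positive" | "negative";
}

// Group records by calendar day and compute the share of negative feedback.
function dailyNegativeRate(records: FeedbackRecord[]): Map<string, number> {
  const byDay = new Map<string, { total: number; negative: number }>();
  for (const r of records) {
    const day = r.timestamp.slice(0, 10); // YYYY-MM-DD prefix of ISO 8601
    const bucket = byDay.get(day) ?? { total: 0, negative: 0 };
    bucket.total += 1;
    if (r.feedback === "negative") bucket.negative += 1;
    byDay.set(day, bucket);
  }
  const rates = new Map<string, number>();
  for (const [day, { total, negative }] of byDay) {
    rates.set(day, negative / total);
  }
  return rates;
}
```

A rising rate on this chart is the signal to go back to step 1 and pull the offending traces into a dataset.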

Best practices

  • Capture immediately — Send feedback as soon as the user provides it, even if the trace has already ended.
  • Use tags for quick filtering — Tags like "thumbs-up" and "thumbs-down" make it easy to filter in the Monitor without complex attribute queries.
  • Include context — When possible, capture the reason for negative feedback (comment or category) to make it actionable.
  • Link to the right trace — Store the trace ID in your application when you display a response, so you can attach feedback to the correct trace later.

Next steps

Build Datasets from Logs

Turn feedback-tagged logs into evaluation datasets.

Log Attachments

Attach additional data to traces and spans.