Evaluations
Overview
Run and analyze evaluations at scale.
Take your configured evaluators from setup to insights. Run tests, view detailed results, and iterate based on real performance data.
Features
Execute Evaluations
Run your tests against real datasets automatically (a conceptual sketch follows this list):
Run evaluations in the background for large datasets
Get instant insights on prompt performance
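Adaline manages execution for you; conceptually, a run applies your prompt to every dataset row and scores each response with the configured evaluator. A minimal sketch of that loop in TypeScript (all types and names here are hypothetical, not the Adaline SDK):

```typescript
// Conceptual sketch only: every name below is hypothetical, not the Adaline API.
type DatasetRow = { variables: Record<string, string> };
type EvalResult = { row: DatasetRow; passed: boolean; reason: string };

// An evaluator scores one model response against its dataset row.
type Evaluator = (response: string, row: DatasetRow) => { passed: boolean; reason: string };

async function runEvaluation(
  rows: DatasetRow[],
  generate: (vars: Record<string, string>) => Promise<string>, // prompt + model call
  evaluate: Evaluator,
): Promise<EvalResult[]> {
  const results: EvalResult[] = [];
  for (const row of rows) {
    const response = await generate(row.variables);
    results.push({ row, ...evaluate(response, row) });
  }
  return results;
}
```

For large datasets the platform runs this loop in the background, so you can keep working while the evaluation completes.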
Analyze Results
View detailed reports to understand evaluation outcomes (sketched after this list):
Inspect individual evaluation runs
View which dataset rows passed or failed the evaluation
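The results view is, conceptually, a per-row breakdown of those scores. A small sketch of how passed and failed rows might be summarized (types are hypothetical and redeclared so the snippet stands alone):

```typescript
// Hypothetical result type, matching the sketch in Execute Evaluations.
type EvalResult = {
  row: { variables: Record<string, string> };
  passed: boolean;
  reason: string;
};

// Split a run into pass/fail buckets, mirroring the per-row results table.
function summarize(results: EvalResult[]) {
  const failed = results.filter((r) => !r.passed);
  return {
    total: results.length,
    passRate: results.length ? (results.length - failed.length) / results.length : 0,
    failedRows: failed.map((r) => ({ variables: r.row.variables, reason: r.reason })),
  };
}
```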
Filter and Search
Find specific patterns in your evaluation data (see the sketch after this list):
Filter by status, reason, response content, or variables
Search across tokens, cost, and latency metrics
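Each filter reduces to a predicate over the fields of a result row. As an illustration only, assuming a hypothetical record shape that carries per-row metrics:

```typescript
// Hypothetical record shape; the platform's real fields may differ.
type RunRecord = {
  status: "passed" | "failed";
  reason: string;
  response: string;
  variables: Record<string, string>;
  tokens: number;
  costUsd: number;
  latencyMs: number;
};

// Example: failed rows that were also slow and apologize in the response.
const slowFailures = (records: RunRecord[]) =>
  records.filter(
    (r) => r.status === "failed" && r.latencyMs > 2000 && r.response.includes("sorry"),
  );
```

Stacking conditions like this is the programmatic analogue of combining filters in the UI.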
Rollback and Iterate
Restore previous states to understand performance changes:
Return to the exact prompt configurations from any evaluation run
Compare different versions to optimize prompts