EvaluationResultsClient
adaline.prompts.evaluations.results returns per-row evaluation results as they become available, with an optional grade filter and score-based sorting. Because rows appear incrementally while an evaluation runs, you can poll this client to surface partial progress. Every method is async.
For long-running evaluations, prefer adaline.init_evaluation_results() — it wraps this client in a self-refreshing cache.
Access
The client is available at `adaline.prompts.evaluations.results` on an initialized Adaline instance.
list()
Fetch a page of evaluation results. Rows are returned as they become available, so you can poll this while an evaluation is still running.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| prompt_id | str | Yes | Prompt the evaluation belongs to. |
| evaluation_id | str | Yes | Evaluation to inspect. |
| grade | Optional[str] | No | "pass", "fail", or "unknown". |
| expand | Optional[str] | No | "row" to include the underlying dataset row inline. |
| sort | Optional[str] | No | "createdAt:asc", "createdAt:desc", "score:asc", or "score:desc". |
| limit | Optional[int] | No | Page size. |
| cursor | Optional[str] | No | Cursor from a previous response. |
Returns
`ListEvaluationResultsResponse` with `{ data: list[EvaluationResult]; pagination: Pagination }`.
Example
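A minimal sketch of draining the paginated endpoint. Because the exact wiring is SDK-specific, the cursor loop is factored into a helper that accepts any async `fetch_page(cursor)` callable; in real code that callable would wrap `adaline.prompts.evaluations.results.list(prompt_id=..., evaluation_id=..., cursor=cursor)`. The `"cursor"` key inside `pagination` is an assumption about the `Pagination` shape — adjust to the actual field name.

```python
import asyncio
from typing import Awaitable, Callable, Optional

async def collect_results(
    fetch_page: Callable[[Optional[str]], Awaitable[dict]],
) -> list:
    """Collect every row from a cursor-paginated results endpoint.

    fetch_page(cursor) is assumed to return a dict shaped like
    ListEvaluationResultsResponse:
        {"data": [...], "pagination": {"cursor": "..." | None}}
    """
    rows: list = []
    cursor: Optional[str] = None
    while True:
        page = await fetch_page(cursor)
        rows.extend(page["data"])
        # A missing or empty cursor means the last page has been reached.
        cursor = page["pagination"].get("cursor")
        if not cursor:
            return rows
```

To surface partial progress while the evaluation is still running, call `collect_results` (or a single `list()` page) on an interval and re-render; for long-running evaluations the `init_evaluation_results()` helper mentioned above handles that refresh loop for you.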
See Also
- PromptEvaluationsClient — parent client
- EvaluationResultsQuery
- Adaline class — init_evaluation_results() polling helper
- API reference: Get evaluation results