PromptEvaluationsClient
adaline.prompts.evaluations kicks off evaluation runs against a prompt, inspects their status, and cancels them. Per-row results are accessed through the nested .results sub-client. Every method is async.
For long-running evaluations, prefer adaline.init_evaluation_results() — it wraps .results.list in a self-refreshing cache.
Access
Sub-client
| Attribute | Client | Covers |
|---|---|---|
| adaline.prompts.evaluations.results | EvaluationResultsClient | Paginated per-row evaluation results |
list()
List evaluations for a prompt (paginated). Filter by status, evaluator, or dataset.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| prompt_id | str | Yes | Prompt whose evaluations should be listed. |
| status | Optional[EvaluationStatusInput] | No | Filter by lifecycle state. |
| evaluator_id / dataset_id | Optional[str] | No | Narrow by evaluator or dataset. |
| sort | Optional[SortOrderInput] | No | Sort order. |
| created_after / created_before | Optional[int] | No | Unix millisecond bounds. |
| limit | Optional[int] | No | Page size (default 50, max 200). |
| cursor | Optional[str] | No | Cursor from a previous response. |
Example
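A minimal sketch of cursor pagination with list(). The parameter names follow the table above; the prompt ID, status value, and the response fields (items, next_cursor) are assumptions about the SDK's return shape, not confirmed API:

```python
import asyncio
from adaline import Adaline  # assumed import path

async def main() -> None:
    adaline = Adaline()  # assumes credentials are picked up from the environment
    cursor = None
    while True:
        page = await adaline.prompts.evaluations.list(
            prompt_id="prompt_123",  # placeholder ID
            status="completed",      # hypothetical status value
            limit=50,
            cursor=cursor,
        )
        for evaluation in page.items:  # field names are assumptions
            print(evaluation.id, evaluation.status)
        cursor = page.next_cursor
        if cursor is None:  # no further pages
            break

asyncio.run(main())
```

Passing the cursor from each response back into the next call walks the full result set one page at a time.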
create()
Start a new evaluation run. Runs asynchronously on the server.
Example
get()
Fetch a single evaluation by ID.
cancel()
Cancel an in-flight evaluation. In-progress rows keep running to completion, but no new rows will start.
Example
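A sketch of cancelling a run and confirming its state afterwards. The evaluation ID is a placeholder, and the assumption that get() reflects the post-cancel status is illustrative:

```python
from adaline import Adaline  # assumed import path

async def cancel_run(evaluation_id: str) -> None:
    adaline = Adaline()
    await adaline.prompts.evaluations.cancel(evaluation_id)
    # Rows already in progress finish; no new rows begin.
    evaluation = await adaline.prompts.evaluations.get(evaluation_id)
    print(evaluation.status)
```

Note that rows completed before or during cancellation remain available through the .results sub-client.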
See Also
- PromptsClient — parent client
- EvaluationResultsClient — .results sub-client
- Adaline class — init_evaluation_results() polling helper
- PromptEvaluatorsClient
- API reference: List evaluations · Create · Get · Cancel