EvaluationResultsQuery

Shape of the query arguments passed to EvaluationResultsClient.list and Adaline.init_evaluation_results. The Python SDK takes these as keyword arguments rather than a single object; the shape below documents what the polling helper replays on every refresh.

Shape

{
    "prompt_id": str,
    "evaluation_id": str,
    "grade": Optional[str],     # "pass" | "fail" | "unknown"
    "expand": Optional[str],    # "row"
    "sort": Optional[str],      # "createdAt:asc" | "createdAt:desc" | "score:asc" | "score:desc"
    "limit": Optional[int],
    "cursor": Optional[str],
}

Fields

prompt_id (str, required): Prompt the evaluation belongs to.
evaluation_id (str, required): Evaluation whose results you want.
grade (Optional[str]): Only return rows with this grade: "pass", "fail", or "unknown".
expand (Optional[str]): If "row", each result includes its underlying dataset row.
sort (Optional[str]): One of "createdAt:asc", "createdAt:desc", "score:asc", or "score:desc".
limit (Optional[int]): Page size (number of results per page).
cursor (Optional[str]): Opaque cursor returned by the previous response's pagination.next_cursor.
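Because cursor is opaque and pagination.next_cursor signals whether more pages exist, the usual pattern is to loop until the cursor comes back empty. A minimal sketch, with a stand-in fake_fetch in place of the real list call; note the .items attribute on the page object is an assumption for illustration, only pagination.next_cursor is documented above:

```python
import asyncio
from types import SimpleNamespace

async def list_all(fetch_page, **query):
    """Drain every page by replaying the same query with each next_cursor."""
    items = []
    cursor = None
    while True:
        page = await fetch_page(cursor=cursor, **query)
        items.extend(page.items)
        cursor = page.pagination.next_cursor
        if cursor is None:  # no further pages
            return items

# Stand-in for the real list call: serves two fake pages.
async def fake_fetch(cursor=None, **query):
    if cursor is None:
        return SimpleNamespace(
            items=["res_1", "res_2"],
            pagination=SimpleNamespace(next_cursor="cur_2"),
        )
    return SimpleNamespace(
        items=["res_3"],
        pagination=SimpleNamespace(next_cursor=None),
    )

all_items = asyncio.run(list_all(fake_fetch, grade="fail", limit=2))
```

With the real client, the same loop works by passing a wrapper around adaline.prompts.evaluations.results.list that fixes prompt_id and evaluation_id.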

Usage

# One-shot fetch
page = await adaline.prompts.evaluations.results.list(
    prompt_id="prompt_abc123",
    evaluation_id="eval_abc123",
    grade="fail",
    expand="row",
    sort="score:desc",
    limit=50,
)

# Or keep a polling cache refreshed in the background
results = await adaline.init_evaluation_results(
    prompt_id="prompt_abc123",
    evaluation_id="eval_abc123",
    grade="fail",
    expand="row",
    sort="score:desc",
    limit=50,
    refresh_interval=30,
)

page = await results.get()

See Also