PromptEvaluatorsClient
adaline.prompts.evaluators manages the evaluators attached to a prompt — LLM-as-a-judge graders, JavaScript checks, text matchers, cost, latency, and response-length guards. Evaluators are always scoped to a prompt; there is no workspace-level evaluators collection.
Access
The client is exposed as adaline.prompts.evaluators in the @adaline/api package.
Evaluator types
| Type | What it measures |
|---|---|
| llm-as-a-judge | Qualitative grading via an LLM rubric |
| javascript | Arbitrary JS/TS check run in a sandbox |
| text-matcher | String contains, regex, or exact-equality match |
| cost | Cost threshold per row |
| latency | Response-time threshold |
| response-length | Token or character bounds |
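If you validate evaluator payloads in your own code, the type column above can be modeled as a string-literal union. A minimal sketch; the real SDK may export its own type definitions:

```typescript
// Illustrative only: one way to model the evaluator `type` values above.
const EVALUATOR_TYPES = [
  "llm-as-a-judge",
  "javascript",
  "text-matcher",
  "cost",
  "latency",
  "response-length",
] as const;

type EvaluatorType = (typeof EVALUATOR_TYPES)[number];

// Narrow an arbitrary string (e.g. from a webhook payload) to EvaluatorType.
function isEvaluatorType(value: string): value is EvaluatorType {
  return (EVALUATOR_TYPES as readonly string[]).includes(value);
}
```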
list()
List evaluators attached to a prompt (paginated).
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| promptId | string | Yes | Prompt whose evaluators should be listed. |
| limit | number | No | Page size (default 50, max 200). |
| cursor | string | No | Cursor from a previous response. |
| sort | SortOrder | No | "createdAt:asc" or "createdAt:desc". |
| createdAfter | number | No | Unix epoch milliseconds; only list evaluators created after this time. |
| createdBefore | number | No | Unix epoch milliseconds; only list evaluators created before this time. |
Returns
Promise<ListEvaluatorsResponse> with { data: Evaluator[]; pagination: Pagination }.
Example
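A minimal sketch of paging through every evaluator on a prompt. The ListFn type mirrors the parameter and response shapes documented above; in real code you would pass adaline.prompts.evaluators.list (client setup omitted), and the cursor field name (nextCursor) inside Pagination is an assumption:

```typescript
// Assumed shapes, based on the tables above; the SDK's own types may differ.
interface Evaluator { id: string; type: string; title: string }
interface Pagination { nextCursor?: string } // field name is an assumption
interface ListEvaluatorsResponse { data: Evaluator[]; pagination: Pagination }

type ListFn = (params: {
  promptId: string;
  limit?: number;
  cursor?: string;
}) => Promise<ListEvaluatorsResponse>;

// Collect all evaluators by following pagination cursors until exhausted.
async function listAllEvaluators(
  list: ListFn,
  promptId: string,
): Promise<Evaluator[]> {
  const all: Evaluator[] = [];
  let cursor: string | undefined;
  do {
    const page = await list({ promptId, limit: 200, cursor });
    all.push(...page.data);
    cursor = page.pagination.nextCursor;
  } while (cursor);
  return all;
}
```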
create()
Attach a new evaluator to a prompt.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| promptId | string | Yes | Prompt to attach the evaluator to. |
| evaluator | CreateEvaluatorRequest | Yes | Evaluator definition: type (see table above), title, and settings specific to the type. |
Returns
Promise<Evaluator> — the newly attached evaluator with its server-assigned id.
Example — LLM-as-a-judge
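A sketch of a create() payload for an llm-as-a-judge evaluator. The shape of settings (model, rubric) is an illustrative assumption; check the API reference for the exact schema of each evaluator type:

```typescript
// Assumed request shape, based on the parameter table above.
interface CreateEvaluatorRequest {
  type: string;
  title: string;
  settings: Record<string, unknown>;
}

const judge: CreateEvaluatorRequest = {
  type: "llm-as-a-judge",
  title: "Helpfulness rubric",
  settings: {
    // Assumed fields: a grading model and a plain-text rubric.
    model: "gpt-4o",
    rubric: "Score helpfulness from 1 to 5; a score of 4 or 5 passes.",
  },
};

// With a configured client (setup omitted):
// const evaluator = await adaline.prompts.evaluators.create({
//   promptId: "prompt_123",
//   evaluator: judge,
// });
```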
Example — text matcher
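A sketch of a create() payload for a text-matcher evaluator. The settings fields (mode, value) are illustrative assumptions, not the confirmed schema:

```typescript
// Assumed request shape, based on the parameter table above.
interface CreateEvaluatorRequest {
  type: string;
  title: string;
  settings: Record<string, unknown>;
}

const matcher: CreateEvaluatorRequest = {
  type: "text-matcher",
  title: "Mentions the refund window",
  settings: {
    // Assumed fields: a match mode (contains / regex / equality, per the
    // types table above) and the target string.
    mode: "contains",
    value: "30-day refund",
  },
};

// const evaluator = await adaline.prompts.evaluators.create({
//   promptId: "prompt_123",
//   evaluator: matcher,
// });
```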
get()
Fetch a single evaluator by ID.
update()
Update an evaluator’s title, settings, or threshold.
Example
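A sketch of an update() payload that renames a latency evaluator and tightens its threshold. The update signature and the settings field (maxMs) are assumptions; see the API reference for the confirmed schema:

```typescript
// Assumed patch shape: only the fields being changed are sent.
interface UpdateEvaluatorRequest {
  title?: string;
  settings?: Record<string, unknown>;
}

const patch: UpdateEvaluatorRequest = {
  title: "p95 latency guard",
  settings: {
    maxMs: 1500, // assumed field: fail rows slower than 1.5 s
  },
};

// const updated = await adaline.prompts.evaluators.update({
//   promptId: "prompt_123",
//   evaluatorId: "ev_abc",
//   ...patch,
// });
```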
delete()
Permanently delete an evaluator. Results from past evaluations that used it are preserved.
See Also
- PromptsClient — parent client
- PromptEvaluationsClient — run evaluations with these evaluators
- API reference: List evaluators · Create · Update · Delete