# PromptEvaluatorsClient
`adaline.prompts.evaluators` manages the evaluators attached to a prompt — LLM-as-a-judge graders, JavaScript checks, text matchers, cost, latency, and response-length guards. Evaluators are always scoped to a prompt; there is no workspace-level evaluators collection. Every method is async.
## Access

The client is reached as `adaline.prompts.evaluators` on an `adaline` client instance. It supports the following evaluator types:
| `type` | What it measures |
|---|---|
| `llm-as-a-judge` | Qualitative grading via an LLM rubric |
| `javascript` | Arbitrary JS/TS sandboxed check |
| `text-matcher` | String contains / regex / equality |
| `cost` | Cost threshold per row |
| `latency` | Response time threshold |
| `response-length` | Token / character bounds |
## list()

List evaluators attached to a prompt (paginated).
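This page does not document the pagination fields, so the sketch below is an assumption: it posits a `cursor` parameter and a `data`/`next_cursor` response shape, and uses a local stub in place of the real client so the paging loop is runnable.

```python
import asyncio

# Stub standing in for adaline.prompts.evaluators; its response shape
# ({"data": [...], "next_cursor": ...}) is an assumption, not the SDK's
# documented contract.
class _StubEvaluators:
    def __init__(self, items, page_size=2):
        self._items, self._n = items, page_size

    async def list(self, prompt_id, cursor=0):
        chunk = self._items[cursor:cursor + self._n]
        more = cursor + self._n < len(self._items)
        return {"data": chunk, "next_cursor": cursor + self._n if more else None}

async def all_evaluators(evaluators, prompt_id):
    """Walk every page and collect the rows."""
    out, cursor = [], 0
    while cursor is not None:
        page = await evaluators.list(prompt_id, cursor=cursor)
        out.extend(page["data"])
        cursor = page["next_cursor"]
    return out

rows = asyncio.run(all_evaluators(_StubEvaluators(["a", "b", "c"]), "prompt_123"))
print(rows)  # ['a', 'b', 'c']
```

Against the real client, the same loop would apply with `await adaline.prompts.evaluators.list(...)`, subject to whatever the SDK actually names its cursor fields.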
## create()

Attach a new evaluator to a prompt.
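An LLM-as-a-judge attachment might look like the sketch below. The keyword names (`prompt_id`, `config`) and the payload fields are assumptions drawn from the type table and the `update()` description (title, settings, threshold), not a confirmed schema.

```python
# Hypothetical call shape (names are assumptions, not the confirmed API):
#
#   evaluator = await adaline.prompts.evaluators.create(
#       prompt_id="prompt_123",
#       config=judge_config,
#   )

# A plausible LLM-as-a-judge payload; the `type` value comes from the
# table above, every other field name is illustrative.
judge_config = {
    "type": "llm-as-a-judge",
    "title": "Answer faithfulness",
    "settings": {
        "rubric": "Score 1-5: is the answer faithful to the retrieved context?",
    },
    "threshold": 4,  # rows judged below this score count as failures
}
print(judge_config["type"])  # llm-as-a-judge
```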
## get()

Fetch a single evaluator by ID.
## update()

Update an evaluator’s title, settings, or threshold.
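A common pattern pairs `get()` with `update()`: read the current value, then write a tighter one. The stub below exists only to make the snippet runnable; the real signatures and the `threshold` field are assumptions.

```python
import asyncio

# Minimal stand-in for adaline.prompts.evaluators; get()/update()
# signatures and the stored fields are assumptions.
class _StubEvaluators:
    def __init__(self):
        self._store = {"eval_456": {"title": "Latency guard", "threshold": 2000}}

    async def get(self, prompt_id, evaluator_id):
        return dict(self._store[evaluator_id])

    async def update(self, prompt_id, evaluator_id, **changes):
        self._store[evaluator_id].update(changes)
        return dict(self._store[evaluator_id])

async def tighten_latency(evaluators, prompt_id, evaluator_id):
    # Read the current threshold, then halve it.
    current = await evaluators.get(prompt_id, evaluator_id)
    return await evaluators.update(
        prompt_id, evaluator_id, threshold=current["threshold"] // 2
    )

updated = asyncio.run(tighten_latency(_StubEvaluators(), "prompt_123", "eval_456"))
print(updated)  # {'title': 'Latency guard', 'threshold': 1000}
```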
## delete()

Permanently delete an evaluator.
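Because deletion is permanent, a thin guard that requires an explicit confirmation flag can prevent accidents in scripts. The stub and the `delete()` signature below are assumptions, not the confirmed API.

```python
import asyncio

class _StubEvaluators:
    """Stand-in for the real client so the guard below is runnable."""
    def __init__(self):
        self.deleted = []

    async def delete(self, prompt_id, evaluator_id):
        self.deleted.append(evaluator_id)

async def delete_evaluator(evaluators, prompt_id, evaluator_id, confirm=False):
    # Deletion is permanent, so require an explicit opt-in.
    if not confirm:
        raise ValueError("pass confirm=True to permanently delete an evaluator")
    await evaluators.delete(prompt_id, evaluator_id)

stub = _StubEvaluators()
asyncio.run(delete_evaluator(stub, "prompt_123", "eval_456", confirm=True))
print(stub.deleted)  # ['eval_456']
```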
## See Also

- PromptsClient — parent client
- PromptEvaluationsClient
- API reference: List evaluators · Create · Update · Delete