PromptEvaluatorsClient

adaline.prompts.evaluators manages the evaluators attached to a prompt — LLM-as-a-judge graders, JavaScript checks, text matchers, cost, latency, and response-length guards. Evaluators are always scoped to a prompt; there is no workspace-level evaluators collection.

Access

import { Adaline } from '@adaline/client';

const adaline = new Adaline();
const evaluators = adaline.prompts.evaluators; // PromptEvaluatorsClient
The class is also exported directly:
import { PromptEvaluatorsClient } from '@adaline/client';
Types from @adaline/api:
import type {
  Evaluator,
  CreateEvaluatorRequest,
  UpdateEvaluatorRequest,
  ListEvaluatorsResponse,
  SortOrder,
} from '@adaline/api';
Evaluator types at a glance:
| type | What it measures |
| --- | --- |
| llm-as-a-judge | Qualitative grading via an LLM rubric |
| javascript | Arbitrary JS/TS sandboxed check |
| text-matcher | String contains / regex / equality |
| cost | Cost threshold per row |
| latency | Response time threshold |
| response-length | Token / character bounds |

list()

List evaluators attached to a prompt (paginated).
list(options: {
  promptId: string;
  limit?: number;
  cursor?: string;
  sort?: SortOrder;
  createdAfter?: number;
  createdBefore?: number;
}): Promise<ListEvaluatorsResponse>

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| promptId | string | Yes | Prompt whose evaluators should be listed. |
| limit | number | No | Page size (default 50, max 200). |
| cursor | string | No | Cursor from a previous response. |
| sort | SortOrder | No | "createdAt:asc" or "createdAt:desc". |
| createdAfter | number | No | Unix milliseconds. |
| createdBefore | number | No | Unix milliseconds. |

Returns

Promise<ListEvaluatorsResponse> with { data: Evaluator[]; pagination: Pagination }.

Example

const { data } = await adaline.prompts.evaluators.list({
  promptId: 'prompt_abc123',
  limit: 50,
});
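
Example — paginating through all evaluators

For prompts with more evaluators than one page holds, you can follow the cursor until it is exhausted. This is a minimal sketch: it assumes the `pagination` object exposes a `nextCursor` field that is absent on the last page — check the `Pagination` type in `ListEvaluatorsResponse` for the exact field name.

```typescript
import { Adaline } from '@adaline/client';
import type { Evaluator } from '@adaline/api';

const adaline = new Adaline();

// Collect every evaluator attached to a prompt by walking the cursor.
// `pagination.nextCursor` is an assumed field name; adjust to match
// your Pagination type if it differs.
async function listAllEvaluators(promptId: string): Promise<Evaluator[]> {
  const all: Evaluator[] = [];
  let cursor: string | undefined;
  do {
    const { data, pagination } = await adaline.prompts.evaluators.list({
      promptId,
      limit: 200, // documented maximum page size
      cursor,
    });
    all.push(...data);
    cursor = pagination.nextCursor ?? undefined;
  } while (cursor);
  return all;
}
```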

create()

Attach a new evaluator to a prompt.
create(options: {
  promptId: string;
  evaluator: CreateEvaluatorRequest;
}): Promise<Evaluator>

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| promptId | string | Yes | Prompt to attach the evaluator to. |
| evaluator | CreateEvaluatorRequest | Yes | Evaluator definition: type (see table above), title, and settings specific to the type. |

Returns

Promise<Evaluator> — the newly attached evaluator with its server-assigned id.

Example — LLM-as-a-judge

const evaluator = await adaline.prompts.evaluators.create({
  promptId: 'prompt_abc123',
  evaluator: {
    type: 'llm-as-a-judge',
    title: 'Factuality',
    settings: {
      model: 'gpt-4o',
      rubric: 'Rate 1-5 for factual accuracy against the reference answer.',
      threshold: 4,
    },
  },
});

Example — text matcher

const evaluator = await adaline.prompts.evaluators.create({
  promptId: 'prompt_abc123',
  evaluator: {
    type: 'text-matcher',
    title: 'Must mention policy link',
    settings: {
      operator: 'contains',
      value: 'https://example.com/policy',
    },
  },
});

get()

Fetch a single evaluator by ID.
get(options: {
  promptId: string;
  evaluatorId: string;
}): Promise<Evaluator>
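
Example

A minimal fetch by ID. The logged fields assume the returned `Evaluator` carries the `type` and `title` supplied at creation time.

```typescript
const evaluator = await adaline.prompts.evaluators.get({
  promptId: 'prompt_abc123',
  evaluatorId: 'evaluator_abc123',
});

console.log(evaluator.type, evaluator.title);
```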

update()

Update an evaluator’s title or settings (type-specific fields such as thresholds live inside settings).
update(options: {
  promptId: string;
  evaluatorId: string;
  evaluator: UpdateEvaluatorRequest;
}): Promise<Evaluator>

Example

await adaline.prompts.evaluators.update({
  promptId: 'prompt_abc123',
  evaluatorId: 'evaluator_abc123',
  evaluator: { settings: { threshold: 3 } },
});

delete()

Permanently delete an evaluator. Results from past evaluations that used it are preserved.
delete(options: {
  promptId: string;
  evaluatorId: string;
}): Promise<void>
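
Example

Deletion is permanent, so a sketch like this would typically sit behind a confirmation step in application code. Past evaluation results are preserved, so no cleanup of historical data is needed afterward.

```typescript
await adaline.prompts.evaluators.delete({
  promptId: 'prompt_abc123',
  evaluatorId: 'evaluator_abc123',
});
```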

See Also