PromptEvaluatorsClient

adaline.prompts.evaluators manages the evaluators attached to a prompt — LLM-as-a-judge graders, JavaScript checks, text matchers, cost, latency, and response-length guards. Evaluators are always scoped to a prompt; there is no workspace-level evaluators collection. Every method is async.

Access

from adaline.main import Adaline

adaline = Adaline()
evaluators = adaline.prompts.evaluators  # PromptEvaluatorsClient
The class is also exported directly:

from adaline.clients import PromptEvaluatorsClient

Types from adaline_api:

from adaline_api.models.evaluator import Evaluator
from adaline_api.models.create_evaluator_request import CreateEvaluatorRequest
from adaline_api.models.update_evaluator_request import UpdateEvaluatorRequest
from adaline_api.models.list_evaluators_response import ListEvaluatorsResponse
Evaluator types at a glance:
type              What it measures
llm-as-a-judge    Qualitative grading via an LLM rubric
javascript        Arbitrary JS/TS check run in a sandbox
text-matcher      String contains / regex / equality
cost              Cost threshold per row
latency           Response-time threshold
response-length   Token / character bounds

list()

List evaluators attached to a prompt (paginated).
async def list(
    *,
    prompt_id: str,
    limit: Optional[int] = None,
    cursor: Optional[str] = None,
    sort: Optional[SortOrderInput] = None,
    created_after: Optional[int] = None,
    created_before: Optional[int] = None,
) -> ListEvaluatorsResponse

Example

response = await adaline.prompts.evaluators.list(
    prompt_id="prompt_abc123",
    limit=50,
)
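
To page through more than one batch, keep passing the returned cursor until none remains. A minimal sketch, assuming ListEvaluatorsResponse exposes items and a next-page cursor (both field names are assumptions; check the generated model):

all_evaluators = []
cursor = None
while True:
    page = await adaline.prompts.evaluators.list(
        prompt_id="prompt_abc123",
        limit=50,
        cursor=cursor,
    )
    all_evaluators.extend(page.items)  # assumed field name
    cursor = page.cursor               # assumed field name
    if cursor is None:
        break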

create()

Attach a new evaluator to a prompt.
async def create(
    *,
    prompt_id: str,
    evaluator: CreateEvaluatorRequest,
) -> Evaluator

Example — LLM-as-a-judge

from adaline_api.models.create_evaluator_request import CreateEvaluatorRequest

evaluator = await adaline.prompts.evaluators.create(
    prompt_id="prompt_abc123",
    evaluator=CreateEvaluatorRequest(
        type="llm-as-a-judge",
        title="Factuality",
        settings={
            "model": "gpt-4o",
            "rubric": "Rate 1-5 for factual accuracy against the reference answer.",
            "threshold": 4,
        },
    )
)
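
Example — text-matcher

A sketch for attaching a text-matcher check. The settings keys below are illustrative assumptions (only the llm-as-a-judge settings are documented above); consult CreateEvaluatorRequest for the exact schema of each evaluator type.

from adaline_api.models.create_evaluator_request import CreateEvaluatorRequest

evaluator = await adaline.prompts.evaluators.create(
    prompt_id="prompt_abc123",
    evaluator=CreateEvaluatorRequest(
        type="text-matcher",
        title="Mentions refund policy",
        settings={
            # Assumed keys: the real schema may differ per evaluator type.
            "mode": "contains",
            "value": "refund policy",
        },
    )
)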

get()

Fetch a single evaluator by ID.
async def get(*, prompt_id: str, evaluator_id: str) -> Evaluator
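
Example

A minimal call, using the IDs from the earlier examples:

evaluator = await adaline.prompts.evaluators.get(
    prompt_id="prompt_abc123",
    evaluator_id="evaluator_abc123",
)
# Assumed fields: the generated Evaluator model mirrors CreateEvaluatorRequest.
print(evaluator.type, evaluator.title)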

update()

Update an evaluator’s title or settings (thresholds live inside settings, as in the create example above).
async def update(
    *,
    prompt_id: str,
    evaluator_id: str,
    evaluator: UpdateEvaluatorRequest,
) -> Evaluator

Example

from adaline_api.models.update_evaluator_request import UpdateEvaluatorRequest

await adaline.prompts.evaluators.update(
    prompt_id="prompt_abc123",
    evaluator_id="evaluator_abc123",
    evaluator=UpdateEvaluatorRequest(settings={"threshold": 3}),
)
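
Renaming works the same way, since title is also updatable:

await adaline.prompts.evaluators.update(
    prompt_id="prompt_abc123",
    evaluator_id="evaluator_abc123",
    evaluator=UpdateEvaluatorRequest(title="Factuality (strict)"),
)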

delete()

Permanently delete an evaluator.
async def delete(*, prompt_id: str, evaluator_id: str) -> None
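
Example

A minimal call mirroring the signature above:

await adaline.prompts.evaluators.delete(
    prompt_id="prompt_abc123",
    evaluator_id="evaluator_abc123",
)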

See Also