

Adaline

The main client class for interacting with the Adaline platform. Provides methods to fetch deployments, manage cached deployment controllers with background refresh, and initialize monitoring.

Constructor

from adaline.main import Adaline

adaline = Adaline(
    api_key="your-api-key",            # optional, defaults to ADALINE_API_KEY env var
    host="https://api.adaline.ai/v2",  # optional, falls back to ADALINE_BASE_URL env var, then this default
    debug=False                        # optional, enables DEBUG logging
)
All constructor parameters are keyword-only.

Parameters

api_key (str | None)
    Your Adaline API key. Defaults to the ADALINE_API_KEY environment variable.
host (str | None)
    The base URL for the Adaline API. Falls back to the ADALINE_BASE_URL environment variable, then https://api.adaline.ai/v2.
debug (bool, default: False)
    If True, enables DEBUG-level logging on the adaline logger with a StreamHandler that outputs [Adaline] LEVEL: message.

Methods

get_deployment

Fetches a specific prompt deployment by prompt ID and deployment ID. This is an async method.
deployment = await adaline.get_deployment(
    prompt_id="your-prompt-id",
    deployment_id="your-deployment-id"
)

Parameters

prompt_id (str, required)
    The unique ID of the prompt.
deployment_id (str, required)
    The specific deployment ID.
Returns: A Deployment object.
Raises: ApiException if the API call fails (4xx errors fail immediately, 5xx errors are retried).
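The retry rule above (fail fast on 4xx, retry on 5xx) can be sketched as a standalone policy. The exception class, attempt count, and backoff below are assumptions for illustration, not the SDK's internals:

```python
import asyncio

class ApiError(Exception):
    """Hypothetical stand-in for the SDK's ApiException."""
    def __init__(self, status: int):
        super().__init__(f"API call failed with status {status}")
        self.status = status

async def call_with_retry(fn, max_attempts: int = 3, base_delay: float = 0.0):
    """Retry 5xx responses; raise immediately on 4xx client errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await fn()
        except ApiError as err:
            if 400 <= err.status < 500 or attempt == max_attempts:
                raise  # client errors and exhausted retries fail fast
            await asyncio.sleep(base_delay * attempt)  # linear backoff between retries
```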

get_latest_deployment

Fetches the latest deployment for a prompt in a specific environment. This is an async method.
deployment = await adaline.get_latest_deployment(
    prompt_id="your-prompt-id",
    deployment_environment_id="your-environment-id"
)

Parameters

prompt_id (str, required)
    The unique ID of the prompt.
deployment_environment_id (str, required)
    The deployment environment ID.
Returns: The latest Deployment object for the given environment.
Raises: ApiException if the API call fails.

init_latest_deployment

Initializes a cached latest deployment with automatic background refresh. Fetches the latest deployment immediately, then starts a background loop that refreshes the cache at the given interval. This is an async method.
controller = await adaline.init_latest_deployment(
    prompt_id="your-prompt-id",
    deployment_environment_id="your-environment-id",
    refresh_interval=60,
    max_continuous_failures=3
)

# Retrieve cached deployment (no API call unless forced)
deployment = await controller.get()

# Force a refresh from the API
deployment = await controller.get(force_refresh=True)

# Check background refresh status
status = controller.get_background_status()
# {"stopped": False, "consecutive_failures": 0, "last_error": None, "last_refreshed": datetime}

# Stop the background refresh
await controller.stop()

Parameters

prompt_id (str, required)
    The unique ID of the prompt.
deployment_environment_id (str, required)
    The deployment environment ID.
refresh_interval (int, default: 60)
    Seconds between background refreshes. Clamped to [1, 600].
max_continuous_failures (int, default: 3)
    Number of consecutive failures before the background loop stops itself.
Returns: A Controller instance for retrieving and managing the cached deployment.
Raises: ApiException if the initial fetch fails.

Controller

The Controller class returned by init_latest_deployment provides:
get
    async get(force_refresh: bool = False) -> Optional[Deployment]
    Returns the cached deployment. If force_refresh=True, bypasses the cache.
stop
    async stop()
    Cancels the background refresh task and clears the cache entry.
get_background_status
    get_background_status() -> dict
    Returns a snapshot: {"stopped", "consecutive_failures", "last_error", "last_refreshed"}.
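The controller's cache-plus-background-refresh behavior can be approximated in plain asyncio. This sketch only models the documented semantics (immediate initial fetch, cached get, force_refresh bypass, interval clamping, stop) and omits failure counting; it is not the SDK's implementation:

```python
import asyncio

class CachedController:
    """Caches the result of `fetch` and refreshes it on an interval."""

    def __init__(self, fetch, refresh_interval: float = 60.0):
        self._fetch = fetch
        # Documented clamp to [1, 600] seconds.
        self._interval = max(1.0, min(600.0, refresh_interval))
        self._cached = None
        self._task = None

    async def start(self):
        self._cached = await self._fetch()  # initial fetch happens immediately
        self._task = asyncio.create_task(self._loop())
        return self

    async def _loop(self):
        while True:
            await asyncio.sleep(self._interval)
            try:
                self._cached = await self._fetch()
            except Exception:
                pass  # a real controller would count consecutive failures here

    async def get(self, force_refresh: bool = False):
        if force_refresh:
            self._cached = await self._fetch()  # bypass the cache on demand
        return self._cached

    async def stop(self):
        if self._task:
            self._task.cancel()  # stop the background refresh loop
```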

init_evaluation_results

Initializes a cached, polling fetcher for evaluation results. Mirrors init_latest_deployment but targets EvaluationsClient.get_results. Useful when an evaluation is still running server-side and you want to surface partial results without writing your own polling loop — the same query (pagination + filters) is replayed on every refresh. This is an async method.
results = await adaline.init_evaluation_results(
    prompt_id="your-prompt-id",
    evaluation_id="your-evaluation-id",
    grade="fail",
    expand="row",
    sort="score:desc",
    limit=50,
    refresh_interval=30,
    max_continuous_failures=3,
)

# Retrieve cached page (no API call unless forced)
page = await results.get()

# Force a refresh
page = await results.get(force_refresh=True)

# Check background refresh status
status = results.get_background_status()

# Stop the background refresh when you're done
await results.stop()

Parameters

prompt_id (str, required)
    The unique ID of the prompt.
evaluation_id (str, required)
    The unique ID of the evaluation.
grade (str | None)
    Filter by grade: "pass", "fail", or "unknown".
expand (str | None)
    Pass "row" to include the underlying dataset row in each result.
sort (str | None)
    Sort order: "createdAt:asc", "createdAt:desc", "score:asc", or "score:desc".
limit (int | None)
    Page size.
cursor (str | None)
    Pagination cursor from a previous response.
refresh_interval (int, default: 60)
    Seconds between background refreshes. Clamped to [1, 600].
max_continuous_failures (int, default: 3)
    Consecutive failures before the background loop stops itself.
Returns: An EvaluationResultsController exposing get(force_refresh=False), get_background_status(), and stop(), identical in shape to the deployment controller.
Raises: ApiException if the initial fetch fails.
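The max_continuous_failures rule (a success resets the counter; N failures in a row stop the loop) can be modeled synchronously. The function below is a hypothetical stand-in for the background loop, returning a dict shaped loosely like get_background_status():

```python
def run_refresh_loop(fetch, max_continuous_failures: int = 3, max_ticks: int = 100):
    """Call `fetch` repeatedly; stop after N consecutive failures.

    Synchronous stand-in for the documented background behavior:
    any success resets the counter, N failures in a row stop the loop.
    """
    consecutive_failures = 0
    last_error = None
    for _ in range(max_ticks):
        try:
            fetch()
            consecutive_failures = 0      # any success resets the counter
        except Exception as err:
            consecutive_failures += 1
            last_error = err
            if consecutive_failures >= max_continuous_failures:
                break                     # the loop stops itself
    return {
        "stopped": consecutive_failures >= max_continuous_failures,
        "consecutive_failures": consecutive_failures,
        "last_error": last_error,
    }
```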

init_monitor

Initializes and returns a new Monitor instance for logging traces and spans. This is a synchronous method.
monitor = adaline.init_monitor(
    project_id="your-project-id",
    flush_interval_seconds=1,
    max_buffer_size=1000,
    default_content=None
)

Parameters

project_id (str, required)
    Unique project identifier for grouping traces and spans.
flush_interval_seconds (int, default: 1)
    Interval (in seconds) at which the buffer is automatically flushed.
max_buffer_size (int, default: 1000)
    Maximum number of buffered entries before the oldest items are dropped.
default_content (LogSpanContent | None)
    Default content attached to spans when none is provided. See LogSpanContent for available types. Defaults to a LogSpanOtherContent with empty input and output.
Returns: A Monitor instance.
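The drop-oldest buffering implied by max_buffer_size maps naturally onto a bounded deque. A minimal sketch under that assumption (the SDK's actual buffer type is not documented here):

```python
from collections import deque

class SpanBuffer:
    """Bounded buffer: once full, appending drops the oldest entry."""

    def __init__(self, max_buffer_size: int = 1000):
        self._entries = deque(maxlen=max_buffer_size)

    def append(self, entry):
        # deque(maxlen=...) evicts from the left (oldest end) when full.
        self._entries.append(entry)

    def flush(self):
        # Drain everything currently buffered, oldest first.
        drained = list(self._entries)
        self._entries.clear()
        return drained
```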

Namespace clients

The following namespace clients are attached to every Adaline instance. Each wraps the corresponding auto-generated API with retry-on-5xx, abort-on-4xx, and keyword-only arguments:
adaline.datasets
    DatasetsClient: datasets (+ .rows, .columns sub-clients)
adaline.prompts
    PromptsClient: prompts (+ .draft, .playgrounds, .evaluators, .evaluations sub-clients)
adaline.providers
    ProvidersClient: configured LLM providers
adaline.models
    ModelsClient: models available across providers
adaline.projects
    ProjectsClient: list / get / update workspace projects
adaline.logs
    LogsClient: read-side log access (+ .traces, .spans sub-clients)
Raw escape hatches are also exposed:
  • adaline.deployments_api — raw DeploymentsApi from adaline_api
  • adaline.logs_api — raw LogsApi (used internally by Monitor)
If you need identical retry behavior on an arbitrary call, use the exported with_retry helper:
from adaline.main import Adaline
from adaline.clients import with_retry

adaline = Adaline()

response = await with_retry(lambda: adaline.projects.list())