Documentation Index
Fetch the complete documentation index at: https://www.adaline.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Adaline
The main client class for interacting with the Adaline platform. Provides methods to fetch deployments, manage cached deployment controllers with background refresh, and initialize monitoring.
Constructor
Parameters
Your Adaline API key. Defaults to the ADALINE_API_KEY environment variable.
The base URL for the Adaline API. Falls back to the ADALINE_BASE_URL environment variable, then https://api.adaline.ai/v2.
If True, enables DEBUG-level logging on the adaline logger with a StreamHandler that outputs [Adaline] LEVEL: message.
Methods
get_deployment
Fetches a specific prompt deployment by prompt ID and deployment ID. This is an async method.
Parameters
The unique ID of the prompt.
The specific deployment ID.
Returns: Deployment object.
Raises: ApiException if the API call fails (4xx errors fail immediately, 5xx errors are retried).
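A minimal usage sketch. The import path, the ApiException name, and the keyword names (api_key, prompt_id, deployment_id) are assumptions inferred from the descriptions above, since the rendered reference omits the exact parameter names:

```python
import asyncio

async def main() -> None:
    # Deferred import so the sketch stays importable without the SDK
    # installed; the import path itself is an assumption.
    from adaline import Adaline, ApiException

    client = Adaline(api_key="sk-...")  # or rely on ADALINE_API_KEY
    try:
        deployment = await client.get_deployment(
            prompt_id="prompt_123",      # unique ID of the prompt
            deployment_id="deploy_456",  # specific deployment ID
        )
        print(deployment)
    except ApiException as exc:
        # 4xx fails immediately; 5xx has already been retried by the client.
        print(f"deployment fetch failed: {exc}")

# To run: asyncio.run(main())
```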
get_latest_deployment
Fetches the latest deployment for a prompt in a specific environment. This is an async method.
Parameters
The unique ID of the prompt.
The deployment environment ID.
Returns: Deployment object for the given environment.
Raises: ApiException if the API call fails.
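The environment-scoped variant looks much the same; a sketch assuming keyword names prompt_id and environment_id:

```python
import asyncio

async def fetch_latest(client, prompt_id: str, environment_id: str):
    """Return the latest Deployment for the environment, or None if the call fails."""
    from adaline import ApiException  # assumed import path

    try:
        return await client.get_latest_deployment(
            prompt_id=prompt_id,
            environment_id=environment_id,
        )
    except ApiException:
        return None
```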
init_latest_deployment
Initializes a cached latest deployment with automatic background refresh. Fetches the latest deployment immediately, then starts a background loop that refreshes the cache at the given interval. This is an async method.
Parameters
The unique ID of the prompt.
The deployment environment ID.
Seconds between background refreshes. Clamped to [1, 600].
Number of consecutive failures before the background loop stops itself.
Returns: Controller instance for retrieving and managing the cached deployment.
Raises: ApiException if the initial fetch fails.
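The interval clamping described above can be sketched as follows (an illustrative reimplementation of the documented [1, 600] bound, not the SDK's internal code):

```python
def clamp_refresh_interval(seconds: float) -> float:
    """Pull a requested refresh interval into the documented [1, 600] range."""
    return max(1.0, min(600.0, seconds))

# Requests outside the range are clamped to the nearest bound:
print(clamp_refresh_interval(0.5))     # -> 1.0
print(clamp_refresh_interval(30.0))    # -> 30.0
print(clamp_refresh_interval(3600.0))  # -> 600.0
```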
Controller
The Controller class returned by init_latest_deployment provides:
| Method | Signature | Description |
|---|---|---|
get | async get(force_refresh: bool = False) -> Optional[Deployment] | Returns the cached deployment. If force_refresh=True, bypasses the cache. |
stop | async stop() | Cancels the background refresh task and clears the cache entry. |
get_background_status | get_background_status() -> dict | Returns a snapshot: {"stopped", "consecutive_failures", "last_error", "last_refreshed"}. |
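Putting init_latest_deployment and the controller together in one hedged sketch: the controller methods come from the table above, while the keyword names passed to init_latest_deployment are assumptions:

```python
import asyncio

async def serve_with_cache(client, prompt_id: str, environment_id: str) -> None:
    # The initial fetch happens here; a background refresh loop starts afterwards.
    controller = await client.init_latest_deployment(
        prompt_id=prompt_id,
        environment_id=environment_id,
        refresh_interval=60,  # seconds, clamped to [1, 600]; name is an assumption
    )
    try:
        cached = await controller.get()                   # served from the cache
        fresh = await controller.get(force_refresh=True)  # bypasses the cache
        print(cached, fresh)

        status = controller.get_background_status()
        if status["stopped"]:
            print("refresh loop stopped after failures:", status["last_error"])
    finally:
        await controller.stop()  # cancels the refresh task, clears the cache entry
```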
init_evaluation_results
Initializes a cached, polling fetcher for evaluation results. Mirrors init_latest_deployment but targets EvaluationsClient.get_results. Useful when an evaluation is still running server-side and you want to surface partial results without writing your own polling loop — the same query (pagination + filters) is replayed on every refresh. This is an async method.
Parameters
The unique ID of the prompt.
The unique ID of the evaluation.
Filter by grade: "pass", "fail", or "unknown".
Pass "row" to include the underlying dataset row in each result.
Sort order: "createdAt:asc", "createdAt:desc", "score:asc", or "score:desc".
Page size.
Pagination cursor from a previous response.
Seconds between background refreshes. Clamped to [1, 600].
Consecutive failures before the background loop stops itself.
Returns: EvaluationResultsController exposing get(force_refresh=False), get_background_status(), and stop() — identical in shape to the deployment controller.
Raises: ApiException if the initial fetch fails.
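A polling sketch for surfacing partial evaluation results. All keyword names below (evaluation_id, grade, sort, refresh_interval) are assumptions inferred from the parameter descriptions; only the documented values and behavior are taken from the text above:

```python
import asyncio

async def watch_failures(client, prompt_id: str, evaluation_id: str) -> None:
    controller = await client.init_evaluation_results(
        prompt_id=prompt_id,
        evaluation_id=evaluation_id,
        grade="fail",            # documented values: "pass", "fail", "unknown"
        sort="createdAt:desc",   # one of the documented sort orders
        refresh_interval=30,     # seconds, clamped to [1, 600]
    )
    try:
        # The same query (pagination + filters) is replayed on every refresh,
        # so each get() surfaces the evaluation's current partial results.
        for _ in range(5):
            print(await controller.get())
            print(controller.get_background_status())
            await asyncio.sleep(30)
    finally:
        await controller.stop()
```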
init_monitor
Initializes and returns a new Monitor instance for logging traces and spans. This is a synchronous method.
Parameters
Unique project identifier for grouping traces and spans.
Interval (in seconds) at which the buffer is automatically flushed.
Maximum number of buffered entries before oldest items are dropped.
Default content attached to spans when none is provided. See LogSpanContent for available types. Defaults to a LogSpanOtherContent with empty input/output.
Namespace clients
Seven namespace clients are attached to every Adaline instance. Each wraps the corresponding autogen API with retry-on-5xx, abort-on-4xx, and keyword-only arguments:
| Attribute | Client | Covers |
|---|---|---|
adaline.datasets | DatasetsClient | Datasets (+ .rows, .columns sub-clients) |
adaline.prompts | PromptsClient | Prompts (+ .draft, .playgrounds, .evaluators, .evaluations sub-clients) |
adaline.providers | ProvidersClient | configured LLM providers |
adaline.models | ModelsClient | models available across providers |
adaline.projects | ProjectsClient | list / get / update workspace projects |
adaline.logs | LogsClient | read-side log access (+ .traces, .spans sub-clients) |
adaline.deployments_api — raw DeploymentsApi from adaline_api
adaline.logs_api — raw LogsApi (used internally by Monitor)
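A sketch of reaching the namespace and raw clients from one instance. The attribute names come from the table and list above; no method calls are shown because the per-client signatures are not listed in this section:

```python
def collect_clients(adaline):
    """Map the documented client attributes of an Adaline instance by name."""
    return {
        "datasets": adaline.datasets,                # DatasetsClient (+ .rows, .columns)
        "prompts": adaline.prompts,                  # PromptsClient (+ .draft, .playgrounds, ...)
        "providers": adaline.providers,              # ProvidersClient
        "models": adaline.models,                    # ModelsClient
        "projects": adaline.projects,                # ProjectsClient
        "logs": adaline.logs,                        # LogsClient (+ .traces, .spans)
        "deployments_api": adaline.deployments_api,  # raw DeploymentsApi
        "logs_api": adaline.logs_api,                # raw LogsApi (used by Monitor)
    }
```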
with_retry helper: