Adaline Class
The Adaline class is the main entry point for the TypeScript SDK. It provides methods for fetching deployments and initializing monitors for observability.
Constructor
```typescript
new Adaline(options?: AdalineOptions)
```
Parameters
options
AdalineOptions
Configuration options for the Adaline client.

apiKey
string
API key for authentication. If omitted, reads from the ADALINE_API_KEY environment variable.

baseURL
string
default: "https://api.adaline.ai/v2"
Base URL of the Adaline API. Defaults to the production API.

logger
Logger
Custom Logger object with debug, info, warn, and error methods. Any console-compatible logger works (e.g., console, Winston, Pino). Defaults to a silent no-op logger.

debug
boolean
If true and no logger is provided, uses console as the logger.
Example
```typescript
import { Adaline } from '@adaline/client';

// Uses ADALINE_API_KEY environment variable
const adaline = new Adaline();
```
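Any object that implements the Logger interface (shown under Type Definitions) can be supplied as the logger option. A self-contained sketch of one such logger; the prefix and in-memory sink are illustrative, not part of the SDK, and in real use you would pass the result as `new Adaline({ logger })`:

```typescript
// Console-compatible logger matching the documented Logger interface.
interface Logger {
  debug(message: string, ...args: unknown[]): void;
  info(message: string, ...args: unknown[]): void;
  warn(message: string, ...args: unknown[]): void;
  error(message: string, ...args: unknown[]): void;
}

// Writes prefixed lines into an in-memory sink; swap the sink for
// console output in a real application.
function makeLogger(prefix: string): Logger & { lines: string[] } {
  const lines: string[] = [];
  const log = (level: string) => (message: string, ..._args: unknown[]) => {
    lines.push(`${prefix} [${level}] ${message}`);
  };
  return {
    lines,
    debug: log('debug'),
    info: log('info'),
    warn: log('warn'),
    error: log('error')
  };
}

const logger = makeLogger('adaline');
logger.info('client initialized');
// logger.lines[0] === 'adaline [info] client initialized'
```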
Methods
getDeployment()
Fetch a specific deployment by its ID.
```typescript
async getDeployment(options: GetDeploymentOptions): Promise<Deployment>
```
Parameters
promptId
string
required
The unique identifier of the prompt.

deploymentId
string
required
The unique identifier of the deployment.
Returns
A Deployment containing the prompt's config, messages, and tools.
Example
```typescript
const deployment = await adaline.getDeployment({
  promptId: 'prompt_abc123',
  deploymentId: 'deploy_xyz789'
});

console.log(deployment.prompt.config.model); // 'gpt-4o'
console.log(deployment.prompt.messages); // Array of messages
console.log(deployment.prompt.tools); // Array of tools
```
getLatestDeployment()
Fetch the latest deployment for a prompt in a specific environment.
```typescript
async getLatestDeployment(options: GetLatestDeploymentOptions): Promise<Deployment>
```
Parameters
promptId
string
required
The unique identifier of the prompt.

deploymentEnvironmentId
string
required
The unique identifier of the deployment environment.
Returns
The latest Deployment for the prompt in the given environment.
Example
```typescript
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';

const deployment = await adaline.getLatestDeployment({
  promptId: 'prompt_abc123',
  deploymentEnvironmentId: 'environment_abc123'
});

// Use the deployment config with Adaline Gateway
const gateway = new Gateway();
const openaiProvider = new OpenAI();

const model = openaiProvider.chatModel({
  modelName: deployment.prompt.config.model,
  apiKey: process.env.OPENAI_API_KEY!
});

const gatewayResponse = await gateway.completeChat({
  model,
  config: deployment.prompt.config.settings,
  messages: deployment.prompt.messages,
  tools: deployment.prompt.tools
});
```
initLatestDeployment()
Initialize a cached latest deployment with automatic background refresh. This is the recommended approach for production applications.
```typescript
async initLatestDeployment(options: InitLatestDeploymentOptions): Promise<DeploymentController>
```
Parameters
promptId
string
required
The unique identifier of the prompt.

deploymentEnvironmentId
string
required
The unique identifier of the deployment environment.

refreshInterval
number
How often to refresh the cached deployment, in seconds. Valid range: 1-600 seconds.

maxContinuousFailures
number
Maximum consecutive failures before stopping background refresh.
Returns
A DeploymentController object with the following methods:

get()
(forceRefresh?: boolean) => Promise<Deployment | undefined>
Get the cached deployment. Pass true to force a fresh fetch.

backgroundStatus()
() => BackgroundStatus
Get the current background-refresh status (see BackgroundStatus under Type Definitions).

stop()
() => void
Stop the background refresh and clear the cache.
Example
```typescript
const controller = await adaline.initLatestDeployment({
  promptId: 'prompt_abc123',
  deploymentEnvironmentId: 'environment_abc123',
  refreshInterval: 60 // refresh every 60 seconds
});

// Get cached deployment (instant, no API call)
const deployment = await controller.get();

// Use the deployment
console.log(deployment.prompt.config.model);

// Stop background refresh when done
controller.stop();
```
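Passing true to get() skips the cache and fetches fresh. A self-contained sketch of that read-through pattern, with a stub fetch standing in for the real API call; fetchDeployment and makeCachedGetter are illustrative names, not SDK exports:

```typescript
type Deployment = { id: string };

// Stub fetcher standing in for the real API call.
let fetchCount = 0;
async function fetchDeployment(): Promise<Deployment> {
  fetchCount += 1;
  return { id: `deploy_${fetchCount}` };
}

// Cached getter mirroring DeploymentController.get(forceRefresh?).
function makeCachedGetter() {
  let cached: Deployment | undefined;
  return async (forceRefresh = false): Promise<Deployment | undefined> => {
    if (forceRefresh || cached === undefined) {
      cached = await fetchDeployment();
    }
    return cached;
  };
}

async function demo(): Promise<string[]> {
  const get = makeCachedGetter();
  const ids: string[] = [];
  ids.push((await get())!.id);     // first call fetches
  ids.push((await get())!.id);     // served from cache
  ids.push((await get(true))!.id); // forceRefresh bypasses the cache
  return ids; // ['deploy_1', 'deploy_1', 'deploy_2']
}
```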
initEvaluationResults()
Initialize a cached, polling fetcher for evaluation results. Mirrors initLatestDeployment() but targets EvaluationResultsClient.list(). Useful when an evaluation is still running server-side and you want to surface partial results without writing your own polling loop: the same query (pagination + filters) is replayed on every refresh.
```typescript
async initEvaluationResults(options: InitEvaluationResultsOptions): Promise<EvaluationResultsController>
```
Parameters
query
EvaluationResultsQuery
required
The list query (pagination + filters) replayed on every refresh.

refreshInterval
number
Seconds between background refreshes. Clamped to [1, 600].

maxContinuousFailures
number
Consecutive failures before the background loop self-stops.
Returns
controller
EvaluationResultsController
A controller with the same shape as the deployment controller:

get()
(forceRefresh?: boolean) => Promise<ListEvaluationResultsResponse | undefined>
Get the cached results. Pass true to force a fresh fetch.

stop()
() => void
Stop the background refresh and clear the cache entry.
Example
```typescript
const results = await adaline.initEvaluationResults({
  query: {
    promptId: 'prompt_abc123',
    evaluationId: 'eval_abc123',
    grade: 'fail',
    expand: 'row',
    sort: 'score:desc',
    limit: 50,
  },
  refreshInterval: 30,
});

// Poll from your UI layer without additional HTTP
setInterval(async () => {
  const page = await results.get();
  renderEvaluationResults(page);
}, 5000);

// Stop when the user navigates away
onUnmount(() => {
  results.stop();
});
```
initMonitor()
Initialize a monitoring session for logging traces and spans.
```typescript
initMonitor(options: InitMonitorOptions): Monitor
```
Parameters
projectId
string
required
Unique identifier for your project. All traces and spans will be associated with this project.

flushInterval
number
default: 1
How often to flush buffered entries to the API, in seconds.

maxBufferSize
number
default: 1000
Maximum number of buffered entries before triggering an automatic flush.

defaultContent
LogSpanContent
Default LogSpanContent used when no explicit content is provided. Defaults to { type: 'Other', input: '{}', output: '{}' }.
Returns
A Monitor instance for creating traces and spans. See Monitor Class for details.
Example
```typescript
const monitor = adaline.initMonitor({
  projectId: 'proj_abc123'
});

const trace = monitor.logTrace({ name: 'User Request' });
// ... log spans ...
trace.end();
```
Namespace clients
Several namespace clients are attached to every Adaline instance. Each wraps the corresponding autogen API with retry-on-5xx, abort-on-4xx, and named-argument ergonomics:
| Property | Client | Covers |
| --- | --- | --- |
| adaline.datasets | DatasetsClient | Datasets (+ .rows, .columns sub-clients) |
| adaline.prompts | PromptsClient | Prompts (+ .draft, .playgrounds, .evaluators, .evaluations sub-clients) |
| adaline.providers | ProvidersClient | Configured LLM providers |
| adaline.models | ModelsClient | Models available across providers |
| adaline.projects | ProjectsClient | List / get / update workspace projects |
| adaline.logs | LogsClient | Read-side log access (+ .traces, .spans sub-clients) |
Raw escape hatches are also exposed for cases that don’t have a first-class helper yet:
adaline.deploymentsApi — raw DeploymentsApi from @adaline/api
adaline.logsApi — raw LogsApi (used internally by Monitor )
If you need identical retry behavior on an arbitrary call, use the exported withRetry helper:
```typescript
import { Adaline, withRetry } from '@adaline/client';

const adaline = new Adaline();

const projects = await withRetry(() => {
  return adaline.projects.list();
});
```
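The retry policy described above (retry transient 5xx responses, abort on 4xx) can be sketched generically. This is an illustrative stand-in, not the SDK's actual implementation; HttpError and retryOn5xx are hypothetical names:

```typescript
// Illustrative error type carrying an HTTP status code.
class HttpError extends Error {
  constructor(public status: number) {
    super(`HTTP ${status}`);
  }
}

// Retry transient 5xx failures; abort immediately on 4xx client errors.
async function retryOn5xx<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // 4xx means the request itself is wrong; retrying cannot help.
      if (err instanceof HttpError && err.status < 500) throw err;
    }
  }
  throw lastError;
}
```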
Complete Example
Here’s a complete example showing all methods working together:
```typescript
import { Adaline } from '@adaline/client';
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';

const adaline = new Adaline({
  debug: true
});

const gateway = new Gateway();
const openaiProvider = new OpenAI();

// Initialize deployment controller
const deploymentController = await adaline.initLatestDeployment({
  promptId: 'chatbot-prompt',
  deploymentEnvironmentId: 'environment_abc123',
  refreshInterval: 60
});

// Initialize monitor
const monitor = adaline.initMonitor({
  projectId: 'chatbot-project'
});

// Handle chat request
async function handleChat(userId: string, message: string) {
  // Get cached deployment (no API call)
  const deployment = await deploymentController.get();

  // Create trace for this conversation
  const trace = monitor.logTrace({
    name: 'Chat Turn',
    sessionId: userId,
    tags: ['chat', 'production'],
    attributes: { userId, messageLength: message.length }
  });

  // Log LLM call
  const span = trace.logSpan({
    name: 'LLM Completion',
    promptId: deployment.promptId,
    deploymentId: deployment.id,
    runEvaluation: true,
    tags: ['llm', deployment.prompt.config.providerName]
  });

  try {
    const model = openaiProvider.chatModel({
      modelName: deployment.prompt.config.model,
      apiKey: process.env.OPENAI_API_KEY!
    });

    const gatewayResponse = await gateway.completeChat({
      model,
      config: deployment.prompt.config.settings,
      messages: [
        ...deployment.prompt.messages,
        { role: 'user', content: [{ modality: 'text', value: message }] }
      ],
      tools: deployment.prompt.tools
    });

    const reply = gatewayResponse.response.messages[0].content[0].value;

    // Update span with success
    span.update({
      status: 'success',
      content: {
        type: 'Model',
        provider: deployment.prompt.config.providerName,
        model: deployment.prompt.config.model,
        input: JSON.stringify(gatewayResponse.provider.request),
        output: JSON.stringify(gatewayResponse.provider.response)
      }
    });
    trace.update({ status: 'success' });

    return reply;
  } catch (error) {
    span.update({
      status: 'failure',
      attributes: {
        error: error instanceof Error ? error.message : String(error)
      }
    });
    trace.update({ status: 'failure' });
    throw error;
  } finally {
    span.end();
    trace.end();
  }
}

// Graceful shutdown
process.on('SIGTERM', async () => {
  await monitor.flush();
  monitor.stop();
  deploymentController.stop();
  console.log('Shutdown complete');
});
```
Type Definitions
```typescript
interface AdalineOptions {
  apiKey?: string;
  baseURL?: string;
  logger?: Logger;
  debug?: boolean;
}

interface Logger {
  debug(message: string, ...args: unknown[]): void;
  info(message: string, ...args: unknown[]): void;
  warn(message: string, ...args: unknown[]): void;
  error(message: string, ...args: unknown[]): void;
}

interface GetDeploymentOptions {
  promptId: string;
  deploymentId: string;
}

interface GetLatestDeploymentOptions {
  promptId: string;
  deploymentEnvironmentId: string;
}

interface InitLatestDeploymentOptions {
  promptId: string;
  deploymentEnvironmentId: string;
  refreshInterval?: number;
  maxContinuousFailures?: number;
}

interface BackgroundStatus {
  stopped: boolean;
  consecutiveFailures: number;
  lastError: Error | null;
  lastRefreshed: Date;
}

// Return type of initLatestDeployment() (not an exported class)
interface DeploymentController {
  get: (forceRefresh?: boolean) => Promise<Deployment | undefined>;
  backgroundStatus: () => BackgroundStatus;
  stop: () => void;
}

interface InitMonitorOptions {
  projectId: string;
  flushInterval?: number; // default: 1
  maxBufferSize?: number; // default: 1000
  defaultContent?: LogSpanContent;
}
```
Best Practices
Use Environment Variables
```typescript
const adaline = new Adaline({
  apiKey: process.env.ADALINE_API_KEY,
  baseURL: process.env.ADALINE_API_URL || 'https://api.adaline.ai/v2'
});
```
Initialize Once, Use Everywhere
```typescript
// Initialize at app startup
let deploymentController: DeploymentController;
let monitor: Monitor;

async function initialize() {
  const adaline = new Adaline();

  deploymentController = await adaline.initLatestDeployment({
    promptId: process.env.PROMPT_ID!,
    deploymentEnvironmentId: process.env.ENVIRONMENT!
  });

  monitor = adaline.initMonitor({
    projectId: process.env.PROJECT_ID!
  });
}

// Export for use throughout your app
export { deploymentController, monitor };
```
Monitor Health in Production
```typescript
const controller = await adaline.initLatestDeployment({ /* ... */ });

// Set up health check
setInterval(() => {
  const status = controller.backgroundStatus();

  if (status.stopped) {
    // Alert: background refresh stopped!
    logger.error('Deployment refresh stopped', status.lastError);
  }

  if (status.consecutiveFailures >= 2) {
    // Warning: refresh is having issues
    logger.warn('Deployment refresh failures', {
      failures: status.consecutiveFailures,
      lastError: status.lastError
    });
  }
}, 60000); // Check every minute
```
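The BackgroundStatus object also lends itself to a cache-staleness check based on lastRefreshed. A self-contained sketch; isStale is an illustrative helper, not part of the SDK:

```typescript
// Shape returned by controller.backgroundStatus() (see Type Definitions).
interface BackgroundStatus {
  stopped: boolean;
  consecutiveFailures: number;
  lastError: Error | null;
  lastRefreshed: Date;
}

// Treat the cache as stale if refresh has stopped or the last
// successful refresh is older than maxAgeMs.
function isStale(status: BackgroundStatus, maxAgeMs: number, now = new Date()): boolean {
  return status.stopped || now.getTime() - status.lastRefreshed.getTime() > maxAgeMs;
}

const fresh: BackgroundStatus = {
  stopped: false,
  consecutiveFailures: 0,
  lastError: null,
  lastRefreshed: new Date()
};
console.log(isStale(fresh, 60_000)); // false: refreshed just now
```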
Handle Graceful Shutdown
```typescript
async function gracefulShutdown() {
  console.log('Shutting down...');

  // Flush remaining logs
  await monitor.flush();

  // Stop background processes
  monitor.stop();
  deploymentController.stop();

  console.log('Shutdown complete');
  process.exit(0);
}

process.on('SIGTERM', gracefulShutdown);
process.on('SIGINT', gracefulShutdown);
```