SDK Reference

The Adaline SDK enables you to build production-ready agentic AI applications with enterprise-grade observability and deployment management.

Installation

npm install @adaline/client @adaline/api @adaline/gateway @adaline/openai

Overview

The Adaline SDK provides two core capabilities:

Deployment Management

Fetch and cache your deployed prompts with automatic background refresh:
  • getDeployment() / get_deployment() - Get a specific prompt deployment by ID
  • getLatestDeployment() / get_latest_deployment() - Get the latest prompt deployment by environment (e.g., production, staging)
  • initLatestDeployment() / init_latest_deployment() - Initialize cached prompt deployment with auto-refresh
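To make the difference between the first two lookups concrete, here is a minimal sketch against a hypothetical in-memory store (not the SDK's internals): `getDeployment` pins a specific deployment by ID, while `getLatestDeployment` resolves the newest version for a prompt in a given environment.

```typescript
// Hypothetical deployment records; field names are illustrative, not the SDK's schema.
type Deployment = { id: string; promptId: string; environmentId: string; version: number };

const deployments: Deployment[] = [
  { id: 'dep-1', promptId: 'p-1', environmentId: 'production', version: 1 },
  { id: 'dep-2', promptId: 'p-1', environmentId: 'production', version: 2 },
  { id: 'dep-3', promptId: 'p-1', environmentId: 'staging', version: 1 },
];

// getDeployment-style lookup: a specific, pinned deployment.
function getById(id: string): Deployment | undefined {
  return deployments.find((d) => d.id === id);
}

// getLatestDeployment-style lookup: newest version for a prompt + environment.
function getLatest(promptId: string, environmentId: string): Deployment | undefined {
  return deployments
    .filter((d) => d.promptId === promptId && d.environmentId === environmentId)
    .sort((a, b) => b.version - a.version)[0];
}
```

Pinning by ID keeps behavior frozen; resolving by environment picks up new deployments without a code change.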

Observability & Monitoring

Track every interaction in your agentic AI application with structured traces and spans:
  • Monitor - Buffers and batches log submissions with automatic retries and flushing
  • Trace - High-level operation tracking (e.g., a user request, workflow, or agent interaction)
  • Span - Granular operation tracking (e.g., an LLM call, tool execution, retrieval, embedding generation, function call, or guardrail check)

Quick Start

import { Adaline } from '@adaline/client';
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';

// Initialize the Adaline client (reads ADALINE_API_KEY from environment)
const adaline = new Adaline();
const gateway = new Gateway();
const openaiProvider = new OpenAI();

// Get your deployed prompt configuration
const deployment = await adaline.getLatestDeployment({
  promptId: 'your-prompt-id',
  deploymentEnvironmentId: 'your-deployment-environment-id'
});

// Initialize monitoring for your project
const monitor = adaline.initMonitor({
  projectId: 'your-project-id',
  flushInterval: 5,
  maxBufferSize: 100
});

// Create a trace for the entire user interaction
const trace = monitor.logTrace({
  name: 'Chat Completion',
  sessionId: 'user-session-123'
});

// Log the LLM call as a span
const llmSpan = trace.logSpan({
  name: 'OpenAI GPT-4 Call',
  promptId: deployment.promptId,
  deploymentId: deployment.id
});

try {
  // Create model from deployment config
  const model = openaiProvider.chatModel({
    modelName: deployment.prompt.config.model,
    apiKey: process.env.OPENAI_API_KEY!
  });

  // Make your LLM call with the deployed configuration using Adaline Gateway
  const gatewayResponse = await gateway.completeChat({
    model,
    config: deployment.prompt.config.settings,
    messages: deployment.prompt.messages,
    tools: deployment.prompt.tools
  });

  // Update span with successful result
  llmSpan.update({
    status: 'success',
    content: {
      type: 'Model',
      provider: deployment.prompt.config.providerName,
      model: deployment.prompt.config.model,
      input: JSON.stringify(gatewayResponse.provider.request),
      output: JSON.stringify(gatewayResponse.provider.response)
    }
  });

  trace.update({ status: 'success' });
} catch (error) {
  llmSpan.update({ status: 'failure' });
  trace.update({ status: 'failure' });
} finally {
  // End tracking and flush any buffered logs
  trace.end();
  await monitor.flush();
}

Key Features

Automatic Background Refresh

Keep your prompts up-to-date without redeploying your application:
const controller = await adaline.initLatestDeployment({
  promptId: 'your-prompt-id',
  deploymentEnvironmentId: 'your-deployment-environment-id',
  refreshInterval: 60
});

const deployment = await controller.get();              // served from the cache
const refreshedDeployment = await controller.get(true); // force a fresh fetch
controller.stop();                                      // stop the background refresh
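The pattern behind a controller like this can be sketched in a few lines. The sketch below is illustrative, not the SDK's implementation: it caches the last fetched value, refreshes it on a timer, and re-fetches on demand when `get(true)` is called. `fetchLatest` stands in for the SDK's network call.

```typescript
// Minimal cache-with-auto-refresh controller (illustrative only).
function makeRefreshController<T>(fetchLatest: () => Promise<T>, refreshIntervalSec: number) {
  let cached: T | undefined;

  // Background refresh: replace the cached value on a fixed interval.
  const timer = setInterval(async () => {
    cached = await fetchLatest();
  }, refreshIntervalSec * 1000);

  return {
    // Return the cached value; pass `force = true` to bypass the cache.
    async get(force = false): Promise<T> {
      if (force || cached === undefined) cached = await fetchLatest();
      return cached;
    },
    // Stop the background refresh so the process can exit cleanly.
    stop() {
      clearInterval(timer);
    },
  };
}
```

Calling `stop()` matters in short-lived processes: an active interval keeps the Node.js event loop alive.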

Smart Buffering & Batching

Optimize performance with automatic batching and retry logic:
  • Logs are buffered in memory and flushed in batches
  • Automatic retry with exponential backoff on transient failures (5xx)
  • Configurable flush intervals and buffer sizes
  • Failed entries are counted and dropped, following the OpenTelemetry error handling principle
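The flush-with-retry behavior described above can be sketched as follows. This is not the SDK's internal code; `send` is a stand-in for the HTTP submission, and the retry count and base delay are illustrative defaults.

```typescript
// Illustrative flush loop: retry on 5xx with exponential backoff, then drop.
type SendResult = { status: number };

async function flushWithRetry(
  batch: unknown[],
  send: (batch: unknown[]) => Promise<SendResult>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const { status } = await send(batch);
    if (status < 500) return true; // delivered (or a non-retryable client error)
    // Transient server error: back off 100ms, 200ms, 400ms, ... before retrying.
    if (attempt < maxRetries) {
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  return false; // retries exhausted; the caller counts and drops the batch
}
```

Dropping after bounded retries keeps the buffer from growing without limit when the backend is unreachable, at the cost of losing those log entries.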

Comprehensive Observability

Track everything in your agentic AI application:
  • Model spans - LLM inference calls (streaming and non-streaming)
  • Tool spans - Function/API calls
  • Retrieval spans - RAG and vector database queries
  • Embeddings spans - Embedding generation
  • Function spans - Custom application logic
  • Guardrail spans - Safety and compliance checks

Rich Metadata

Attach detailed context to every operation, then search and filter on it later.
  • Tags - Categorize and filter traces (e.g., ['production', 'high-priority'])
  • Attributes - Key-value metadata (e.g., { userId: '123', region: 'us-east' })
  • Sessions - Group related traces by session ID (e.g., user-session-123)
  • References - Link traces and spans with custom IDs (e.g., trace-ref-001)
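Combining these fields, a trace's metadata might look like the sketch below. The exact property names in the SDK may differ (`sessionId` appears in the Quick Start; `referenceId` here is a guess at the reference field), but the idea is the same: free-form tags and attributes you can later filter on.

```typescript
// Hypothetical trace-metadata shape; property names are illustrative.
type TraceMetadata = {
  name: string;
  sessionId?: string;
  referenceId?: string;
  tags?: string[];
  attributes?: Record<string, string>;
};

const traces: TraceMetadata[] = [
  {
    name: 'Chat Completion',
    sessionId: 'user-session-123',
    referenceId: 'trace-ref-001',
    tags: ['production', 'high-priority'],
    attributes: { userId: '123', region: 'us-east' },
  },
  { name: 'Batch Summarize', tags: ['staging'], attributes: { region: 'eu-west' } },
];

// Filtering on tags and attributes, as you might when searching traces later.
const prodTraces = traces.filter((t) => t.tags?.includes('production'));
const usEastTraces = traces.filter((t) => t.attributes?.region === 'us-east');
```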