Instrumentation is how you connect your AI application to Adaline’s Monitor pillar. Once instrumented, every LLM call, tool execution, and workflow step is captured as structured telemetry, giving you visibility into latency, cost, token usage, quality scores, and more. Adaline offers four instrumentation approaches that you can use independently or combine:

Instrumentation methods

Proxy

The fastest way to start: route your AI provider calls through Adaline’s Proxy gateway by changing the base URL. Traces and spans are created automatically; no SDK required.
from openai import OpenAI

client = OpenAI(
    # The OpenAI SDK still expects a provider API key (via api_key or the
    # OPENAI_API_KEY environment variable); the Adaline headers identify
    # your project and prompt so traces land in the right place.
    base_url="https://gateway.adaline.ai/v1/openai/",
    default_headers={
        "adaline-api-key": "your-adaline-key",
        "adaline-project-id": "your-project-id",
        "adaline-prompt-id": "your-prompt-id",
    },
)
Best for: Quick setup, simple workflows, teams that want observability with minimal code changes.

Choosing an approach

| Approach   | Setup effort                  | Best for                            |
| ---------- | ----------------------------- | ----------------------------------- |
| Proxy      | Minimal (change the base URL) | Quick start, simple apps            |
| SDK        | Moderate (add SDK calls)      | Production apps, complex workflows  |
| API        | Flexible (raw HTTP calls)     | Any language, low-level control     |
| Frameworks | Varies by framework           | LangChain, LangGraph, OpenTelemetry |

Key concepts

| Concept      | Description |
| ------------ | ----------- |
| Trace        | A complete end-to-end request flow through your application (e.g., a user query that triggers multiple operations). |
| Span         | An individual operation within a trace (e.g., one LLM call, one tool execution, one retrieval). |
| Session      | A group of related traces (e.g., all turns in a conversation with one user). |
| Content type | The category of operation a span represents: Model, Tool, Retrieval, Embeddings, Function, Guardrail. |
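The way these concepts nest can be sketched as a small data model: a session groups traces, a trace groups spans, and each span carries a content type. This is an illustrative sketch only; the class and field names are assumptions for the example, not Adaline SDK types.

```python
from dataclasses import dataclass, field

# Illustrative only: these are NOT Adaline SDK types, just a sketch of how
# the monitoring concepts relate to each other.

CONTENT_TYPES = {"Model", "Tool", "Retrieval", "Embeddings", "Function", "Guardrail"}

@dataclass
class Span:
    name: str
    content_type: str  # one of CONTENT_TYPES, e.g. "Model" for an LLM call

@dataclass
class Trace:
    name: str
    spans: list = field(default_factory=list)  # operations inside this request

@dataclass
class Session:
    session_id: str
    traces: list = field(default_factory=list)  # all traces from one conversation

# One user query that triggers a retrieval followed by an LLM call:
trace = Trace(
    name="answer-question",
    spans=[Span("fetch-docs", "Retrieval"), Span("generate", "Model")],
)
session = Session(session_id="user-123", traces=[trace])
```

Each span here maps to one row in Adaline’s trace view; the session ID is what lets you replay an entire conversation.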

What gets captured

Once instrumented, Adaline automatically tracks:
  • Request and response payloads for every LLM call
  • Token usage (input and output) and cost calculation
  • Latency for each operation and end-to-end
  • Model and provider information
  • Error details when operations fail
  • Custom metadata — tags, attributes, session IDs, and variables
  • User feedback signals tied to specific traces
  • Evaluation scores from continuous evaluations
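To make the token-usage and cost fields concrete, here is a minimal sketch of computing one call’s cost from its captured token counts. The model name and per-token prices are made up for the example; real pricing depends on the model and provider, and Adaline computes this for you automatically.

```python
# Hypothetical per-million-token prices; real values vary by model and provider.
PRICING = {"example-model": {"input": 3.00, "output": 15.00}}  # USD per 1M tokens

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM call, derived from its captured token usage."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A captured span with 1,200 input tokens and 300 output tokens:
cost = call_cost("example-model", 1200, 300)
print(round(cost, 6))  # 0.0081
```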

With Adaline SDKs

Full SDK instrumentation with TypeScript and Python.

With Adaline Proxy

Zero-code instrumentation by changing the base URL.

With AI Frameworks

LangChain, Vercel AI SDK, LlamaIndex, and more.

Advanced Usage

Multi-step workflows and session tracking.