Instrumentation methods
- Proxy (zero-code)
- SDK (TypeScript, Python)
- API (REST)
- Frameworks (LangChain, LangGraph, OpenTelemetry)
The Proxy is the fastest way to start. Route your AI provider calls through Adaline’s Proxy gateway by changing the base URL. Traces and spans are created automatically, no SDK required. Best for: quick setup, simple workflows, and teams that want observability without code changes.
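As a rough sketch of what "changing the base URL" means in practice (the proxy URL here is a placeholder, not Adaline's documented endpoint), the rest of the client configuration stays untouched:

```python
# Sketch: route an existing OpenAI-style client through a tracing proxy.
# NOTE: the proxy URL below is a hypothetical placeholder, not Adaline's
# documented endpoint -- check the official docs for the real value.

def proxied_client_config(proxy_base_url: str, api_key: str) -> dict:
    """Return client settings with only the destination swapped.

    The request body, model name, and API key are unchanged; the proxy
    records traces and spans, then forwards the call to the provider.
    """
    return {
        "base_url": proxy_base_url,  # was e.g. "https://api.openai.com/v1"
        "api_key": api_key,          # forwarded to the provider as before
    }

config = proxied_client_config(
    proxy_base_url="https://proxy.example.invalid/v1",  # placeholder URL
    api_key="sk-...",
)
print(config["base_url"])
```

Because only the destination changes, this works with any client library that lets you override its base URL.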
Choosing an approach
| Approach | Setup effort | Best for |
|---|---|---|
| Proxy | Minimal — change base URL | Quick start, simple apps |
| SDK | Moderate — add SDK calls | Production apps, complex workflows |
| API | Flexible — HTTP calls | Any language, low-level control |
| Frameworks | Varies by framework | LangChain, LangGraph, OpenTelemetry |
Key concepts
| Concept | Description |
|---|---|
| Trace | A complete end-to-end request flow through your application (e.g., a user query that triggers multiple operations). |
| Span | An individual operation within a trace (e.g., one LLM call, one tool execution, one retrieval). |
| Session | A group of related traces (e.g., all turns in a conversation with one user). |
| Content type | The category of operation a span represents: Model, Tool, Retrieval, Embeddings, Function, Guardrail. |
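The hierarchy in the table above can be sketched with a toy data model (the class and field names are illustrative, not Adaline's actual schema):

```python
from dataclasses import dataclass, field

# Toy model of the session > trace > span hierarchy. Names are
# illustrative only, not Adaline's actual schema.

CONTENT_TYPES = {"model", "tool", "retrieval", "embeddings", "function", "guardrail"}

@dataclass
class Span:
    """One operation inside a trace, e.g. a single LLM call."""
    name: str
    content_type: str  # one of CONTENT_TYPES

    def __post_init__(self) -> None:
        if self.content_type not in CONTENT_TYPES:
            raise ValueError(f"unknown content type: {self.content_type}")

@dataclass
class Trace:
    """One end-to-end request flow through the application."""
    name: str
    spans: list = field(default_factory=list)

@dataclass
class Session:
    """A group of related traces, e.g. all turns in one conversation."""
    session_id: str
    traces: list = field(default_factory=list)

# One conversation turn: a retrieval step followed by an LLM call.
turn = Trace(name="answer-user-query", spans=[
    Span("fetch-context", "retrieval"),
    Span("generate-answer", "model"),
])
session = Session(session_id="conv-123", traces=[turn])
print(len(session.traces[0].spans))
```

A multi-turn conversation would simply append one `Trace` per turn to the same `Session`.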
What gets captured
Once instrumented, Adaline automatically tracks:
- Request and response payloads for every LLM call
- Token usage (input and output) and cost calculation
- Latency for each operation and end-to-end
- Model and provider information
- Error details when operations fail
- Custom metadata — tags, attributes, session IDs, and variables
- User feedback signals tied to specific traces
- Evaluation scores from continuous evaluations
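For intuition, a captured record for one LLM-call span might resemble the following sketch (the field names and per-million-token prices are made up for illustration; real pricing depends on the model and provider):

```python
import time

# Hypothetical per-million-token prices in USD -- illustration only.
PRICE_PER_M = {"input": 2.50, "output": 10.00}

def capture_llm_span(model: str, input_tokens: int, output_tokens: int,
                     started: float, ended: float) -> dict:
    """Build a span record like the ones an observability layer stores."""
    cost = (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000
    return {
        "model": model,
        "usage": {"input_tokens": input_tokens, "output_tokens": output_tokens},
        "cost_usd": round(cost, 6),
        "latency_ms": round((ended - started) * 1000, 2),
    }

t0 = time.monotonic()
# ... the actual provider call would happen here ...
t1 = t0 + 0.420  # pretend the call took 420 ms
record = capture_llm_span("gpt-4o", input_tokens=1200, output_tokens=350,
                          started=t0, ended=t1)
print(record["cost_usd"])
```

Error details, tags, session IDs, and feedback signals would be additional fields on the same record, attached to the trace it belongs to.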
- **With Adaline SDKs**: Full SDK instrumentation with TypeScript and Python.
- **With Adaline Proxy**: Zero-code instrumentation by changing the base URL.
- **With AI Frameworks**: LangChain, Vercel AI SDK, LlamaIndex, and more.
- **Advanced Usage**: Multi-step workflows and session tracking.