Proxy is a hosted proxy service that automatically captures telemetry, traces, and spans from your AI applications. Instead of calling AI provider APIs directly, your applications route requests through Adaline's cloud infrastructure: you change the baseUrl in your AI SDK and add a few required headers. This enables automatic observability without manual instrumentation. Proxy is based on the open source Adaline Gateway project.

[Figure: Proxy flow diagram]

The flow

  1. SDK Configuration — Update your AI SDK’s baseUrl to point to Proxy
  2. Header Addition — Add the required Adaline headers for authentication and for project and prompt identification
  3. Transparent Proxying — Proxy forwards your requests to the actual AI provider
  4. Automatic Telemetry — Responses are captured and logged as traces and spans in your Adaline project and prompt
  5. Original Response — Your application receives the same response it would receive directly from the provider

Benefits

  • Minimal Code Changes — Works with existing AI SDK implementations by adding a couple of lines of code
  • Automatic Observability — Captures traces and spans without manual logging
  • Real-time Monitoring — Immediate visibility into AI application performance, including token usage and costs
  • Continuous Evaluations — Set up one-click continuous evaluations for your AI applications
  • Provider Agnostic — Supports all major AI providers
  • Production Ready — Built for scale with high availability and security
  • No Extra Costs — Proxy requests are billed as regular API Log requests to Adaline

Quick start

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://gateway.adaline.ai/v1/openai/",
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={
        "adaline-api-key": os.getenv("ADALINE_API_KEY"),
        "adaline-project-id": os.getenv("ADALINE_PROJECT_ID"),
        "adaline-prompt-id": os.getenv("ADALINE_PROMPT_ID"),
        "adaline-trace-name": "my-workflow",
    },
)

Supported providers

Provider — Base URL
OpenAI — https://gateway.adaline.ai/v1/openai/
Anthropic — https://gateway.adaline.ai/v1/anthropic/
Google — https://gateway.adaline.ai/v1/google
Azure — https://gateway.adaline.ai/v1/azure/
Amazon Bedrock — https://gateway.adaline.ai/v1/bedrock/
Groq — https://gateway.adaline.ai/v1/groq/
Open Router — https://gateway.adaline.ai/v1/open-router/
Together AI — https://gateway.adaline.ai/v1/together-ai/
xAI — https://gateway.adaline.ai/v1/xai/
Vertex AI — https://gateway.adaline.ai/v1/vertex
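
When the provider is chosen at runtime, the base URL can be looked up from a small mapping built from the table above. A minimal sketch — the provider slugs are illustrative; only the URLs come from the table:

```python
# Base URLs from the provider table, keyed by an illustrative slug.
PROVIDER_BASE_URLS = {
    "openai": "https://gateway.adaline.ai/v1/openai/",
    "anthropic": "https://gateway.adaline.ai/v1/anthropic/",
    "google": "https://gateway.adaline.ai/v1/google",
    "azure": "https://gateway.adaline.ai/v1/azure/",
    "bedrock": "https://gateway.adaline.ai/v1/bedrock/",
    "groq": "https://gateway.adaline.ai/v1/groq/",
    "open-router": "https://gateway.adaline.ai/v1/open-router/",
    "together-ai": "https://gateway.adaline.ai/v1/together-ai/",
    "xai": "https://gateway.adaline.ai/v1/xai/",
    "vertex": "https://gateway.adaline.ai/v1/vertex",
}

def proxy_base_url(provider: str) -> str:
    """Return the Proxy base URL for a provider, or raise for unknown ones."""
    try:
        return PROVIDER_BASE_URLS[provider]
    except KeyError:
        raise ValueError(f"unsupported provider: {provider!r}") from None
```

The result plugs straight into the SDK constructor, e.g. OpenAI(base_url=proxy_base_url("openai"), ...).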

Required headers

Header — Description
adaline-api-key — Your workspace API key
adaline-project-id — The project to log traces to
adaline-prompt-id — The prompt to associate spans with

Optional trace headers

Header — Description — Default
adaline-trace-name — Name for the trace — "Proxy"
adaline-trace-status — Trace status: success, failure, pending, or unknown — Auto-detected
adaline-trace-reference-id — Custom ID to group multiple requests into one trace — Auto-generated
adaline-trace-session-id — Session ID to group related traces
adaline-trace-attributes — JSON array of attribute operations
adaline-trace-tags — JSON array of tag operations

Optional span headers

Header — Description
adaline-span-name — Name for the span
adaline-span-reference-id — Custom span ID
adaline-span-session-id — Session ID on the span
adaline-span-variables — JSON object of variable values for evaluation
adaline-span-attributes — Custom span attributes
adaline-span-tags — Span tags
adaline-span-run-evaluation — Set to "true" to trigger continuous evaluations
adaline-deployment-id — Deployment ID to associate with the span
For the full header specification with validation rules and detailed examples, see the Proxy Headers Reference.
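
Rather than repeating the header dictionary in every call, it can help to build it once from the environment. A minimal sketch — the adaline_headers helper is hypothetical, not part of any SDK; the header names come from the tables above:

```python
import json
import os

def adaline_headers(
    trace_name=None,
    trace_reference_id=None,
    span_name=None,
    span_variables=None,
):
    """Build the required Adaline headers from env vars, plus common optional ones."""
    headers = {
        "adaline-api-key": os.environ["ADALINE_API_KEY"],
        "adaline-project-id": os.environ["ADALINE_PROJECT_ID"],
        "adaline-prompt-id": os.environ["ADALINE_PROMPT_ID"],
    }
    if trace_name is not None:
        headers["adaline-trace-name"] = trace_name
    if trace_reference_id is not None:
        headers["adaline-trace-reference-id"] = trace_reference_id
    if span_name is not None:
        headers["adaline-span-name"] = span_name
    if span_variables is not None:
        # adaline-span-variables expects a JSON object of variable values.
        headers["adaline-span-variables"] = json.dumps(span_variables)
    return headers
```

A call site then shrinks to extra_headers=adaline_headers(trace_name="my-workflow").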

Group requests into a single trace

Use the same adaline-trace-reference-id across multiple requests to group them under one trace:
import uuid

trace_id = str(uuid.uuid4())

# First request
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="User query",
    extra_headers={
        "adaline-api-key": os.getenv("ADALINE_API_KEY"),
        "adaline-project-id": os.getenv("ADALINE_PROJECT_ID"),
        "adaline-prompt-id": os.getenv("ADALINE_PROMPT_ID"),
        "adaline-trace-reference-id": trace_id,
        "adaline-span-name": "query-embedding",
    },
)

# Second request — same trace
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "..."}],
    extra_headers={
        "adaline-api-key": os.getenv("ADALINE_API_KEY"),
        "adaline-project-id": os.getenv("ADALINE_PROJECT_ID"),
        "adaline-prompt-id": os.getenv("ADALINE_PROMPT_ID"),
        "adaline-trace-reference-id": trace_id,
        "adaline-span-name": "chat-completion",
    },
)

What gets captured automatically

When you route through the Proxy, Adaline automatically captures:
  • Request and response payloads
  • Token usage (input and output) and cost
  • Latency
  • Model and provider information
  • Errors and status codes

Next steps

Proxy Headers Reference

Complete reference for all required and optional Proxy headers.

Advanced Usage

Session tracking, multi-step traces, and error patterns.

Setup Continuous Evaluations

Run automated quality checks on live production data.

Log Attachments

Attach custom attributes, tags, and variables.

Integrations

Browse all supported AI providers and frameworks.

Adaline Gateway (Open Source)

The open source project that powers Proxy.