If you don’t have an Adaline account yet, you can create one by signing up at app.adaline.ai.
After creating an account, you will notice the following:
A Shared teamspace containing workspace-wide public projects and other entities.
A Private teamspace with a sample project, prompt, and dataset.
If you use an AI coding agent such as Cursor, Windsurf, or Cline (or any other agent that accepts context), you can hand it all the information it needs to integrate Adaline into your codebase automatically. Open the full integration context document below, then use the Copy page button (top-right of the page) or the ChatGPT / Claude buttons to send it directly to your AI agent.
TypeScript SDK Integration Context
Python SDK Integration Context
REST API Integration Context
MCP servers coming soon — We are building dedicated MCP (Model Context Protocol) servers that will allow AI coding agents to query Adaline’s documentation and APIs directly, making this integration even more seamless.
The fastest way to start sending data to Adaline. Instead of calling your AI provider directly, you route requests through Adaline’s gateway by changing the base URL. Adaline transparently forwards the request to your provider while recording the full trace — no SDK or additional instrumentation required.
Below are minimal examples showing how to route requests through the proxy. Replace the placeholder values with your actual credentials and IDs.
TypeScript (OpenAI)
Python (OpenAI)
Python (Anthropic)
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://gateway.adaline.ai/v1/openai/",
  defaultHeaders: {
    "adaline-api-key": process.env.ADALINE_API_KEY,
    "adaline-project-id": process.env.ADALINE_PROJECT_ID,
    "adaline-prompt-id": process.env.ADALINE_PROMPT_ID,
  },
});

// First prompt — classify intent
const classification = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content:
        "Classify the user's intent as one of: greeting, question, complaint, feedback.",
    },
    { role: "user", content: "My order hasn't arrived in three days." },
  ],
});
console.log("Intent:", classification.choices[0].message.content);

// Second prompt — generate reply
const reply = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content: `You are a support assistant. The user's intent is: ${classification.choices[0].message.content}. Respond helpfully.`,
    },
    { role: "user", content: "My order hasn't arrived in three days." },
  ],
});
console.log("Reply:", reply.choices[0].message.content);
```
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://gateway.adaline.ai/v1/openai/",
)
headers = {
    "adaline-api-key": os.getenv("ADALINE_API_KEY"),
    "adaline-project-id": os.getenv("ADALINE_PROJECT_ID"),
    "adaline-prompt-id": os.getenv("ADALINE_PROMPT_ID"),
}

# First prompt — classify intent
classification = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Classify the user's intent as one of: greeting, question, complaint, feedback.",
        },
        {"role": "user", "content": "My order hasn't arrived in three days."},
    ],
    extra_headers=headers,
)
intent = classification.choices[0].message.content
print("Intent:", intent)

# Second prompt — generate reply
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": f"You are a support assistant. The user's intent is: {intent}. Respond helpfully.",
        },
        {"role": "user", "content": "My order hasn't arrived in three days."},
    ],
    extra_headers=headers,
)
print("Reply:", reply.choices[0].message.content)
```
```python
import os

from anthropic import Anthropic

client = Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url="https://gateway.adaline.ai/v1/anthropic/",
    default_headers={
        "adaline-api-key": os.getenv("ADALINE_API_KEY"),
        "adaline-project-id": os.getenv("ADALINE_PROJECT_ID"),
        "adaline-prompt-id": os.getenv("ADALINE_PROMPT_ID"),
    },
)

# First prompt — classify intent
classification = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[
        {
            "role": "user",
            "content": "Classify this intent as greeting, question, complaint, or feedback: 'My order hasn't arrived in three days.'",
        },
    ],
)
intent = classification.content[0].text
print("Intent:", intent)

# Second prompt — generate reply
reply = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"You are a support assistant. The user's intent is: {intent}. Respond helpfully to: 'My order hasn't arrived in three days.'",
        },
    ],
)
print("Reply:", reply.content[0].text)
```
Run your application. Both requests are automatically routed through Adaline’s gateway, and traces appear in the dashboard within seconds.
Use the Adaline SDK (TypeScript or Python) or the REST API directly for complete control over how traces and spans are structured. This approach lets you capture any operation — LLM calls, tool executions, retrievals, and custom functions.
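The SDK and REST API schemas are documented in the integration context documents linked above. As a rough, hypothetical sketch of what direct instrumentation looks like, the snippet below wraps a custom function as a span and assembles a trace payload. The endpoint URL, header name, and payload field names here are illustrative assumptions, not Adaline's actual API; consult the REST API Integration Context for the real schema.

```python
import json
import os
import time
import urllib.request
import uuid

# NOTE: hypothetical endpoint and schema, for illustration only.
ADALINE_API_URL = "https://api.adaline.ai/v1/traces"

def build_span(name: str, span_type: str, input_payload: dict,
               output_payload: dict, started_at: float, ended_at: float) -> dict:
    """Assemble one span record (hypothetical field names)."""
    return {
        "id": str(uuid.uuid4()),
        "name": name,
        "type": span_type,  # e.g. "llm", "tool", "retrieval"
        "input": input_payload,
        "output": output_payload,
        "startedAt": started_at,
        "endedAt": ended_at,
    }

def send_trace(spans: list) -> int:
    """POST a trace with its spans (hypothetical request shape)."""
    req = urllib.request.Request(
        ADALINE_API_URL,
        data=json.dumps({"traceId": str(uuid.uuid4()), "spans": spans}).encode(),
        headers={
            "Content-Type": "application/json",
            "adaline-api-key": os.getenv("ADALINE_API_KEY", ""),
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Example: capture a custom function as a "tool" span.
start = time.time()
result = {"answer": 4}  # stand-in for real work
span = build_span("add_numbers", "tool", {"a": 2, "b": 2}, result, start, time.time())
# send_trace([span])    # uncomment to actually post the trace
```

The key idea is the same regardless of transport: record the operation's name, type, inputs, outputs, and timing, then ship those spans to Adaline grouped under a trace.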
Regardless of which integration method you chose, the dashboard experience is the same.
1
Select your Project from the sidebar, then navigate to the Monitor tab from the top bar. You will see a list of traces — one for each request your application made.
2
Click on any trace to expand it. Each trace contains one or more spans representing individual operations (e.g., an LLM call, a tool invocation, or a retrieval operation). By default, the trace is displayed as a tree. You can switch to a waterfall view by clicking the Waterfall button (top right).
Each span represents an individual operation (e.g., an LLM call, a tool invocation) within a trace.
A span provides a detailed view of each operation, including the request and response payloads, latency metrics, and cost and token usage. Select your Prompt from the sidebar, then navigate to the Monitor tab from the top bar. You will see a list of spans, one for each invocation of that step in your application.
Click on any span to open its detailed view. You will see the full request and response payloads (including the system prompt, user message, and the model's output), latency metrics (time-to-first-token and total response time), and cost and token usage (input tokens, output tokens, and total cost), all in one place.
Adaline automatically calculates the cost of each request based on the model’s token pricing.
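As a rough sketch of how such a per-request cost figure is derived, the calculation multiplies input and output token counts by their respective per-token rates. The model name and prices below are made-up placeholders, not actual pricing:

```python
# Illustrative only: the model name and prices are placeholders.
PRICING = {
    # USD per 1M tokens: (input, output)
    "example-model": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD: tokens times per-token price, summed over input and output."""
    in_price, out_price = PRICING[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

cost = request_cost("example-model", input_tokens=1200, output_tokens=300)
print(f"${cost:.6f}")
```

Adaline performs this bookkeeping for you using each model's actual published token prices, so the cost shown on a span requires no configuration.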
Charts provide aggregated, time-series views of your AI agent's performance. They are automatically generated from the traces and spans flowing into Adaline, giving you a high-level operational dashboard without any additional configuration. Use charts to spot trends, detect anomalies, and then drill down into the underlying traces and spans for root cause analysis.

Select your Project from the sidebar, then navigate to the Overview tab from the top bar. You will see a dashboard of metric charts showing a time-series view of volume, latency, input tokens, output tokens, cost, and evaluation score.

Together, these give you a complete picture of your AI agent's behavior in production, from the structure of every request to its cost and speed, all in one place.

Congratulations! You have successfully integrated your AI agent with Adaline.

Read more about monitoring and observability