1. Sign up
If you don’t have an Adaline account yet, create one by signing up at app.adaline.ai. After creating an account, you will notice the following:
- A Shared teamspace containing workspace-wide public projects and other entities.
- A Private teamspace with a sample project, prompt, and dataset.
2. Set up your workspace API key
Create an API key that your application will use to authenticate with Adaline.
- In the sidebar, click Settings → API keys.
- Click Create API key.
- Rename the API key to something meaningful.
- Click the generated API key to copy it, then paste it in a secure location. It will not be visible again.
- Click Create key.
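Wherever you store the copied key, avoid hard-coding it into your application. A common pattern is an environment variable; here is a minimal sketch, assuming a variable named `ADALINE_API_KEY` (the name is illustrative, not prescribed by Adaline):

```python
import os

def load_adaline_key(env_var: str = "ADALINE_API_KEY") -> str:
    """Read the workspace API key from the environment.

    The env var name is an assumption for this example; use whatever
    convention your deployment already follows.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set {env_var} to the key you copied from Settings → API keys."
        )
    return key
```

Your application can then call `load_adaline_key()` once at startup and pass the result to whichever integration method you choose in the next step.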

3. Integrate your AI agent
Choose the integration method that best fits your workflow:
- AI Agents
- Adaline Proxy
- SDK or API
Let an AI coding agent do it for you
If you use an AI coding agent such as Cursor, Windsurf, Cline, or any other agent that accepts context, you can hand it all the information it needs to integrate Adaline into your codebase automatically.

Open the full integration context document below, then use the Copy page button (top-right of the page) or the ChatGPT / Claude buttons to send it directly to your AI agent.

AI Agent TypeScript SDK Integration Context
Complete TypeScript SDK reference — ready to paste into your AI coding agent.
AI Agent Python SDK Integration Context
Complete Python SDK reference — ready to paste into your AI coding agent.
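The Adaline Proxy option above works by routing your existing model calls through Adaline so they are traced automatically. As a loose, hypothetical sketch of that pattern (the endpoint URL and header names below are assumptions for illustration, not the real Adaline proxy API; consult the integration context documents above for the actual details), an authenticated proxied request might be assembled like this:

```python
import urllib.request

# Hypothetical proxy endpoint; the real URL comes from the Adaline docs.
ADALINE_PROXY_URL = "https://example-proxy.adaline.ai/v1/chat/completions"

def build_proxied_request(payload: bytes, api_key: str) -> urllib.request.Request:
    """Construct a POST request routed through a (hypothetical) Adaline proxy.

    The Bearer-token header uses the workspace API key from step 2; the
    auth scheme here is an assumption, not confirmed Adaline behavior.
    """
    return urllib.request.Request(
        ADALINE_PROXY_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The SDK integration methods wrap this plumbing for you; the sketch is only meant to show where the API key from step 2 fits into a proxy-style setup.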
4. View your traces
Regardless of which integration method you chose, the dashboard experience is the same.

Select your Project from the sidebar, then navigate to the Monitor tab from the top bar. You will see a list of traces, one for each request your application made.

Click on any trace to expand it. Each trace contains one or more spans representing individual operations (e.g., an LLM call, a tool invocation, a retrieval operation).

By default, traces are displayed as a tree. You can switch to a waterfall view by clicking the Waterfall button (top right).


5. View your spans
Each span represents an individual operation (e.g., an LLM call, a tool invocation) within a trace. A span provides a detailed view of that operation, including the request and response payloads, latency metrics, and cost and token usage.

Select your Prompt from the sidebar, then navigate to the Monitor tab from the top bar. You will see a list of spans, one for each invocation of the step in your application. Click on any span to open its detailed view.
6. View aggregated charts
Charts provide aggregated, time-series views of your AI agent’s performance. They are automatically generated from the traces and spans flowing into Adaline, giving you a high-level operational dashboard without any additional configuration. Use charts to spot trends, detect anomalies, and then drill down into the underlying traces and spans for root-cause analysis.

Select your Project from the sidebar, then navigate to the Overview tab from the top bar.

