Once your application is instrumented and logs are flowing into the Monitor pillar, you can export that data programmatically. Use the export endpoints to feed traces and spans into data warehouses, build custom dashboards, power alerting pipelines, or run offline analysis that goes beyond what the Monitor UI provides.
Adaline exposes three REST endpoints for reading log data. All three use cursor-based pagination and support the same filtering and response conventions used across the Adaline API.
Available endpoints
| Endpoint | Method | When to use |
| --- | --- | --- |
| List traces | `GET /v2/logs` | Simple queries with query-string filters. |
| Search traces | `POST /v2/logs/traces` | Complex queries with body filters, range comparisons, and multi-condition filtering. |
| Search spans | `POST /v2/logs/spans` | Export model-level span data for a specific prompt: cost, tokens, latency per LLM call. |
Quick start
Fetch the most recent traces from a project:
curl "https://app.adaline.ai/api/v2/logs?projectId=$PROJECT_ID&limit=50" \
  -H "Authorization: Bearer $ADALINE_API_KEY"
const response = await fetch(
  `https://app.adaline.ai/api/v2/logs?projectId=${projectId}&limit=50`,
  { headers: { Authorization: `Bearer ${process.env.ADALINE_API_KEY}` } }
);
const { data, pagination } = await response.json();
console.log(`Fetched ${data.length} traces, hasMore: ${pagination.hasMore}`);
import os

import httpx

response = httpx.get(
    "https://app.adaline.ai/api/v2/logs",
    params={"projectId": project_id, "limit": 50},
    headers={"Authorization": f"Bearer {os.environ['ADALINE_API_KEY']}"},
)
result = response.json()
print(f"Fetched {len(result['data'])} traces, hasMore: {result['pagination']['hasMore']}")
The response envelope is the same for all three endpoints:
{
  "data": [ ... ],
  "pagination": {
    "limit": 50,
    "returned": 50,
    "hasMore": true,
    "nextCursor": "eyJ2IjoxLCJzdGFydGVkQXQiOi..."
  }
}
All export endpoints use cursor-based pagination. Pass the `nextCursor` value from the previous response as `cursor` in the next request. When `hasMore` is `false`, you have reached the end.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `limit` | integer | 50 | Items per page (max 200). |
| `cursor` | string | — | Opaque cursor from a previous response. Omit for the first page. |
The cursor is opaque — do not parse or construct it. Always use the value returned by the API.
To export all matching data, loop until hasMore is false:
CURSOR=""
while true; do
  BODY='{"projectId":"'"$PROJECT_ID"'","limit":200'
  if [ -n "$CURSOR" ]; then
    BODY="$BODY"',"cursor":"'"$CURSOR"'"'
  fi
  BODY="$BODY}"
  RESPONSE=$(curl -s -X POST "https://app.adaline.ai/api/v2/logs/traces" \
    -H "Authorization: Bearer $ADALINE_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$BODY")
  echo "$RESPONSE" | jq '.data[]'
  HAS_MORE=$(echo "$RESPONSE" | jq -r '.pagination.hasMore')
  CURSOR=$(echo "$RESPONSE" | jq -r '.pagination.nextCursor')
  if [ "$HAS_MORE" != "true" ]; then break; fi
done
let cursor: string | undefined;
do {
  const body: Record<string, unknown> = { projectId, limit: 200 };
  if (cursor) body.cursor = cursor;
  const response = await fetch("https://app.adaline.ai/api/v2/logs/traces", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.ADALINE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  const { data, pagination } = await response.json();
  for (const trace of data) {
    // Process each trace
    console.log(trace.id, trace.name, trace.status);
  }
  cursor = pagination.hasMore ? pagination.nextCursor : undefined;
} while (cursor);
import os

import httpx

cursor = None
while True:
    body = {"projectId": project_id, "limit": 200}
    if cursor:
        body["cursor"] = cursor
    response = httpx.post(
        "https://app.adaline.ai/api/v2/logs/traces",
        json=body,
        headers={"Authorization": f"Bearer {os.environ['ADALINE_API_KEY']}"},
    )
    result = response.json()
    for trace in result["data"]:
        # Process each trace
        print(trace["id"], trace["name"], trace["status"])
    if not result["pagination"]["hasMore"]:
        break
    cursor = result["pagination"]["nextCursor"]
Filtering
All endpoints support two complementary filtering mechanisms that are ANDed together: flat params for simple queries and a filters array for advanced comparisons.
Flat params
Top-level fields for common filters. Use these for straightforward queries:
| Field | Type | Description |
| --- | --- | --- |
| `projectId` | string | **Required.** Project to query. |
| `status` | string | One of `success`, `failure`, `aborted`, `cancelled`, `pending`, `unknown`. |
| `startedAfter` | number | Unix ms. Traces started at or after this time. |
| `startedBefore` | number | Unix ms. Traces started at or before this time. |
| `name` | string | Case-insensitive substring match on trace or span name. |
| `referenceId` | string | Exact match. |
| `sessionId` | string | Exact match. |
The span endpoint (`POST /v2/logs/spans`) additionally supports `model` and `provider` for case-insensitive substring matching.
Filters array
For range queries, numeric comparisons, and multi-condition filtering on the same column, use the filters array:
{
  "projectId": "project-abc12345678901",
  "filters": [
    { "type": "number", "column": "latency", "operator": "gt", "value": 1000 },
    { "type": "number", "column": "totalCost", "operator": "lt", "value": 0.05 },
    { "type": "arrayContains", "column": "tags", "operator": "contains", "value": "production" }
  ]
}
Each filter object is a discriminated union on type:
| Type | Operators | Value type | Example columns |
| --- | --- | --- | --- |
| `string` | `eq`, `contains` | string | `name`, `referenceId`, `sessionId`, `model`, `provider` |
| `number` | `eq`, `gt`, `gte`, `lt`, `lte` | number | `latency`, `totalCost`, `cost`, `totalInputTokens`, `totalOutputTokens` |
| `datetime` | `gt`, `gte`, `lt`, `lte` | number (Unix ms) | `startedAt`, `endedAt` |
| `arrayContains` | `contains` | string | `tags` |
Both flat params and the filters array can be used together in the same request. Unsupported columns are silently ignored.
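For instance, a latency band needs two conditions on the same column, which flat params alone cannot express. A minimal sketch of such a request body in Python (the project ID is a placeholder):

```python
# Sketch: a request body combining a flat param with two filters on the same
# column. All conditions are ANDed: success AND 1000 <= latency <= 5000.
body = {
    "projectId": "project-abc12345678901",  # placeholder project ID
    "status": "success",  # flat param, ANDed with the filters below
    "filters": [
        {"type": "number", "column": "latency", "operator": "gte", "value": 1000},
        {"type": "number", "column": "latency", "operator": "lte", "value": 5000},
    ],
    "limit": 100,
}
```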
Available columns
Trace endpoints (GET /v2/logs, POST /v2/logs/traces):
startedAt, endedAt, status, name, referenceId, sessionId, latency, tags, totalCost, totalInputTokens, totalOutputTokens, traceId
Span endpoint (POST /v2/logs/spans) — all trace columns plus:
model, provider, cost, promptTokens, completionTokens, totalTokens, logTraceId
Common export patterns
| Goal | Approach |
| --- | --- |
| Nightly data sync | Paginate through all traces with `startedAfter` / `startedBefore` set to the previous day's window. |
| Cost monitoring | Use the `filters` array with `totalCost` above a threshold to surface expensive traces for alerting. |
| Model comparison | Export spans for two different models using the `model` filter, then compare latency and cost offline. |
| Session replay | Filter by `sessionId` to reconstruct a full multi-turn conversation for debugging or QA. |
| Compliance audit | Export all traces for a time window and pipe them into your data warehouse for retention. |
Search traces
`POST /v2/logs/traces` is the most flexible trace export endpoint. Use it when you need the `filters` array or complex multi-condition queries:
curl -X POST "https://app.adaline.ai/api/v2/logs/traces" \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "'"$PROJECT_ID"'",
    "status": "success",
    "startedAfter": 1742860800000,
    "filters": [
      { "type": "number", "column": "latency", "operator": "gt", "value": 1000 },
      { "type": "number", "column": "totalCost", "operator": "gte", "value": 0.01 }
    ],
    "sort": "startedAt:desc",
    "limit": 100
  }'
const response = await fetch("https://app.adaline.ai/api/v2/logs/traces", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ADALINE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    projectId,
    status: "success",
    startedAfter: 1742860800000,
    filters: [
      { type: "number", column: "latency", operator: "gt", value: 1000 },
      { type: "number", column: "totalCost", operator: "gte", value: 0.01 },
    ],
    sort: "startedAt:desc",
    limit: 100,
  }),
});
const { data: traces } = await response.json();
import os

import httpx

response = httpx.post(
    "https://app.adaline.ai/api/v2/logs/traces",
    json={
        "projectId": project_id,
        "status": "success",
        "startedAfter": 1742860800000,
        "filters": [
            {"type": "number", "column": "latency", "operator": "gt", "value": 1000},
            {"type": "number", "column": "totalCost", "operator": "gte", "value": 0.01},
        ],
        "sort": "startedAt:desc",
        "limit": 100,
    },
    headers={"Authorization": f"Bearer {os.environ['ADALINE_API_KEY']}"},
)
traces = response.json()["data"]
Search spans
`POST /v2/logs/spans` exports model-level span data for a specific prompt. Only spans with a `contentType` of `Model` or `ModelStream` are returned; use this to analyze LLM performance at the individual-call level.
| Field | Required | Description |
| --- | --- | --- |
| `promptId` | Yes | Prompt to search model spans from. |
| `model` | No | Case-insensitive substring match on model name. |
| `provider` | No | Case-insensitive substring match on provider name. |
curl -X POST "https://app.adaline.ai/api/v2/logs/spans" \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "'"$PROJECT_ID"'",
    "promptId": "'"$PROMPT_ID"'",
    "model": "gpt-4.1",
    "filters": [
      { "type": "number", "column": "cost", "operator": "lt", "value": 0.05 }
    ],
    "sort": "startedAt:desc",
    "limit": 100
  }'
const response = await fetch("https://app.adaline.ai/api/v2/logs/spans", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ADALINE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    projectId,
    promptId,
    model: "gpt-4.1",
    filters: [
      { type: "number", column: "cost", operator: "lt", value: 0.05 },
    ],
    sort: "startedAt:desc",
    limit: 100,
  }),
});
const { data: spans } = await response.json();
import os

import httpx

response = httpx.post(
    "https://app.adaline.ai/api/v2/logs/spans",
    json={
        "projectId": project_id,
        "promptId": prompt_id,
        "model": "gpt-4.1",
        "filters": [
            {"type": "number", "column": "cost", "operator": "lt", "value": 0.05},
        ],
        "sort": "startedAt:desc",
        "limit": 100,
    },
    headers={"Authorization": f"Bearer {os.environ['ADALINE_API_KEY']}"},
)
spans = response.json()["data"]
The `content`, `parsedContent`, `attributes`, `tags`, and `events` fields on span objects are serialized JSON strings. Parse them in your application code to access the structured data.
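Since these fields arrive as strings, decode them with a JSON parser before use. A small sketch (the span object below is illustrative, not a real API response):

```python
import json

# Illustrative span: the nested fields are serialized JSON strings.
span = {
    "id": "span-123",
    "tags": '["production", "checkout"]',
    "attributes": '{"temperature": 0.2, "maxTokens": 512}',
}

tags = json.loads(span["tags"])              # a Python list
attributes = json.loads(span["attributes"])  # a Python dict
```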
Error responses
| Status | Meaning |
| --- | --- |
| 400 | Invalid parameters or malformed cursor. |
| 401 | Missing or invalid API key. |
| 403 | API key does not have access to the requested project. |
| 404 | Project or prompt does not exist. |
| 500 | Internal server error. |
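A sketch of turning these statuses into actionable errors before parsing the body (the exception class and hint strings are ours, not part of the API):

```python
class ExportError(Exception):
    """Raised when an export endpoint returns a documented failure status."""

# Hints taken from the status table above.
STATUS_HINTS = {
    400: "Invalid parameters or malformed cursor",
    401: "Missing or invalid API key",
    403: "API key does not have access to the requested project",
    404: "Project or prompt does not exist",
    500: "Internal server error",
}

def check_export_status(status_code: int) -> None:
    """Raise ExportError with a short hint for any non-200 status."""
    if status_code == 200:
        return
    hint = STATUS_HINTS.get(status_code, "Unexpected status")
    raise ExportError(f"HTTP {status_code}: {hint}")
```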
Next steps
- Search Traces API: Full OpenAPI reference for the trace search endpoint.
- Search Spans API: Full OpenAPI reference for the span search endpoint.
- Filter and Search Logs: Use the Monitor UI to filter logs visually before exporting.
- Analyze Log Charts: Spot trends with aggregated analytics, then export the underlying data.