Once your application is instrumented and logs are flowing into the Monitor pillar, you can export that data programmatically. Use the export endpoints to feed traces and spans into data warehouses, build custom dashboards, power alerting pipelines, or run offline analysis that goes beyond what the Monitor UI provides. Adaline exposes three REST endpoints for reading log data. All three use cursor-based pagination and support the same filtering and response conventions used across the Adaline API.

Available endpoints

| Endpoint | Method | When to use |
| --- | --- | --- |
| List traces | GET /v2/logs | Simple queries with query-string filters. |
| Search traces | POST /v2/logs/traces | Complex queries with body filters, range comparisons, and multi-condition filtering. |
| Search spans | POST /v2/logs/spans | Export model-level span data for a specific prompt: cost, tokens, latency per LLM call. |

Quick start

Fetch the most recent traces from a project:
```shell
curl "https://app.adaline.ai/api/v2/logs?projectId=$PROJECT_ID&limit=50" \
  -H "Authorization: Bearer $ADALINE_API_KEY"
```
The response envelope is the same for all three endpoints:
```json
{
  "data": [ ... ],
  "pagination": {
    "limit": 50,
    "returned": 50,
    "hasMore": true,
    "nextCursor": "eyJ2IjoxLCJzdGFydGVkQXQiOi..."
  }
}
```
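As a minimal sketch, the envelope can be unpacked with jq before entering a pagination loop. The response below is illustrative sample data, not output from a live request:

```shell
# Illustrative response envelope (shape from the docs; values are made up)
RESPONSE='{"data":[{"traceId":"t1"},{"traceId":"t2"}],"pagination":{"limit":50,"returned":2,"hasMore":false,"nextCursor":null}}'

# Pull out the pagination state and the number of items returned
HAS_MORE=$(echo "$RESPONSE" | jq -r '.pagination.hasMore')
COUNT=$(echo "$RESPONSE" | jq '.data | length')
echo "hasMore=$HAS_MORE returned=$COUNT"
```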

Pagination

All export endpoints use cursor-based pagination. Pass the nextCursor value from the previous response as cursor in the next request. When hasMore is false, you have reached the end.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| limit | integer | 50 | Items per page (max 200). |
| cursor | string | (none) | Opaque cursor from a previous response. Omit for the first page. |
The cursor is opaque — do not parse or construct it. Always use the value returned by the API.

Pagination loop

To export all matching data, loop until hasMore is false:
```shell
CURSOR=""
while true; do
  BODY='{"projectId":"'"$PROJECT_ID"'","limit":200'
  if [ -n "$CURSOR" ]; then
    BODY="$BODY"',"cursor":"'"$CURSOR"'"'
  fi
  BODY="$BODY}"

  RESPONSE=$(curl -s -X POST "https://app.adaline.ai/api/v2/logs/traces" \
    -H "Authorization: Bearer $ADALINE_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$BODY")

  echo "$RESPONSE" | jq '.data[]'

  HAS_MORE=$(echo "$RESPONSE" | jq -r '.pagination.hasMore')
  CURSOR=$(echo "$RESPONSE" | jq -r '.pagination.nextCursor')

  if [ "$HAS_MORE" != "true" ]; then break; fi
done
```

Filtering

All endpoints support two complementary filtering mechanisms that are ANDed together: flat params for simple queries and a filters array for advanced comparisons.

Flat params

Top-level fields for common filters. Use these for straightforward queries:
| Field | Type | Description |
| --- | --- | --- |
| projectId | string | Required. Project to query. |
| status | string | One of success, failure, aborted, cancelled, pending, unknown. |
| startedAfter | number | Unix ms. Traces started at or after this time. |
| startedBefore | number | Unix ms. Traces started at or before this time. |
| name | string | Case-insensitive substring match on trace or span name. |
| referenceId | string | Exact match. |
| sessionId | string | Exact match. |
The span endpoint (POST /v2/logs/spans) additionally supports model and provider for case-insensitive substring matching.

Filters array

For range queries, numeric comparisons, and multi-condition filtering on the same column, use the filters array:
```json
{
  "projectId": "project-abc12345678901",
  "filters": [
    { "type": "number", "column": "latency", "operator": "gt", "value": 1000 },
    { "type": "number", "column": "totalCost", "operator": "lt", "value": 0.05 },
    { "type": "arrayContains", "column": "tags", "operator": "contains", "value": "production" }
  ]
}
```
Each filter object is a discriminated union on type:
| Type | Operators | Value type | Example columns |
| --- | --- | --- | --- |
| string | eq, contains | string | name, referenceId, sessionId, model, provider |
| number | eq, gt, gte, lt, lte | number | latency, totalCost, cost, totalInputTokens, totalOutputTokens |
| datetime | gt, gte, lt, lte | number (Unix ms) | startedAt, endedAt |
| arrayContains | contains | string | tags |
Both flat params and the filters array can be used together in the same request. Unsupported columns are silently ignored.
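Rather than concatenating JSON by hand, a filters body can be assembled with jq, which handles quoting and escaping. The project ID and thresholds below are illustrative assumptions:

```shell
# Build a search body with jq instead of string concatenation
# (the ID and threshold values here are illustrative)
PROJECT_ID="project-abc12345678901"
BODY=$(jq -n \
  --arg projectId "$PROJECT_ID" \
  --argjson minLatency 1000 \
  --argjson maxCost 0.05 \
  '{
    projectId: $projectId,
    filters: [
      { type: "number", column: "latency",   operator: "gt", value: $minLatency },
      { type: "number", column: "totalCost", operator: "lt", value: $maxCost }
    ],
    limit: 200
  }')
echo "$BODY"
```

The resulting `$BODY` can be passed directly to `curl -d "$BODY"` against POST /v2/logs/traces.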

Available columns

Trace endpoints (GET /v2/logs, POST /v2/logs/traces): startedAt, endedAt, status, name, referenceId, sessionId, latency, tags, totalCost, totalInputTokens, totalOutputTokens, traceId

Span endpoint (POST /v2/logs/spans): all trace columns plus model, provider, cost, promptTokens, completionTokens, totalTokens, logTraceId

Common export patterns

| Goal | Approach |
| --- | --- |
| Nightly data sync | Paginate through all traces with startedAfter / startedBefore set to the previous day's window. |
| Cost monitoring | Use the filters array with totalCost > threshold to surface expensive traces for alerting. |
| Model comparison | Export spans for two different models using the model filter, then compare latency and cost offline. |
| Session replay | Filter by sessionId to reconstruct a full multi-turn conversation for debugging or QA. |
| Compliance audit | Export all traces for a time window and pipe them into your data warehouse for retention. |
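For the nightly-sync pattern, the previous day's window can be computed as Unix milliseconds before the export run. This sketch assumes GNU date (on macOS, coreutils' gdate or the BSD `date -v` flags are the usual substitutes):

```shell
# Yesterday's UTC day boundaries in Unix milliseconds (GNU date syntax)
START_MS=$(( $(date -u -d "yesterday 00:00:00" +%s) * 1000 ))
END_MS=$((   $(date -u -d "today 00:00:00"     +%s) * 1000 ))
echo "startedAfter=$START_MS startedBefore=$END_MS"
```

Pass the two values as startedAfter and startedBefore in the query string or request body.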

Search traces

POST /v2/logs/traces is the most flexible trace export endpoint. Use it when you need the filters array or complex multi-condition queries:
```shell
curl -X POST "https://app.adaline.ai/api/v2/logs/traces" \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "'"$PROJECT_ID"'",
    "status": "success",
    "startedAfter": 1742860800000,
    "filters": [
      { "type": "number", "column": "latency", "operator": "gt", "value": 1000 },
      { "type": "number", "column": "totalCost", "operator": "gte", "value": 0.01 }
    ],
    "sort": "startedAt:desc",
    "limit": 100
  }'
```

Search spans

POST /v2/logs/spans exports model-level span data for a specific prompt. Only spans with contentType of Model or ModelStream are returned — use this to analyze LLM performance at the individual-call level.
| Field | Required | Description |
| --- | --- | --- |
| promptId | Yes | Prompt to search model spans from. |
| model | No | Case-insensitive substring match on model name. |
| provider | No | Case-insensitive substring match on provider name. |
```shell
curl -X POST "https://app.adaline.ai/api/v2/logs/spans" \
  -H "Authorization: Bearer $ADALINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "'"$PROJECT_ID"'",
    "promptId": "'"$PROMPT_ID"'",
    "model": "gpt-4.1",
    "filters": [
      { "type": "number", "column": "cost", "operator": "lt", "value": 0.05 }
    ],
    "sort": "startedAt:desc",
    "limit": 100
  }'
```
The content, parsedContent, attributes, tags, and events fields on span objects are serialized JSON strings. Parse them in your application code to access the structured data.
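When working in shell, jq's fromjson filter can parse these embedded strings in one pass. The span object below is an illustrative sample, not real export output:

```shell
# A span with a serialized-JSON `attributes` field (values are made up)
SPAN='{"model":"gpt-4.1","attributes":"{\"temperature\":0.2,\"maxTokens\":512}"}'

# fromjson parses the embedded JSON string into structured data
TEMPERATURE=$(echo "$SPAN" | jq -r '.attributes | fromjson | .temperature')
echo "$TEMPERATURE"
```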

Error responses

| Status | Meaning |
| --- | --- |
| 400 | Invalid parameters or malformed cursor. |
| 401 | Missing or invalid API key. |
| 403 | API key does not have access to the requested project. |
| 404 | Project or prompt does not exist. |
| 500 | Internal server error. |
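An export script can branch on these statuses by capturing the HTTP code alongside the body, e.g. with `curl -s -o response.json -w '%{http_code}'`. The helper below is a hypothetical sketch, and the retry suggestion for 5xx is general practice rather than Adaline-specific guidance:

```shell
# Map an HTTP status from an export request to a next action
# (hypothetical helper; wire the status in from curl -w '%{http_code}')
handle_status() {
  case "$1" in
    200)     echo "ok" ;;
    400)     echo "fix parameters or drop the stale cursor" ;;
    401|403) echo "check API key and project access" ;;
    404)     echo "verify project or prompt ID" ;;
    5*)      echo "server error, retry with backoff" ;;
    *)       echo "unexpected status $1" ;;
  esac
}
```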

Next steps

- Search Traces API: full OpenAPI reference for the trace search endpoint.
- Search Spans API: full OpenAPI reference for the span search endpoint.
- Filter and Search Logs: use the Monitor UI to filter logs visually before exporting.
- Analyze Log Charts: spot trends with aggregated analytics, then export the underlying data.