Overview

Logging types enable comprehensive tracking of AI applications: traces capture high-level request flows, while span content records individual operations such as model calls, tool executions, and retrievals.

Span Content Types

LogSpanContent (Union)

A discriminated union over the type field, covering every span category.
type LogSpanContent =
  | LogSpanModelContent
  | LogSpanModelStreamContent
  | LogSpanEmbeddingsContent
  | LogSpanFunctionContent
  | LogSpanToolContent
  | LogSpanGuardrailContent
  | LogSpanRetrievalContent
  | LogSpanOtherContent;
Type Narrowing:
function handleSpanContent(content: LogSpanContent) {
  switch (content.type) {
    case 'Model':
      console.log(`LLM: ${content.provider}/${content.model}, Cost: $${content.cost}`);
      break;
    case 'ModelStream':
      console.log(`Streaming: ${content.provider}`);
      break;
    case 'Embeddings':
      console.log('Embedding generation');
      break;
    case 'Tool':
      console.log('Tool execution');
      break;
    case 'Retrieval':
      console.log('Vector search');
      break;
    case 'Function':
      console.log('Custom function');
      break;
    case 'Guardrail':
      console.log('Safety check');
      break;
    case 'Other':
      console.log('Other operation');
      break;
  }
}

LogSpanModelContent

See the dedicated LogSpanModelContent page for full documentation. LLM inference span content.
interface LogSpanModelContent {
  type?: 'Model';
  provider?: string;               // 1-512 chars
  model?: string;                  // 1-512 chars
  input?: LogSpanContentJson;
  output?: LogSpanContentJson;
  variables?: LogSpanVariable | null;
  cost?: number | null;            // minimum: 0
}
Both input and output are LogSpanContentJson — a string that must be valid, parseable JSON (i.e. the result of JSON.stringify()). Passing a plain string that isn’t valid JSON will cause the span to be rejected. For Model spans specifically, you get the most out of Adaline when you pass the exact request payload you send to your provider as input, and the full provider response object as output. When you do this with a supported provider (OpenAI, Anthropic, Google, etc.), Adaline automatically:
  • Calculates cost from token counts and the model’s pricing
  • Extracts token usage (prompt, completion, and total tokens)
  • Surfaces model metadata such as stop reason, tool calls, and function invocations
  • Powers continuous evaluations with structured input/output pairs
The recommended pattern is to build your request params as an object, pass that object to the provider SDK, and stringify the same object as input. For output, stringify the full response returned by the SDK. You can also set input and output to use Adaline’s own content schema, although this is more advanced and requires maintaining custom transformations to convert provider payloads into the Adaline format.
For a deeper walkthrough of this pattern and how it applies across providers, see Span content: input and output.

Examples

import OpenAI from 'openai';

const openai = new OpenAI();

const params = {
  model: 'gpt-4o',
  messages: [
    { role: 'system' as const, content: 'You are a helpful assistant.' },
    { role: 'user' as const, content: 'Explain quantum computing simply.' },
  ],
  temperature: 0.7,
};

const response = await openai.chat.completions.create(params);

span.update({
  content: {
    type: 'Model',
    provider: 'openai',
    model: 'gpt-4o',
    input: JSON.stringify(params),
    output: JSON.stringify(response),
    cost: 0.002,
  },
});
Avoid cherry-picking or reshaping the request/response before stringifying. Pass the raw objects — Adaline’s automatic parsing depends on seeing the provider’s native schema.
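The pattern above (build the params object, send it to the SDK, stringify the same object as input and the full response as output) can be bundled into a small helper. This is a hypothetical convenience function, not an SDK export; it only packages the two JSON.stringify calls:

```typescript
// Hypothetical helper (not part of @adaline/api): builds a Model span
// content payload from the raw provider request and response objects,
// passing both through untouched so automatic parsing keeps working.
function buildModelContent(
  provider: string,
  model: string,
  request: unknown,
  response: unknown,
) {
  return {
    type: 'Model' as const,
    provider,
    model,
    input: JSON.stringify(request),   // raw request params, unreshaped
    output: JSON.stringify(response), // full provider response, unreshaped
  };
}

// Usage:
// span.update({ content: buildModelContent('openai', 'gpt-4o', params, response) });
```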

LogSpanModelStreamContent

See the dedicated LogSpanModelStreamContent page for full documentation. Streaming LLM inference content.
interface LogSpanModelStreamContent {
  type: 'ModelStream';
  provider: string;
  model: string;
  input: LogSpanContentJson;
  output: string;                  // Raw stream chunks
  aggregateOutput: LogSpanContentJson;
  variables?: LogSpanVariable | null;
  cost?: number | null;
}
Example:
let chunks = '';
for await (const chunk of stream) {
  chunks += chunk; // raw stream chunks, concatenated as received
}
// For plain-text streams the concatenation is also the aggregate text;
// object chunks would need their text deltas extracted instead.
const fullResponse = chunks;

span.update({
  content: {
    type: 'ModelStream',
    provider: 'anthropic',
    model: 'claude-3-opus',
    input: JSON.stringify(messages),
    output: chunks,
    aggregateOutput: JSON.stringify({ role: 'assistant', content: fullResponse }),
    cost: 0.005
  }
});

LogSpanEmbeddingsContent

See the dedicated LogSpanEmbeddingsContent page for full documentation. Embedding generation span content.
interface LogSpanEmbeddingsContent {
  type: 'Embeddings';
  input: LogSpanContentJson;
  output: LogSpanContentJson;
}
Example:
span.update({
  content: {
    type: 'Embeddings',
    input: JSON.stringify({ texts: ['query'], model: 'text-embedding-3-large' }),
    output: JSON.stringify({ embeddings: [[0.1, 0.2, ...]], dimensions: 3072 })
  }
});

LogSpanFunctionContent

See the dedicated LogSpanFunctionContent page for full documentation. Custom function execution span content.
interface LogSpanFunctionContent {
  type: 'Function';
  input: LogSpanContentJson;
  output: LogSpanContentJson;
}
Example:
span.update({
  content: {
    type: 'Function',
    input: JSON.stringify({ operation: 'process', id: 123 }),
    output: JSON.stringify({ result: 'success', items: 42 })
  }
});

LogSpanToolContent

See the dedicated LogSpanToolContent page for full documentation. Tool execution span content.
interface LogSpanToolContent {
  type: 'Tool';
  input: LogSpanContentJson;
  output: LogSpanContentJson;
}
Example:
span.update({
  content: {
    type: 'Tool',
    input: JSON.stringify({ function: 'get_weather', city: 'Paris' }),
    output: JSON.stringify({ temp: 24, conditions: 'sunny' })
  }
});

LogSpanGuardrailContent

See the dedicated LogSpanGuardrailContent page for full documentation. Safety/compliance check span content.
interface LogSpanGuardrailContent {
  type: 'Guardrail';
  input: LogSpanContentJson;
  output: LogSpanContentJson;
}
Example:
span.update({
  content: {
    type: 'Guardrail',
    input: JSON.stringify({ text: 'User input', checks: ['toxicity', 'pii'] }),
    output: JSON.stringify({ safe: true, scores: { toxicity: 0.05 } })
  }
});

LogSpanRetrievalContent

See the dedicated LogSpanRetrievalContent page for full documentation. RAG/retrieval span content.
interface LogSpanRetrievalContent {
  type: 'Retrieval';
  input: LogSpanContentJson;
  output: LogSpanContentJson;
}
Example:
span.update({
  content: {
    type: 'Retrieval',
    input: JSON.stringify({ query: 'What is AI?', topK: 5 }),
    output: JSON.stringify({ documents: [{id: 'doc1', score: 0.95}] })
  }
});

LogSpanOtherContent

See the dedicated LogSpanOtherContent page for full documentation. Custom span content.
interface LogSpanOtherContent {
  type: 'Other';
  input: LogSpanContentJson;
  output: LogSpanContentJson;
}
Example:
span.update({
  content: {
    type: 'Other',
    input: JSON.stringify({ custom: 'data' }),
    output: JSON.stringify({ result: 'output' })
  }
});

Supporting Types

LogSpanContentJson

A string alias whose value must be valid, parseable JSON (typically the output of JSON.stringify()).
type LogSpanContentJson = string;
Example:
const input: LogSpanContentJson = JSON.stringify({ query: 'test' });
const output: LogSpanContentJson = JSON.stringify({ result: 'success' });
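Since a string that is not valid JSON causes the span to be rejected, it can help to guard serialization in one place. A minimal sketch (toSpanJson is a hypothetical helper, not an SDK export), relying on the fact that JSON.stringify returns undefined for values it cannot serialize:

```typescript
// Hypothetical guard (not an SDK export): serialize a value and fail
// loudly if it is not JSON-serializable, instead of sending span
// content that would be rejected.
function toSpanJson(value: unknown): string {
  const json = JSON.stringify(value);
  // JSON.stringify returns undefined for undefined, functions, and symbols
  if (json === undefined) {
    throw new TypeError('Span content must be JSON-serializable');
  }
  return json;
}
```

Then pass toSpanJson(params) / toSpanJson(response) wherever a LogSpanContentJson field is expected.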

LogSpanVariable

See the dedicated LogSpanVariable page for full documentation. Variable attached to a Model or ModelStream span for evaluation tracking.
import type { LogSpanVariable } from '@adaline/api';

interface LogSpanVariable {
  name: string;                  // 1-200 chars
  value: LogSpanVariableValue;   // content union (TextContent, ImageContent, etc.)
}
Example:
const variable: LogSpanVariable = {
  name: 'user_question',
  value: { modality: 'text', value: 'What is quantum computing?' }
};
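Per the LogSpanModelContent interface above, a variable is attached through the variables field of Model or ModelStream content. A sketch with placeholder request/response values:

```typescript
// Model span content with an attached variable for evaluation tracking.
// The input/output payloads here are placeholders, not a real provider
// request/response.
const content = {
  type: 'Model' as const,
  provider: 'openai',
  model: 'gpt-4o',
  input: JSON.stringify({
    messages: [{ role: 'user', content: 'What is quantum computing?' }],
  }),
  output: JSON.stringify({ id: 'resp-1' }), // placeholder response
  variables: {
    name: 'user_question',
    value: { modality: 'text' as const, value: 'What is quantum computing?' },
  },
};

// span.update({ content });
```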

LogSpanVariableValue

Content value for a span variable. A discriminated union on modality:
import type { LogSpanVariableValue } from '@adaline/api';

type LogSpanVariableValue =
  | TextContent           // modality: 'text'
  | ImageContent          // modality: 'image'
  | PdfContent            // modality: 'pdf'
  | ReasoningContent      // modality: 'reasoning'
  | ToolCallContent       // modality: 'tool-call'
  | ToolResponseContent;  // modality: 'tool-response'
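Like LogSpanContent, this union narrows on its discriminant. The sketch below uses a minimal local stand-in for the union: only the text shape is documented above, so the other members are reduced to their modality tag (their real fields live in @adaline/api):

```typescript
// Minimal local sketch of the union's discriminant. Only TextContent's
// shape is shown in this document; the other members are reduced to
// their `modality` tag here and are NOT the real @adaline/api types.
type SketchVariableValue =
  | { modality: 'text'; value: string }
  | { modality: 'image' }
  | { modality: 'pdf' }
  | { modality: 'reasoning' }
  | { modality: 'tool-call' }
  | { modality: 'tool-response' };

function previewVariable(v: SketchVariableValue): string {
  // Narrow on the discriminant: only text exposes a plain string value.
  return v.modality === 'text' ? v.value : `[${v.modality} content]`;
}
```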

LogAttributesValue

See the dedicated LogAttributesValue page for full documentation. The allowed value types for trace/span attributes.
import type { LogAttributesValue } from '@adaline/api';

type LogAttributesValue = string | number | boolean;
Used in Record<string, LogAttributesValue> for the attributes parameter on traces and spans. Example:
const attributes: Record<string, LogAttributesValue> = {
  userId: 'user-123',
  latency: 1234,
  cached: false,
  region: 'us-east-1'
};

trace.update({ attributes });

TraceStatus

See the dedicated TraceStatus page for full documentation. Allowed status values for a trace.
import type { TraceStatus } from '@adaline/client';

type TraceStatus = 'success' | 'failure' | 'aborted' | 'cancelled' | 'pending' | 'unknown';

SpanStatus

See the dedicated SpanStatus page for full documentation. Allowed status values for a span.
import type { SpanStatus } from '@adaline/client';

type SpanStatus = 'success' | 'failure' | 'aborted' | 'cancelled' | 'unknown';
Span status does not include 'pending' — that value is only available for traces.
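Because the two unions differ only in 'pending', code that propagates a trace status down to its spans needs to handle that one value. A hypothetical conversion (the local type aliases mirror the definitions above for self-containment):

```typescript
// Local copies of the unions defined above, for a self-contained sketch.
type TraceStatus = 'success' | 'failure' | 'aborted' | 'cancelled' | 'pending' | 'unknown';
type SpanStatus = 'success' | 'failure' | 'aborted' | 'cancelled' | 'unknown';

// Hypothetical helper (not an SDK export): map a trace status to a
// valid span status, downgrading 'pending' since spans cannot be pending.
function toSpanStatus(status: TraceStatus): SpanStatus {
  return status === 'pending' ? 'unknown' : status;
}
```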

Complete Example

import { Adaline } from '@adaline/client';
import type { LogSpanContent, LogAttributes, LogTags } from '@adaline/api';
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';

const adaline = new Adaline();
const gateway = new Gateway();
const openaiProvider = new OpenAI();
const monitor = adaline.initMonitor({ projectId: 'my-project' });

async function trackedLLMCall(userMessage: string) {
  const trace = monitor.logTrace({
    name: 'Chat Request',
    tags: ['chat', 'production'],
    attributes: { messageLength: userMessage.length }
  });

  const span = trace.logSpan({
    name: 'OpenAI Call',
    tags: ['llm', 'openai', 'gpt-4o']
  });

  try {
    const model = openaiProvider.chatModel({
      modelName: 'gpt-4o',
      apiKey: process.env.OPENAI_API_KEY!
    });

    const gatewayResponse = await gateway.completeChat({
      model,
      messages: [
        { role: 'user', content: [{ modality: 'text', value: userMessage }] }
      ]
    });

    const content: LogSpanContent = {
      type: 'Model',
      provider: 'openai',
      model: 'gpt-4o',
      input: JSON.stringify(gatewayResponse.provider.request),
      output: JSON.stringify(gatewayResponse.provider.response)
    };

    span.update({ status: 'success', content });
    trace.update({ status: 'success' });

    return gatewayResponse.response.messages[0].content[0].value;

  } catch (error) {
    span.update({ status: 'failure' });
    trace.update({ status: 'failure' });
    throw error;

  } finally {
    span.end();
    trace.end();
  }
}