LogSpanModelStreamContent

Span content for streaming LLM inference calls. Unlike LogSpanModelContent, this type captures both the raw stream chunks and an aggregated final output, giving you full visibility into the streaming lifecycle.

Import

import type { LogSpanModelStreamContent } from '@adaline/api';
import { LogSpanModelStreamContentTypeEnum } from '@adaline/api';

Type Definition

interface LogSpanModelStreamContent {
  type: 'ModelStream';
  provider: string;                // 1-512 chars
  model: string;                   // 1-512 chars
  input: string;                   // JSON string (must be valid JSON)
  output: string;                  // Raw stream chunks concatenated
  aggregateOutput: string;         // JSON string (must be valid JSON)
  variables?: LogSpanVariable | null;
  cost?: number | null;            // USD, minimum: 0
}

Properties

  • type - Discriminator field, always 'ModelStream' for this content type
  • provider - Provider name (e.g. 'openai', 'anthropic', 'google')
  • model - Model identifier (e.g. 'gpt-4o', 'claude-sonnet-4-20250514')
  • input - The request payload as a JSON string (JSON.stringify() of the request object)
  • output - Raw stream chunks concatenated into a single string (does not need to be valid JSON)
  • aggregateOutput - The final assembled response as a JSON string (JSON.stringify() of the complete response)
  • variables - Variables attached for evaluation tracking (LogSpanVariable)
  • cost - Inference cost in USD (minimum 0)
All fields except variables and cost are required. This differs from LogSpanModelContent where every field is optional.
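The required/optional split can be expressed directly in the type. A minimal sketch using a local interface that mirrors the definition above (in application code, import the real type from '@adaline/api' instead), showing the smallest valid payload plus a runtime check that input and aggregateOutput hold valid JSON:

```typescript
// Local mirror of LogSpanModelStreamContent, for illustration only.
interface LogSpanModelStreamContent {
  type: 'ModelStream';
  provider: string;          // 1-512 chars
  model: string;             // 1-512 chars
  input: string;             // must be valid JSON
  output: string;            // raw chunks, any string
  aggregateOutput: string;   // must be valid JSON
  variables?: unknown | null;
  cost?: number | null;      // USD, minimum 0
}

// Smallest valid payload: only the six required fields.
const minimal: LogSpanModelStreamContent = {
  type: 'ModelStream',
  provider: 'openai',
  model: 'gpt-4o',
  input: JSON.stringify({ messages: [] }),
  output: '',                // raw chunks need not be JSON
  aggregateOutput: JSON.stringify({ role: 'assistant', content: '' }),
};

// input and aggregateOutput must parse as JSON; output is exempt.
function isValidJson(s: string): boolean {
  try { JSON.parse(s); return true; } catch { return false; }
}
```

Omitting any of the six required fields is a compile-time error with the mirrored interface, matching the API's validation rules.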

Example

import OpenAI from 'openai';

const openai = new OpenAI();

const params = {
  model: 'gpt-4o',
  messages: [
    { role: 'system' as const, content: 'You are a helpful assistant.' },
    { role: 'user' as const, content: 'Write a haiku about observability.' },
  ],
  stream: true as const, // literal type, so TypeScript selects the streaming overload
};

const stream = await openai.chat.completions.create(params);

let chunks = '';
let fullContent = '';
for await (const chunk of stream) {
  const raw = JSON.stringify(chunk);
  chunks += raw + '\n';
  fullContent += chunk.choices[0]?.delta?.content ?? '';
}

// `span` is an existing Span instance (see the Span reference)
span.update({
  content: {
    type: 'ModelStream',
    provider: 'openai',
    model: 'gpt-4o',
    input: JSON.stringify(params),
    output: chunks,
    aggregateOutput: JSON.stringify({ role: 'assistant', content: fullContent }),
    cost: 0.0018,
  },
});
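Because the example joins the raw chunks with '\n', the output field is effectively newline-delimited JSON, so the streamed text can be reconstructed from it later. A hypothetical helper, assuming the OpenAI chunk shape used in the example above:

```typescript
// Rebuild the assistant text from an `output` string produced as in the
// example: one JSON.stringify'd chunk per line.
// Assumes OpenAI-style chunks: { choices: [{ delta: { content?: string } }] }.
function reassembleContent(output: string): string {
  let text = '';
  for (const line of output.split('\n')) {
    if (!line.trim()) continue; // skip the trailing blank line
    const chunk = JSON.parse(line);
    text += chunk.choices?.[0]?.delta?.content ?? '';
  }
  return text;
}
```

This mirrors the aggregation loop in the example, so reassembleContent(content.output) should match the content stored in aggregateOutput.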

Related

  • LogSpanContent — union type that includes LogSpanModelStreamContent
  • LogSpanModelContent — non-streaming variant for single LLM calls
  • LogSpanVariable — variable type used in the variables field
  • Span — class that accepts LogSpanContent via span.update()