LogSpanModelStreamContent

Content type for streaming LLM inference spans.

Overview

LogSpanModelStreamContent captures streaming model calls where individual chunks are collected alongside an aggregated final output. It is wrapped in a LogSpanContent union via the actual_instance pattern.
from adaline_api.models.log_span_model_stream_content import LogSpanModelStreamContent

Fields

type
str
required
Must be "ModelStream".
provider
str
required
The provider name (e.g., "openai", "anthropic"). 1–512 characters.
model
str
required
The model identifier (e.g., "gpt-4o", "claude-sonnet-4-20250514"). 1–512 characters.
input
str
required
The input payload as a JSON string. Must be valid, parseable JSON (the result of json.dumps()).
output
str
required
The raw streamed output. Typically the concatenated chunk payloads collected during streaming.
aggregate_output
str
required
The aggregated final output as a JSON string. Must be valid, parseable JSON.
variables
LogSpanVariable | None
Variable associated with this span for evaluation tracking. See LogSpanVariable.
cost
float | None
Cost of inference in USD. Overrides the automatic cost calculated by Adaline. Minimum: 0.
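To illustrate the distinction between output and aggregate_output, here is a minimal, self-contained sketch (the chunk values are made up for illustration): output is the raw concatenation of streamed chunks, while aggregate_output must be a JSON string that parses cleanly.

```python
import json

# Hypothetical text chunks received from a streaming response.
chunks = ["Silent ", "winter ", "morning"]

# `output`: the raw streamed output, typically the concatenated chunk payloads.
output = "".join(chunks)

# `aggregate_output`: the aggregated final output as a JSON string.
aggregate_output = json.dumps({"role": "assistant", "content": output})

# `aggregate_output` must round-trip through json.loads(); `output` need not.
parsed = json.loads(aggregate_output)
```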

Construction Pattern

All span content is wrapped in LogSpanContent using the actual_instance parameter:
import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_stream_content import LogSpanModelStreamContent

content = LogSpanContent(
    actual_instance=LogSpanModelStreamContent(
        type="ModelStream",
        provider="anthropic",
        model="claude-sonnet-4-20250514",
        input=json.dumps(params),
        output=raw_chunks,
        aggregate_output=json.dumps(aggregated),
    )
)

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_stream_content import LogSpanModelStreamContent

params = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Write a haiku."}],
    "stream": True,
}

span.update({
    "status": "success",
    "content": LogSpanContent(
        actual_instance=LogSpanModelStreamContent(
            type="ModelStream",
            provider="openai",
            model="gpt-4o",
            input=json.dumps(params),
            output=collected_chunks,
            aggregate_output=json.dumps({"role": "assistant", "content": full_text}),
            cost=0.003,
        )
    ),
})
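The example above assumes collected_chunks and full_text were accumulated while iterating the provider's stream. A hedged sketch of that collection step, using a stand-in generator in place of a real provider stream (a real OpenAI call would use stream=True and read text deltas from each event):

```python
import json

def fake_stream():
    # Stand-in for a provider streaming response; real deltas would come
    # from iterating e.g. an OpenAI chat completion created with stream=True.
    yield from ["Moon", "lit ", "pond"]

chunks = []
for delta in fake_stream():
    chunks.append(delta)

collected_chunks = "".join(chunks)  # raw streamed text, used for `output`
full_text = collected_chunks        # aggregated final text
aggregate_output = json.dumps({"role": "assistant", "content": full_text})
```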