LogSpanModelStreamContent
Content type for streaming LLM inference spans.

Overview
LogSpanModelStreamContent captures streaming model calls where individual chunks are collected alongside an aggregated final output. It is wrapped in a LogSpanContent union via the actual_instance pattern.
Fields
- Must be "ModelStream".
- The provider name (e.g., "openai", "anthropic"). 1–512 characters.
- The model identifier (e.g., "gpt-4o", "claude-sonnet-4-20250514"). 1–512 characters.
- The input payload as a JSON string. Must be valid, parseable JSON (the result of json.dumps()).
- The raw streamed output. Typically the concatenated chunk payloads collected during streaming.
- The aggregated final output as a JSON string. Must be valid, parseable JSON.
- Variable associated with this span for evaluation tracking. See LogSpanVariable.
- Cost of inference in USD. Overrides the automatic cost calculated by Adaline. Minimum: 0.
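The input and aggregated-output fields must be JSON strings that parse cleanly. A minimal sketch of preparing such payloads with Python's standard json module (the payload shapes here are illustrative, not a documented schema):

```python
import json

# Illustrative request/response shapes -- the actual payload schema is
# whatever you sent to and received from the provider.
input_payload = json.dumps({
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
})
output_payload = json.dumps({
    "role": "assistant",
    "content": "Hi there!",
})

# Both strings round-trip through json.loads, so they satisfy the
# "valid, parseable JSON" requirement.
assert json.loads(input_payload)["temperature"] == 0.7
assert json.loads(output_payload)["role"] == "assistant"
```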
Construction Pattern
All span content is wrapped in LogSpanContent using the actual_instance parameter:
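A hedged sketch of that wrapper pattern follows. This chunk does not show the SDK's actual field names, so the stand-in classes below only mirror the structure described above; consult the SDK's generated models for the real signatures.

```python
import json
from dataclasses import dataclass
from typing import Any

# Stand-in definitions mirroring the described structure; the real
# classes come from the Adaline SDK. All field names here are assumed.
@dataclass
class LogSpanModelStreamContent:
    type: str        # must be "ModelStream"
    provider: str    # e.g. "openai"; 1-512 characters
    model: str       # e.g. "gpt-4o"; 1-512 characters
    input: str       # input payload as a valid JSON string
    raw_output: str  # concatenated streamed chunks
    output: str      # aggregated final output as a valid JSON string

@dataclass
class LogSpanContent:
    actual_instance: Any  # the union wrapper holds the concrete content

content = LogSpanContent(
    actual_instance=LogSpanModelStreamContent(
        type="ModelStream",
        provider="openai",
        model="gpt-4o",
        input=json.dumps({"messages": [{"role": "user", "content": "Hi"}]}),
        raw_output="Hello",
        output=json.dumps({"role": "assistant", "content": "Hello"}),
    )
)
assert content.actual_instance.type == "ModelStream"
```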