LogSpanModelContent
Content type for standard LLM inference spans. All fields are optional.
Overview
LogSpanModelContent captures the input, output, and metadata of an LLM model call. It is wrapped in a LogSpanContent union via the actual_instance pattern. For the best observability experience, pass the raw provider request as input and the full provider response as output — Adaline will automatically extract cost, token usage, and model metadata.
from adaline_api.models.log_span_model_content import LogSpanModelContent
Fields
- type: Must be "Model" when provided.
- provider: The provider name (e.g., "openai", "anthropic"). 1–512 characters.
- model: The model identifier (e.g., "gpt-4o", "claude-sonnet-4-20250514"). 1–512 characters.
- input: The input payload as a JSON string. Must be valid, parseable JSON (the result of json.dumps()).
- output: The output payload as a JSON string. Must be valid, parseable JSON (the result of json.dumps()).
- variables: Variable associated with this span for evaluation tracking. See LogSpanVariable.
- cost: Cost of inference in USD. Overrides the automatic cost calculated by Adaline. Minimum: 0.
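Because input and output must be parseable JSON strings, a quick guard before constructing the span can catch serialization mistakes early. A minimal stdlib-only sketch; the ensure_json_string helper is our own illustration, not part of the SDK:

```python
import json


def ensure_json_string(payload) -> str:
    """Serialize `payload` to a JSON string, or pass a string through
    after verifying it parses. Raises ValueError on non-JSON input."""
    if isinstance(payload, str):
        json.loads(payload)  # raises json.JSONDecodeError (a ValueError) if invalid
        return payload
    return json.dumps(payload)


# A dict is serialized; an already-valid JSON string passes through unchanged.
print(ensure_json_string({"model": "gpt-4o", "messages": []}))
print(ensure_json_string('{"choices": []}'))
```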
Construction Pattern
All span content is wrapped in LogSpanContent using the actual_instance parameter:
import json

from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

# `params` is the provider request dict; `response` is the provider response object
content = LogSpanContent(
    actual_instance=LogSpanModelContent(
        type="Model",
        provider="openai",
        model="gpt-4o",
        input=json.dumps(params),
        output=json.dumps(response.model_dump()),
    )
)
Both input and output must be valid JSON strings. For Model spans, you get the most out of Adaline when you pass the exact request payload as input and the full provider response as output. This enables Adaline to automatically:
- Calculate cost from token counts and model pricing
- Extract token usage (prompt, completion, and total tokens)
- Surface metadata such as stop reason and tool calls
- Power continuous evaluations with structured I/O
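To see why the raw response matters, consider the shape of an OpenAI-style chat completion. The object below is a hand-written, abbreviated mock for illustration only: the usage block and finish_reason that drive automatic cost and metadata extraction only survive if the full object is serialized.

```python
import json

# Abbreviated mock of an OpenAI-style chat completion response (illustrative only).
raw_response = {
    "id": "chatcmpl-123",
    "model": "gpt-4o",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Qubits..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 120, "total_tokens": 145},
}

# Serializing the full object keeps the fields Adaline parses...
full = json.dumps(raw_response)
assert "total_tokens" in full and "finish_reason" in full

# ...while reshaping it down to just the text drops them.
trimmed = json.dumps({"text": raw_response["choices"][0]["message"]["content"]})
assert "usage" not in trimmed and "finish_reason" not in trimmed
```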
Examples
import json

from openai import OpenAI

from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

client = OpenAI()

params = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing simply."},
    ],
    "temperature": 0.7,
}
response = client.chat.completions.create(**params)
span.update({
    "status": "success",
    "content": LogSpanContent(
        actual_instance=LogSpanModelContent(
            type="Model",
            provider="openai",
            model="gpt-4o",
            input=json.dumps(params),
            output=json.dumps(response.model_dump()),
        )
    ),
})
import json

import anthropic

from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

client = anthropic.Anthropic()

params = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Explain quantum computing simply."},
    ],
}
response = client.messages.create(**params)
span.update({
    "status": "success",
    "content": LogSpanContent(
        actual_instance=LogSpanModelContent(
            type="Model",
            provider="anthropic",
            model="claude-sonnet-4-20250514",
            input=json.dumps(params),
            output=json.dumps(response.model_dump()),
        )
    ),
})
With Variables
import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent
from adaline_api.models.log_span_variable import LogSpanVariable
from adaline_api.models.text_content import TextContent
content = LogSpanContent(
    actual_instance=LogSpanModelContent(
        type="Model",
        provider="openai",
        model="gpt-4o",
        input=json.dumps(params),
        output=json.dumps(response.model_dump()),
        variables=LogSpanVariable(
            name="user_question",
            value=TextContent(modality="text", value="What is quantum computing?"),
        ),
    )
)
With Explicit Cost
content = LogSpanContent(
    actual_instance=LogSpanModelContent(
        type="Model",
        provider="openai",
        model="gpt-4o",
        input=json.dumps(params),
        output=json.dumps(response.model_dump()),
        cost=0.0032,  # explicit USD cost; overrides Adaline's automatic calculation
    )
)
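If you do supply cost, it is typically token counts multiplied by your own pricing. A sketch with assumed per-million-token prices; the numbers and the PRICES table are placeholders, not Adaline's or any provider's actual pricing:

```python
# Hypothetical per-million-token prices in USD (placeholders; check your provider's pricing page).
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}


def inference_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Compute USD cost from token counts and a per-million-token price table."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000


# e.g. 200 prompt tokens and 120 completion tokens on gpt-4o
cost = inference_cost("gpt-4o", 200, 120)
print(round(cost, 6))
```

The resulting float can be passed directly as the cost field shown above.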
Avoid cherry-picking or reshaping the request/response before serializing. Pass the raw objects — Adaline’s automatic parsing depends on seeing the provider’s native schema.
Serialization
from adaline_api.models.log_span_model_content import LogSpanModelContent
model_content = LogSpanModelContent(
    type="Model",
    provider="openai",
    model="gpt-4o",
    input='{"model":"gpt-4o","messages":[]}',
    output='{"choices":[]}',
)

d = model_content.to_dict()  # plain Python dict
j = model_content.to_json()  # JSON string
restored = LogSpanModelContent.from_dict(d)
restored = LogSpanModelContent.from_json(j)