LogSpanModelContent
The most commonly used span content type, representing a single LLM inference call. All fields are optional, allowing you to log as much or as little detail as you have available.
Import
import type { LogSpanModelContent } from '@adaline/api';
import { LogSpanModelContentTypeEnum } from '@adaline/api';
Type Definition
interface LogSpanModelContent {
type?: 'Model';
provider?: string; // 1-512 chars
model?: string; // 1-512 chars
input?: string; // JSON string (must be valid JSON)
output?: string; // JSON string (must be valid JSON)
variables?: LogSpanVariable | null;
cost?: number | null; // USD, minimum: 0
}
Properties
type - Discriminator field, always 'Model' for this content type
provider - Provider name (e.g. 'openai', 'anthropic', 'google')
model - Model identifier (e.g. 'gpt-4o', 'claude-sonnet-4-20250514')
input - The request payload as a JSON string (JSON.stringify() of the request object)
output - The response payload as a JSON string (JSON.stringify() of the response object)
variables - A variable attached to the span for evaluation tracking (see LogSpanVariable)
cost - Inference cost in USD
Both input and output must be valid, parseable JSON strings (the result of JSON.stringify()). Passing a plain string that isn’t valid JSON will cause the span to be rejected.
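The safest way to satisfy this is to always serialize the original object rather than hand-building a string. As a sketch, a small guard like the following (a hypothetical helper, not part of the SDK) fails fast on payloads that do not serialize to valid JSON:

```typescript
// Hypothetical helper (not part of the Adaline SDK): serializes a payload
// and verifies the result round-trips through JSON.parse before it is
// attached to a span's input/output fields.
function toJsonString(payload: unknown): string {
  const serialized = JSON.stringify(payload);
  if (serialized === undefined) {
    // JSON.stringify returns undefined for functions, symbols, and bare undefined
    throw new Error('Payload is not JSON-serializable');
  }
  JSON.parse(serialized); // throws if the result is not parseable JSON
  return serialized;
}

const input = toJsonString({ model: 'gpt-4o', messages: [] });
```

Passing the result of `toJsonString(params)` as `input` guarantees the span will not be rejected for malformed JSON.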
For the best experience, pass the exact request payload you send to your provider as input, and the full provider response object as output. When you do this with a supported provider (OpenAI, Anthropic, Google, etc.), Adaline automatically:
- Calculates cost from token counts and the model’s pricing
- Extracts token usage (prompt, completion, and total tokens)
- Surfaces model metadata such as stop reason, tool calls, and function invocations
- Powers continuous evaluations with structured input/output pairs
Avoid cherry-picking or reshaping the request/response before stringifying. Pass the raw objects — Adaline’s automatic parsing depends on seeing the provider’s native schema.
Examples
OpenAI
import OpenAI from 'openai';
const openai = new OpenAI();
const params = {
model: 'gpt-4o',
messages: [
{ role: 'system' as const, content: 'You are a helpful assistant.' },
{ role: 'user' as const, content: 'Explain quantum computing simply.' },
],
temperature: 0.7,
};
const response = await openai.chat.completions.create(params);
span.update({
content: {
type: 'Model',
provider: 'openai',
model: 'gpt-4o',
input: JSON.stringify(params),
output: JSON.stringify(response),
},
});
Anthropic
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const params = {
model: 'claude-sonnet-4-20250514',
max_tokens: 1024,
messages: [
{ role: 'user' as const, content: 'Explain quantum computing simply.' },
],
};
const response = await anthropic.messages.create(params);
span.update({
content: {
type: 'Model',
provider: 'anthropic',
model: 'claude-sonnet-4-20250514',
input: JSON.stringify(params),
output: JSON.stringify(response),
},
});
With Explicit Cost
span.update({
content: {
type: 'Model',
provider: 'openai',
model: 'gpt-4o',
input: JSON.stringify(params),
output: JSON.stringify(response),
cost: 0.0023,
},
});
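When Adaline cannot compute cost automatically (for example, for an unsupported provider or a custom model), you can derive it yourself from the token usage in the provider response. A minimal sketch, where the per-million-token prices are placeholders you would replace with your provider's actual pricing:

```typescript
// Placeholder pricing -- substitute your provider's published rates.
const PROMPT_PRICE_PER_1M = 2.5;    // USD per 1M input tokens (assumed)
const COMPLETION_PRICE_PER_1M = 10; // USD per 1M output tokens (assumed)

// Converts token counts (as reported in the provider's usage object)
// into a USD cost suitable for the span's `cost` field.
function inferenceCost(promptTokens: number, completionTokens: number): number {
  return (
    (promptTokens / 1_000_000) * PROMPT_PRICE_PER_1M +
    (completionTokens / 1_000_000) * COMPLETION_PRICE_PER_1M
  );
}

const cost = inferenceCost(1200, 350);
```

The resulting number can be passed directly as `cost`; remember it must be non-negative.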
With Variables for Evaluation
span.update({
content: {
type: 'Model',
provider: 'openai',
model: 'gpt-4o',
input: JSON.stringify(params),
output: JSON.stringify(response),
variables: {
name: 'user_question',
value: { modality: 'text', value: 'Explain quantum computing simply.' }
},
},
});
Minimal (All Fields Optional)
span.update({
content: {
type: 'Model',
provider: 'openai',
model: 'gpt-4o',
},
});
Related
- LogSpanContent — union type that includes LogSpanModelContent
- LogSpanVariable — variable type used in the variables field
- Span — class that accepts LogSpanContent via span.update()