
LogSpanContent

Types used for log span content, traces, and spans in the Python SDK.

TraceStatus

Valid status values for traces.
Value        Description
"success"    The trace completed successfully.
"failure"    The trace failed.
"aborted"    The trace was aborted.
"cancelled"  The trace was cancelled.
"pending"    The trace is still in progress.
"unknown"    Status is unknown (default).

SpanStatus

Valid status values for spans.
Value        Description
"success"    The span completed successfully.
"failure"    The span failed.
"aborted"    The span was aborted.
"cancelled"  The span was cancelled.
"unknown"    Status is unknown (default).

LogSpanContent

All span content is wrapped in LogSpanContent using the actual_instance parameter. This is the discriminated union wrapper used by the Python SDK.
from adaline_api.models.log_span_content import LogSpanContent

content = LogSpanContent(actual_instance=...)
The actual_instance must be one of the 8 content types documented below.

Content Types

LogSpanModelContent

See the dedicated LogSpanModelContent page for full documentation. Standard LLM inference calls.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

Fields

type
str | None
Must be "Model" when provided.
provider
str | None
The provider name (e.g., "openai", "anthropic"). 1–512 characters.
model
str | None
The model identifier (e.g., "gpt-4o", "claude-sonnet-4-20250514"). 1–512 characters.
input
str | None
The input payload as a JSON string. See input and output constraints.
output
str | None
The output payload as a JSON string. See input and output constraints.
variables
LogSpanVariable | None
Variables associated with this span.
cost
float | None
The cost of the operation. Minimum: 0.

input and output constraints

Both input and output must be valid, parseable JSON strings (i.e. the result of json.dumps()). Passing a plain string that isn’t valid JSON will cause the span to be rejected. For Model spans specifically, you get the most out of Adaline when you pass the exact request payload you send to your provider as input, and the full provider response object as output. When you do this with a supported provider (OpenAI, Anthropic, Google, etc.), Adaline automatically:
  • Calculates cost from token counts and the model’s pricing
  • Extracts token usage (prompt, completion, and total tokens)
  • Surfaces model metadata such as stop reason, tool calls, and function invocations
  • Powers continuous evaluations with structured input/output pairs
The recommended pattern is to build your request params as a dict, pass that dict to the provider SDK, and json.dumps() the same dict as input. For output, call json.dumps() on the full response (using .model_dump() for Pydantic-based SDKs). You can also set input and output to use Adaline’s own content schema, although this is more advanced and requires maintaining custom transformations to convert provider payloads into the Adaline format.
For a deeper walkthrough of this pattern and how it applies across providers, see Span content: input and output.
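
Because a non-JSON string causes the span to be rejected, it can help to validate payloads before attaching them. Below is a minimal, stdlib-only sketch; the helper name as_json_string is illustrative and not part of the SDK:

```python
import json
from typing import Any


def as_json_string(payload: Any) -> str:
    """Serialize a payload to a JSON string, or validate one that is
    already serialized. Raises ValueError for non-JSON strings."""
    if isinstance(payload, str):
        # A string is only acceptable if it already parses as JSON.
        try:
            json.loads(payload)
        except json.JSONDecodeError:
            raise ValueError("input/output strings must be valid JSON")
        return payload
    # Dicts, lists, numbers, etc. are serialized with json.dumps().
    return json.dumps(payload)
```

With this guard, as_json_string({"model": "gpt-4o"}) returns the serialized dict, while a bare "hello" raises a ValueError instead of producing a span that the API would reject later.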

Examples

import json
from openai import OpenAI
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

client = OpenAI()

params = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing simply."},
    ],
    "temperature": 0.7,
}

response = client.chat.completions.create(**params)

# `span` is an active Adaline span created earlier (creation omitted here)
span.update({
    "status": "success",
    "content": LogSpanContent(
        actual_instance=LogSpanModelContent(
            type="Model",
            provider="openai",
            model="gpt-4o",
            input=json.dumps(params),
            output=json.dumps(response.model_dump()),
        )
    ),
})
Avoid cherry-picking or reshaping the request/response before serializing. Pass the raw objects — Adaline’s automatic parsing depends on seeing the provider’s native schema.

LogSpanModelStreamContent

See the dedicated LogSpanModelStreamContent page for full documentation. Streaming LLM inference calls.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_stream_content import LogSpanModelStreamContent

Fields

type
str
required
Must be "ModelStream".
provider
str
required
The provider name. 1–512 characters.
model
str
required
The model identifier. 1–512 characters.
input
str
required
The input payload as a JSON string.
output
str
required
The raw streamed output chunks as a string.
aggregate_output
str
required
The aggregated final output as a JSON string.
cost
float | None
The cost of the operation. Minimum: 0.

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_stream_content import LogSpanModelStreamContent

# `params` is the request dict sent to the provider; `chunks` and
# `full_response` are collected while consuming the stream.
content = LogSpanContent(
    actual_instance=LogSpanModelStreamContent(
        type="ModelStream",
        provider="anthropic",
        model="claude-sonnet-4-20250514",
        input=json.dumps(params),
        output=chunks,
        aggregate_output=json.dumps({"role": "assistant", "content": full_response}),
        cost=0.005,
    )
)
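
The example above assumes you collected the stream yourself. Here is a hedged, stdlib-only sketch of that bookkeeping, using a list of fake delta events in place of a real provider stream; actual chunk and event shapes vary by provider:

```python
import json

# Stand-in for a real provider stream; real event shapes vary by provider.
stream = [
    {"type": "delta", "text": "Quantum computers "},
    {"type": "delta", "text": "use qubits."},
]

raw_events = []  # everything the provider sent, for `output`
text_parts = []  # just the text deltas, for the aggregate

for event in stream:
    raw_events.append(event)
    if event["type"] == "delta":
        text_parts.append(event["text"])

full_response = "".join(text_parts)
chunks = json.dumps(raw_events)  # serialized raw chunks for `output`
aggregate = json.dumps({"role": "assistant", "content": full_response})
```

After the loop, chunks holds the serialized raw events for output and aggregate holds the final assembled message for aggregate_output.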

LogSpanEmbeddingsContent

See the dedicated LogSpanEmbeddingsContent page for full documentation. Embedding generation calls.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_embeddings_content import LogSpanEmbeddingsContent

Fields

type
str
required
Must be "Embeddings".
input
str
required
The input payload as a JSON string.
output
str
required
The output payload as a JSON string.

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_embeddings_content import LogSpanEmbeddingsContent

content = LogSpanContent(
    actual_instance=LogSpanEmbeddingsContent(
        type="Embeddings",
        input=json.dumps({"model": "text-embedding-3-large", "input": "search query"}),
        output=json.dumps({"dimensions": 3072}),
    )
)

LogSpanFunctionContent

See the dedicated LogSpanFunctionContent page for full documentation. Custom application logic and function calls.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_function_content import LogSpanFunctionContent

Fields

type
str
required
Must be "Function".
input
str
required
The input payload as a JSON string.
output
str
required
The output payload as a JSON string.

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_function_content import LogSpanFunctionContent

content = LogSpanContent(
    actual_instance=LogSpanFunctionContent(
        type="Function",
        input=json.dumps({"arg1": "value1"}),
        output=json.dumps({"result": "value2"}),
    )
)

LogSpanToolContent

See the dedicated LogSpanToolContent page for full documentation. Tool and API invocations.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_tool_content import LogSpanToolContent

Fields

type
str
required
Must be "Tool".
input
str
required
The input payload as a JSON string.
output
str
required
The output payload as a JSON string.

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_tool_content import LogSpanToolContent

content = LogSpanContent(
    actual_instance=LogSpanToolContent(
        type="Tool",
        input=json.dumps({"function": "search", "args": {"query": "weather"}}),
        output=json.dumps({"results": ["sunny", "72F"]}),
    )
)

LogSpanGuardrailContent

See the dedicated LogSpanGuardrailContent page for full documentation. Safety and compliance checks.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_guardrail_content import LogSpanGuardrailContent

Fields

type
str
required
Must be "Guardrail".
input
str
required
The input payload as a JSON string.
output
str
required
The output payload as a JSON string.

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_guardrail_content import LogSpanGuardrailContent

content = LogSpanContent(
    actual_instance=LogSpanGuardrailContent(
        type="Guardrail",
        input=json.dumps({"text": "user message"}),
        output=json.dumps({"safe": True, "categories": []}),
    )
)

LogSpanRetrievalContent

See the dedicated LogSpanRetrievalContent page for full documentation. RAG and vector database queries.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_retrieval_content import LogSpanRetrievalContent

Fields

type
str
required
Must be "Retrieval".
input
str
required
The input payload as a JSON string.
output
str
required
The output payload as a JSON string.

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_retrieval_content import LogSpanRetrievalContent

content = LogSpanContent(
    actual_instance=LogSpanRetrievalContent(
        type="Retrieval",
        input=json.dumps({"query": "How does auth work?", "top_k": 5}),
        output=json.dumps({"documents": [{"id": "doc1", "score": 0.95}]}),
    )
)

LogSpanOtherContent

See the dedicated LogSpanOtherContent page for full documentation. Catch-all for any other operation type.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_other_content import LogSpanOtherContent

Fields

type
str
required
Must be "Other".
input
str
required
The input payload as a JSON string.
output
str
required
The output payload as a JSON string.

Example

from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_other_content import LogSpanOtherContent

content = LogSpanContent(
    actual_instance=LogSpanOtherContent(
        type="Other",
        input="{}",
        output="{}",
    )
)

Supporting Types

LogSpanVariable

See the dedicated LogSpanVariable page for full documentation. Variable attached to a Model or ModelStream span for evaluation tracking.
from adaline_api.models.log_span_variable import LogSpanVariable
# TextContent must also be imported; the exact module path may differ by SDK version.
from adaline_api.models.text_content import TextContent

variable = LogSpanVariable(
    name="user_question",
    value=TextContent(modality="text", value="What is quantum computing?"),
)

LogSpanVariableValue

Content value for a span variable. A discriminated union on modality — can be TextContent, ImageContent, PdfContent, ReasoningContent, ToolCallContent, or ToolResponseContent.

LogAttributesValue

See the dedicated LogAttributesValue page for full documentation. The allowed value types for trace/span attributes: str, int, float, or bool.
from adaline_api.models.log_attributes_value import LogAttributesValue
Used in Dict[str, LogAttributesValue] for the attributes parameter on traces and spans.
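
Since attribute values are limited to str, int, float, and bool, a small stdlib-only guard can catch unsupported values before a request is sent. The helper name check_attributes below is illustrative and not part of the SDK:

```python
def check_attributes(attrs: dict) -> dict:
    """Validate that every attribute value is a str, int, float, or bool."""
    for key, value in attrs.items():
        # bool is a subclass of int, so this check also admits bools.
        if not isinstance(value, (str, int, float)):
            raise TypeError(
                f"attribute {key!r} has unsupported type {type(value).__name__}"
            )
    return attrs
```

For example, check_attributes({"user_id": "u_123", "retries": 2, "cached": True}) passes through unchanged, while a list or dict value raises a TypeError.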

TraceStatus

See the dedicated TraceStatus page for full documentation. Allowed status values for a trace: "success", "failure", "aborted", "cancelled", "pending", "unknown".

SpanStatus

See the dedicated SpanStatus page for full documentation. Allowed status values for a span: "success", "failure", "aborted", "cancelled", "unknown".
Span status does not include "pending" — that value is only available for traces.