
Span

The Span class represents a granular operation within a trace, such as an LLM call, tool execution, embedding generation, or retrieval operation. Spans can be nested to represent sub-operations. Create a Span via trace.log_span() or span.log_span() for nested spans.

Properties

span
CreateLogSpanRequest
The underlying API payload containing the span data. Access nested fields via span.span (e.g. span.span.name, span.span.reference_id).
trace
Trace
The parent Trace this span belongs to.

Status Values

Span status (SpanStatus) must be one of: "success", "failure", "aborted", "cancelled", "unknown".
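Because statuses are plain strings, a typo such as "canceled" (one "l") may only surface when the span is flushed. A hypothetical client-side guard (check_span_status is not part of the SDK) can catch this earlier:

```python
# Valid SpanStatus values, per the list above.
VALID_SPAN_STATUSES = {"success", "failure", "aborted", "cancelled", "unknown"}

def check_span_status(status: str) -> str:
    """Raise early on a status string the API would reject."""
    if status not in VALID_SPAN_STATUSES:
        raise ValueError(f"invalid span status: {status!r}")
    return status
```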

Methods

update

Updates span fields in place. Takes a dict with the fields to update. Only the keys "name", "status", "tags", "attributes", "run_evaluation", and "content" are applied; all other keys are silently ignored. Returns self for method chaining.
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

span.update({
    "status": "success",
    "tags": ["llm", "gpt-4"],
    "attributes": {"tokens": 1500},
    "run_evaluation": True,
    "content": LogSpanContent(
        actual_instance=LogSpanModelContent(
            type="Model",
            provider="openai",
            model="gpt-4",
            input='[{"role": "user", "content": "Hello"}]',
            output='{"role": "assistant", "content": "Hi there!"}'
        )
    )
})

# Method chaining
span.update({"status": "success"}).update({"tags": ["final"]})

Parameters

updates
dict
required
Dictionary of fields to update.
Supported keys:
"name"
str
Update the span display name.
"status"
str
Update the span status (SpanStatus).
"tags"
list[str]
Replace the span tags.
"attributes"
dict[str, Any]
Replace the span attributes. Values are auto-wrapped in LogAttributesValue.
"run_evaluation"
bool
Whether to run evaluation on this span.
"content"
LogSpanContent
Replace the span content payload.
Returns: Span (self, for chaining).

log_span

Creates a child Span nested under this span and adds it to the monitor buffer. The child span’s parent_reference_id is automatically set to this span’s reference_id. This is a synchronous method.
child_span = span.log_span(
    name="Tool Execution",
    status="unknown",
    prompt_id=None,
    deployment_id=None,
    run_evaluation=False,
    tags=["tool"],
    attributes={"tool_name": "search"},
    content=None
)

Parameters

name
str
required
Display name for the child span.
status
str
default:"unknown"
Span status (SpanStatus). One of: "success", "failure", "aborted", "cancelled", "unknown".
reference_id
str | None
Client-side unique identifier. If omitted, a UUID is auto-generated.
prompt_id
str | None
Prompt identifier to associate with this span.
deployment_id
str | None
Deployment identifier to associate with this span.
run_evaluation
bool | None
Whether to run evaluation on this span.
tags
list[str] | None
Optional list of string tags.
attributes
dict[str, Any] | None
Optional key-value metadata. Values are wrapped in LogAttributesValue automatically.
content
LogSpanContent | None
Span content payload (LogSpanContent). Falls back to the monitor’s default_content if not provided.
Returns: A new child Span instance.

end

Marks the span as complete and ready for flushing. Automatically ends all child spans whose parent_reference_id matches this span. Idempotent: subsequent calls return the reference ID without side effects.
reference_id = span.end()
span.end()  # idempotent: returns the same reference_id, no side effects
Returns: str | None — the span’s reference_id.

Span Content Types

Span content is wrapped in LogSpanContent, whose actual_instance must be one of the following eight content types:

LogSpanModelContent

Standard LLM inference calls.
from adaline_api.models.log_span_model_content import LogSpanModelContent

content = LogSpanContent(
    actual_instance=LogSpanModelContent(
        type="Model",
        provider="openai",
        model="gpt-4",
        input='[{"role": "user", "content": "Hello"}]',
        output='{"role": "assistant", "content": "Hi!"}'
    )
)

LogSpanModelStreamContent

Streaming LLM inference calls.
from adaline_api.models.log_span_model_stream_content import LogSpanModelStreamContent

content = LogSpanContent(
    actual_instance=LogSpanModelStreamContent(
        type="ModelStream",
        provider="openai",
        model="gpt-4",
        input='[{"role": "user", "content": "Hello"}]',
        output='{"role": "assistant", "content": "Hi!"}'
    )
)

LogSpanEmbeddingsContent

Embedding generation calls.
from adaline_api.models.log_span_embeddings_content import LogSpanEmbeddingsContent

content = LogSpanContent(
    actual_instance=LogSpanEmbeddingsContent(
        type="Embeddings",
        input='{"model": "text-embedding-3-large", "input": "search query"}',
        output='{"dimensions": 3072}'
    )
)

LogSpanFunctionContent

Custom application logic and function calls.
from adaline_api.models.log_span_function_content import LogSpanFunctionContent

content = LogSpanContent(
    actual_instance=LogSpanFunctionContent(
        type="Function",
        input='{"arg1": "value1"}',
        output='{"result": "value2"}'
    )
)

LogSpanToolContent

Tool/API invocations.
from adaline_api.models.log_span_tool_content import LogSpanToolContent

content = LogSpanContent(
    actual_instance=LogSpanToolContent(
        type="Tool",
        input='{"function": "search", "args": {"query": "weather"}}',
        output='{"results": ["sunny", "72F"]}'
    )
)

LogSpanGuardrailContent

Safety and compliance checks.
from adaline_api.models.log_span_guardrail_content import LogSpanGuardrailContent

content = LogSpanContent(
    actual_instance=LogSpanGuardrailContent(
        type="Guardrail",
        input='{"text": "user message"}',
        output='{"safe": true, "categories": []}'
    )
)

LogSpanRetrievalContent

RAG and vector database queries.
from adaline_api.models.log_span_retrieval_content import LogSpanRetrievalContent

content = LogSpanContent(
    actual_instance=LogSpanRetrievalContent(
        type="Retrieval",
        input='{"query": "How does auth work?", "top_k": 5}',
        output='{"documents": [{"id": "doc1", "score": 0.95}]}'
    )
)

LogSpanOtherContent

Catch-all for any other operation type.
from adaline_api.models.log_span_other_content import LogSpanOtherContent

content = LogSpanContent(
    actual_instance=LogSpanOtherContent(
        type="Other",
        input="{}",
        output="{}"
    )
)
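In all of the examples above, the input and output fields take JSON strings rather than Python objects. A small hypothetical helper (json_io is not part of the SDK) avoids hand-writing the quoting:

```python
import json

def json_io(obj) -> str:
    """Serialize a Python object to a compact JSON string suitable
    for the input/output fields of the span content types."""
    return json.dumps(obj, separators=(",", ":"))

# For example, building the input for a LogSpanModelContent:
messages = [{"role": "user", "content": "Hello"}]
json_io(messages)  # '[{"role":"user","content":"Hello"}]'
```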

Example: Nested Spans

llm_span = trace.log_span(
    name="Agent Orchestrator",
    tags=["agent"]
)

tool_span = llm_span.log_span(
    name="Web Search Tool",
    tags=["tool"]
)

tool_span.update({
    "status": "success",
    "content": LogSpanContent(
        actual_instance=LogSpanToolContent(
            type="Tool",
            input='{"query": "latest news"}',
            output='{"results": ["Article 1", "Article 2"]}'
        )
    )
})
tool_span.end()

llm_span.update({"status": "success"})
llm_span.end()  # also ends any un-ended child spans
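The update/end pattern above can be packaged as a context manager so spans are always closed, even when an exception escapes. This is a minimal sketch, not part of the SDK: span_scope is a hypothetical helper, and it assumes the log_span, update, and end semantics documented above.

```python
from contextlib import contextmanager

@contextmanager
def span_scope(parent, name, **kwargs):
    """Open a child span under `parent` (a Trace or Span), mark it
    "success" on normal exit or "failure" on an exception, and
    always end it."""
    span = parent.log_span(name=name, status="unknown", **kwargs)
    try:
        yield span
        span.update({"status": "success"})
    except Exception:
        span.update({"status": "failure"})
        raise
    finally:
        span.end()
```

Usage would mirror the nested example above: `with span_scope(trace, "Agent Orchestrator") as llm_span:` and, inside it, `with span_scope(llm_span, "Web Search Tool") as tool_span:`.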