Trace

The Trace class represents a high-level operation in your AI application, such as a user request or workflow execution. Traces group related spans together and are buffered in the Monitor until flushed. Create a Trace via monitor.log_trace().

Properties

trace
CreateLogTraceRequest
The underlying API payload containing the trace data. Access nested fields via trace.trace (e.g. trace.trace.name, trace.trace.reference_id).

trace_id
str | None
Server-assigned trace ID, populated after the trace is flushed to the API. None until then.

Status Values

Trace status (TraceStatus) must be one of: "success", "failure", "aborted", "cancelled", "pending", "unknown".
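Because an invalid status would otherwise be caught only server-side, it can be useful to validate client-side first. A minimal sketch (not part of the SDK) using the documented TraceStatus values:

```python
# Illustration only: the documented TraceStatus values, usable for a
# client-side sanity check before setting a trace's status.
VALID_TRACE_STATUSES = frozenset(
    {"success", "failure", "aborted", "cancelled", "pending", "unknown"}
)

def is_valid_trace_status(status: str) -> bool:
    """Return True if `status` is one of the documented TraceStatus values."""
    return status in VALID_TRACE_STATUSES
```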

Methods

update

Updates trace fields in place. Takes a dict with the fields to update. Only the keys "name", "status", "tags", and "attributes" are applied; all other keys are silently ignored. Returns self for method chaining.
trace.update({
    "status": "success",
    "tags": ["completed", "v2"],
    "attributes": {"total_tokens": 1500, "model": "gpt-4"}
})

# Method chaining
trace.update({"status": "success"}).update({"tags": ["final"]})

Parameters

updates
dict
required
Dictionary of fields to update.
"name"
str
Update the trace display name.

"status"
str
Update the trace status (TraceStatus).

"tags"
list[str]
Replace the trace tags.

"attributes"
dict[str, Any]
Replace the trace attributes. Values are auto-wrapped in LogAttributesValue.
Returns: Trace (self, for chaining).
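The silent-ignore behavior described above can be pictured with a small standalone sketch. This mirrors the documented semantics only; it is not the SDK's implementation:

```python
# Illustration only: mimics the documented update() key filtering.
ALLOWED_UPDATE_KEYS = {"name", "status", "tags", "attributes"}

def apply_trace_update(current: dict, updates: dict) -> dict:
    """Apply only the recognized keys; silently drop everything else."""
    merged = dict(current)
    for key, value in updates.items():
        if key in ALLOWED_UPDATE_KEYS:
            merged[key] = value
    return merged
```

For example, `apply_trace_update({"name": "t"}, {"status": "success", "typo_key": 1})` returns `{"name": "t", "status": "success"}` with no error raised for the unrecognized key.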

log_span

Creates a child Span under this trace and adds it to the monitor buffer. This is a synchronous method.
span = trace.log_span(
    name="OpenAI GPT-4 Call",
    status="unknown",
    reference_id=None,
    prompt_id="prompt-123",
    deployment_id="deployment-456",
    run_evaluation=True,
    tags=["llm", "gpt-4"],
    attributes={"temperature": 0.7},
    content=None
)

Parameters

name
str
required
Display name for the span.
status
str
default:"unknown"
Span status (SpanStatus). One of: "success", "failure", "aborted", "cancelled", "unknown".
reference_id
str | None
Client-side unique identifier. If omitted, a UUID is auto-generated.
prompt_id
str | None
Prompt identifier to associate with this span.
deployment_id
str | None
Deployment identifier to associate with this span.
run_evaluation
bool | None
Whether to run evaluation on this span.
tags
list[str] | None
Optional list of string tags.
attributes
dict[str, Any] | None
Optional key-value metadata. Values are wrapped in LogAttributesValue automatically.
content
LogSpanContent | None
Span content payload (LogSpanContent). Falls back to the monitor’s default_content if not provided.
Returns: A new Span instance.

end

Marks the trace as complete and ready for flushing. Automatically ends all child spans belonging to this trace. Idempotent: subsequent calls return the reference ID without side effects.
reference_id = trace.end()
Returns: str | None — the trace’s reference_id.
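The idempotency contract can be pictured with a toy model. This is an assumption-laden sketch of the documented behavior, not the SDK source:

```python
import uuid

# Toy model of the documented end() contract; the real SDK class differs.
class ToyTrace:
    def __init__(self):
        self.reference_id = str(uuid.uuid4())  # client-side ID, auto-generated
        self._ended = False

    def end(self):
        if self._ended:
            # Idempotent: subsequent calls just return the reference ID,
            # with no further side effects.
            return self.reference_id
        self._ended = True
        # ... the real SDK would also end child spans here and mark the
        # trace as ready for flushing ...
        return self.reference_id
```

Calling end() twice returns the same reference ID both times, so it is safe to call defensively (for example, in a finally block).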

Example

trace = monitor.log_trace(
    name="User Question",
    session_id="session-abc",
    tags=["chat"]
)

retrieval_span = trace.log_span(name="Document Retrieval")
# ... perform retrieval ...
retrieval_span.update({"status": "success"})
retrieval_span.end()

llm_span = trace.log_span(
    name="LLM Generation",
    prompt_id="prompt-123",
    deployment_id="deploy-456",
    run_evaluation=True
)
# ... call LLM ...
llm_span.update({"status": "success"})
# No need to call llm_span.end() — trace.end() handles it

trace.update({"status": "success"})
ref_id = trace.end()  # auto-ends llm_span