Trace Class
The Trace class represents a high-level operation or workflow in your LLM application. Traces capture the entire lifecycle of a request, batch job, or user interaction, and can contain multiple child spans for granular tracking.
Overview
A trace is the top-level unit of observability that represents:
A single user request to your API
A background job or workflow
A conversation turn in a chatbot
A complete RAG pipeline execution
Any end-to-end operation you want to track
Traces contain:
Metadata : name, status, timestamps, tags, attributes
Context : session ID, reference ID
Children : one or more spans representing sub-operations
Creation
Create a trace using the Monitor.log_trace() method:
trace = monitor.log_trace(
    name="User Login",
    session_id="session-abc-123",
    tags=["auth", "production"],
    attributes={"user_id": "user-456"}
)
Properties
trace
trace: CreateLogTraceRequest
The underlying trace request object that will be sent to the API. Contains all trace metadata.
trace_id
The server-assigned trace ID, set after the trace is successfully flushed to the API.
trace = monitor.log_trace(name="Operation")
print(trace.trace_id)  # None (not flushed yet)

trace.end()
await monitor.flush()
print(trace.trace_id)  # "trace_abc123xyz" (assigned by server)
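The None-until-flushed lifecycle can be pictured with a toy sketch. This stands in for the SDK internals, which are not shown here; the class and flush function below are illustrative only:

```python
class _BufferedTrace:
    """Toy stand-in for a buffered trace awaiting a server-assigned ID."""
    def __init__(self):
        self.trace_id = None  # unknown until flushed
        self.ended = False

    def end(self):
        self.ended = True

def _flush(traces):
    # Pretend the API responds with an ID for every ended trace.
    for i, t in enumerate(traces):
        if t.ended:
            t.trace_id = f"trace_{i:03d}"

t = _BufferedTrace()
assert t.trace_id is None  # not flushed yet
t.end()
_flush([t])
assert t.trace_id == "trace_000"  # assigned by the (pretend) server
```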
Methods
log_span()
Create a child span within this trace.
log_span(
    *,
    name: str,
    status: str = "unknown",
    reference_id: Optional[str] = None,
    prompt_id: Optional[str] = None,
    deployment_id: Optional[str] = None,
    run_evaluation: Optional[bool] = None,
    tags: Optional[List[str]] = None,
    attributes: Optional[Dict[str, Any]] = None,
    content: Optional[LogSpanContent] = None
) -> Span
Parameters
name : Human-readable name for the span (e.g., "LLM Call", "Database Query").
status : Initial status: 'success' | 'failure' | 'aborted' | 'cancelled' | 'unknown'.
reference_id : Custom reference ID. Auto-generated UUID if not provided.
prompt_id : ID of the prompt used in this span (for LLM calls).
deployment_id : ID of the deployment used in this span.
run_evaluation : Whether to run evaluators on this span after completion.
tags : Optional list of tags attached to the span.
attributes : Additional metadata. Values must be str, int, float, or bool.
content : Span content (input/output). Defaults to monitor.default_content.
Returns
A Span instance representing the child operation.
Examples
Basic Span
trace = monitor.log_trace(name="API Request")
span = trace.log_span(name="Process Data")

# Do work...
await process_data()

span.update({"status": "success"})
span.end()
trace.end()
update()
Update trace metadata in place.
update(updates: dict) -> Trace
Parameters
Dictionary of fields to update. Only the keys "name", "status", "tags", and "attributes" are applied; all other keys are silently ignored.
Returns
Returns self for method chaining.
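Because update() returns self, calls can be chained. A minimal stand-in object (not the real Trace class) illustrating why that works:

```python
class _Chainable:
    """Toy object mimicking the update()-returns-self pattern."""
    def __init__(self):
        self.state = {}

    def update(self, updates: dict):
        self.state.update(updates)
        return self  # returning self is what makes chaining possible

t = _Chainable()
t.update({"status": "success"}).update({"tags": ["done"]})
assert t.state == {"status": "success", "tags": ["done"]}
```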
Examples
Update Status
trace = monitor.log_trace(name="Process Order", status="pending")
try:
    await process_order()
    trace.update({"status": "success"})
except Exception:
    trace.update({"status": "failure"})
trace.end()
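Since status strings like "success", "failure", and "cancelled" usually depend on how the operation finished, the mapping can be centralized in one helper. This is a hypothetical convenience, not part of the SDK; status_from_outcome is an illustrative name:

```python
import asyncio
from typing import Optional

def status_from_outcome(error: Optional[BaseException]) -> str:
    """Map how an operation finished to one of the documented statuses."""
    if error is None:
        return "success"
    if isinstance(error, asyncio.CancelledError):
        return "cancelled"
    return "failure"

assert status_from_outcome(None) == "success"
assert status_from_outcome(TimeoutError("slow")) == "failure"
```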
end()
Mark the trace as complete and ready to be flushed.
Behavior
Sets ended_at timestamp if not already set
Marks the trace as ready in the monitor’s buffer
Recursively ends all child spans belonging to this trace
Returns the trace’s reference ID
Idempotent: subsequent calls return the reference ID without side effects
Always call end() on your traces! Traces that are never ended will never be flushed to the API.
Returns
The trace’s reference ID for correlation with external systems.
Examples
Basic
trace = monitor.log_trace(name="Operation")
# Do work...
trace.end()  # Required!
Complete Examples
Simple API Request
import json

from openai import AsyncOpenAI

from adaline.main import Adaline
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_model_content import LogSpanModelContent

adaline = Adaline()
openai = AsyncOpenAI()
monitor = adaline.init_monitor(project_id="my-api")

async def handle_chat_request(user_id: str, message: str):
    trace = monitor.log_trace(
        name="Chat Request",
        session_id=user_id,
        tags=["chat", "api"],
        attributes={"user_id": user_id, "message_length": len(message)}
    )
    try:
        llm_span = trace.log_span(name="OpenAI Completion", tags=["llm"])
        response = await openai.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": message}]
        )
        llm_span.update({
            "status": "success",
            "content": LogSpanContent(
                actual_instance=LogSpanModelContent(
                    type="Model",
                    provider="openai",
                    model="gpt-4o",
                    input=json.dumps([{"role": "user", "content": message}]),
                    output=json.dumps(response.choices[0].message.model_dump())
                )
            )
        })
        llm_span.end()
        trace.update({"status": "success"})
        return response.choices[0].message.content
    except Exception as error:
        trace.update({
            "status": "failure",
            "attributes": {"error": str(error)}
        })
        raise
    finally:
        trace.end()
Multi-Step Workflow
async def process_user_onboarding(user_id: str):
    trace = monitor.log_trace(
        name="User Onboarding",
        session_id=user_id,
        status="pending",
        tags=["onboarding", "workflow"],
        attributes={"user_id": user_id}
    )
    try:
        # Step 1: Create account
        create_span = trace.log_span(name="Create Account", tags=["database"])
        await create_account(user_id)
        create_span.update({"status": "success"})
        create_span.end()

        # Step 2: Send welcome email
        email_span = trace.log_span(name="Send Welcome Email", tags=["email"])
        await send_welcome_email(user_id)
        email_span.update({"status": "success"})
        email_span.end()

        # Step 3: Generate personalized content
        llm_span = trace.log_span(
            name="Generate Welcome Message",
            tags=["llm", "personalization"]
        )
        welcome_msg = await generate_welcome_message(user_id)
        llm_span.update({
            "status": "success",
            "content": LogSpanContent(
                actual_instance=LogSpanModelContent(
                    type="Model",
                    provider="openai",
                    model="gpt-4o",
                    input=json.dumps({"user_id": user_id}),
                    output=json.dumps({"message": welcome_msg})
                )
            )
        })
        llm_span.end()

        trace.update({"status": "success"})
    except Exception as error:
        trace.update({"status": "failure", "attributes": {"error": str(error)}})
        raise
    finally:
        trace.end()
Best Practices
1. Always Use Try-Finally
# Good: trace.end() always called
trace = monitor.log_trace(name="Operation")
try:
    await do_work()
    trace.update({"status": "success"})
finally:
    trace.end()
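The try/finally pattern can also be packaged once as a context manager so callers cannot forget end(). This traced() wrapper is a hypothetical convenience built on the documented log_trace(), update(), and end() calls; it is not part of the SDK:

```python
from contextlib import contextmanager

@contextmanager
def traced(monitor, **kwargs):
    """Open a trace, mark it failed on an exception, and always end it."""
    trace = monitor.log_trace(**kwargs)
    try:
        yield trace
    except Exception as error:
        trace.update({"status": "failure", "attributes": {"error": str(error)}})
        raise
    finally:
        trace.end()  # runs on success, failure, or early return
```

A caller would then write `with traced(monitor, name="Operation") as trace: ...` and rely on the wrapper to close the trace.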
2. Use Meaningful Names
# Good: Descriptive names
trace = monitor.log_trace(name="User Registration Flow")
trace = monitor.log_trace(name="PDF Processing Pipeline")
trace = monitor.log_trace(name="RAG Question Answering")
3. Add Context with Attributes
trace = monitor.log_trace(
    name="API Request",
    session_id=user_id,
    tags=["api", "production", "premium-tier"],
    attributes={
        "user_id": user_id,
        "endpoint": "/api/chat",
        "method": "POST",
        "region": "us-east-1"
    }
)
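Since attribute values must be str, int, float, or bool, a pre-flight check can catch unsupported types before the trace is buffered. check_attributes is a hypothetical helper, not part of the SDK:

```python
def check_attributes(attributes: dict) -> dict:
    """Reject attribute values outside the documented str/int/float/bool set."""
    allowed = (str, int, float, bool)
    bad = {k: type(v).__name__ for k, v in attributes.items()
           if not isinstance(v, allowed)}
    if bad:
        raise TypeError(f"unsupported attribute value types: {bad}")
    return attributes

check_attributes({"user_id": "user-456", "retries": 3, "cached": True})  # ok
```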
4. Group Related Traces with a Session ID
session_id = f"user-{user_id}-{int(time.time())}"
trace1 = monitor.log_trace(name="Login", session_id=session_id)
trace2 = monitor.log_trace(name="Chat Message 1", session_id=session_id)
trace3 = monitor.log_trace(name="Chat Message 2", session_id=session_id)