Monitor

The Monitor class buffers traces and spans and flushes them to the Adaline API in the background. It follows the OpenTelemetry error handling principle: telemetry failures never propagate to your application. Items that fail after retries are dropped and counted, not stored. Create a Monitor via adaline.init_monitor().

Properties

| Property | Type | Description |
| --- | --- | --- |
| buffer | list | Entries waiting to be flushed. Each item is a BufferedEntry with ready, data, and category keys. |
| sent_count | int | Total number of items successfully sent to the API. Starts at 0. |
| dropped_count | int | Total number of items dropped due to errors or buffer overflow. Starts at 0. |
| default_content | LogSpanContent | Fallback span content when none is provided. Defaults to LogSpanOtherContent(type="Other", input="{}", output="{}"). |
| flush_interval_seconds | int | Seconds between automatic background flushes. |
| max_buffer_size | int | Maximum buffered items before oldest entries are dropped. |
| project_id | str | The project ID associated with this monitor. |
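To illustrate how max_buffer_size and dropped_count interact, here is a minimal, self-contained sketch of the documented drop-oldest behavior. The BoundedBuffer class is a stand-in for illustration only, not the SDK's internal implementation.

```python
from collections import deque

# Sketch of the documented semantics: once the buffer reaches
# max_buffer_size, the oldest entry is evicted and dropped_count
# is incremented. Illustrative only; not the Monitor's actual code.
class BoundedBuffer:
    def __init__(self, max_buffer_size):
        self.max_buffer_size = max_buffer_size
        self.buffer = deque()
        self.dropped_count = 0

    def append(self, entry):
        if len(self.buffer) >= self.max_buffer_size:
            self.buffer.popleft()      # evict the oldest entry
            self.dropped_count += 1
        self.buffer.append(entry)

buf = BoundedBuffer(max_buffer_size=3)
for i in range(5):
    buf.append(f"entry-{i}")
print(list(buf.buffer), buf.dropped_count)
# → ['entry-2', 'entry-3', 'entry-4'] 2
```

Dropped entries are counted rather than stored, consistent with the OpenTelemetry error handling principle noted above.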

Methods

log_trace

Creates a new Trace and appends it to the buffer. This is a synchronous method.
trace = monitor.log_trace(
    name="Chat Completion",
    status="unknown",
    session_id="user-session-123",
    reference_id=None,
    tags=["production", "v2"],
    attributes={"user_id": "123", "model": "gpt-4"}
)

Parameters

name (str, required)
Display name for the trace.

status (str, default: "unknown")
Trace status (TraceStatus). One of: "success", "failure", "aborted", "cancelled", "pending", "unknown".

session_id (str | None, optional)
Session identifier for grouping related traces.

reference_id (str | None, optional)
Client-side unique identifier. If omitted, a UUID is auto-generated.

tags (list[str] | None, optional)
List of string tags.

attributes (dict[str, Any] | None, optional)
Key-value metadata. Values are wrapped in LogAttributesValue automatically.
Returns: A new Trace instance.

flush

Manually flushes all ready items from the buffer to the API. Sends each ready trace or span concurrently. Successfully sent items are removed from the buffer. Failed items are dropped and counted via dropped_count. Skips if a flush is already in progress. This is an async method.
await monitor.flush()
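The concurrent send-and-count behavior described above can be sketched as follows. This is a simplified, self-contained illustration: send() is a stand-in for the Adaline API call, and the retry logic is omitted.

```python
import asyncio

# Stand-in for the API call; raises to simulate a send failure.
async def send(item):
    if item.get("fail"):
        raise RuntimeError("simulated API error")
    return item

# Sketch of the documented flush semantics: all ready items are sent
# concurrently; successes count toward sent_count, failures are dropped
# and counted (retries omitted in this illustration).
async def flush(buffer):
    results = await asyncio.gather(
        *(send(item) for item in buffer), return_exceptions=True
    )
    sent = sum(1 for r in results if not isinstance(r, Exception))
    dropped = len(results) - sent
    buffer.clear()  # sent items removed; failed items dropped, not re-queued
    return sent, dropped

buffer = [{"id": 1}, {"id": 2, "fail": True}, {"id": 3}]
sent, dropped = asyncio.run(flush(buffer))
print(sent, dropped)  # → 2 1
```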

stop

Stops the background flush loop and cancels the flush task. This is a synchronous method. After calling stop(), no more automatic flushes occur. Call flush() before stop() if you need to send remaining items.
monitor.stop()
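One way to guarantee the flush-before-stop ordering is a try/finally wrapper, so stop() runs even if the final flush raises. The helper below relies only on the documented flush() and stop() methods; StubMonitor is a stand-in used here so the sketch is self-contained.

```python
import asyncio

# Stand-in object exposing the documented flush()/stop() interface.
class StubMonitor:
    def __init__(self):
        self.flushed = False
        self.stopped = False

    async def flush(self):
        self.flushed = True

    def stop(self):
        self.stopped = True

# Send whatever is still buffered, then always stop the background loop.
async def shutdown(monitor):
    try:
        await monitor.flush()
    finally:
        monitor.stop()

m = StubMonitor()
asyncio.run(shutdown(m))
print(m.flushed, m.stopped)  # → True True
```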

Usage Pattern

import asyncio
from adaline.main import Adaline

async def main():
    adaline = Adaline()
    monitor = adaline.init_monitor(project_id="my-project")

    trace = monitor.log_trace(name="Request Handler")

    span = trace.log_span(name="LLM Call")
    # ... perform work ...
    span.update({"status": "success"})
    span.end()

    trace.update({"status": "success"})
    trace.end()

    # Flush remaining items before stopping
    await monitor.flush()
    monitor.stop()

    print(f"Sent: {monitor.sent_count}, Dropped: {monitor.dropped_count}")

asyncio.run(main())