Analytics

Bring clarity to your LLM workflows with instant dashboards, trend analysis, advanced filtering, and full trace-level detail—all in one place.

Get Started

Monitor, Analyze, and Debug Your Model at Every Level

Reports at a Glance

Performance Snapshot

Get all your essential LLM KPIs in one view—total requests, average eval score, latency, cost per call, and token usage—so you can instantly gauge overall health and spot anomalies.

Historical Trends

Trend Over Time

Track how key metrics, such as average input tokens and cost per request, evolve over days or weeks. Spot gradual trends and sudden shifts early so you can optimize prompt design and budget allocation proactively.

Advanced Filtering

Advanced Query Filters

Drill down into your data by matching on completion text, session IDs, or trace IDs, then layer on conditions like latency, cost, or token count to instantly surface the exact calls you need.

Traces and Spans

Detailed Session Replay

See each request broken down as a collapsible tree, with every API call, function, and model inference tagged with its execution time, so you can quickly spot the deepest branches where latency hides.

FAQs