Alerts are currently in beta. Contact support@adaline.ai for private preview access.
Alerts close the loop between observability and action by notifying your team the moment a condition becomes true in production — whether that’s a behavioral pattern in user conversations, a drop in evaluation scores, a spike in latency, or an unexpected cost increase.

Alert sources

Alerts can be triggered by semantic conditions or structured filters across your production data. Define triggers as natural-language statements and semantic questions that describe the condition you care about:
  • “The user is expressing frustration or dissatisfaction with the agent’s responses”
  • “The conversation was abandoned before the user’s issue was resolved”
  • “The user is attempting to manipulate or socially engineer the agent”
  • “The agent hallucinated or provided information that contradicts the source material”
  • “The agent failed to follow the system prompt’s guardrails”
Adaline evaluates these semantic conditions against your production logs and fires alerts when matches are detected — letting you monitor for behavioral and qualitative issues that structured filtering alone cannot catch.

Alerts also support the structured filtering engine that powers the Monitor section, so you can set triggers on:
  • Continuous evaluation scores — fire when an evaluator’s average score drops below a threshold, wired directly into your continuous evaluation pipeline
  • Error rates — detect when error volume or percentage exceeds normal levels
  • Latency — catch when response times exceed acceptable bounds for specific prompts or models
  • Cost — monitor per-request or aggregate cost against budget thresholds
  • Token usage — track when token consumption patterns shift unexpectedly
  • Custom filters — combine metadata, scores, tags, and operational metrics into any condition
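To make the structured triggers above concrete, here is a minimal sketch of how threshold conditions over a window of log records might be combined. The field names (`latency_ms`, `eval_score`, `is_error`) and thresholds are illustrative assumptions, not Adaline's actual schema or API:

```python
# Hypothetical sketch: evaluating structured alert conditions against a
# window of production log records. Field names and defaults are
# illustrative only — consult the Monitor section for real filter fields.

def should_fire(records, *, max_avg_latency_ms=2000,
                min_avg_eval_score=0.7, max_error_rate=0.05):
    """Return True if any configured threshold is breached for the window."""
    if not records:
        return False
    n = len(records)
    avg_latency = sum(r["latency_ms"] for r in records) / n
    avg_score = sum(r["eval_score"] for r in records) / n
    error_rate = sum(1 for r in records if r["is_error"]) / n
    # Fire if any single condition is violated; real alerts can also
    # combine conditions with metadata and tag filters.
    return (avg_latency > max_avg_latency_ms
            or avg_score < min_avg_eval_score
            or error_rate > max_error_rate)
```

In practice each condition maps to one of the trigger types listed above (latency, continuous evaluation scores, error rates), evaluated on the interval you configure.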

Delivery channels

When an alert fires, notifications are delivered to the channels your team already uses:
  • Slack — post to a channel for team-wide visibility
  • Webhooks — send structured JSON payloads to any HTTP endpoint for integration with PagerDuty, Opsgenie, Datadog, or your own internal tooling
  • Email — deliver summaries directly to team members or distribution lists
  • AWS SNS — publish to an SNS topic for integration with AWS-native workflows, Lambda functions, or SQS queues
Multiple delivery channels can be configured per alert, so a critical quality regression can simultaneously ping Slack, page the on-call engineer, and trigger an automated remediation workflow.
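As a sketch of what consuming a webhook delivery might look like, the handler below parses an alert payload and routes it by severity. The payload shape (`alert_name`, `severity`) is an assumption for illustration — check the webhook payload documentation for the real schema:

```python
import json

# Hypothetical webhook consumer. The JSON fields used here are assumed,
# not Adaline's documented payload format.

def handle_alert(raw_body: bytes) -> str:
    """Route an incoming alert payload to an action based on severity."""
    payload = json.loads(raw_body)
    severity = payload.get("severity", "info")
    name = payload.get("alert_name", "unknown")
    if severity == "critical":
        return f"page-oncall:{name}"      # e.g. forward to PagerDuty/Opsgenie
    if severity == "warning":
        return f"notify-slack:{name}"     # e.g. post to a team channel
    return "log-only"
```

The same payload could just as easily be fanned out via an SNS topic to Lambda functions or SQS queues, which is what the AWS SNS channel enables.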

Additional capabilities

  • Configurable intervals — choose how frequently logs are analyzed for alerts, from every minute to once every 24 hours
  • Test alerts — validate any alert against recent log data before going live, confirming both the filter logic and the delivery integration without waiting for a real event

Next steps

Set up continuous evaluations

Automatically assess prompt quality on live traffic.

Analyze log charts

Visualize trends that inform your alert thresholds.