A local SDK that provides a unified interface for calling 200+ LLMs with built-in batching, retries, caching, callbacks, and OpenTelemetry support. Also referred to as the “Super SDK” for its cross-provider compatibility.
The Adaline log proxy that intercepts LLM API calls and sends them to Monitor without modifying application code.
An automated evaluation process that runs on live data in the Monitor pillar. Samples incoming requests and scores them using configured evaluators to track quality over time.
A collection of test cases stored in Adaline used for evaluations. Each dataset consists of rows (test cases) and columns (variables) that map to prompt variables.
A versioned snapshot of a prompt configuration that has been published to a specific environment. Deployments are accessible via API or SDK and can be rolled back to previous versions.
A dataset column that fetches its value from an external API or another prompt at runtime, rather than storing static values. Enables live data in evaluations.
The process of running a prompt against a dataset and scoring the responses using one or more evaluators. Evaluations produce reports with pass/fail rates and detailed metrics.
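The evaluation loop described here can be sketched in a few lines — a hypothetical illustration, not Adaline's actual implementation. Each dataset row supplies variable values, the prompt runs once per row, and an evaluator scores each output; the scores aggregate into a pass rate:

```python
def run_evaluation(rows, run_prompt, evaluator):
    """Run a prompt over dataset rows and score outputs; returns pass rate."""
    scores = [evaluator(run_prompt(**row)) for row in rows]
    return sum(scores) / len(scores)

# Toy prompt runner and evaluator (stand-ins for a real LLM call and metric):
run_prompt = lambda **row: f"Capital of {row['country']}"
evaluator = lambda output: output.startswith("Capital")

rate = run_evaluation(
    [{"country": "France"}, {"country": "Japan"}],
    run_prompt, evaluator,
)
```

With both toy rows passing, `rate` is `1.0`; a real run would report this alongside per-row metrics.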
A configured metric or judge used to assess prompt performance during evaluations. Types include LLM-as-a-Judge, JavaScript, Cost, Latency, Text Matcher, and more.
An evaluation pattern where an LLM scores the output of another LLM based on configurable criteria. Enables subjective quality assessment at scale.
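A minimal sketch of the pattern: a rubric prompt is built around the candidate output, and the judge model's reply is parsed into a pass/fail verdict. The template wording and the PASS/FAIL protocol are illustrative assumptions; the judge call itself is omitted:

```python
JUDGE_TEMPLATE = """You are a strict grader.
Criteria: {criteria}
Candidate answer: {answer}
Reply with only PASS or FAIL."""

def build_judge_prompt(criteria: str, answer: str) -> str:
    # Embed the grading criteria and the candidate output in the rubric.
    return JUDGE_TEMPLATE.format(criteria=criteria, answer=answer)

def parse_verdict(judge_reply: str) -> bool:
    # Treat anything other than an explicit PASS as a failure.
    return judge_reply.strip().upper().startswith("PASS")
```

Constraining the judge to a fixed vocabulary (PASS/FAIL, or a numeric scale) keeps the reply machine-parseable at scale.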
The interactive testing environment in Adaline’s Iterate pillar. Allows running prompts with different inputs and comparing outputs side-by-side.
A container in Adaline that holds prompts, datasets, evaluations, and deployments. A project typically corresponds to a single AI agent, application, or workflow.
A configured instruction template in Adaline consisting of messages, model settings, variables, and optional tools. Prompts are versioned and can be deployed to environments.
The technique of connecting multiple prompts where the output of one becomes the input of another. Enables complex workflows with sequential LLM calls.
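A toy sketch of the chaining flow, with the LLM call stubbed out — the stub just returns the filled template, which makes the data flow visible:

```python
def run_prompt(template: str, **variables) -> str:
    # Stand-in for an LLM call; real code would send the filled template
    # to a provider and return the model's reply.
    return template.format(**variables)

# Step 1 produces a summary; step 2 consumes it as an input variable.
summary = run_prompt("Summarize: {text}", text="a long article")
quiz = run_prompt("Write quiz questions about: {summary}", summary=summary)
```

The key property is that `quiz` depends on `summary`, so the calls must run sequentially.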
A saved snapshot of a prompt’s configuration at a specific point in time. Versions are auto-incremented and can be deployed or compared in evaluations.
An LLM service provider (e.g., OpenAI, Anthropic, Google, Azure). Each provider offers different models with varying capabilities and pricing.
A model configuration that constrains output structure (e.g., JSON mode, JSON schema). Ensures consistent, parseable responses from the LLM.
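An illustrative JSON-schema response-format configuration — the field names below are a common provider convention, not a specific API's guaranteed shape. A conforming reply parses cleanly without regex scraping:

```python
import json

# Assumed config shape for schema-constrained output (illustrative only):
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "support_ticket",
        "schema": {
            "type": "object",
            "properties": {
                "category": {"type": "string"},
                "urgent": {"type": "boolean"},
            },
            "required": ["category", "urgent"],
        },
    },
}

# A reply that conforms to the schema is directly parseable:
reply = '{"category": "billing", "urgent": false}'
parsed = json.loads(reply)
```

Requiring every field in `required` is what makes downstream code safe to index into the result without existence checks.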
A single operation within a trace representing one LLM call or logical step. Spans have start/end times, attributes, and can be nested.
A function that an LLM can call during generation. Tools have schemas defining their parameters and are used for retrieval, calculations, and external actions.
A structured request from the LLM to execute a specific tool with provided arguments. Tool calls must be processed by the application and results returned to the LLM.
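The dispatch step can be sketched as follows — the tool-call shape (`name` plus JSON-encoded `arguments`) mirrors common provider conventions but is an assumption here, and the weather tool is a stub:

```python
import json

def get_weather(city: str) -> str:
    # Stub implementation; a real tool would query a weather API.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# A tool call as the model might emit it (shape is illustrative):
tool_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}

# The application executes the tool and returns the result to the LLM
# as a tool message so generation can continue.
result = TOOLS[tool_call["name"]](**json.loads(tool_call["arguments"]))
```

Note that `arguments` arrives as a JSON string, so it must be decoded before the function is invoked.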
A complete record of a user request flow containing multiple spans. Traces enable end-to-end visibility of LLM operations in the Monitor pillar.
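The span/trace relationship can be sketched with a tiny data structure — a simplified stand-in for what tracing libraries such as OpenTelemetry provide, with invented field names:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)
    start: float = field(default_factory=time.time)
    end: Optional[float] = None

# A trace is the root span plus every span nested beneath it.
trace = Span("handle_user_request")
llm_span = Span("llm.generate", attributes={"model": "example-model"})
trace.children.append(llm_span)
llm_span.end = time.time()
trace.end = time.time()
```

Nesting is what gives end-to-end visibility: the root span bounds the whole request, while child spans isolate each LLM call or logical step.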
A placeholder in prompt templates (e.g., {{user_input}}) that is replaced with actual values at runtime. Variables are mapped to dataset columns in evaluations.
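The substitution step can be shown with a minimal renderer for the `{{name}}` syntax — a sketch of the general technique, not Adaline's templating engine:

```python
import re

def render(template: str, values: dict) -> str:
    # Replace each {{name}} placeholder with its mapped value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(values[m.group(1)]), template)

rendered = render(
    "Translate {{user_input}} into {{language}}.",
    {"user_input": "hello", "language": "French"},
)
```

In an evaluation, the `values` dict for each run would come from one dataset row, with column names matching the variable names.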
An HTTP callback triggered by Adaline events such as deployments or evaluation completions. Enables integration with external systems and CI/CD pipelines.
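A receiving end might look like the handler below. The event type and payload fields are hypothetical — consult the actual webhook payload documentation for real field names:

```python
import json

def handle_webhook(body: str) -> str:
    # Hypothetical event shape; "type" and "version" are assumed field names.
    event = json.loads(body)
    if event.get("type") == "deployment.published":
        # e.g., kick off a CI/CD job pinned to the new prompt version.
        return "trigger CI pipeline for version " + str(event["version"])
    return "ignored"

action = handle_webhook('{"type": "deployment.published", "version": 7}')
```

Returning quickly and doing real work asynchronously is the usual pattern, since webhook senders typically expect a fast 2xx response.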
The top-level organizational unit in Adaline containing teamspaces, projects, members, and API keys. Workspaces define billing boundaries and access control.