# Adaline

> Adaline is the single platform for product and engineering teams to iterate, evaluate, deploy, and monitor prompts.

## Docs

- [Access control](https://www.adaline.ai/docs/admin/access-control.md): Manage roles, permissions, and access policies across your Adaline workspace
- [Anthropic](https://www.adaline.ai/docs/admin/configure-ai-provider/anthropic.md): Configure Anthropic as an AI provider in your Adaline workspace
- [Azure](https://www.adaline.ai/docs/admin/configure-ai-provider/azure.md): Configure Azure as an AI provider in your Adaline workspace
- [Bedrock](https://www.adaline.ai/docs/admin/configure-ai-provider/bedrock.md): Configure Amazon Bedrock as an AI provider in your Adaline workspace
- [Custom](https://www.adaline.ai/docs/admin/configure-ai-provider/custom.md): Configure a custom AI provider in your Adaline workspace
- [Google](https://www.adaline.ai/docs/admin/configure-ai-provider/google.md): Configure Google as an AI provider in your Adaline workspace
- [Groq](https://www.adaline.ai/docs/admin/configure-ai-provider/groq.md): Configure Groq as an AI provider in your Adaline workspace
- [Open Router](https://www.adaline.ai/docs/admin/configure-ai-provider/open-router.md): Configure Open Router as an AI provider in your Adaline workspace
- [OpenAI](https://www.adaline.ai/docs/admin/configure-ai-provider/openai.md): Configure OpenAI as an AI provider in your Adaline workspace
- [Overview](https://www.adaline.ai/docs/admin/configure-ai-provider/overview.md): Configure and manage API credentials for AI model providers at the workspace level
- [Together AI](https://www.adaline.ai/docs/admin/configure-ai-provider/togetherai.md): Configure Together AI as an AI provider in your Adaline workspace
- [Vertex](https://www.adaline.ai/docs/admin/configure-ai-provider/vertex.md): Configure Google Vertex AI as an AI provider in your Adaline workspace
- [xAI](https://www.adaline.ai/docs/admin/configure-ai-provider/xai.md): Configure xAI as an AI provider in your Adaline workspace
- [Create API keys](https://www.adaline.ai/docs/admin/create-api-keys.md): Generate API keys to access Adaline's API and SDK from your applications
- [Data management](https://www.adaline.ai/docs/admin/data-management.md): Export, retain, and delete data across your Adaline workspace
- [Security](https://www.adaline.ai/docs/admin/security.md): Authentication, encryption, and audit logging in Adaline
- [View API usage](https://www.adaline.ai/docs/admin/view-api-usage.md): Monitor your workspace's API usage, costs, and rate limits
- [Compare your deployments](https://www.adaline.ai/docs/deploy/compare-your-deployments.md): Review diffs between deployment versions and sync environments with confidence
- [Configure environments](https://www.adaline.ai/docs/deploy/configure-environments.md): Create and manage isolated deployment environments for your AI applications
- [Configure webhooks](https://www.adaline.ai/docs/deploy/configure-webhooks.md): Receive real-time deployment events via signed webhooks to update your AI applications
- [Deploy your prompt](https://www.adaline.ai/docs/deploy/deploy-your-prompt.md): Ship your prompts from Adaline to your AI applications in real time
- [Integrate your CI/CD](https://www.adaline.ai/docs/deploy/integrate-your-ci-cd.md): Run evaluations in your pipeline and gate prompt deployments on quality, cost, and latency thresholds
- [Overview](https://www.adaline.ai/docs/deploy/overview.md): Ship your prompts to your AI applications in real time with versioned deployments
- [Rollback your prompt](https://www.adaline.ai/docs/deploy/rollback-your-prompt.md): Instantly revert to any previous prompt deployment when issues arise
- [Analyze evaluation reports](https://www.adaline.ai/docs/evaluate/analyze-evaluation-reports.md): Review, compare, and act on detailed evaluation results to improve your prompts
- [Cost](https://www.adaline.ai/docs/evaluate/cost.md): Track and evaluate token costs to keep your prompts within budget
- [Different modalities in dataset](https://www.adaline.ai/docs/evaluate/different-modalities-in-dataset.md): Use text, images, and PDFs in your dataset cells to evaluate multimodal prompts
- [Dynamic columns in dataset](https://www.adaline.ai/docs/evaluate/dynamic-columns-in-dataset.md): Configure columns that fetch live data from APIs or execute other prompts at runtime
- [Evaluate multi-turn chat](https://www.adaline.ai/docs/evaluate/evaluate-multi-turn-chat.md): Run evaluations on conversational prompts where context builds across multiple exchanges
- [Evaluate prompts](https://www.adaline.ai/docs/evaluate/evaluate-prompts.md): Run batch evaluations on single prompts and multi-step prompt chains
- [Import CSV into dataset](https://www.adaline.ai/docs/evaluate/import-csv-into-dataset.md): Bulk-import test cases from CSV files to quickly populate your evaluation datasets
- [JavaScript](https://www.adaline.ai/docs/evaluate/javascript.md): Write custom JavaScript code to programmatically validate prompt responses
- [Latency](https://www.adaline.ai/docs/evaluate/latency.md): Measure and evaluate response time to ensure your prompts meet performance requirements
- [LLM-as-a-Judge](https://www.adaline.ai/docs/evaluate/llm-as-a-judge.md): Use an LLM to evaluate your prompt responses against a custom rubric
- [Overview](https://www.adaline.ai/docs/evaluate/overview.md): Scientifically validate your prompts with datasets, evaluators, and detailed reports
- [Response length](https://www.adaline.ai/docs/evaluate/response-length.md): Validate that prompt responses meet length requirements with flexible measurement units
- [Setup dataset](https://www.adaline.ai/docs/evaluate/setup-dataset.md): Create and configure datasets with test cases for evaluating your prompts
- [Text matcher](https://www.adaline.ai/docs/evaluate/text-matcher.md): Validate prompt responses with precise text pattern matching and keyword detection
- [Integrate your AI agent](https://www.adaline.ai/docs/get-started/integrate-your-ai-agent.md): Connect your AI application to Adaline for full observability over traces, costs, and evaluations
- [Introduction](https://www.adaline.ai/docs/get-started/introduction.md)
- [Run your first evaluation](https://www.adaline.ai/docs/get-started/run-your-first-evaluation.md): Evaluate your prompt's quality and performance
- [Run your first prompt](https://www.adaline.ai/docs/get-started/run-your-first-prompt.md): Create and run your first prompt in the Adaline Playground
- [The Adaline Method](https://www.adaline.ai/docs/get-started/the-adaline-method.md): Understand The Adaline Method for building reliable AI agents
- [Advanced usage](https://www.adaline.ai/docs/instrument/advanced-usage.md): Multi-step workflows, session tracking, and production tracing patterns
- [Export Logs](https://www.adaline.ai/docs/instrument/export-logs.md): Programmatically export traces and spans from Adaline for analysis, pipelines, and custom tooling
- [Log attachments](https://www.adaline.ai/docs/instrument/log-attachments.md): Attach images, PDFs, and text to your traces and spans as variable values
- [Log user feedback](https://www.adaline.ai/docs/instrument/log-user-feedback.md): Attach user feedback signals to traces and spans for quality monitoring
- [Overview](https://www.adaline.ai/docs/instrument/overview.md): Connect your AI application to Adaline for complete observability
- [With Adaline API](https://www.adaline.ai/docs/instrument/with-adaline-api.md): Send traces and spans directly to Adaline using the REST API
- [With Adaline Proxy](https://www.adaline.ai/docs/instrument/with-adaline-proxy.md): Instrument your AI application with zero code changes by routing requests through the Adaline Proxy
- [With Adaline SDKs](https://www.adaline.ai/docs/instrument/with-adaline-sdks.md): Instrument your AI application with full tracing control using the TypeScript or Python SDK
- [Anthropic](https://www.adaline.ai/docs/integrations/ai-providers/anthropic.md): Integrate Anthropic Claude models through the Adaline Proxy for automatic telemetry and observability.
- [Azure OpenAI](https://www.adaline.ai/docs/integrations/ai-providers/azure.md): Integrate Azure OpenAI models through the Adaline Proxy for automatic telemetry and observability.
- [AWS Bedrock](https://www.adaline.ai/docs/integrations/ai-providers/bedrock.md): Integrate AWS Bedrock models through the Adaline Proxy for automatic telemetry and observability.
- [Google](https://www.adaline.ai/docs/integrations/ai-providers/google.md): Integrate Google Gemini models through the Adaline Proxy for automatic telemetry and observability.
- [Groq](https://www.adaline.ai/docs/integrations/ai-providers/groq.md): Integrate Groq LPU models through the Adaline Proxy for automatic telemetry and observability.
- [OpenAI](https://www.adaline.ai/docs/integrations/ai-providers/openai.md): Integrate OpenAI models through the Adaline Proxy for automatic telemetry and observability.
- [OpenRouter](https://www.adaline.ai/docs/integrations/ai-providers/openrouter.md): Integrate OpenRouter models through the Adaline Proxy for automatic telemetry and observability.
- [Together AI](https://www.adaline.ai/docs/integrations/ai-providers/togetherai.md): Integrate Together AI models through the Adaline Proxy for automatic telemetry and observability.
- [Google Vertex AI](https://www.adaline.ai/docs/integrations/ai-providers/vertex.md): Integrate Google Vertex AI models through the Adaline Proxy for automatic telemetry and observability.
- [xAI](https://www.adaline.ai/docs/integrations/ai-providers/xai.md): Integrate xAI Grok models through the Adaline Proxy for automatic telemetry and observability.
- [Multi-step workflows](https://www.adaline.ai/docs/integrations/examples/multi-step-workflows.md): Real-world examples from simple single-span workflows to complex multi-span applications
- [CrewAI](https://www.adaline.ai/docs/integrations/frameworks/crewai.md): Orchestrate AI agent teams with CrewAI and Adaline.
- [LangChain](https://www.adaline.ai/docs/integrations/frameworks/langchain.md): Build LLM applications with LangChain and Adaline.
- [LangGraph](https://www.adaline.ai/docs/integrations/frameworks/langgraph.md): Build stateful AI applications with LangGraph and Adaline.
- [Mastra](https://www.adaline.ai/docs/integrations/frameworks/mastra.md): Build AI agents with Mastra and Adaline.
- [OpenTelemetry](https://www.adaline.ai/docs/integrations/frameworks/opentelemetry.md): Export Adaline traces using the OpenTelemetry standard.
- [Introduction](https://www.adaline.ai/docs/integrations/introduction.md): Introduction to Adaline integrations with AI providers and frameworks.
- [Link datasets in Playground](https://www.adaline.ai/docs/iterate/link-datasets-in-playground.md): Connect datasets to test prompts with structured variable samples at scale
- [Multi-shot prompting](https://www.adaline.ai/docs/iterate/multi-shot-prompting.md): Teach the model how to respond by providing example input/output pairs in your prompt
- [Overview](https://www.adaline.ai/docs/iterate/overview.md): A creative space within Adaline where you build, test, and refine your prompts
- [Run prompts in Playground](https://www.adaline.ai/docs/iterate/run-prompts-in-playground.md): Test and iterate on your prompts in the interactive Playground environment
- [Tool calls in Playground](https://www.adaline.ai/docs/iterate/tool-calls-in-playground.md): Execute and test tool calls with manual responses or automatic execution in the Playground
- [Use APIs in prompts](https://www.adaline.ai/docs/iterate/use-apis-in-prompt.md): Fetch live data from external HTTP endpoints at runtime to inject real-time context into your prompts
- [Use images in prompts](https://www.adaline.ai/docs/iterate/use-images-in-prompt.md): Add image inputs to create multi-modal prompts for vision-capable models
- [Use MCP servers in prompts](https://www.adaline.ai/docs/iterate/use-mcp-server-in-prompt.md): Connect to Model Context Protocol servers to access external tools and data sources from your prompts
- [Use other prompts in prompts](https://www.adaline.ai/docs/iterate/use-other-prompts-in-prompt.md): Chain prompts together by using the output of one prompt as input to another
- [Use parameters in prompts](https://www.adaline.ai/docs/iterate/use-parameters-in-prompt.md): Select an LLM and fine-tune generation settings like temperature, tokens, and response format
- [Use PDFs in prompts](https://www.adaline.ai/docs/iterate/use-pdfs-in-prompt.md): Include PDF documents as context in your prompts for document analysis and extraction
- [Use roles in prompts](https://www.adaline.ai/docs/iterate/use-roles-in-prompt.md): Structure prompts using role-based messages with system, user, assistant, and tool roles
- [Use text in prompts](https://www.adaline.ai/docs/iterate/use-text-in-prompt.md): Compose text content with variables, comments, and structured formatting
- [Use tools in prompts](https://www.adaline.ai/docs/iterate/use-tools-in-prompt.md): Enable function calling and tool use to extend your LLM's capabilities with external services
- [Use variables in prompts](https://www.adaline.ai/docs/iterate/use-variables-in-prompt.md): Create dynamic, reusable prompt templates with text, image, and PDF variables
- [View past prompt runs](https://www.adaline.ai/docs/iterate/view-past-prompt-runs.md): Access, compare, and restore the complete history of your prompt executions
- [Alerts](https://www.adaline.ai/docs/monitor/alerts.md): Get notified when quality degrades, costs spike, or deployments change in production
- [Analyze log charts](https://www.adaline.ai/docs/monitor/analyze-log-charts.md): Monitor trends and patterns with aggregated analytics charts
- [Analyze log spans](https://www.adaline.ai/docs/monitor/analyze-log-spans.md): Inspect individual operations within a trace: LLM calls, tool executions, retrievals, and more
- [Analyze log traces](https://www.adaline.ai/docs/monitor/analyze-log-traces.md)
- [Annotate logs](https://www.adaline.ai/docs/monitor/annotate-logs.md): Build a human review queue in your dataset so every production log gets annotated
- [Build datasets from logs](https://www.adaline.ai/docs/monitor/build-logs-from-dataset.md): Turn production logs into evaluation datasets to improve your prompts with real-world data
- [Filter and search logs](https://www.adaline.ai/docs/monitor/filter-and-search-logs.md): Find the exact traces and spans you need using filters, metadata, and search
- [Overview](https://www.adaline.ai/docs/monitor/overview.md): Complete visibility into your AI agents with traces, spans, charts, and automated quality checks
- [Setup continuous evaluations](https://www.adaline.ai/docs/monitor/setup-continuous-evaluations.md): Run automated quality checks on live production data
- [Use logs to improve prompts](https://www.adaline.ai/docs/monitor/use-logs-to-improve-prompts.md): Reproduce production issues in the Playground, iterate on fixes, and close the loop with datasets and evaluations
- [Glossary](https://www.adaline.ai/docs/others/glossary.md): Definitions of common terms and concepts in Adaline
- [Competitive intelligence analysis](https://www.adaline.ai/docs/others/prompt-library/competitive-intelligence-analysis.md): Analyze market intelligence data and generate strategic insights.
- [Customer review analysis template](https://www.adaline.ai/docs/others/prompt-library/customer-review-analysis-template.md): Transform customer feedback into actionable product insights.
- [Drafting product specifications](https://www.adaline.ai/docs/others/prompt-library/drafting-product-specification.md): Generate comprehensive, team-ready product specs from key inputs in minutes.
- [Generating user research questions](https://www.adaline.ai/docs/others/prompt-library/generate-user-research-question.md): Craft insightful user research questions with an LLM.
- [Overview](https://www.adaline.ai/docs/others/prompt-library/overview.md): Find a collection of prompt templates that suits your needs.
- [Product strategy consultant](https://www.adaline.ai/docs/others/prompt-library/product-strategy-consultant.md): Accelerate strategic planning with AI-powered roadmaps.
- [Refining internal communications](https://www.adaline.ai/docs/others/prompt-library/refining-internal-communications.md): Transform technical messages into clear, engaging communication.
- [Add Dataset Columns](https://www.adaline.ai/docs/reference/api/v2/openapi/add-dataset-columns.md): Add one or more columns to an existing dataset.
- [Add Dataset Rows](https://www.adaline.ai/docs/reference/api/v2/openapi/add-dataset-rows.md): Add up to 100 rows in a single request.
- [Cancel Evaluation](https://www.adaline.ai/docs/reference/api/v2/openapi/cancel-evaluation.md): Cancel a running or queued evaluation.
- [Create Dataset](https://www.adaline.ai/docs/reference/api/v2/openapi/create-dataset.md): Create a new dataset in a project.
- [Create Evaluation](https://www.adaline.ai/docs/reference/api/v2/openapi/create-evaluation.md): Creates and queues an evaluation run. If `evaluatorId` is provided, runs that single evaluator. Otherwise, runs all active evaluators configured for the prompt. Returns 202 since execution is asynchronous.
- [Create Evaluator](https://www.adaline.ai/docs/reference/api/v2/openapi/create-evaluator.md): Create a new evaluator for a prompt.
- [Create Log Span](https://www.adaline.ai/docs/reference/api/v2/openapi/create-log-span.md): Log individual spans within traces.
- [Create Log Trace](https://www.adaline.ai/docs/reference/api/v2/openapi/create-log-trace.md): Create detailed execution traces for monitoring.
- [Create Prompt](https://www.adaline.ai/docs/reference/api/v2/openapi/create-prompt.md): Create a new prompt in a project.
- [Delete Dataset](https://www.adaline.ai/docs/reference/api/v2/openapi/delete-dataset.md): Delete a dataset and all its columns and rows.
- [Delete Dataset Column](https://www.adaline.ai/docs/reference/api/v2/openapi/delete-dataset-column.md): Delete a column and all its cell values.
- [Delete Dataset Row](https://www.adaline.ai/docs/reference/api/v2/openapi/delete-dataset-row.md): Delete a single row from a dataset.
- [Delete Evaluator](https://www.adaline.ai/docs/reference/api/v2/openapi/delete-evaluator.md): Delete an evaluator.
- [Delete Prompt](https://www.adaline.ai/docs/reference/api/v2/openapi/delete-prompt.md): Delete a prompt and all associated resources.
- [Search Spans](https://www.adaline.ai/docs/reference/api/v2/openapi/export-log-spans.md): Returns a cursor-paginated page of filtered span rows. When `promptId` is provided in the request body, results are scoped to spans that were ingested with that `promptId` — regardless of content type. In practice this is most often `Model` / `ModelStream` spans, but any span type the caller attache…
- [Search Traces](https://www.adaline.ai/docs/reference/api/v2/openapi/export-log-traces.md): Returns a cursor-paginated page of filtered trace rows for a project. Pass `nextCursor` from the response as `cursor` to fetch the next page.
- [Fetch Dynamic Columns](https://www.adaline.ai/docs/reference/api/v2/openapi/fetch-dynamic-columns.md): Trigger a fetch for dynamic (prompt/API) columns on selected rows.
- [Get Dataset](https://www.adaline.ai/docs/reference/api/v2/openapi/get-dataset.md): Retrieve a dataset with its columns (no rows).
- [Get Dataset Rows](https://www.adaline.ai/docs/reference/api/v2/openapi/get-dataset-rows.md): Retrieve paginated rows. Supports column projection and sorting.
- [Get Deployment](https://www.adaline.ai/docs/reference/api/v2/openapi/get-deployment.md): Retrieve a specific deployed prompt or the latest deployment in a specific deployment environment.
- [Get Evaluation](https://www.adaline.ai/docs/reference/api/v2/openapi/get-evaluation.md): Get evaluation run details.
- [Get Evaluation Results](https://www.adaline.ai/docs/reference/api/v2/openapi/get-evaluation-results.md): Get paginated results for an evaluation run.
- [Get Evaluator](https://www.adaline.ai/docs/reference/api/v2/openapi/get-evaluator.md): Get evaluator details.
- [Get Playground](https://www.adaline.ai/docs/reference/api/v2/openapi/get-playground.md)
- [Get Project](https://www.adaline.ai/docs/reference/api/v2/openapi/get-project.md): Retrieve a single project by its ID.
- [Get Prompt](https://www.adaline.ai/docs/reference/api/v2/openapi/get-prompt.md): Retrieve full prompt detail. Use `expand=playground` to include the default playground.
- [Get Prompt Draft](https://www.adaline.ai/docs/reference/api/v2/openapi/get-prompt-draft.md): Get the current draft of a prompt.
- [Get Provider](https://www.adaline.ai/docs/reference/api/v2/openapi/get-provider.md): Get provider details, optionally including available models.
- [List Datasets](https://www.adaline.ai/docs/reference/api/v2/openapi/list-datasets.md): List all datasets in a project. Results are paginated.
- [List Evaluations](https://www.adaline.ai/docs/reference/api/v2/openapi/list-evaluations.md): List all evaluation runs for a prompt. Paginated.
- [List Evaluators](https://www.adaline.ai/docs/reference/api/v2/openapi/list-evaluators.md): List all evaluators for a prompt. Paginated.
- [List Log Traces](https://www.adaline.ai/docs/reference/api/v2/openapi/list-logs.md): List log traces for a project with optional filters. Results are paginated.
- [List Models](https://www.adaline.ai/docs/reference/api/v2/openapi/list-models.md): List all available models, optionally filtered by provider.
- [List Playgrounds](https://www.adaline.ai/docs/reference/api/v2/openapi/list-playgrounds.md)
- [List Projects](https://www.adaline.ai/docs/reference/api/v2/openapi/list-projects.md): List all projects accessible by the API key.
- [List Prompts](https://www.adaline.ai/docs/reference/api/v2/openapi/list-prompts.md): List all prompts, optionally filtered by project. Paginated.
- [List Providers](https://www.adaline.ai/docs/reference/api/v2/openapi/list-providers.md): List all configured providers.
- [Update Dataset](https://www.adaline.ai/docs/reference/api/v2/openapi/update-dataset.md): Update a dataset's title, description, or icon.
- [Update Dataset Column](https://www.adaline.ai/docs/reference/api/v2/openapi/update-dataset-column.md): Update a column's name or settings.
- [Update Dataset Row](https://www.adaline.ai/docs/reference/api/v2/openapi/update-dataset-row.md): Patch cell values for a single row.
- [Update Evaluator](https://www.adaline.ai/docs/reference/api/v2/openapi/update-evaluator.md): Update evaluator configuration.
- [Update Log Trace](https://www.adaline.ai/docs/reference/api/v2/openapi/update-log-trace.md): Update existing trace information.
- [Update Project](https://www.adaline.ai/docs/reference/api/v2/openapi/update-project.md): Update a project's title and/or icon. At least one field must be provided.
- [Update Prompt](https://www.adaline.ai/docs/reference/api/v2/openapi/update-prompt.md): Partially update a prompt's title, icon, config, messages, tools, or playgrounds.
- [Auth](https://www.adaline.ai/docs/reference/auth.md)
- [Gateway](https://www.adaline.ai/docs/reference/gateway/v2/classes/gateway.md)
- [Complete chat](https://www.adaline.ai/docs/reference/gateway/v2/examples/complete-chat.md)
- [Embeddings](https://www.adaline.ai/docs/reference/gateway/v2/examples/embeddings.md)
- [Stream chat](https://www.adaline.ai/docs/reference/gateway/v2/examples/stream-chat.md)
- [Tool calls](https://www.adaline.ai/docs/reference/gateway/v2/examples/tool-calls.md)
- [Overview](https://www.adaline.ai/docs/reference/gateway/v2/overview.md)
- [Config](https://www.adaline.ai/docs/reference/gateway/v2/types/config.md)
- [Messages](https://www.adaline.ai/docs/reference/gateway/v2/types/messages.md)
- [Responses](https://www.adaline.ai/docs/reference/gateway/v2/types/responses.md)
- [Tools](https://www.adaline.ai/docs/reference/gateway/v2/types/tools.md)
- [Reference](https://www.adaline.ai/docs/reference/introduction.md): API endpoints, SDKs, and Gateway reference for integrating Adaline into your applications.
- [Rate Limits](https://www.adaline.ai/docs/reference/limits.md): Request limits and quotas for the Adaline API.
- [Headers](https://www.adaline.ai/docs/reference/proxy/headers.md): Complete reference for all Proxy headers
- [Adaline](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/adaline.md)
- [Dataset columns](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/dataset-columns.md)
- [Dataset rows](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/dataset-rows.md)
- [Datasets](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/datasets.md)
- [Evaluation results](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/evaluation-results.md)
- [Log spans](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/log-spans.md)
- [Log traces](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/log-traces.md)
- [Logs](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/logs.md)
- [Models](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/models.md)
- [Monitor](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/monitor.md)
- [Projects](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/projects.md)
- [Prompt draft](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/prompt-draft.md)
- [Prompt evaluations](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/prompt-evaluations.md)
- [Prompt evaluators](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/prompt-evaluators.md)
- [Prompt playgrounds](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/prompt-playgrounds.md)
- [Prompts](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/prompts.md)
- [Providers](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/providers.md)
- [Span](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/span.md)
- [Trace](https://www.adaline.ai/docs/reference/sdk/v2/python/classes/trace.md)
- [Overview](https://www.adaline.ai/docs/reference/sdk/v2/python/overview.md)
- [BackgroundStatus](https://www.adaline.ai/docs/reference/sdk/v2/python/types/BackgroundStatus.md)
- [BufferedEntry](https://www.adaline.ai/docs/reference/sdk/v2/python/types/BufferedEntry.md)
- [ErrorContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ErrorContent.md)
- [EvaluationResultsQuery](https://www.adaline.ai/docs/reference/sdk/v2/python/types/EvaluationResultsQuery.md)
- [FunctionSchema](https://www.adaline.ai/docs/reference/sdk/v2/python/types/FunctionSchema.md)
- [ImageContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ImageContent.md)
- [LogAttributesValue](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogAttributesValue.md)
- [LogSpanContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanContent.md)
- [LogSpanEmbeddingsContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanEmbeddingsContent.md)
- [LogSpanFunctionContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanFunctionContent.md)
- [LogSpanGuardrailContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanGuardrailContent.md)
- [LogSpanModelContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanModelContent.md)
- [LogSpanModelStreamContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanModelStreamContent.md)
- [LogSpanOtherContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanOtherContent.md)
- [LogSpanRetrievalContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanRetrievalContent.md)
- [LogSpanToolContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanToolContent.md)
- [LogSpanVariable](https://www.adaline.ai/docs/reference/sdk/v2/python/types/LogSpanVariable.md)
- [Logger](https://www.adaline.ai/docs/reference/sdk/v2/python/types/Logger.md)
- [MessageContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/MessageContent.md)
- [MessageRole](https://www.adaline.ai/docs/reference/sdk/v2/python/types/MessageRole.md)
- [PdfContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/PdfContent.md)
- [PromptMessage](https://www.adaline.ai/docs/reference/sdk/v2/python/types/PromptMessage.md)
- [PromptSnapshot](https://www.adaline.ai/docs/reference/sdk/v2/python/types/PromptSnapshot.md)
- [PromptSnapshotConfig](https://www.adaline.ai/docs/reference/sdk/v2/python/types/PromptSnapshotConfig.md)
- [PromptVariable](https://www.adaline.ai/docs/reference/sdk/v2/python/types/PromptVariable.md)
- [ReasoningContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ReasoningContent.md)
- [SearchResultContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/SearchResultContent.md)
- [SpanStatus](https://www.adaline.ai/docs/reference/sdk/v2/python/types/SpanStatus.md)
- [TextContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/TextContent.md)
- [ToolCallContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ToolCallContent.md)
- [ToolFunction](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ToolFunction.md)
- [ToolFunctionDefinition](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ToolFunctionDefinition.md)
- [ToolResponseContent](https://www.adaline.ai/docs/reference/sdk/v2/python/types/ToolResponseContent.md)
- [TraceStatus](https://www.adaline.ai/docs/reference/sdk/v2/python/types/TraceStatus.md)
- [VariableModality](https://www.adaline.ai/docs/reference/sdk/v2/python/types/VariableModality.md)
- [VariableValue](https://www.adaline.ai/docs/reference/sdk/v2/python/types/VariableValue.md)
- [Deployment](https://www.adaline.ai/docs/reference/sdk/v2/python/types/deployment.md)
- [Adaline](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/adaline.md)
- [Dataset columns](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/dataset-columns.md)
- [Dataset rows](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/dataset-rows.md)
- [Datasets](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/datasets.md)
- [Evaluation results](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/evaluation-results.md)
- [Log spans](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/log-spans.md)
- [Log traces](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/log-traces.md)
- [Logs](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/logs.md)
- [Models](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/models.md)
- [Monitor](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/monitor.md)
- [Projects](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/projects.md)
- [Prompt draft](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/prompt-draft.md)
- [Prompt evaluations](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/prompt-evaluations.md)
- [Prompt evaluators](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/prompt-evaluators.md)
- [Prompt playgrounds](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/prompt-playgrounds.md)
- [Prompts](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/prompts.md)
- [Providers](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/providers.md)
- [Span](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/span.md)
- [Trace](https://www.adaline.ai/docs/reference/sdk/v2/typescript/classes/trace.md)
- [Overview](https://www.adaline.ai/docs/reference/sdk/v2/typescript/overview.md)
- [BackgroundStatus](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/BackgroundStatus.md)
- [BufferedEntry](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/BufferedEntry.md)
- [ErrorContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ErrorContent.md): Error content type for LLM safety and content filtering errors.
- [EvaluationResultsQuery](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/EvaluationResultsQuery.md)
- [FunctionSchema](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/FunctionSchema.md)
- [ImageContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ImageContent.md): Image content type for prompt messages with detail level specification.
- [LogAttributesValue](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogAttributesValue.md): Allowed value type for trace and span attributes.
- [LogSpanContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanContent.md): Types for log span content, traces, spans, and observability tracking.
- [LogSpanEmbeddingsContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanEmbeddingsContent.md): Span content type for embedding generation operations.
- [LogSpanFunctionContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanFunctionContent.md): Span content type for custom application logic and function executions.
- [LogSpanGuardrailContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanGuardrailContent.md): Span content type for safety and compliance checks.
- [LogSpanModelContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanModelContent.md): Span content type for LLM inference operations with provider, model, input/output, and cost tracking.
- [LogSpanModelStreamContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanModelStreamContent.md): Span content type for streaming LLM inference operations with raw chunks and aggregated output.
- [LogSpanOtherContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanOtherContent.md): Span content type for any custom operation that doesn't fit the predefined categories.
- [LogSpanRetrievalContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanRetrievalContent.md): Span content type for RAG retrieval and vector search operations. - [LogSpanToolContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanToolContent.md): Span content type for tool and function-call executions triggered by an LLM. - [LogSpanVariable](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/LogSpanVariable.md): Variable types for evaluation tracking on Model and ModelStream spans. - [Logger](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/Logger.md) - [MessageContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/MessageContent.md) - [MessageRole](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/MessageRole.md): Enum for chat message roles in LLM conversations. - [PdfContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/PdfContent.md): PDF document content type for prompt messages with file metadata. - [PromptMessage](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/PromptMessage.md) - [PromptSnapshot](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/PromptSnapshot.md): Prompt configuration object within a Deployment, containing config, messages, tools, and variables. - [PromptSnapshotConfig](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/PromptSnapshotConfig.md) - [PromptVariable](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/PromptVariable.md) - [ReasoningContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ReasoningContent.md): Reasoning content type for LLM chain-of-thought and extended thinking responses. - [ResponseSchema](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ResponseSchema.md): JSON schema types for constraining and validating LLM structured outputs. 
- [RetryOptions](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/RetryOptions.md) - [SearchResultContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/SearchResultContent.md): Search result content type for grounding LLM responses with web search data. - [SpanStatus](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/SpanStatus.md) - [TextContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/TextContent.md): Plain text content type for prompt messages. - [ToolCallContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ToolCallContent.md): Tool/function call request from LLM with arguments and metadata. - [ToolFunction](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ToolFunction.md) - [ToolFunctionDefinition](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ToolFunctionDefinition.md): Wrapper for a function schema within a tool definition. - [ToolResponseContent](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/ToolResponseContent.md): Tool/function execution response returned to LLM with result data and optional API metadata. - [TraceStatus](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/TraceStatus.md): Allowed status values for a trace. - [VariableModality](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/VariableModality.md) - [VariableValue](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/VariableValue.md): Discriminated union of variable content types, keyed by modality. - [Deployment](https://www.adaline.ai/docs/reference/sdk/v2/typescript/types/deployment.md): Types for managing prompt deployments and configurations. ## OpenAPI Specs - [adaline.openapi](https://www.adaline.ai/docs/reference/api/v2/openapi/adaline.openapi.json)