PromptSnapshotConfig

Configuration types for prompt deployments in the Python SDK.

PromptSnapshotConfig

The model and provider configuration returned as part of a deployment’s prompt snapshot.
from adaline_api.models.prompt_snapshot_config import PromptSnapshotConfig

Fields

provider_name (str, required)
The AI provider name (e.g., "openai", "anthropic", "google", "azure", "bedrock").

provider_id (str, required)
Adaline’s internal provider identifier.

model (str, required)
The model identifier (e.g., "gpt-4o", "claude-sonnet-4-20250514").

settings (dict, required)
Normalized model settings. Adaline uses provider-agnostic parameter names that map to each provider’s native parameters.
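
Putting the four fields together, a snapshot config carries values like the following (the values here are illustrative, not real identifiers):

```python
# Illustrative shape of a prompt snapshot config; field names match the
# reference above, values are made up for the example.
example_config = {
    "provider_name": "openai",         # str, required
    "provider_id": "provider_abc123",  # str, required
    "model": "gpt-4o",                 # str, required
    "settings": {                      # dict, required; normalized keys
        "temperature": 0.7,
        "maxTokens": 1000,
    },
}
```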

Settings

The settings dict uses Adaline’s normalized parameter names. When you configure a prompt in Adaline, these are automatically mapped to the correct provider-specific parameter names.
| Adaline Key   | OpenAI      | Anthropic      | Google          | Description                |
|---------------|-------------|----------------|-----------------|----------------------------|
| temperature   | temperature | temperature    | temperature     | Sampling temperature       |
| maxTokens     | max_tokens  | max_tokens     | maxOutputTokens | Maximum output tokens      |
| topP          | top_p       | top_p          | topP            | Nucleus sampling threshold |
| stopSequences | stop        | stop_sequences | stopSequences   | Stop sequences             |
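
If you need to translate a settings dict yourself (for example, to spread it into a provider SDK call), the table above can be expressed directly as a lookup. The helper below is an illustrative sketch, not part of the SDK:

```python
# Mapping from Adaline's normalized setting keys to each provider's
# native parameter names, taken from the table above.
SETTINGS_KEY_MAP = {
    "openai": {
        "temperature": "temperature",
        "maxTokens": "max_tokens",
        "topP": "top_p",
        "stopSequences": "stop",
    },
    "anthropic": {
        "temperature": "temperature",
        "maxTokens": "max_tokens",
        "topP": "top_p",
        "stopSequences": "stop_sequences",
    },
    "google": {
        "temperature": "temperature",
        "maxTokens": "maxOutputTokens",
        "topP": "topP",
        "stopSequences": "stopSequences",
    },
}

def to_provider_kwargs(provider_name: str, settings: dict) -> dict:
    """Rename normalized Adaline keys to provider-native parameter
    names, dropping any keys the table does not cover."""
    key_map = SETTINGS_KEY_MAP[provider_name]
    return {key_map[k]: v for k, v in settings.items() if k in key_map}

print(to_provider_kwargs("anthropic", {"temperature": 0.7, "maxTokens": 1000}))
# {'temperature': 0.7, 'max_tokens': 1000}
```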

Examples

Accessing Config from a Deployment

import asyncio

from adaline.main import Adaline

adaline = Adaline()

async def main():
    deployment = await adaline.get_latest_deployment(
        prompt_id="prompt_123",
        deployment_environment_id="environment_123"
    )

    config = deployment.prompt.config

    print(f"Provider: {config.provider_name}")   # e.g. "openai"
    print(f"Provider ID: {config.provider_id}")  # e.g. "provider_abc123"
    print(f"Model: {config.model}")              # e.g. "gpt-4o"
    print(f"Settings: {config.settings}")        # e.g. {"temperature": 0.7, "maxTokens": 1000}

    temperature = config.settings.get("temperature")
    max_tokens = config.settings.get("maxTokens")

asyncio.run(main())

Using Config with a Provider SDK

import asyncio

from adaline.main import Adaline
from openai import OpenAI

adaline = Adaline()
openai_client = OpenAI()

async def main():
    deployment = await adaline.get_latest_deployment(
        prompt_id="prompt_123",
        deployment_environment_id="environment_123"
    )

    config = deployment.prompt.config
    settings = config.settings

    # Messages for the request; shown here as a placeholder
    messages = [{"role": "user", "content": "Hello!"}]

    response = openai_client.chat.completions.create(
        model=config.model,
        messages=messages,
        temperature=settings.get("temperature"),
        max_tokens=settings.get("maxTokens"),
        top_p=settings.get("topP"),
    )
    print(response.choices[0].message.content)

asyncio.run(main())