Documentation Index
Fetch the complete documentation index at: https://www.adaline.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
PromptSnapshot
The complete prompt configuration captured at deployment time. Contains model config, messages, tools, and variable definitions.
Overview
PromptSnapshot is the prompt payload inside a Deployment. When you call get_deployment() or get_latest_deployment(), the returned Deployment.prompt field is a PromptSnapshot.
from adaline_api.models.prompt_snapshot import PromptSnapshot
Fields
config
PromptSnapshotConfig
required
The model configuration captured at deployment time: provider, model, and settings. See PromptSnapshotConfig.
messages
list[PromptMessage]
required
Array of prompt messages, each with a role and a list of content parts. See PromptMessage.
tools
list[ToolFunction]
required
Array of tool/function definitions available to the model. See ToolFunction.
variables
list[PromptVariable]
required
Array of variable definitions used in the prompt template. See PromptVariable.
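A minimal sketch of the snapshot's shape using plain dataclasses. These stand-ins mirror the four fields above for illustration only; the real classes live in adaline_api.models and carry more fields than shown here.

```python
from dataclasses import dataclass, field

# Illustrative stand-ins, not the SDK's actual classes.
@dataclass
class PromptSnapshotConfigSketch:
    provider_name: str
    model: str
    settings: dict

@dataclass
class PromptSnapshotSketch:
    config: PromptSnapshotConfigSketch      # required
    messages: list = field(default_factory=list)   # list[PromptMessage]
    tools: list = field(default_factory=list)      # list[ToolFunction]
    variables: list = field(default_factory=list)  # list[PromptVariable]

snapshot = PromptSnapshotSketch(
    config=PromptSnapshotConfigSketch("openai", "gpt-4o", {"temperature": 0.7}),
)
```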
Examples
Accessing from a Deployment
from adaline.main import Adaline
from adaline_api.models.prompt_snapshot import PromptSnapshot
adaline = Adaline()
deployment = await adaline.get_latest_deployment(
    prompt_id="prompt_abc123",
    deployment_environment_id="environment_abc123",
)
prompt: PromptSnapshot = deployment.prompt
print(f"Provider: {prompt.config.provider_name}")
print(f"Model: {prompt.config.model}")
print(f"Messages: {len(prompt.messages)}")
print(f"Tools: {len(prompt.tools)}")
print(f"Variables: {[v.name for v in prompt.variables]}")
from adaline_api.models.prompt_message import PromptMessage
from adaline_api.models.text_content import TextContent

prompt = deployment.prompt

for msg in prompt.messages:
    for c in msg.content:
        if isinstance(c.actual_instance, TextContent):
            print(f"[{msg.role}] {c.actual_instance.value}")

for tool in prompt.tools:
    fn = tool.definition.var_schema
    print(f"Tool: {fn.name} — {fn.description}")
Using with a Provider SDK
import re
from adaline.main import Adaline
from openai import OpenAI
adaline = Adaline()
openai_client = OpenAI()
deployment = await adaline.get_latest_deployment(
    prompt_id="prompt_abc123",
    deployment_environment_id="environment_abc123",
)
# Replace {{variable_name}} placeholders with your runtime values.
# See PromptVariable for the full pattern (text + image + PDF).
variables = {"user_name": "Alice"}
pattern = re.compile(r"\{\{(\w+)\}\}")
for message in deployment.prompt.messages:
    for c in message.content:
        if c.modality == "text":
            c.value = pattern.sub(
                lambda m: str(variables.get(m.group(1), m.group(0))),
                c.value,
            )
config = deployment.prompt.config
response = openai_client.chat.completions.create(
model=config.model,
messages=[...], # Build from deployment.prompt.messages
temperature=config.settings.get("temperature"),
max_tokens=config.settings.get("maxTokens"),
)
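The substitution loop above can be exercised without the SDK. This is a pure-stdlib sketch of the same `{{variable_name}}` replacement; unknown placeholders are left intact rather than dropped, and the template text and variable names are made up for illustration.

```python
import re

pattern = re.compile(r"\{\{(\w+)\}\}")

def render(template: str, variables: dict[str, str]) -> str:
    # Replace each {{name}} with its value; leave unmatched placeholders as-is.
    return pattern.sub(
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

text = render("Hello {{user_name}}, your plan is {{plan}}.", {"user_name": "Alice"})
# "plan" was not supplied, so its placeholder survives untouched:
# "Hello Alice, your plan is {{plan}}."
```

Leaving unmatched placeholders in place makes missing variables visible in the rendered output instead of silently deleting them.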
Serialization
from adaline_api.models.prompt_snapshot import PromptSnapshot
prompt = deployment.prompt

d = prompt.to_dict()    # plain Python dict
j = prompt.to_json()    # JSON string

restored_from_dict = PromptSnapshot.from_dict(d)
restored_from_json = PromptSnapshot.from_json(j)
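The helpers above follow the usual to_dict/to_json round-trip contract. A stdlib sketch of that guarantee, using a hand-built dict in place of a real snapshot:

```python
import json

# A hand-built dict standing in for PromptSnapshot.to_dict() output.
snapshot_dict = {
    "config": {"provider_name": "openai", "model": "gpt-4o"},
    "messages": [],
    "tools": [],
    "variables": [],
}

# The JSON form is just a serialization of the dict form:
serialized = json.dumps(snapshot_dict)
restored = json.loads(serialized)

assert restored == snapshot_dict  # lossless round-trip
```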
JSON Example
{
  "config": {
    "provider_name": "openai",
    "provider_id": "provider_abc123",
    "model": "gpt-4o",
    "settings": { "temperature": 0.7, "maxTokens": 1000 }
  },
  "messages": [
    {
      "role": "system",
      "content": [
        { "modality": "text", "value": "You are a helpful assistant." }
      ]
    }
  ],
  "tools": [],
  "variables": [
    { "name": "user_name", "modality": "text" }
  ]
}
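The JSON example above can be sanity-checked with the stdlib alone: parse it and confirm that the four required fields from the table are present.

```python
import json

raw = """
{
  "config": {
    "provider_name": "openai",
    "provider_id": "provider_abc123",
    "model": "gpt-4o",
    "settings": { "temperature": 0.7, "maxTokens": 1000 }
  },
  "messages": [
    {
      "role": "system",
      "content": [
        { "modality": "text", "value": "You are a helpful assistant." }
      ]
    }
  ],
  "tools": [],
  "variables": [
    { "name": "user_name", "modality": "text" }
  ]
}
"""

snapshot = json.loads(raw)

# All four fields are required on a PromptSnapshot.
for key in ("config", "messages", "tools", "variables"):
    assert key in snapshot

print(snapshot["config"]["model"])  # gpt-4o
```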