PromptSnapshotConfig
Model provider and settings configuration for prompt deployments.
Overview
The PromptSnapshotConfig type defines the model provider, model name, and runtime settings for a deployed prompt snapshot. All fields are optional because a deployment snapshot may have an incomplete configuration.
PromptSnapshotConfig
interface PromptSnapshotConfig {
  providerName?: string;
  providerId?: string;
  model?: string;
  settings?: any;
}
Properties:
providerName - Provider name in lowercase (e.g., 'openai', 'anthropic', 'google')
providerId - Adaline internal provider UUID
model - Model name as defined in the provider's API (e.g., 'gpt-4o', 'claude-3-opus')
settings - Runtime configuration settings passed to the model provider as flexible key-value pairs
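Because every field is optional, code that consumes a snapshot config should narrow it before use. A minimal sketch, assuming a hypothetical `isResolved` type guard (not part of the Adaline SDK; the interface is re-declared so the snippet stands alone):

```typescript
// Re-declared here so the snippet is self-contained.
interface PromptSnapshotConfig {
  providerName?: string;
  providerId?: string;
  model?: string;
  settings?: any;
}

// Hypothetical guard: narrows the type so providerName and model
// can be used without non-null assertions.
type ResolvedConfig = PromptSnapshotConfig & { providerName: string; model: string };

function isResolved(config: PromptSnapshotConfig): config is ResolvedConfig {
  return typeof config.providerName === 'string' && typeof config.model === 'string';
}

const config: PromptSnapshotConfig = { providerName: 'openai', model: 'gpt-4o' };
if (isResolved(config)) {
  console.log(`Using ${config.providerName}/${config.model}`); // no `!` needed here
}
```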
Examples
Basic Configuration
import type { PromptSnapshotConfig } from '@adaline/api';
const config: PromptSnapshotConfig = {
  providerName: 'openai',
  providerId: 'provider_abc123',
  model: 'gpt-4o',
  settings: {
    temperature: 0.7,
    maxTokens: 1000,
    topP: 0.9
  }
};
Same Parameters Across Different LLM Providers
// Adaline transforms the parameters to the provider-specific parameters.
const config: PromptSnapshotConfig = {
  providerName: 'openai',
  model: 'gpt-4o',
  settings: {
    temperature: 0.7,
    maxTokens: 1000, // 'max_tokens' in OpenAI and Anthropic, 'maxOutputTokens' in Google, etc.
    topP: 0.9, // 'top_p' in OpenAI and Anthropic, 'topP' in Google, etc.
    stopSequences: ['\n\n', 'END'] // 'stop' in OpenAI and Anthropic, 'stopSequences' in Google, etc.
  }
};
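The translation described above can be pictured as a per-provider lookup table. An illustrative sketch only, with assumed parameter names; the real mapping is internal to Adaline's Gateway:

```typescript
// Illustrative only: a simplified version of the parameter translation
// the Gateway performs. Provider param names here match the comments above.
type Settings = Record<string, any>;

const PARAM_MAP: Record<string, Record<string, string>> = {
  openai:    { maxTokens: 'max_tokens', topP: 'top_p', stopSequences: 'stop' },
  anthropic: { maxTokens: 'max_tokens', topP: 'top_p', stopSequences: 'stop' },
  google:    { maxTokens: 'maxOutputTokens', topP: 'topP', stopSequences: 'stopSequences' },
};

function toProviderParams(providerName: string, settings: Settings): Settings {
  const map = PARAM_MAP[providerName] ?? {};
  const out: Settings = {};
  for (const [key, value] of Object.entries(settings)) {
    out[map[key] ?? key] = value; // unknown keys pass through unchanged
  }
  return out;
}

console.log(toProviderParams('google', { temperature: 0.7, maxTokens: 1000 }));
// → { temperature: 0.7, maxOutputTokens: 1000 }
```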
Using with Deployments
import { Adaline } from '@adaline/client';
import type { Deployment, PromptSnapshotConfig } from '@adaline/api';
const adaline = new Adaline();
const deployment: Deployment = await adaline.getLatestDeployment({
  promptId: 'prompt_123',
  deploymentEnvironmentId: 'environment_123'
});
// Access the prompt snapshot config
const config: PromptSnapshotConfig = deployment.prompt.config;
const temperature = config.settings?.temperature;
const maxTokens = config.settings?.maxTokens;
console.log(`Provider: ${config.providerName}`);
console.log(`Model: ${config.model}`);
console.log(`Temperature: ${temperature}`);
console.log(`Max Tokens: ${maxTokens}`);
// Use with Adaline Gateway (automatically transforms parameters per provider)
import { Gateway } from '@adaline/gateway';
import { OpenAI } from '@adaline/openai';
const gateway = new Gateway();
const openaiProvider = new OpenAI();
const model = openaiProvider.chatModel({
  modelName: config.model!,
  apiKey: process.env.OPENAI_API_KEY!
});
const gatewayResponse = await gateway.completeChat({
  model,
  config: config.settings,
  messages: deployment.prompt.messages,
  tools: deployment.prompt.tools
});
console.log(gatewayResponse.response.messages[0].content[0].value);
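Since `settings` itself is optional, a snapshot may arrive partially configured. A hedged sketch of merging snapshot settings over local fallbacks; the defaults and helper below are hypothetical application code, not SDK behavior:

```typescript
// Hypothetical defaults; pick values appropriate for your application.
const DEFAULT_SETTINGS = { temperature: 1.0, maxTokens: 512 };

// Merges snapshot settings over the fallbacks; snapshot values win.
function settingsWithDefaults(config: { settings?: any }) {
  return { ...DEFAULT_SETTINGS, ...(config.settings ?? {}) };
}

console.log(settingsWithDefaults({ settings: { temperature: 0.2 } }));
// → { temperature: 0.2, maxTokens: 512 }
console.log(settingsWithDefaults({}));
// → { temperature: 1, maxTokens: 512 }
```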