Gateway Class
The main class for interacting with LLMs through the Adaline Gateway.
Constructor
import { Gateway } from "@adaline/gateway";
const gateway = new Gateway({
cache: myCache, // optional: pluggable cache backend
httpClient: myClient, // optional: custom HTTP client
logger: myLogger, // optional: custom logger
});
Parameters
Optional configuration for plugins:
cache: a pluggable cache backend implementing the Cache interface.
httpClient: a custom HTTP client implementing the HttpClient interface.
logger: a custom logger implementing the Logger interface.
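The Cache interface itself is defined by the package; as a rough in-memory sketch (the async get/set/delete method names here are illustrative assumptions, not the package's actual contract):

```typescript
// A minimal in-memory cache backend sketch. The real Cache interface in
// @adaline/gateway may differ; this assumes a simple async key-value shape.
class InMemoryCache {
  private store = new Map<string, unknown>();

  // Return the cached value, or undefined on a miss.
  async get(key: string): Promise<unknown | undefined> {
    return this.store.get(key);
  }

  // Store a value under the given key.
  async set(key: string, value: unknown): Promise<void> {
    this.store.set(key, value);
  }

  // Remove a key from the cache.
  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```

A backend like this could then be passed as the cache option in the constructor above.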
Methods
completeChat
Non-streaming chat completion. Sends messages to an LLM and returns the full response.
const response = await gateway.completeChat({
model: gpt4o,
config: Config().parse({ temperature: 0.7, maxTokens: 500 }),
messages: [
{ role: "system", content: [{ modality: "text", value: "You are a helpful assistant." }] },
{ role: "user", content: [{ modality: "text", value: "Explain quantum computing." }] },
],
tools: [],
});
request
GatewayCompleteChatRequestType
required
The request object with the following fields:
model: the model instance from a provider (e.g., openai.chatModel({ modelName: "gpt-4o" })).
config: model configuration parsed via Config().parse({...}). See Config Type.
messages: the conversation messages to send.
tools: an array of tool definitions (can be empty). See Tool Types.
Returns: CompleteChatHandlerResponseType
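Because completeChat performs a network call, callers often wrap it in retry logic with exponential backoff. A generic sketch, independent of the Gateway API (the wrapper and its parameters are illustrative, not part of the package):

```typescript
// Retry an async operation with exponential backoff.
// Usable around gateway.completeChat or any other async call.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off: delayMs, 2 * delayMs, 4 * delayMs, ...
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, delayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage sketch:
// const response = await withRetry(() => gateway.completeChat(request));
```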
streamChat
Streaming chat completion. Returns an async generator that yields response chunks.
for await (const chunk of gateway.streamChat({
model: gpt4o,
config: Config().parse({ temperature: 0.7, maxTokens: 500 }),
messages: [
{ role: "user", content: [{ modality: "text", value: "Write a poem." }] },
],
tools: [],
})) {
process.stdout.write(chunk.response);
}
request
GatewayStreamChatRequestType
required
Same shape as the completeChat request (see above).
Returns: AsyncGenerator<StreamChatHandlerResponseType>
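A common pattern is to collect the streamed chunks into a single string instead of writing them to stdout. A small helper, assuming each chunk exposes a response string field as in the example above (the precise StreamChatHandlerResponseType shape may carry more than this):

```typescript
// Accumulate a streamed chat completion into one string.
// The { response: string } chunk shape mirrors the example above.
async function collectStream(
  stream: AsyncGenerator<{ response: string }>
): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.response;
  }
  return text;
}

// Usage sketch:
// const fullText = await collectStream(gateway.streamChat(request));
```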
getEmbeddings
Generate embeddings for text or other modalities.
const embeddings = await gateway.getEmbeddings({
model: textEmbedding3Large,
config: Config().parse({ encodingFormat: "float", dimensions: 256 }),
embeddingRequests: {
modality: "text",
requests: ["Hello world", "How are you?"],
},
});
request
GatewayGetEmbeddingsRequestType
required
The request object with the following fields:
model: the embedding model instance from a provider.
config: embedding model configuration.
embeddingRequests: the input data to embed (EmbeddingRequestType, required).
Returns: GetEmbeddingsHandlerResponseType
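A typical next step with the returned vectors is measuring similarity. This helper is plain TypeScript, independent of the Gateway; it assumes you have extracted two numeric vectors from the response:

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```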
getToolResponses
Execute tool calls returned by an LLM.
const toolResponses = await gateway.getToolResponses({
tools: myToolDefinitions,
toolCalls: response.toolCalls,
});
request
GatewayGetToolResponsesRequestType
required
The request object with the following fields:
tools: the tool definitions that match the tool calls.
toolCalls: tool calls returned from a chat completion response.
Returns: GetToolResponsesHandlerResponseType
getChatUsageCost (static)
Calculate the cost of a chat completion based on token usage.
const cost = Gateway.getChatUsageCost({
model: gpt4o,
usage: response.usage,
});
Returns: GetChatUsageCostHandlerResponseType