Gateway Class

The main class for interacting with LLMs through the Adaline Gateway.

Constructor

import { Gateway } from "@adaline/gateway";

const gateway = new Gateway({
  cache: myCache,        // optional: pluggable cache backend
  httpClient: myClient,  // optional: custom HTTP client
  logger: myLogger,      // optional: custom logger
});

Parameters

options — GatewayOptionsType (optional)
Plugin configuration: the cache backend, HTTP client, and logger shown above.
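The plugin contracts are defined by GatewayOptionsType. As a rough sketch (the method names below are assumptions for illustration, not the library's published interface), an in-memory cache backend might look like:

```typescript
// Hypothetical sketch: the real cache contract comes from GatewayOptionsType,
// so the get/set/delete names here are assumptions, not the library's API.
class InMemoryCache {
  private store = new Map<string, unknown>();

  // Look up a previously cached value; undefined on a miss.
  async get(key: string): Promise<unknown | undefined> {
    return this.store.get(key);
  }

  // Store a value under the given key.
  async set(key: string, value: unknown): Promise<void> {
    this.store.set(key, value);
  }

  // Evict a single entry.
  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```

An instance would then be passed as the cache option to the Gateway constructor.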

Methods

completeChat

Non-streaming chat completion. Sends messages to an LLM and returns the full response.
// gpt4o is a chat model object and Config a config builder,
// both exported by a provider package (e.g. @adaline/openai)
const response = await gateway.completeChat({
  model: gpt4o,
  config: Config().parse({ temperature: 0.7, maxTokens: 500 }),
  messages: [
    { role: "system", content: [{ modality: "text", value: "You are a helpful assistant." }] },
    { role: "user", content: [{ modality: "text", value: "Explain quantum computing." }] },
  ],
  tools: [],
});
request — GatewayCompleteChatRequestType (required)

Returns: CompleteChatHandlerResponseType
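The response carries the assistant message in the same role/content shape as the request. As a sketch assuming that shape (the exact fields of CompleteChatHandlerResponseType are not shown here), extracting the plain text from a message could look like:

```typescript
// Content blocks use the { modality, value } shape shown in the request above.
type ContentBlock = { modality: string; value: string };
type Message = { role: string; content: ContentBlock[] };

// Concatenate all text-modality blocks of a message into one string,
// skipping any non-text modalities.
function messageText(message: Message): string {
  return message.content
    .filter((block) => block.modality === "text")
    .map((block) => block.value)
    .join("");
}
```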

streamChat

Streaming chat completion. Returns an async generator that yields response chunks.
for await (const chunk of gateway.streamChat({
  model: gpt4o,
  config: Config().parse({ temperature: 0.7, maxTokens: 500 }),
  messages: [
    { role: "user", content: [{ modality: "text", value: "Write a poem." }] },
  ],
  tools: [],
})) {
  process.stdout.write(chunk.response);
}
request — GatewayStreamChatRequestType (required)
Same shape as the completeChat request above.

Returns: AsyncGenerator<StreamChatHandlerResponseType>
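Building on the example above, which assumes each chunk exposes a response string, a small helper that accumulates a stream into the full text might look like this (a sketch, not part of the library):

```typescript
// Collect streamed text into one string. The { response: string } chunk shape
// mirrors the example above; the real StreamChatHandlerResponseType may carry
// additional fields.
async function collectStream(
  chunks: AsyncIterable<{ response: string }>
): Promise<string> {
  let full = "";
  for await (const chunk of chunks) {
    full += chunk.response;
  }
  return full;
}
```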

getEmbeddings

Generate embeddings for text or other modalities.
// textEmbedding3Large is an embedding model object from a provider package
const embeddings = await gateway.getEmbeddings({
  model: textEmbedding3Large,
  config: Config().parse({ encodingFormat: "float", dimensions: 256 }),
  embeddingRequests: {
    modality: "text",
    requests: ["Hello world", "How are you?"],
  },
});
request — GatewayGetEmbeddingsRequestType (required)

Returns: GetEmbeddingsHandlerResponseType
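A common next step is comparing the returned vectors. As a self-contained sketch (independent of the exact GetEmbeddingsHandlerResponseType fields), cosine similarity between two embedding vectors:

```typescript
// Cosine similarity between two equal-length float vectors, such as the
// embeddings returned for the two requests above: 1 means identical direction,
// 0 means orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```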

getToolResponses

Execute tool calls returned by an LLM.
const toolResponses = await gateway.getToolResponses({
  tools: myToolDefinitions,
  toolCalls: response.toolCalls,
});
request — GatewayGetToolResponsesRequestType (required)

Returns: GetToolResponsesHandlerResponseType
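Conceptually, this maps each tool call back to the matching tool definition and executes it. A minimal sketch of that dispatch pattern, using illustrative types rather than the library's own:

```typescript
// Illustrative types: tool-call arguments arrive as a JSON string, as is
// typical for LLM tool calling; the library's actual types are not shown here.
type ToolCall = { name: string; arguments: string };
type ToolHandler = (args: Record<string, unknown>) => unknown;

// Dispatch each call to the handler registered under its tool name,
// parsing the JSON arguments before invoking it.
function runToolCalls(
  handlers: Record<string, ToolHandler>,
  calls: ToolCall[]
): unknown[] {
  return calls.map((call) => {
    const handler = handlers[call.name];
    if (!handler) {
      throw new Error(`No handler registered for tool "${call.name}"`);
    }
    return handler(JSON.parse(call.arguments));
  });
}
```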

getChatUsageCost (static)

Calculate the cost of a chat completion based on token usage.
const cost = Gateway.getChatUsageCost({
  model: gpt4o,
  usage: response.usage,
});
Returns: GetChatUsageCostHandlerResponseType
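Under the hood this amounts to multiplying token counts by the model's per-token prices. A sketch of that arithmetic, with illustrative field names (the real usage and pricing types may differ):

```typescript
// Illustrative shapes: token counts from a completion's usage, and
// per-million-token prices for the model. Field names are assumptions,
// not the library's types.
type Usage = { promptTokens: number; completionTokens: number };
type Pricing = { promptPerMillion: number; completionPerMillion: number };

// Cost = prompt tokens at the prompt rate plus completion tokens at the
// completion rate, both quoted per million tokens.
function chatUsageCost(usage: Usage, pricing: Pricing): number {
  return (
    (usage.promptTokens / 1_000_000) * pricing.promptPerMillion +
    (usage.completionTokens / 1_000_000) * pricing.completionPerMillion
  );
}
```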