Azure OpenAI

Integrate Azure-hosted OpenAI models through the Adaline Proxy to automatically capture telemetry — requests, responses, token usage, latency, and costs — with minimal code changes.

Supported Models

Azure OpenAI supports any OpenAI model deployed in your Azure resource.

Chat Models

| Model | Description |
| --- | --- |
| gpt-4o | Multimodal with vision support |
| gpt-4o-mini | Fast and cost-effective multimodal |
| gpt-4-turbo | High capability with 128k context |
| gpt-4 | Original GPT-4 model |
| gpt-3.5-turbo | Fast and cost-effective |

Embedding Models

| Model | Description |
| --- | --- |
| text-embedding-3-large | Highest quality embeddings |
| text-embedding-3-small | Balanced quality and cost |
| text-embedding-ada-002 | Legacy embedding model |

Models must be deployed in your Azure OpenAI resource before use. The model name corresponds to your Azure deployment name.

Proxy Base URL

https://gateway.adaline.ai/v1/azure/
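Under the hood, the AzureOpenAI client appends Azure OpenAI's standard deployment path to this base URL. The sketch below illustrates roughly what the final request URL looks like; the deployment name my-gpt4 is hypothetical, and the exact path is assembled by the SDK, not by your code:

```python
# Illustrative sketch of the URL the SDK builds when pointed at the
# Adaline proxy; follows Azure OpenAI's standard routing convention.
BASE_URL = "https://gateway.adaline.ai/v1/azure"
API_VERSION = "2024-02-01"

def chat_completions_url(deployment: str) -> str:
    """Build the Azure-style chat completions URL behind the proxy."""
    return (
        f"{BASE_URL}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={API_VERSION}"
    )

print(chat_completions_url("my-gpt4"))
```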

Prerequisites

  1. An Azure OpenAI resource with deployed models
  2. Your Azure OpenAI API key, resource name, and deployment name
  3. An Adaline API key, project ID, and prompt ID
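Rather than hard-coding these values, you can load them from environment variables and fail fast when one is missing. The variable names below are illustrative, not required by the proxy:

```python
import os

# Illustrative environment-variable names; pick whatever fits your
# deployment -- the proxy only cares about the final header values.
CONFIG_KEYS = [
    "AZURE_OPENAI_API_KEY",
    "ADALINE_API_KEY",
    "ADALINE_PROJECT_ID",
    "ADALINE_PROMPT_ID",
    "ADALINE_AZURE_RESOURCE_NAME",
]

def load_config() -> dict:
    """Collect required settings, raising if any are unset."""
    missing = [k for k in CONFIG_KEYS if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {missing}")
    return {k: os.environ[k] for k in CONFIG_KEYS}
```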

Chat Completions

Complete Chat

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="your-azure-openai-api-key",
    api_version="2024-02-01",
    azure_endpoint="https://gateway.adaline.ai/v1/azure/",
    azure_deployment="your-deployment-name"
)

headers = {
    "adaline-api-key": "your-adaline-api-key",
    "adaline-project-id": "your-project-id",
    "adaline-prompt-id": "your-prompt-id",
    "adaline-azure-resource-name": "your-resource-name"
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is cloud computing?"}
    ],
    extra_headers=headers
)

print(response.choices[0].message.content)

The adaline-azure-resource-name header is required for Azure OpenAI. This is the name of your Azure OpenAI resource (not the deployment name).
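Since the same four headers accompany every call, a small helper (hypothetical, not part of any SDK) keeps them in one place:

```python
def adaline_headers(
    api_key: str,
    project_id: str,
    prompt_id: str,
    azure_resource_name: str,
) -> dict:
    """Build the telemetry headers the Adaline proxy expects.

    azure_resource_name must be the Azure OpenAI *resource* name,
    not the deployment name.
    """
    return {
        "adaline-api-key": api_key,
        "adaline-project-id": project_id,
        "adaline-prompt-id": prompt_id,
        "adaline-azure-resource-name": azure_resource_name,
    }

headers = adaline_headers(
    "your-adaline-api-key",
    "your-project-id",
    "your-prompt-id",
    "your-resource-name",
)
```

Pass the result via extra_headers exactly as in the examples on this page.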

Stream Chat

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="your-azure-openai-api-key",
    api_version="2024-02-01",
    azure_endpoint="https://gateway.adaline.ai/v1/azure/",
    azure_deployment="your-deployment-name"
)

headers = {
    "adaline-api-key": "your-adaline-api-key",
    "adaline-project-id": "your-project-id",
    "adaline-prompt-id": "your-prompt-id",
    "adaline-azure-resource-name": "your-resource-name"
}

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Azure services in detail."}
    ],
    stream=True,
    extra_headers=headers
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
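If you also need the complete response text (for example, to log it alongside the proxy's telemetry), accumulate the deltas as they arrive. The sketch below stands in plain strings for chunk.choices[0].delta.content so the accumulation logic is clear on its own:

```python
def accumulate_stream(deltas):
    """Print each delta as it arrives and return the assembled text."""
    parts = []
    for delta in deltas:
        if delta is not None:  # final chunks may carry no content
            print(delta, end="")
            parts.append(delta)
    return "".join(parts)

# Simulated deltas standing in for streamed chunk contents:
full_text = accumulate_stream(["Azure ", "offers ", "many ", "services.", None])
```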

Embeddings

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="your-azure-openai-api-key",
    api_version="2024-02-01",
    azure_endpoint="https://gateway.adaline.ai/v1/azure/",
    azure_deployment="your-embedding-deployment-name"
)

headers = {
    "adaline-api-key": "your-adaline-api-key",
    "adaline-project-id": "your-project-id",
    "adaline-prompt-id": "your-prompt-id",
    "adaline-azure-resource-name": "your-resource-name"
}

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="The quick brown fox jumps over the lazy dog",
    extra_headers=headers
)

embedding = response.data[0].embedding
print(f"Embedding dimension: {len(embedding)}")
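A common next step is comparing embeddings with cosine similarity. Here is a minimal pure-Python version; any vector library (NumPy, etc.) works equally well:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
score = cosine_similarity([1.0, 0.0], [1.0, 0.0])
```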

Next Steps


Back to Integrations

Browse all integrations