Groq
Integrate Groq models through the Adaline Proxy to automatically capture telemetry (requests, responses, token usage, latency, and costs) with minimal code changes. Groq exposes an OpenAI-compatible API.
Supported Models
Chat Models
| Model | Description |
|---|---|
| openai/gpt-oss-120b | OpenAI GPT OSS 120B on Groq LPU |
| openai/gpt-oss-20b | OpenAI GPT OSS 20B on Groq LPU |
| openai/gpt-oss-safeguard-20b | OpenAI GPT OSS Safeguard 20B on Groq LPU |
| moonshotai/kimi-k2-instruct | Moonshot Kimi K2 |
| moonshotai/kimi-k2-instruct-0905 | Moonshot Kimi K2, September 2025 snapshot |
| meta-llama/llama-4-maverick-17b-128e-instruct | Llama 4 Maverick |
| meta-llama/llama-4-scout-17b-16e-instruct | Llama 4 Scout |
| meta-llama/llama-guard-4-12b | Llama Guard 4 (safety) |
| qwen/qwen3-32b | Qwen 3 32B |
| deepseek-r1-distill-llama-70b | DeepSeek R1 Distill 70B |
| llama-3.3-70b-versatile | Llama 3.3 70B |
| llama-3.1-8b-instant | Llama 3.1 8B, ultra-fast inference |
| gemma2-9b-it | Google Gemma 2 9B |
Proxy Base URL
Prerequisites
- A Groq API key
- An Adaline API key, project ID, and prompt ID
Chat Completions
Complete Chat
Stream Chat
Next Steps
- Multi-Step Workflows — RAG pipelines, multi-step generation, and conversational agents
- Headers Reference — Complete header documentation