Tools let your LLM interact with external services, databases, and APIs during a conversation. When a model determines it needs external data or actions, it generates a structured tool call request that can be executed to fetch results and continue the conversation. Adaline supports the full tool calling workflow, from defining tool schemas to configuring automatic execution with custom backends.

(Screenshot: Tool configuration in Adaline)

Add tools to a prompt

1. Select a compatible model

Choose an LLM that supports tool calling (function calling). Most modern models from OpenAI, Anthropic, and Google support this feature.

(Screenshot: Selecting an LLM in Adaline)
2. Write your prompt

Compose a prompt that may require external data or actions. For example, a prompt asking about current weather conditions would benefit from a weather API tool.

(Screenshot: A prompt that needs tool calling)
3. Enable tool choice

Enable the tool choice feature in the model settings to allow the LLM to generate tool calls.

(Screenshot: Enabling tool choice)
4. Configure tool choice mode

Set the tool choice mode to control how the model uses your tools.

(Screenshot: Configuring tool choice mode)

Choose from the following modes (availability varies by model):
Mode     | Behavior
none     | The model will not invoke any tools.
auto     | The model decides which tools to use and when, based on the conversation context.
required | The model must invoke at least one tool in its response.
any      | The model can call any of the available tools.
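As a sketch of how these modes typically reach the underlying provider, the snippet below maps them to the `tool_choice` parameter used by OpenAI-style and Anthropic-style APIs. The helper names are illustrative, not part of Adaline; exact field names and supported values vary by provider.

```python
# Illustrative mapping of tool choice modes to provider request fields.
# OpenAI-style APIs accept "none" / "auto" / "required" as plain strings
# in `tool_choice`; Anthropic-style APIs use an object with a "type" key,
# where "any" forces the model to call at least one tool.

def openai_tool_choice(mode: str):
    """Return the OpenAI-style tool_choice value for a given mode."""
    assert mode in {"none", "auto", "required"}, f"unsupported mode: {mode}"
    return mode

def anthropic_tool_choice(mode: str):
    """Return the Anthropic-style tool_choice object for a given mode."""
    assert mode in {"none", "auto", "any"}, f"unsupported mode: {mode}"
    return {"type": mode}

print(openai_tool_choice("required"))
print(anthropic_tool_choice("any"))
```

Because the vocabularies differ slightly (for instance, "required" and "any" express the same intent on different providers), a platform-level mode setting like Adaline's lets you configure the behavior once per prompt.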
5. Define or link a tool

Click Add Tool to create a new inline tool definition, or link to an already defined tool in the project.

(Screenshot: Adding a tool in Adaline)

Tool schema definition

Each tool is defined using a JSON schema that tells the model what the tool does and what parameters it accepts. Click Add Tool to open the schema editor.

(Screenshot: Defining the schema for a tool)

Here is the complete JSON structure for a tool definition:
{
  "type": "function",
  "definition": {
    "schema": {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  }
}

Schema reference

Field                 | Type   | Required | Description
type                  | String | Yes      | Always set to "function".
name                  | String | Yes      | A unique identifier for the tool.
description           | String | Yes      | Describes what the tool does and helps the model decide when to use it.
parameters.type       | String | No       | The parameter structure type. Typically "object".
parameters.properties | Object | No       | Defines individual parameters with their types and descriptions.
parameters.required   | Array  | No       | Lists the parameter names that are mandatory.
Refer to the OpenAI function calling documentation for more schema examples and best practices.
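To make the schema's role concrete, here is a hedged sketch that expresses the `get_weather` definition above in the OpenAI-style `function` wrapper (note this nesting differs slightly from Adaline's `definition.schema` wrapper) and validates a model-generated argument string against the `required` list. The `validate_arguments` helper is illustrative, not part of any SDK.

```python
import json

# The get_weather tool from above, in an OpenAI-style function schema.
GET_WEATHER = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}

def validate_arguments(raw_args: str, tool: dict) -> dict:
    """Parse the model's JSON arguments and enforce required parameters.

    Models emit tool call arguments as a JSON string; before executing
    the tool you typically parse it and check the schema's "required"
    list, since models can occasionally omit mandatory fields.
    """
    args = json.loads(raw_args)
    params = tool["function"]["parameters"]
    missing = [k for k in params.get("required", []) if k not in args]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    return args

print(validate_arguments('{"location": "San Francisco, CA"}', GET_WEATHER))
```

A clear `description` on both the tool and each parameter is what the model actually reads when deciding whether and how to call the tool, so it is worth writing carefully.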

Add tool calls and responses to messages

Beyond defining tools, you can add tool call and tool response messages directly in the Editor. This is useful for building multi-shot prompts that demonstrate how the model should interact with tools.

(Screenshot: Adding tool calls in the Editor)

A tool call (in an Assistant message) represents the model invoking a specific tool. A tool response (in a Tool message) shows the data returned by the external service. Together, they teach the model the expected interaction pattern.
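The pattern above can be sketched as a message sequence. This uses an OpenAI-style message format; the exact role names and tool call structure vary by provider, and the call id and weather payload here are purely illustrative.

```python
# A multi-shot example: an assistant message containing a tool call,
# followed by a tool message carrying the external service's response.
messages = [
    {"role": "user", "content": "What's the weather in Boston?"},
    {
        # Assistant message: the model invokes the get_weather tool.
        "role": "assistant",
        "tool_calls": [{
            "id": "call_1",  # illustrative id linking call and response
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"location": "Boston, MA"}',
            },
        }],
    },
    {
        # Tool message: the data returned by the weather backend.
        "role": "tool",
        "tool_call_id": "call_1",
        "content": '{"temperature": 18, "unit": "celsius"}',
    },
    # Final assistant turn grounded in the tool's response.
    {"role": "assistant", "content": "It's currently 18°C in Boston."},
]
```

Including one or two such demonstration turns in a prompt shows the model both when to call the tool and how to phrase its answer once the response arrives.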

Configure auto tool calls

For tools that connect to a live backend, you can configure an HTTP request endpoint so the Playground automatically executes the tool and continues the conversation. Add a request object to your tool definition:
{
  "type": "function",
  "definition": {
    "schema": {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        },
        "required": ["location"]
      }
    }
  },
  "request": {
    "type": "http",
    "method": "get",
    "url": "https://api.my-tool-backend.com/get_weather",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer <your-api-key>"
    },
    "retry": {
      "maxAttempts": 3,
      "initialDelay": 1000,
      "exponentialFactor": 2
    }
  }
}

Request configuration reference

Field                   | Type   | Required | Description
type                    | String | Yes      | Always set to "http".
method                  | String | Yes      | The HTTP method (GET or POST).
url                     | String | Yes      | The endpoint URL to call.
headers                 | Object | No       | HTTP headers to include in the request.
retry                   | Object | No       | Retry configuration for failed requests.
retry.maxAttempts       | Number | No       | Maximum number of retry attempts.
retry.initialDelay      | Number | No       | Initial delay in milliseconds before the first retry.
retry.exponentialFactor | Number | No       | Multiplier for exponential backoff between retries.
When auto tool calls are enabled and a tool has a configured request endpoint, the Playground will automatically invoke the tool, inject the response, and continue the conversation — enabling fully automated multi-turn interactions.
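The retry semantics implied by the configuration above can be sketched as follows. This is a minimal illustration, not the Playground's actual implementation; in particular, it assumes maxAttempts counts total tries and that every exception is retryable.

```python
import time

def call_with_retry(fn, max_attempts=3, initial_delay_ms=1000,
                    exponential_factor=2):
    """Call fn, retrying on failure with exponential backoff.

    Mirrors the retry fields above: initial_delay_ms is the wait before
    the first retry, and the delay is multiplied by exponential_factor
    after each failed attempt (1000 ms, 2000 ms, ... for the defaults).
    """
    delay = initial_delay_ms / 1000.0  # convert ms to seconds
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
            delay *= exponential_factor
```

Exponential backoff keeps a flaky backend from being hammered with rapid retries while still recovering quickly from transient failures.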

Next steps

Tool Calls in Playground

Test tool interactions in the Playground sandbox.

Use MCP Servers in Prompts

Connect to MCP servers for standardized tool access.