Adding Tools
Adaline enables you to add tools to LLMs that support function calling. Tools are part of the prompt and are sent to the LLM alongside your messages. You can add multiple tools to your prompt, each representing a unique external function or service that the LLM can invoke. To add tools, follow these steps:

1. Choose a supported model

Select an appropriate model from the dropdown that supports tool calling.

2. Access Tool Configuration

Navigate to the add tool button to open the tool editor.

3. Add your tool definition

- Enter the function name (e.g., get_windspeed)
- Define parameters with appropriate data types
- Add descriptions for both the function and its parameters
- Optionally, add a request object to configure the tool's HTTP request endpoint, headers, etc. When configured, the tool is automatically invoked whenever the LLM generates a tool call in Playground.
4. Enable Tool Choice

- none instructs the LLM not to invoke any tools, even though they're available in the prompt
- auto lets the LLM decide which tools to use, and how many, based on the conversation context
- required forces the LLM to invoke at least one tool in its response
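For context, in OpenAI-compatible chat APIs these three modes typically map to a tool_choice field on the request; the model name and payload shape below are illustrative assumptions, not Adaline internals:

```json
{
  "model": "gpt-4o",
  "messages": [{ "role": "user", "content": "How windy is it in Paris?" }],
  "tools": [{ "type": "function", "function": { "name": "get_windspeed" } }],
  "tool_choice": "auto"
}
```

Switching "auto" to "none" or "required" changes the behavior as described above.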
Tool Definition
Adaline supports tools in JSON schema format, giving you precise control over tool definitions and parameter validation.
Define your tool's schema within the schema property of the tool definition. Refer to the OpenAI JSON schema documentation for more examples.

JSON Schema Structure
- Function Definition
  - type: Always set to "function" for tool definitions
  - schema: Contains the complete tool specification
- Tool Metadata
  - name: The function name that the LLM will call
  - description: Clear explanation of what the tool does
- Parameter Configuration
  - parameters.type: Defines the parameter structure (typically "object")
  - properties: Individual parameter definitions with types and descriptions
  - required: Array of mandatory parameter names
  - additionalProperties: Set to false to restrict parameters to the defined ones only
  - strict: Enables strict schema validation
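Putting the structure above together, a get_windspeed tool might look like the following. This is a sketch based on the fields listed above and the OpenAI-style JSON schema convention; the exact nesting of strict (shown here as a sibling of parameters) is an assumption:

```json
{
  "type": "function",
  "schema": {
    "name": "get_windspeed",
    "description": "Get the current wind speed for a given location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "City name, e.g. Paris"
        },
        "unit": {
          "type": "string",
          "enum": ["kmh", "mph"],
          "description": "Unit for the returned wind speed"
        }
      },
      "required": ["location"],
      "additionalProperties": false
    },
    "strict": true
  }
}
```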
Request Structure
Optionally, you can configure the tool's HTTP request endpoint, headers, etc. When configured, the tool is automatically invoked when the LLM generates a tool call in Playground, and the conversation automatically continues with the tool's response until there are no more tool calls.
- Request Configuration
  - type: (required) 'http'
  - method: (required) The HTTP method to use (GET or POST)
  - url: (required) The API endpoint to call
  - headers: (optional) The headers to send with the request
  - retry: (optional) The retry configuration
    - maxAttempts: (required if retry is enabled) The maximum number of retry attempts
    - initialDelay: (required if retry is enabled) The initial delay between retry attempts
    - exponentialFactor: (required if retry is enabled) The exponential factor for the retry delay
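A request block following the fields above could look like this sketch. The URL and header are placeholders, and the unit for initialDelay (assumed here to be milliseconds) is not specified above:

```json
{
  "type": "http",
  "method": "GET",
  "url": "https://api.example.com/windspeed",
  "headers": {
    "Authorization": "Bearer <API_KEY>"
  },
  "retry": {
    "maxAttempts": 3,
    "initialDelay": 500,
    "exponentialFactor": 2
  }
}
```

With exponentialFactor set to 2, each retry waits twice as long as the previous one (500, 1000, 2000, ...).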