Tool Use (Function Calling)

Tool calling via the public API (/v1/chat/completions) is under development. The tools and tool_choice parameters will be available in upcoming releases.

Tools allow the model to call external functions to retrieve data or perform actions. You describe the available functions in your request; the model decides when to call them and returns a structured call that you execute on your side.

How It Works

The process consists of three steps:

  • 1. Define tools. Pass a tools array in your request describing each function and its parameters using JSON Schema.
  • 2. Model returns tool_calls. If the model decides to call a function, it returns finish_reason: "tool_calls" and a tool_calls array with the function name and arguments.
  • 3. Send results back. You execute the function, append the result to messages with role: "tool", and send a follow-up request.

Defining Tools

Each tool is described by an object with type: "function" and a function block containing the name, description, and parameters.

Tool schema
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current weather for a given city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "City name, e.g. London"
        },
        "units": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"],
          "description": "Temperature units"
        }
      },
      "required": ["city"]
    }
  }
}

Full Example

Step 1: Request with tools

Python
import requests, json

API_KEY = "sk-mira-YOUR_KEY"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["city"]
            }
        }
    }
]

messages = [{"role": "user", "content": "What is the weather in London?"}]

response = requests.post(
    "https://api.vmira.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "mira", "messages": messages, "tools": tools}
)

result = response.json()
choice = result["choices"][0]
print(choice["finish_reason"])  # "tool_calls"
print(choice["message"]["tool_calls"])

Step 2: Model response with tool_calls

JSON
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"city\": \"London\", \"units\": \"celsius\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ]
}
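Note that function.arguments arrives as a JSON-encoded string, not a parsed object, so decode it before executing the function. A minimal sketch using the tool call from the response above:

```python
import json

# The tool call as returned in the response above
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": "{\"city\": \"London\", \"units\": \"celsius\"}"
    }
}

# arguments is a JSON string; decode it to a dict before use
args = json.loads(tool_call["function"]["arguments"])
print(args["city"], args["units"])  # London celsius
```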

Step 3: Send function result

Python
# Execute the function on your side
weather_result = {"temperature": 18, "condition": "cloudy", "humidity": 72}

# Append the assistant message with tool_calls
messages.append(choice["message"])

# Append the tool result
messages.append({
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": json.dumps(weather_result)
})

# Send the follow-up request
response2 = requests.post(
    "https://api.vmira.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "mira", "messages": messages, "tools": tools}
)

final_answer = response2.json()["choices"][0]["message"]["content"]
print(final_answer)
# "It's currently 18°C in London, cloudy with 72% humidity."
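The three steps above can be generalized into a small dispatch helper that handles any number of tool calls. This is a sketch, not part of the API; get_weather here is a stand-in that returns fixed data:

```python
import json

# Stand-in implementation; replace with a real lookup
def get_weather(city, units="celsius"):
    return {"temperature": 18, "condition": "cloudy", "humidity": 72}

# Dispatch table mapping tool names to local functions
AVAILABLE_TOOLS = {"get_weather": get_weather}

def handle_tool_calls(assistant_message, messages):
    """Append the assistant message, execute every tool call in it,
    and append one role="tool" result message per call."""
    messages.append(assistant_message)
    for call in assistant_message["tool_calls"]:
        func = AVAILABLE_TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(func(**args)),
        })
    return messages
```

After handle_tool_calls returns, send messages back in a follow-up request exactly as in Step 3.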

Parallel Tool Calls

The model can return multiple tool_calls at once if it needs data from several sources. In that case, the tool_calls array will contain multiple items. You must execute every call and append one role: "tool" message per call, each with the matching tool_call_id, before sending the follow-up request.

JSON
{
  "tool_calls": [
    {
      "id": "call_001",
      "type": "function",
      "function": { "name": "get_weather", "arguments": "{\"city\": \"London\"}" }
    },
    {
      "id": "call_002",
      "type": "function",
      "function": { "name": "get_weather", "arguments": "{\"city\": \"Paris\"}" }
    }
  ]
}
Execute parallel tool calls concurrently (Promise.all in JS, asyncio.gather in Python) to reduce latency.
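A sketch of the concurrent approach in Python (get_weather is again a stand-in; asyncio.to_thread runs each blocking function in a worker thread so the calls overlap):

```python
import asyncio
import json

def get_weather(city, units="celsius"):
    # Stand-in for a real (blocking) weather lookup
    return {"city": city, "temperature": 18}

async def execute_tool_calls(tool_calls):
    async def run_one(call):
        args = json.loads(call["function"]["arguments"])
        result = await asyncio.to_thread(get_weather, **args)
        return {
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        }
    # Run all calls concurrently; results keep the order of tool_calls
    return await asyncio.gather(*(run_one(c) for c in tool_calls))

tool_calls = [
    {"id": "call_001", "type": "function",
     "function": {"name": "get_weather", "arguments": "{\"city\": \"London\"}"}},
    {"id": "call_002", "type": "function",
     "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}},
]
tool_messages = asyncio.run(execute_tool_calls(tool_calls))
```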

The tool_choice Parameter

Control how the model uses tools with the tool_choice parameter:

  • "auto" — Model decides whether to call a tool (default)
  • "none" — Model will not call tools, even if they are defined
  • "required" — Model must call at least one tool
  • {"type": "function", "function": {"name": "get_weather"}} — Force a specific function to be called
Avoid defining too many tools (more than 20): a large tool list increases token consumption and can reduce the model's tool selection quality.
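For example, a request body that forces the get_weather call might look like this (a sketch, with messages and a trimmed tools definition inlined for completeness):

```python
messages = [{"role": "user", "content": "What is the weather in London?"}]
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "mira",
    "messages": messages,
    "tools": tools,
    # Force the model to call get_weather regardless of the prompt
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```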

Next Steps