Models list

The models list endpoint returns all available Mira models with their specifications. This endpoint does not require authentication and can be used to dynamically fetch the model catalog.

Endpoint

GET /api/v1/models
List available models.
This endpoint does not require authentication. You can call it without an API key.

Request parameters

This endpoint accepts no parameters. Simply send a GET request.

Request examples

cURL

curl https://api.vmira.ai/api/v1/models

Python

import requests

response = requests.get("https://api.vmira.ai/api/v1/models")
models = response.json()

for model in models["models"]:
    print(f"{model['id']:20} context={model['context_window']:>7}  max_output={model['max_output_tokens']}")

JavaScript

const response = await fetch("https://api.vmira.ai/api/v1/models");
const models = await response.json();

for (const model of models.models) {
  console.log(`${model.id} — context: ${model.context_window}, max output: ${model.max_output_tokens}`);
}
console.log("Default model:", models.default);

Response format

JSON
{
  "default": "mira",
  "models": [
    {
      "id": "mira",
      "name": "Mira",
      "description": "General-purpose assistant",
      "context_window": 32768,
      "max_output_tokens": 4096
    },
    {
      "id": "mira-pro",
      "name": "Mira Pro",
      "description": "Advanced model for professional use",
      "context_window": 65536,
      "max_output_tokens": 8192,
      "requires_plan": "pro"
    },
    {
      "id": "mira-max",
      "name": "Mira Max",
      "description": "Most capable model with maximum context",
      "context_window": 131072,
      "max_output_tokens": 16384,
      "requires_plan": "max"
    }
  ]
}

Model object fields

id (string, required): Unique model identifier used in API requests.
name (string, required): Display name of the model.
description (string, required): Brief description of the model and its capabilities.
context_window (integer, required): Maximum number of tokens in the context window.
max_output_tokens (integer, required): Maximum number of tokens in the model's response.
requires_plan (string, optional): Minimum plan required to access the model ("pro" or "max").
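The field list above can be expressed as a typed structure in client code. A minimal sketch in Python; the MiraModel class name and the validate_model helper are illustrative, not part of the API:

```python
from typing import TypedDict

class MiraModel(TypedDict, total=False):
    """Shape of one entry in the response's "models" array (illustrative name)."""
    id: str                 # required: unique model identifier used in API requests
    name: str               # required: display name of the model
    description: str        # required: brief description of the model
    context_window: int     # required: maximum tokens in the context window
    max_output_tokens: int  # required: maximum tokens in the model's response
    requires_plan: str      # optional: "pro" or "max"; absent in the sample "mira" entry

def validate_model(raw: dict) -> MiraModel:
    """Check the documented required fields before treating an entry as a MiraModel."""
    required = {
        "id": str,
        "name": str,
        "description": str,
        "context_window": int,
        "max_output_tokens": int,
    }
    for field, expected_type in required.items():
        if not isinstance(raw.get(field), expected_type):
            raise ValueError(f"missing or invalid field: {field}")
    return raw  # type: ignore[return-value]
```

Because `total=False` marks every TypedDict key as optional, the runtime check in `validate_model` enforces the required fields instead.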

Model comparison

Model      Context  Max output  Best for
mira       32K      4K          General tasks, chat, summarization
mira-pro   64K      8K          Professional tasks, reasoning, long documents
mira-max   128K     16K         Maximum context, codebase analysis
Fetch the model list dynamically from this endpoint instead of hardcoding model IDs; your application will then pick up new models automatically as they become available.
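One way to apply that pattern is to filter the catalog by the caller's plan and fall back to the advertised default. A sketch assuming the response shape shown above; the pick_model helper and the plan ordering derived from the "requires_plan" values are illustrative assumptions, not part of the API:

```python
from typing import Optional

# Assumed ordering of plans, inferred from the "requires_plan" values above.
PLAN_RANK = {None: 0, "pro": 1, "max": 2}

def pick_model(catalog: dict, plan: Optional[str] = None) -> str:
    """Return the id of the largest-context model the given plan can access,
    falling back to the catalog's advertised default model."""
    accessible = [
        m for m in catalog["models"]
        if PLAN_RANK.get(m.get("requires_plan"), 99) <= PLAN_RANK.get(plan, 0)
    ]
    if not accessible:
        return catalog["default"]
    return max(accessible, key=lambda m: m["context_window"])["id"]

# Usage with a live catalog (requires the requests package):
#   catalog = requests.get("https://api.vmira.ai/api/v1/models").json()
#   model_id = pick_model(catalog, plan="pro")
```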

OpenAI compatibility

The endpoint response is compatible with the OpenAI /v1/models format. If you use the OpenAI SDK, simply replace the base URL:

Python (OpenAI SDK)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.vmira.ai/v1",
    api_key="sk-mira-your-key-here",
)

models = client.models.list()
for model in models.data:
    print(model.id)