Prompt engineering

Prompt engineering is the art and science of crafting prompts (instructions) that help the model produce the best results. Prompt quality directly impacts response quality. This guide covers proven techniques adapted for Mira models.

For tasks requiring deep analysis, use the mira-pro or mira-max models — they automatically apply chain-of-thought reasoning.

1. Be specific

The more precisely you describe the task, the better the result. Avoid vague wording. Specify format, length, style, and context.

Bad — vague
Tell me about Python.
Good — specific
Write a concise guide (500 words) on core Python data structures (lists, dicts, sets, tuples) for beginner developers. Include one code example for each structure.

2. Use examples (few-shot)

Show the model a few examples of desired input and output. This is one of the most effective ways to guide the model, especially for classification, formatting, and data transformation tasks.

Few-shot: sentiment classification
Classify the review sentiment as "positive", "negative", or "neutral".

Review: "Great product, very happy with my purchase!"
Sentiment: positive

Review: "Delivery was a week late and the item arrived damaged."
Sentiment: negative

Review: "Product matches the description, works fine."
Sentiment: neutral

Review: "Best purchase I've made all year! Highly recommend!"
Sentiment:
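The same few-shot examples can also be passed as alternating user/assistant messages rather than one text block. A minimal sketch: the helper name, the message layout, and the "Respond with the label only" instruction are illustrative additions, not part of the example above.

```python
# Package few-shot sentiment examples as alternating chat messages.
FEW_SHOT_EXAMPLES = [
    ("Great product, very happy with my purchase!", "positive"),
    ("Delivery was a week late and the item arrived damaged.", "negative"),
    ("Product matches the description, works fine.", "neutral"),
]

def build_sentiment_messages(review: str) -> list[dict]:
    """Build a messages list: instruction, then example pairs, then the query."""
    messages = [{
        "role": "system",
        "content": 'Classify the review sentiment as "positive", "negative", '
                   'or "neutral". Respond with the label only.',
    }]
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": f'Review: "{text}"'})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": f'Review: "{review}"'})
    return messages

messages = build_sentiment_messages(
    "Best purchase I've made all year! Highly recommend!"
)
```

Placing examples in the assistant turns shows the model exactly what shape of answer is expected.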

3. Structure with XML tags

XML tags help clearly delineate different parts of the prompt: instructions, data, examples, and expected output. Mira models understand this structure well.

Structuring with XML tags
<instructions>
You are a code review expert. Analyze the provided code and give improvement recommendations.
Output format: bulleted list with priority (high/medium/low).
</instructions>

<code>
def calc(x,y):
    r=x+y
    return r
</code>

<output_format>
- [priority] problem description and recommendation
</output_format>
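Assembling such prompts programmatically keeps the tags consistent. A small sketch, assuming nothing beyond string formatting; the function names are illustrative:

```python
def xml_section(tag: str, content: str) -> str:
    """Wrap one prompt section in a named XML tag."""
    return f"<{tag}>\n{content.strip()}\n</{tag}>"

def build_review_prompt(code: str) -> str:
    """Combine instructions, code, and output format into one tagged prompt."""
    instructions = (
        "You are a code review expert. Analyze the provided code and give "
        "improvement recommendations.\n"
        "Output format: bulleted list with priority (high/medium/low)."
    )
    output_format = "- [priority] problem description and recommendation"
    return "\n\n".join([
        xml_section("instructions", instructions),
        xml_section("code", code),
        xml_section("output_format", output_format),
    ])

prompt = build_review_prompt("def calc(x,y):\n    r=x+y\n    return r")
```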

4. Chain of thought

Ask the model to "think step by step." This forces the model to break a complex task into intermediate steps, significantly improving accuracy, especially for mathematical, logical, and analytical tasks.

Chain of thought
A store had 85 apples. In the morning they sold 23 apples, at noon 40 more were delivered, and in the evening they sold 31.
How many apples are left?

Please solve this step by step, explaining each reasoning step before giving the final answer.

The mira-pro and mira-max models do this automatically — they always reason step by step. For the mira model, you need to ask explicitly.
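For reference, the intermediate steps the model should reproduce can be checked directly:

```python
# Step-by-step arithmetic for the apple problem.
apples = 85
apples -= 23   # morning sale: 85 - 23 = 62
apples += 40   # noon delivery: 62 + 40 = 102
apples -= 31   # evening sale: 102 - 31 = 71
print(apples)  # 71
```

A correct chain-of-thought response should surface these same three intermediate totals before stating 71.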

5. Role prompting

Assign the model a specific role or persona. This sets the context and style of responses. Use the system prompt to define the role.

Role prompting
{
  "model": "mira",
  "messages": [
    {
      "role": "system",
      "content": "You are an experienced senior backend developer with 15 years of experience. You specialize in REST API design and microservice architecture. Answer technically accurately, concisely, and with Python code examples. If you see an anti-pattern, point it out immediately."
    },
    {
      "role": "user",
      "content": "How should I handle errors in a REST API?"
    }
  ]
}

6. Break complex tasks into steps

Instead of one massive prompt, break a complex task into a sequence of simple ones. The result of each step feeds into the next. This improves quality and gives you control at every stage.

Step-by-step processing
# Step 1: Extract key facts from text
prompt_1 = """
Extract all key facts from the following text as a list:
<text>
{long article text}
</text>
"""

# Step 2: Group facts by category
prompt_2 = """
Group the following facts by thematic categories:
<facts>
{result from step 1}
</facts>
"""

# Step 3: Write a summary based on the grouping
prompt_3 = """
Based on the following grouped facts, write a summary
in 3 paragraphs (introduction, body, conclusion):
<grouped_facts>
{result from step 2}
</grouped_facts>
"""

7. Use system prompts

The system prompt is a powerful tool for setting global context. Use it to define the role, constraints, output format, and communication style. See the "System prompts" section for details.

8. Specify the output format

Explicitly state the format you want the response in. Mira models excel at following format instructions: JSON, Markdown, list, table, code.

Requesting JSON format
{
  "model": "mira",
  "response_format": { "type": "json_object" },
  "messages": [
    {
      "role": "system",
      "content": "You are a data extraction API. Always respond with valid JSON containing fields: name (string), age (number), skills (string[])."
    },
    {
      "role": "user",
      "content": "Extract data: Alex, 28 years old, knows Python, Go, and Kubernetes."
    }
  ]
}
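Even with `json_object` mode, it is good practice to validate the response on the client side. A sketch using only the standard library; the schema dict and function name are illustrative:

```python
import json

# Expected fields from the system prompt above.
REQUIRED_FIELDS = {"name": str, "age": (int, float), "skills": list}

def parse_extraction(raw: str) -> dict:
    """Parse the model's JSON response and check the expected fields."""
    data = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"Missing or invalid field: {field}")
    return data

result = parse_extraction(
    '{"name": "Alex", "age": 28, "skills": ["Python", "Go", "Kubernetes"]}'
)
```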

9. Set constraints

Constraints help the model stay on task. Specify what not to do, which topics to avoid, and how long the response should be.

Using constraints
Explain the concept of REST API.

Constraints:
- Maximum 200 words
- Do not use technical jargon — explain for a non-technical person
- Include one analogy from everyday life
- Do not mention specific frameworks or programming languages
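Hard limits such as the 200-word cap can also be verified after the fact on the client side. A minimal post-check sketch; the function name and the banned-term list are illustrative:

```python
def check_constraints(
    response: str,
    max_words: int = 200,
    banned: tuple = ("framework",),
) -> list[str]:
    """Return a list of constraint violations found in the response."""
    violations = []
    if len(response.split()) > max_words:
        violations.append(f"response exceeds {max_words} words")
    lowered = response.lower()
    for term in banned:
        if term in lowered:
            violations.append(f"banned term present: {term}")
    return violations
```

If violations come back non-empty, you can re-prompt with the violated constraints restated.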

10. Iterate and experiment

Prompt engineering is an iterative process. The first prompt rarely gives perfect results. Experiment with wording, temperature, number of examples, and structure.

  • Temperature 0–0.3: for tasks with a single correct answer (code, facts, classification).
  • Temperature 0.5–0.8: for creative tasks balancing accuracy and variety.
  • Temperature 0.9–1.5: for maximum creativity (poetry, brainstorming, fiction).
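These ranges can be captured as presets so each request picks a temperature by task type. A sketch: the preset names and exact values are illustrative choices within the ranges above, and the model name "mira" follows this guide's own examples.

```python
# Illustrative temperature presets matching the ranges above.
TEMPERATURE_PRESETS = {
    "deterministic": 0.2,   # code, facts, classification
    "balanced": 0.7,        # creative tasks balancing accuracy and variety
    "creative": 1.2,        # poetry, brainstorming, fiction
}

def request_params(task_kind: str, prompt: str) -> dict:
    """Build chat request parameters for the given task type."""
    return {
        "model": "mira",
        "temperature": TEMPERATURE_PRESETS[task_kind],
        "messages": [{"role": "user", "content": prompt}],
    }
```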

Best practices summary

Technique             When to use
Specificity           Always
Few-shot examples     Classification, formatting, transformation
XML tags              Complex prompts with multiple parts
Chain of thought      Math, logic, analytics
Role prompting        Expert answers, specialization
Task decomposition    Complex, multi-step tasks
Output format         Structured data, integrations
Constraints           Controlling scope and content

Next steps