
xiaomi/mimo-v2-omni

262,144 context · $0.40/M input tokens · $2.00/M output tokens

MiMo-V2-Omni is a frontier omni-modal model that natively processes image, video, and audio inputs within a unified architecture. It combines strong multimodal perception with agentic capabilities such as visual grounding, multi-step planning, tool use, and code execution, making it well-suited for complex real-world tasks that span modalities. It offers a 256K (262,144-token) context window.
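
For example, a multimodal request that pairs text with an image might look like the sketch below. The content-part schema follows the standard OpenAI vision format, which this OpenAI-compatible endpoint is assumed to accept; the image URL is a placeholder.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Multimodal request: text plus an image, using the standard
# OpenAI content-part schema (assumed to be mirrored here).
response = client.chat.completions.create(
    model="xiaomi/mimo-v2-omni",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder URL
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)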

Pricing

Pay-per-use

No upfront costs; pay only for what you use.

Input: $0.40 / M tokens
Output: $2.00 / M tokens
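
At these rates, the cost of a request is simply the token counts multiplied by the per-token prices. A quick sanity-check sketch (the token counts are illustrative):

# Per-token rates from the pricing table above.
INPUT_RATE = 0.40 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative example: 10K input tokens and 1K output tokens.
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0060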

API Usage

Use the following code examples to integrate with our API:

from openai import OpenAI

# Point the OpenAI SDK at the WaveSpeed base URL.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Send a simple chat completion request to the model.
response = client.chat.completions.create(
    model="xiaomi/mimo-v2-omni",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
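
For incremental output, the standard OpenAI streaming interface should work against the same endpoint. This is a sketch under that assumption; streaming support is not explicitly stated on this page.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Streaming variant: print tokens as they arrive. Assumes the
# endpoint supports the standard OpenAI streaming protocol.
stream = client.chat.completions.create(
    model="xiaomi/mimo-v2-omni",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()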

Info

Provider: xiaomi
Type: llm

Supported Functionality

Input: Text, Image, Audio
Output: Text
Context: 262,144 tokens
Max Output: 65,536 tokens
Vision: ✓ Supported
Function Calling: ✓ Supported
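
Since function calling is listed as supported, a request can declare tools in the standard OpenAI tools format, which this endpoint is assumed to accept. The get_weather tool below is purely hypothetical, for illustration:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Hypothetical tool definition; the schema follows the standard
# OpenAI tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="xiaomi/mimo-v2-omni",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
)

# If the model decided to call the tool, the call arrives here.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)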

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: xiaomi/mimo-v2-omni
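
Combining the values above, a raw HTTP call equivalent to the SDK example can be made with the requests library. This sketch assumes standard Bearer authentication, matching what the SDK sends:

import requests

# Raw HTTP equivalent of the SDK example: POST to the
# chat/completions endpoint under the base URL above.
resp = requests.post(
    "https://llm.wavespeed.ai/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "xiaomi/mimo-v2-omni",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])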