xiaomi/mimo-v2-omni
262,144 context · $0.40/M input tokens · $2.00/M output tokens
MiMo-V2-Omni is a frontier omni-modal model that natively processes image, video, and audio inputs within a unified architecture. It combines strong multimodal perception with agentic capabilities (visual grounding, multi-step planning, tool use, and code execution), making it well-suited for complex real-world tasks that span modalities. 256K context window.
Pay-as-you-go
No upfront costs; pay only for what you use
Use the following code example to integrate with the API:
from openai import OpenAI

# Point the OpenAI client at the WaveSpeed OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1",
)

# Send a simple text-only chat completion request to MiMo-V2-Omni.
response = client.chat.completions.create(
    model="xiaomi/mimo-v2-omni",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
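
Since the model accepts image inputs, a multimodal request can be sketched using OpenAI-style content parts. This is a sketch under assumptions: the `build_user_message` helper and the example URL are hypothetical, and whether this particular endpoint accepts `image_url` parts is assumed from its OpenAI-compatible base URL rather than documented here. The snippet only constructs the message payload; it does not call the API.

```python
# Sketch: build an OpenAI-style multimodal chat message for MiMo-V2-Omni.
# Assumption: the endpoint accepts "image_url" content parts, as
# OpenAI-compatible servers commonly do; verify against the provider docs.

def build_user_message(prompt: str, image_url: str) -> dict:
    """Return a chat message pairing a text prompt with an image reference."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_user_message("What is in this picture?",
                         "https://example.com/photo.jpg")
print(msg["content"][0]["text"])
```

A message built this way would be passed in the `messages` list of `client.chat.completions.create(...)` exactly like the text-only example above.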