minimax/minimax-m2.1
196,608-token context · $0.30/M input tokens · $1.20/M output tokens
MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world...
Pay-per-use
No upfront costs, pay only for what you use
MiniMax-M2.1
MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development.
| Specification | Value |
|---|---|
| Provider | MiniMax |
| Model Type | Large Language Model (LLM) |
| Architecture | MoE (Mixture of Experts) |
| Context Window | 196,608 tokens |
| Max Output | — |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.30 |
| Output | $1.20 |
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: minimax/minimax-m2.1

Use the following code examples to integrate with our API:
from openai import OpenAI

# Point the official OpenAI SDK at the WaveSpeedAI endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.1",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
curl https://llm.wavespeed.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "minimax/minimax-m2.1",
"messages": [{"role": "user", "content": "Hello!"}]
}'
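The spec table lists function calling as supported. The sketch below shows tool use through the same OpenAI SDK, assuming the endpoint implements the standard tools and tool_calls fields of the chat completions API; the get_weather function and its schema are hypothetical placeholders.

import json
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Hypothetical tool definition; replace with your own function schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="minimax/minimax-m2.1",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools
)

# If the model chose to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))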
Access MiniMax M2.1 through our unified API: OpenAI-compatible, no cold starts, transparent pricing.
Pricing on WaveSpeedAI: $0.30 per million input tokens and $1.20 per million output tokens. Prompt caching and batch processing are billed separately and reduce effective cost on long, repetitive workloads.
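As a worked example at those rates (ignoring caching and batch discounts), a request with 10,000 input tokens and 2,000 output tokens costs 10,000 × $0.30/1M + 2,000 × $1.20/1M = $0.0054. A minimal sketch of the arithmetic:

INPUT_RATE = 0.30 / 1_000_000   # dollars per input token
OUTPUT_RATE = 1.20 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    # Billed token counts come back in the API response's usage field.
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(request_cost(10_000, 2_000))  # 0.0054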
MiniMax M2.1 supports up to 197K (196,608) tokens of context, with up to — tokens of output per request.
WaveSpeedAI exposes MiniMax M2.1 through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.
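For example, the snippet below streams tokens with the unmodified OpenAI SDK; it assumes the endpoint supports the standard stream option of the chat completions API.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Assumes the endpoint supports the standard `stream` flag.
stream = client.chat.completions.create(
    model="minimax/minimax-m2.1",
    messages=[{"role": "user", "content": "Write a haiku about code."}],
    stream=True
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)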
Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID set to minimax/minimax-m2.1. New accounts receive free credits to evaluate MiniMax M2.1 before paying per token.