minimax/minimax-m2.7
204,800 context · $0.30/M input tokens · $1.20/M output tokens
MiniMax M2.7 is a next-generation flagship text model designed for agent-centric workflows, with strong improvements in coding, complex office tasks, and long-context reasoning. Built on the OpenClaw (Agent Harness) framework, it enables continuous self-improvement in real-world environments, allowing the model to actively participate in execution and decision-making for higher-quality and more efficient task completion.
Pay as you go
No upfront costs; pay only for what you use.
Use the following code examples to integrate with our API:
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.7",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

MiniMax-M2
MiniMax-M2.7 represents the journey of recursive self-improvement with enhanced reasoning and agentic capabilities.
| Specification | Value |
|---|---|
| Provider | Minimax |
| Model Type | Large Language Model (LLM) |
| Architecture | MoE (Mixture of Experts) |
| Context Window | 204,800 tokens |
| Max Output | Not specified |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.30 |
| Output | $1.20 |
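Since the spec table lists Function Calling as supported, a request can carry an OpenAI-style `tools` array. A minimal sketch of that schema, assuming the standard OpenAI tool format (`get_weather` is a hypothetical example tool, not something provided by the API):

```python
import json

# Sketch of an OpenAI-style tool definition; get_weather is a hypothetical
# example function, not part of the WaveSpeedAI or MiniMax API.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# This list is passed as the `tools` parameter of
# client.chat.completions.create(...); the model may then answer with a
# tool_calls entry instead of plain text.
print(json.dumps(tools, indent=2))
```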
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: minimax/minimax-m2.7
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "minimax/minimax-m2.7",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
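The curl call above returns a JSON body in the standard chat-completions shape. A sketch of how to read it, using an illustrative payload (the field values below are made up, not a recorded response):

```python
import json

# Illustrative response body in the standard chat/completions shape;
# the values are examples, not a recorded API response.
raw = """
{
  "model": "minimax/minimax-m2.7",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16}
}
"""

data = json.loads(raw)
print(data["choices"][0]["message"]["content"])  # the assistant's reply
print(data["usage"]["total_tokens"])             # tokens billed for the call
```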
Input: $0.30 /M
Output: $1.20 /M
Context: 205K
Tool use: Supported
Access Minimax M2.7 through our unified API: OpenAI-compatible, with no cold starts and transparent pricing.
Pricing on WaveSpeedAI: $0.30 per million input tokens and $1.20 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost on long, repetitive workloads.
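At those rates, per-request cost is simple arithmetic. A small sketch (the helper name and token counts are illustrative, not part of the API):

```python
# Listed WaveSpeedAI rates for minimax/minimax-m2.7, USD per million tokens.
INPUT_PRICE_PER_M = 0.30
OUTPUT_PRICE_PER_M = 1.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Illustrative helper: USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100,000-token prompt producing a 2,000-token completion.
print(f"${estimate_cost(100_000, 2_000):.4f}")  # $0.0324
```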
Minimax M2.7 supports up to 205K tokens of context and up to — output tokens per request.
Yes. WaveSpeedAI exposes Minimax M2.7 through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are needed.
Sign in to WaveSpeedAI, create an API key under Access Keys, and send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to evaluate Minimax M2.7 before paying per token.