minimax/minimax-m1

1,000,000 context · $0.40/M input tokens · $2.20/M output tokens

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it...

Pricing

Pay-per-use

No upfront cost; pay only for what you use

Input: $0.40 / M tokens
Output: $2.20 / M tokens



Introduzione al modello

MiniMax M1

MiniMax-M1 is a high-performance large language model optimized for efficiency and accuracy in diverse tasks.


Why It Stands Out

  • MoE (Mixture of Experts) architecture for efficient processing
  • 1,000,000-token context window for long-document handling
  • Competitive pricing at $0.40/$2.20 per million tokens (input/output)

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 40,000 tokens
  • Vision: Not supported
  • Function Calling: Supported

Specifications

Provider: Minimax
Model Type: Large Language Model (LLM)
Architecture: MoE (Mixture of Experts)
Context Window: 1,000,000 tokens
Max Output: 40,000 tokens
Input: Text
Output: Text
Vision: Not supported
Function Calling: Supported

Pricing

Cost per million tokens:

  • Input: $0.40
  • Output: $2.20

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
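The two steps above map directly onto the `messages` array that the API expects. A minimal sketch, assuming a made-up task: the `task`, `context`, and `output_format` strings below are illustrative examples, not part of this page.

```python
# Hypothetical prompt pieces; replace with your own task and context.
task = "Summarize the incident report."
context = "API latency spiked to 4s between 09:00 and 09:20 UTC."
output_format = "Answer with exactly three bullet points."

# One common convention: desired output format in the system message,
# task plus context in the user message.
messages = [
    {"role": "system", "content": output_format},
    {"role": "user", "content": f"{task}\n\n{context}"},
]

print(len(messages))  # 2
```

This `messages` list is what gets passed to `chat.completions.create(...)` in the examples below.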

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: minimax/minimax-m1
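These three values are all a raw HTTP call needs. A stdlib-only sketch that builds the request without the OpenAI SDK; the send itself is left commented out, since YOUR_API_KEY is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://llm.wavespeed.ai/v1"
url = f"{BASE_URL}/chat/completions"  # the endpoint listed above

payload = {
    "model": "minimax/minimax-m1",
    "messages": [{"role": "user", "content": "Hello!"}],
}
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
}

req = urllib.request.Request(url, data=json.dumps(payload).encode(), headers=headers)
# with urllib.request.urlopen(req) as resp:  # uncomment with a real key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```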


API Usage

Python SDK

from openai import OpenAI

# Point the official OpenAI SDK at the WaveSpeedAI base URL.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

# The reply text lives in the first choice's message.
print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "minimax/minimax-m1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Notes

  • Model: minimax/minimax-m1
  • Provider: Minimax

Info

Provider: minimax
Type: llm

Supported features

Input: Text
Output: Text
Context: 1,000,000 tokens
Max output: 40,000 tokens
Vision: Not supported
Function Calling: ✓ Supported
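Since function calling is supported, a request can carry an OpenAI-style `tools` array. The sketch below builds one such tool definition; the `get_weather` name and its schema are hypothetical examples, not part of this page.

```python
import json

# Illustrative tool definition in the OpenAI-style "tools" format.
# The get_weather name and parameters are hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Pass tools=tools to client.chat.completions.create(...); if the model
# decides to call the function, the response carries a tool_calls entry
# instead of plain text, and your code executes the function and replies
# with a {"role": "tool", ...} message.
print(tools[0]["function"]["name"])  # get_weather
```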

API access guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: minimax/minimax-m1

Minimax M1 API


Input: $0.40 /M
Output: $2.20 /M
Context: 1,000,000 tokens
Max output: 40,000 tokens
Tool use: Supported

Try Minimax M1 on WaveSpeedAI

Access Minimax M1 through our unified API: OpenAI-compatible, no cold starts, transparent pricing.

Frequently asked questions about Minimax M1

How much does Minimax M1 cost via API?

Pricing on WaveSpeedAI: $0.40 per million input tokens and $2.20 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost on long, repetitive workloads.
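A quick cost estimate at these list prices (caching and batch discounts, billed separately, are ignored here):

```python
INPUT_PRICE = 0.40   # USD per million input tokens
OUTPUT_PRICE = 2.20  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Undiscounted cost in USD for a single request."""
    return input_tokens / 1e6 * INPUT_PRICE + output_tokens / 1e6 * OUTPUT_PRICE

# e.g. a 100K-token prompt with a 10K-token reply:
print(round(request_cost(100_000, 10_000), 3))  # 0.062
```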

What is Minimax M1's context window?

Minimax M1 supports up to 1,000,000 tokens of context and up to 40,000 output tokens per request.
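A quick sanity check on these limits, assuming the context window covers input and output combined (the page does not state this explicitly):

```python
# Back-of-the-envelope token budgeting under Minimax M1's documented limits.
CONTEXT_WINDOW = 1_000_000  # tokens per request, assumed to cover input + output
MAX_OUTPUT = 40_000         # maximum output tokens per request

# Reserving the full output budget leaves this much room for input:
max_input = CONTEXT_WINDOW - MAX_OUTPUT
print(max_input)  # 960000
```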

Is Minimax M1 OpenAI-compatible?

Yes. WaveSpeedAI exposes Minimax M1 through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are needed.

How do I get started with Minimax M1?

Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to test Minimax M1.

Related LLM APIs