minimax/minimax-m1

1,000,000 context · $0.40/M input tokens · $2.20/M output tokens

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it...

Pricing

Pay-per-use

No upfront costs, pay only for what you use

Input: $0.40 / M Tokens
Output: $2.20 / M Tokens
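
As a rough illustration of pay-per-use billing, the sketch below estimates the cost of a single request at these rates. The token counts are hypothetical; substitute your own usage numbers.

# Back-of-the-envelope cost estimate at the listed rates.
# The token counts below are hypothetical examples.
INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.20  # USD per 1M output tokens

input_tokens = 120_000     # e.g. a long document plus the prompt
output_tokens = 4_000      # e.g. a detailed summary

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
print(f"Estimated cost: ${cost:.4f}")   # Estimated cost: $0.0568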



Model Introduction

MiniMax M1 (minimax/minimax-m1)

MiniMax-M1 is a high-performance large language model optimized for efficiency and accuracy in diverse tasks.


Why It Stands Out

  • MoE (Mixture of Experts) architecture for efficient processing
  • 1,000,000-token context window for long document handling
  • Competitive pricing at $0.40 input / $2.20 output per million tokens

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 40,000 tokens
  • Vision: Not supported
  • Function Calling: Supported (see the sketch below)
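
Since function calling is listed as supported, here is a minimal sketch of what tool use typically looks like through the OpenAI-compatible endpoint. The get_weather tool, its schema, and the prompt are made up for illustration; check WaveSpeedAI's documentation for the exact tool-call behavior of this model.

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

# Hypothetical tool definition; name, description, and parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model decides to call the tool, the call appears on the message instead of plain text.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)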

Specifications

Specification | Value
Provider | MiniMax
Model Type | Large Language Model (LLM)
Architecture | MoE (Mixture of Experts)
Context Window | 1,000,000 tokens
Max Output | 40,000 tokens
Input | Text
Output | Text
Vision | Not supported
Function Calling | Supported

Pricing

Token Type | Cost per Million Tokens
Input | $0.40
Output | $2.20

How to Use

  1. Write your prompt — describe the task, provide context, and specify the desired output format (see the sketch after this list).
  2. Submit — the model processes your request and returns the response.
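
As a minimal sketch of step 1, the request below describes the task, supplies the context inline, and pins down the output format. The document text and the three-bullet format are placeholders for your own content.

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

document = "..."  # placeholder: the long document you want analyzed

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[
        # Task and desired output format
        {"role": "system", "content": "Summarize the user's document in exactly three bullet points."},
        # Context: the document itself
        {"role": "user", "content": document},
    ],
)

print(response.choices[0].message.content)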

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: minimax/minimax-m1
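
If you are not using the OpenAI SDK, the same three values plug straight into a plain HTTP request. The sketch below uses Python's requests library and assumes nothing beyond the endpoint details listed above.

import requests

BASE_URL = "https://llm.wavespeed.ai/v1"

response = requests.post(
    f"{BASE_URL}/chat/completions",  # Base URL + API endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "minimax/minimax-m1",  # Model ID
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])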


API Usage

Python SDK

from openai import OpenAI

# Point the OpenAI SDK at the WaveSpeedAI OpenAI-compatible endpoint
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Standard chat-completions call; only the model ID is WaveSpeedAI-specific
response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "minimax/minimax-m1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
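
Streaming (Python)

Because the endpoint follows the OpenAI chat-completions interface, streaming should work the same way it does with the official SDK, assuming WaveSpeedAI enables it for this model. A minimal sketch:

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

# stream=True yields incremental chunks instead of one final response,
# which is useful for long reasoning outputs.
stream = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[{"role": "user", "content": "Explain mixture-of-experts in two sentences."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()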

Notes

  • Model: minimax/minimax-m1
  • Provider: Minimax

Info

Provider: MiniMax
Type: LLM

Supported Functionality

Input: Text
Output: Text
Context: 1,000,000 tokens
Max Output: 40,000 tokens
Vision: Not supported
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: minimax/minimax-m1

Minimax M1 API

minimax/minimax-m1


Input: $0.40 /M
Output: $2.20 /M
Context: 1,000K tokens
Max Output: 40K tokens
Tool Use: Supported

Try Minimax M1 on WaveSpeedAI

Access Minimax M1 through our unified API — OpenAI-compatible, no cold starts, transparent pricing.

Frequently Asked Questions about Minimax M1

How much does Minimax M1 cost via the API?

Pricing on WaveSpeedAI: $0.40 per million input tokens and $2.20 per million output tokens. Prompt caching and batch processing are billed separately and reduce effective cost on long, repetitive workloads.

What is the context window of Minimax M1?

Minimax M1 supports up to 1,000,000 tokens of context with up to 40,000 tokens of output per request.

Is Minimax M1 OpenAI-compatible?

Yes. WaveSpeedAI exposes Minimax M1 through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key — no other code changes required.

How do I get started with Minimax M1?

Sign in to WaveSpeedAI, create an API key in Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID set to the value shown above. New accounts receive free credits to evaluate Minimax M1 before paying per token.
