x-ai/grok-3-mini
131,072-token context · $0.30/M input tokens · $0.50/M output tokens
A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.
Pay-per-use
No upfront costs, pay only for what you use
Use the following code examples to integrate with our API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="x-ai/grok-3-mini",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```
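Because the raw thinking traces are accessible, you can read the model's reasoning alongside its final answer. A minimal sketch continuing the example above, assuming the endpoint passes through xAI's reasoning_content field on the message object (verify the exact field name in your response payload):

```python
# Sketch: xAI's API exposes the thinking trace as `reasoning_content`;
# this assumes WaveSpeedAI passes that field through unchanged.
reasoning = getattr(response.choices[0].message, "reasoning_content", None)
if reasoning:
    print("Thinking trace:")
    print(reasoning)
```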
| Specification | Value |
|---|---|
| Provider | xAI |
| Model Type | Large Language Model (LLM) |
| Architecture | N/A |
| Context Window | 131,072 tokens |
| Max Output | 4,096 tokens |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |

| Token Type | Cost per Million Tokens |
|---|---|
| Input | $0.30 |
| Output | $0.50 |
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: x-ai/grok-3-mini
```bash
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "x-ai/grok-3-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
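The spec table lists function calling as supported, and it goes through the same OpenAI-compatible interface. A minimal sketch under that assumption; the get_weather tool below is a hypothetical placeholder, not part of the API:

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Hypothetical tool for illustration; any JSON-Schema function definition works.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="x-ai/grok-3-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)

# When the model opts to call a tool, the call arrives instead of plain text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```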
Access Grok 3 Mini through our unified API — OpenAI-compatible, no cold starts, transparent pricing.
Pricing on WaveSpeedAI: $0.30 per million input tokens and $0.50 per million output tokens. Prompt caching and batch processing are billed separately and reduce effective cost on long, repetitive workloads.
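As a back-of-the-envelope check, per-request cost at list price works out like this (a sketch that ignores caching and batch discounts):

```python
# Back-of-the-envelope cost at list price; prompt caching and batch
# discounts (billed separately) are not modeled here.
INPUT_PRICE_PER_M = 0.30   # USD per million input tokens
OUTPUT_PRICE_PER_M = 0.50  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 50K-token prompt producing a 4K-token response.
print(f"${estimate_cost(50_000, 4_000):.4f}")  # -> $0.0170
```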
Grok 3 Mini supports up to 131K tokens of context with up to 4K tokens of output per request.
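In practice, you control the output side with the standard max_tokens parameter. A short sketch reusing the client from the Python example above:

```python
response = client.chat.completions.create(
    model="x-ai/grok-3-mini",
    messages=[{"role": "user", "content": "Summarize the following report: ..."}],
    max_tokens=4096  # Grok 3 Mini's per-request output ceiling
)
```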
Yes, the API is OpenAI-compatible: WaveSpeedAI exposes Grok 3 Mini through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.
Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID set to the value shown above. New accounts receive free credits to evaluate Grok 3 Mini before paying per token.