# Supported LLM Models

Large language models available on WaveSpeedAI.

## Available Models
| Model | Provider | Context | Input Price | Output Price |
|---|---|---|---|---|
| bytedance-seed/seed-1.6-flash | ByteDance | 262,144 | $0.0750/1M | $0.3000/1M |
| bytedance-seed/seed-1.6 | ByteDance | - | - | - |
| minimax/minimax-m2.1 | Minimax | - | - | - |
| z-ai/glm-4.7 | Zhipu AI | - | - | - |
| google/gemini-3-flash-preview | Google | - | - | - |
| mistralai/mistral-small-creative | Mistral | - | - | - |
| nvidia/nemotron-3-nano-30b-a3b | NVIDIA | - | - | - |
| openai/gpt-5.2-pro | OpenAI | - | - | - |
| openai/gpt-5.2-chat | OpenAI | - | - | - |
Note: Visit wavespeed.ai/llm for the latest pricing and model availability. Select a model in the Playground to see its context length, input price, and output price.
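Because prices are quoted per million tokens, the cost of a single request can be estimated directly from its token counts. A minimal sketch using the seed-1.6-flash rates from the table above (rates may change; check the pricing page):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate request cost in USD from token counts and per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens on seed-1.6-flash
# ($0.0750/1M input, $0.3000/1M output)
cost = estimate_cost(10_000, 2_000, 0.075, 0.30)
print(f"${cost:.6f}")  # → $0.001350
```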
## Model Categories

### Fast & Affordable

For quick responses and cost efficiency:

- bytedance-seed/seed-1.6-flash — Fast inference, large context
- google/gemini-3-flash-preview — Google’s fast model
### High Performance

For complex tasks requiring advanced reasoning:

- openai/gpt-5.2-pro — OpenAI’s professional model
- bytedance-seed/seed-1.6 — Full Seed model
### Creative & Specialized

For creative writing and specialized tasks:

- mistralai/mistral-small-creative — Creative writing focus
- minimax/minimax-m2.1 — Minimax’s latest model
### Code & Technical

For coding and technical tasks:

- z-ai/glm-4.7 — Strong technical capabilities
- nvidia/nemotron-3-nano-30b-a3b — NVIDIA’s efficient model
## Choosing a Model
| Need | Recommendation |
|---|---|
| Fast, cheap responses | bytedance-seed/seed-1.6-flash |
| Best quality | openai/gpt-5.2-pro |
| Creative writing | mistralai/mistral-small-creative |
| Long documents | Models with large context (262K+) |
| Code generation | z-ai/glm-4.7 or GPT models |
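For programmatic selection, the recommendation table above can be encoded as a simple lookup. This is only a sketch; the need keys are invented here, and the model IDs mirror the table:

```python
# Need → recommended model, mirroring the "Choosing a Model" table.
# The key names are illustrative, not part of any WaveSpeedAI API.
RECOMMENDATIONS = {
    "fast_cheap": "bytedance-seed/seed-1.6-flash",
    "best_quality": "openai/gpt-5.2-pro",
    "creative_writing": "mistralai/mistral-small-creative",
    "code_generation": "z-ai/glm-4.7",
}

def recommend(need: str) -> str:
    """Return the recommended model ID, defaulting to the fast, cheap model."""
    return RECOMMENDATIONS.get(need, "bytedance-seed/seed-1.6-flash")
```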
## Model Properties
When you select a model in the Playground, you’ll see:
| Property | Description |
|---|---|
| Context | Maximum tokens in conversation (e.g., 262,144) |
| Input | Cost per 1M input tokens |
| Output | Cost per 1M output tokens |
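Since Context caps the total tokens in a conversation, it is worth checking that a prompt plus the reserved output budget fits before sending. A minimal sketch, using the 262,144-token context shown for seed-1.6-flash as the default limit:

```python
def fits_in_context(prompt_tokens: int, max_output_tokens: int,
                    context_limit: int = 262_144) -> bool:
    """True if the prompt plus the reserved output budget fits in the window."""
    return prompt_tokens + max_output_tokens <= context_limit

print(fits_in_context(200_000, 50_000))  # 250,000 ≤ 262,144 → True
print(fits_in_context(250_000, 50_000))  # 300,000 > 262,144 → False
```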
## Using Different Models
Select a different model from the dropdown in the LLM Playground. Each model has different capabilities, context lengths, and pricing.
Click View Code in the Playground to get the API code for your selected model and configuration.
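Switching models programmatically typically means changing a single field in the request body. The sketch below only builds the JSON payload; its field names follow the common chat-completion convention and are an assumption, not WaveSpeedAI's documented schema. Use View Code for the exact call:

```python
import json

def build_chat_request(model: str, user_message: str,
                       temperature: float = 0.7) -> str:
    """Build a JSON chat-request body. Field names here are assumptions
    modeled on common chat-completion APIs, not a documented schema."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }
    return json.dumps(payload)

# Only the "model" field changes when you switch models.
body = build_chat_request("bytedance-seed/seed-1.6-flash", "Hello!")
```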
## Model Updates
We regularly add new models. Check the LLM Playground for the latest availability.