LLM Quick Start
Get started with WaveSpeedAI’s LLM service in minutes.
Web Playground
The fastest way to try LLMs:
- Go to wavespeed.ai/llm
- Select a model from the dropdown (e.g., bytedance-seed/seed-1.6-flash)
- Adjust parameters if needed (max_tokens, temperature, etc.)
- Type your message and press Enter
- Click View Code to get the API code for your configuration
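The View Code button generates a snippet matching your current settings. As a rough sketch of what such a call looks like, here is a Python example assuming an OpenAI-compatible chat-completions endpoint and an API key in a `WAVESPEED_API_KEY` environment variable; the endpoint URL and auth header are assumptions, so defer to the actual View Code output:

```python
import json
import os
import urllib.request

# Hypothetical endpoint: check the Playground's View Code output for the real URL.
API_URL = "https://api.wavespeed.ai/v1/chat/completions"

def build_request(model: str, messages: list[dict], **params) -> urllib.request.Request:
    """Assemble a chat request; `model` and `messages` are the required fields."""
    payload = {"model": model, "messages": messages, **params}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Bearer auth is an assumption; confirm the scheme in your dashboard.
            "Authorization": f"Bearer {os.environ.get('WAVESPEED_API_KEY', '')}",
        },
    )

req = build_request(
    "bytedance-seed/seed-1.6-flash",
    [{"role": "user", "content": "Hello!"}],
    max_tokens=2048,
    temperature=1.0,
)
# urllib.request.urlopen(req) would send it; omitted here to keep the sketch offline.
```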
Parameters
| Parameter | Description | Default |
|---|---|---|
| model | Model ID (required) | - |
| messages | Conversation history (required) | - |
| max_tokens | Maximum response length | 2048 |
| temperature | Creativity (0-2) | 1.0 |
| top_p | Nucleus sampling | 1.0 |
| top_k | Top-k sampling | 50 |
| presence_penalty | Penalize repeated topics | 0.0 |
| frequency_penalty | Penalize repeated words | 0.0 |
| stream | Enable streaming | false |
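The defaults in the table map directly onto a request body. A minimal sketch of building a body with those documented defaults, letting callers override any of them (the helper name is illustrative, not part of the API):

```python
# Default sampling parameters from the table above; these mirror the documented
# defaults and are applied before any user-supplied overrides.
DEFAULTS = {
    "max_tokens": 2048,
    "temperature": 1.0,
    "top_p": 1.0,
    "top_k": 50,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "stream": False,
}

def with_defaults(model: str, messages: list[dict], **overrides) -> dict:
    """Build a request body: required fields, then defaults, then overrides."""
    return {"model": model, "messages": messages, **DEFAULTS, **overrides}

body = with_defaults(
    "bytedance-seed/seed-1.6-flash",
    [{"role": "user", "content": "Hi"}],
    temperature=0.2,  # lower temperature for more deterministic output
)
```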
Multi-Turn Conversations
The Playground maintains conversation history automatically. Each message you send includes the context of previous messages.
To start a new conversation, click Clear History.
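When calling the API directly (outside the Playground), history is your responsibility: each request's `messages` list must carry the full conversation so far. A minimal sketch of that bookkeeping, with helper names that are illustrative only:

```python
# Reproducing the Playground's history handling client-side: keep one list
# and append every turn, so each request carries the full context.
history: list[dict] = []

def add_user_turn(text: str) -> list[dict]:
    history.append({"role": "user", "content": text})
    return history  # send this full list as `messages`

def add_assistant_turn(text: str) -> None:
    # Record the model's reply so the next request includes it.
    history.append({"role": "assistant", "content": text})

def clear_history() -> None:
    # Equivalent of the Playground's Clear History button.
    history.clear()

add_user_turn("What is the capital of France?")
add_assistant_turn("Paris.")
messages = add_user_turn("And its population?")
# `messages` now holds all three turns, so the model can resolve "its".
```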
System Messages
Set the model’s behavior by adding a system message at the start of your conversation. System messages define how the model should respond (e.g., “You are a helpful coding assistant”).
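In a raw request, this means the `messages` list starts with a `system`-role entry before any user turns. A small sketch (the helper function is illustrative):

```python
def with_system(system_prompt: str, turns: list[dict]) -> list[dict]:
    """Prepend a system message so it steers the entire conversation."""
    return [{"role": "system", "content": system_prompt}] + turns

messages = with_system(
    "You are a helpful coding assistant.",
    [{"role": "user", "content": "Explain list comprehensions briefly."}],
)
```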
Enable Thinking Mode
Some models support a “thinking” mode for complex reasoning tasks. Enable it in the Playground by clicking Enable Thinking.
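If thinking mode is also exposed as an API parameter, it would appear as an extra field in the request body. The field name below is purely hypothetical; use View Code in the Playground with thinking enabled to see the real parameter:

```python
# NOTE: `enable_thinking` is a hypothetical field name used only for
# illustration; check the Playground's View Code output for the real one.
body = {
    "model": "bytedance-seed/seed-1.6-flash",
    "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    "enable_thinking": True,  # hypothetical flag to request the reasoning phase
}
```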
Next Steps
- Supported LLM Models — Available models and pricing
- How to Use Streaming — Real-time responses