LLM Quick Start

Get started with WaveSpeedAI’s LLM service in minutes.

Web Playground

The fastest way to try LLMs:

  1. Go to wavespeed.ai/llm
  2. Select a model from the dropdown (e.g., bytedance-seed/seed-1.6-flash)
  3. Adjust parameters if needed (max_tokens, temperature, etc.)
  4. Type your message and press Enter
  5. Click View Code to get the API code for your configuration
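The snippet behind View Code is typically an OpenAI-style chat completions call. A minimal sketch of what that request looks like — the endpoint URL and environment-variable name here are assumptions, so copy the exact values from View Code:

```python
import json
import os
import urllib.request

# Hypothetical endpoint and key variable -- use the real values from "View Code".
API_URL = "https://api.wavespeed.ai/v1/chat/completions"
API_KEY = os.environ.get("WAVESPEED_API_KEY", "")

def build_request(prompt: str, model: str = "bytedance-seed/seed-1.6-flash"):
    """Assemble the JSON payload and headers for a single-turn chat request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 2048,
        "temperature": 1.0,
    }
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers
    )

req = build_request("Hello!")
print(req.full_url)  # the endpoint this request would be sent to
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a JSON body containing the model's reply.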

Parameters

| Parameter | Description | Default |
|---|---|---|
| model | Model ID (required) | - |
| messages | Conversation history (required) | - |
| max_tokens | Maximum response length | 2048 |
| temperature | Creativity (0-2) | 1.0 |
| top_p | Nucleus sampling | 1.0 |
| top_k | Top-k sampling | 50 |
| presence_penalty | Penalize repeated topics | 0.0 |
| frequency_penalty | Penalize repeated words | 0.0 |
| stream | Enable streaming | false |
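Expressed as a request body, the defaults above look like this. The field names follow the common OpenAI-style schema; treat them as assumptions and confirm against the snippet from View Code:

```python
# Default sampling parameters from the table above, as a request body.
# Schema is assumed OpenAI-compatible; verify field names with "View Code".
payload = {
    "model": "bytedance-seed/seed-1.6-flash",  # required
    "messages": [                              # required
        {"role": "user", "content": "Explain top-p sampling."}
    ],
    "max_tokens": 2048,        # cap on response length
    "temperature": 1.0,        # 0-2; higher values are more creative
    "top_p": 1.0,              # nucleus-sampling probability threshold
    "top_k": 50,               # sample only from the top-k tokens
    "presence_penalty": 0.0,   # discourage revisiting topics
    "frequency_penalty": 0.0,  # discourage repeating words
    "stream": False,           # set True for token-by-token output
}
```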

Multi-Turn Conversations

The Playground maintains conversation history automatically. Each message you send includes the context of previous messages.

To start a new conversation, click Clear History.
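When calling the API directly, you maintain the same history yourself: append each user turn and each assistant reply to the `messages` list and send the full list with every request. A sketch (the assistant reply below is illustrative, not a real model output):

```python
# Conversation history grows with each turn; the whole list is sent each request.
messages = []

def add_user_turn(text: str):
    messages.append({"role": "user", "content": text})

def add_assistant_turn(text: str):
    messages.append({"role": "assistant", "content": text})

add_user_turn("What is nucleus sampling?")
add_assistant_turn("It samples from the smallest token set "
                   "whose total probability exceeds top_p.")  # illustrative reply
add_user_turn("How does it differ from top-k?")  # model now sees both earlier turns

# Starting a new conversation = emptying the list ("Clear History" in the Playground)
messages.clear()
```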

System Messages

Set the model’s behavior by adding a system message at the start of your conversation. System messages define how the model should respond (e.g., “You are a helpful coding assistant”).
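In API terms, a system message is simply the first entry in `messages`, using the standard chat roles:

```python
# The system message leads the conversation and steers all later replies.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
]
```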

Enable Thinking Mode

Some models support a “thinking” mode for complex reasoning tasks. Enable it in the Playground by clicking Enable Thinking.

© 2025 WaveSpeedAI. All rights reserved.