Any LLM
Any LLM is a versatile large language model service for text generation, comprehension, and diverse NLP tasks such as chat and summarization. It comes with a ready-to-use REST inference API, strong performance, no cold starts, and affordable pricing.
Features
Any LLM is a unified large language model gateway that provides access to multiple state-of-the-art AI models through a single interface. Chat, reason, and generate text using models from Google, OpenAI, Anthropic, and more — all in one place.
Why It Stands Out
- Multi-model access: Choose from a variety of leading AI models including Gemini, GPT, Claude, and more.
- Unified interface: One consistent API and playground for all supported models.
- System prompt support: Customize model behavior with custom instructions.
- Reasoning mode: Enable step-by-step reasoning for complex problem-solving tasks.
- Priority control: Choose between latency-optimized or quality-optimized responses.
- Flexible parameters: Fine-tune temperature, max tokens, and other settings.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Your question or instruction to the model. |
| system_prompt | No | Custom instructions to guide model behavior. |
| reasoning | No | Include reasoning steps in the final answer. |
| priority | No | Optimize for latency or quality (default: latency). |
| temperature | No | Controls randomness (lower = focused, higher = creative). |
| max_tokens | No | Maximum length of the response. |
| model | No | Select which LLM to use (e.g., google/gemini-2.5-flash). |
| enable_sync_mode | No | Wait for result before returning response (API only). |
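To see how these fit together, a minimal request body might look like the sketch below (all values are illustrative; the full API call is shown under API Endpoints):

{
  "prompt": "Summarize the key differences between TCP and UDP.",
  "system_prompt": "You are a concise networking tutor.",
  "reasoning": false,
  "priority": "latency",
  "temperature": 0.3,
  "max_tokens": 512,
  "model": "google/gemini-2.5-flash"
}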
Supported Models
- google/gemini-2.5-flash
- anthropic/claude-3.5-sonnet
- openai/gpt-5-chat
- And more…
How to Use
- Write your prompt — enter your question or instruction. Use the Prompt Enhancer for AI-assisted optimization.
- Add a system prompt (optional) — provide custom instructions to guide the model’s behavior.
- Enable reasoning (optional) — turn on for step-by-step explanations.
- Select priority — choose “latency” for faster responses or “quality” for better outputs.
- Adjust parameters (optional) — set temperature and max_tokens as needed.
- Select a model — choose from available LLMs.
- Click Run and receive your response.
Best Use Cases
- General Q&A — Get answers to questions across any topic.
- Writing Assistance — Draft emails, articles, reports, and creative content.
- Code Generation — Write, debug, and explain code in multiple languages.
- Research & Analysis — Summarize documents, analyze data, and extract insights.
- Reasoning Tasks — Solve math problems, logic puzzles, and complex reasoning challenges.
- Brainstorming — Generate ideas, outlines, and creative concepts.
Pro Tips for Best Quality
- Use system prompts to define the model’s role, tone, and output format.
- Enable reasoning for math, logic, and multi-step problems.
- Lower temperature (0.1–0.3) for factual, consistent answers.
- Higher temperature (0.7–1.0) for creative, varied responses.
- Choose “latency” priority for quick interactions, “quality” for important tasks.
- Experiment with different models to find the best fit for your use case.
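As a concrete example, a math query following these tips might combine a role-setting system prompt, reasoning mode, and a low temperature (all values are illustrative):

{
  "prompt": "If a train travels 240 km in 3 hours, what is its average speed?",
  "system_prompt": "You are a careful math tutor. Show your work step by step.",
  "reasoning": true,
  "temperature": 0.2,
  "model": "openai/gpt-5-chat"
}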
Notes
- Processing time varies based on model selection and prompt complexity.
- Please ensure your prompts comply with usage guidelines.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/any-llm" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "<your prompt here>",
    "reasoning": false,
    "priority": "latency",
    "model": "google/gemini-2.5-flash",
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The prompt (question or instruction) for the generation. |
| system_prompt | string | No | - | - | System prompt to provide context or instructions to the model |
| reasoning | boolean | No | false | - | Whether to include reasoning steps in the final answer. |
| priority | string | No | latency | throughput, latency | latency is the default and is recommended for use cases where low response latency matters; throughput is recommended for high-volume workloads where overall throughput matters more. |
| temperature | number | No | - | 0 ~ 2 | This setting influences the variety in the model’s responses. Lower values lead to more predictable and typical responses, while higher values encourage more diverse and less common responses. At 0, the model always gives the same response for a given input. |
| max_tokens | integer | No | - | - | This sets the upper limit for the number of tokens the model can generate in response. It won’t produce more than this limit. The maximum value is the context length minus the prompt length. |
| model | string | No | google/gemini-2.5-flash | anthropic/claude-3.7-sonnet, anthropic/claude-3.5-sonnet, anthropic/claude-3-haiku, google/gemini-2.5-flash, google/gemini-2.0-flash-001, google/gemini-2.0-flash-lite-001, google/gemini-2.5-flash-preview-09-2025, google/gemini-2.0-flash-exp:free, google/gemini-2.5-pro, google/gemini-3-flash-preview, openai/gpt-4o, openai/gpt-4.1, openai/gpt-5-chat, meta-llama/llama-3.2-90b-vision-instruct, meta-llama/llama-4-maverick, meta-llama/llama-4-scout | Name of the model to use. Premium models are charged at 10x the rate of standard models; they include: deepseek/deepseek-r1, google/gemini-pro-1.5, openai/gpt-4.1, anthropic/claude-3-5-haiku, openai/gpt-4o, anthropic/claude-3.5-sonnet, openai/o3, meta-llama/llama-3.2-90b-vision-instruct, anthropic/claude-3.7-sonnet, openai/gpt-5-chat. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
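When enable_sync_mode is true, the submit call blocks until the result is ready, so the separate result query can be skipped. A minimal sketch (the prompt is illustrative):
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/any-llm" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Explain eventual consistency in one paragraph.",
    "model": "google/gemini-2.5-flash",
    "enable_sync_mode": true
}'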
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
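Assembled from the fields above, a submission response might look like the following sketch (every value is a hypothetical placeholder, not captured output):

{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "google/gemini-2.5-flash",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": { "inference": 0 }
  }
}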
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the requested task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
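Once data.status is completed, the generated content can be pulled straight from data.outputs. A short sketch, again assuming jq is installed and ${requestId} holds the task ID:
# Fetch the result and print the first output URL (assumes jq)
curl -s --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
| jq -r '.data.outputs[0]'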