Any LLM

Any LLM is a versatile large language model service for text generation, comprehension, and diverse NLP tasks such as chat and summarization. It offers a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

Any LLM

Any LLM is a unified large language model gateway that provides access to multiple state-of-the-art AI models through a single interface. Chat, reason, and generate text using models from Google, OpenAI, Anthropic, and more — all in one place.


Why It Stands Out

  • Multi-model access: Choose from a variety of leading AI models including Gemini, GPT, Claude, and more.
  • Unified interface: One consistent API and playground for all supported models.
  • System prompt support: Customize model behavior with custom instructions.
  • Reasoning mode: Enable step-by-step reasoning for complex problem-solving tasks.
  • Priority control: Choose between latency-optimized and quality-optimized responses.
  • Flexible parameters: Fine-tune temperature, max tokens, and other settings.
  • Prompt Enhancer: Built-in AI-powered prompt optimization for better results.

Parameters

| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Your question or instruction to the model. |
| system_prompt | No | Custom instructions to guide model behavior. |
| reasoning | No | Include reasoning steps in the final answer. |
| priority | No | Optimize for latency or quality (default: latency). |
| temperature | No | Controls randomness (lower = more focused, higher = more creative). |
| max_tokens | No | Maximum length of the response. |
| model | No | Select which LLM to use (e.g., google/gemini-2.5-flash). |
| enable_sync_mode | No | Wait for the result before returning the response (API only). |
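
As an illustrative sketch, a request body that combines these parameters might look like the following (the prompt text and values are placeholders, not defaults):

{
    "prompt": "Summarize the main differences between TCP and UDP.",
    "system_prompt": "You are a concise technical writer.",
    "reasoning": false,
    "priority": "latency",
    "temperature": 0.3,
    "max_tokens": 512,
    "model": "google/gemini-2.5-flash"
}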

Supported Models

  • google/gemini-2.5-flash
  • anthropic/claude-3.5-sonnet
  • openai/gpt-5-chat
  • And more…

How to Use

  1. Write your prompt — enter your question or instruction. Use the Prompt Enhancer for AI-assisted optimization.
  2. Add a system prompt (optional) — provide custom instructions to guide the model’s behavior.
  3. Enable reasoning (optional) — turn on for step-by-step explanations.
  4. Select priority — choose “latency” for faster responses or “quality” for better outputs.
  5. Adjust parameters (optional) — set temperature and max_tokens as needed.
  6. Select a model — choose from available LLMs.
  7. Click Run and receive your response.

Best Use Cases

  • General Q&A — Get answers to questions across any topic.
  • Writing Assistance — Draft emails, articles, reports, and creative content.
  • Code Generation — Write, debug, and explain code in multiple languages.
  • Research & Analysis — Summarize documents, analyze data, and extract insights.
  • Reasoning Tasks — Solve math problems, logic puzzles, and complex reasoning challenges.
  • Brainstorming — Generate ideas, outlines, and creative concepts.

Pro Tips for Best Quality

  • Use system prompts to define the model’s role, tone, and output format.
  • Enable reasoning for math, logic, and multi-step problems (see the sketch after this list).
  • Lower temperature (0.1–0.3) for factual, consistent answers.
  • Higher temperature (0.7–1.0) for creative, varied responses.
  • Choose “latency” priority for quick interactions, “quality” for important tasks.
  • Experiment with different models to find the best fit for your use case.
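
For example, a request applying these tips to a multi-step math problem might look like this sketch (the prompt, system prompt, and values are illustrative):

{
    "prompt": "A train leaves at 9:00 at 80 km/h; a second train leaves at 10:00 at 120 km/h on the same track. When does the second train catch up?",
    "system_prompt": "You are a patient math tutor. Show each step of your work.",
    "reasoning": true,
    "temperature": 0.2,
    "model": "google/gemini-2.5-flash"
}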

Notes

  • Processing time varies based on model selection and prompt complexity.
  • Please ensure your prompts comply with usage guidelines.

Authentication

For authentication details, please refer to the Authentication Guide.
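
The API examples below expect your key in an environment variable and pass it as a Bearer token. A minimal shell setup (the key value is a placeholder):

# Export your API key once per shell session (placeholder value);
# every request below passes it via the Authorization header
export WAVESPEED_API_KEY="your-api-key-here"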

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/any-llm" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Your question or instruction here",
    "reasoning": false,
    "priority": "latency",
    "model": "google/gemini-2.5-flash",
    "enable_sync_mode": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The prompt (question or instruction) for the generation. |
| system_prompt | string | No | - | - | System prompt to provide context or instructions to the model. |
| reasoning | boolean | No | false | - | Whether reasoning steps should be part of the final answer. |
| priority | string | No | latency | throughput, latency | throughput is recommended for most use cases; latency is recommended where low latency is important. |
| temperature | number | No | - | 0 ~ 2 | Influences the variety of the model's responses. Lower values lead to more predictable, typical responses, while higher values encourage more diverse, less common responses. At 0, the model always gives the same response for a given input. |
| max_tokens | integer | No | - | - | Upper limit on the number of tokens the model can generate in a response. The maximum value is the context length minus the prompt length. |
| model | string | No | google/gemini-2.5-flash | anthropic/claude-3.7-sonnet, anthropic/claude-3.5-sonnet, anthropic/claude-3-haiku, google/gemini-2.5-flash, google/gemini-2.0-flash-001, google/gemini-2.0-flash-lite-001, google/gemini-2.5-flash-preview-09-2025, google/gemini-2.0-flash-exp:free, google/gemini-2.5-pro, google/gemini-3-flash-preview, openai/gpt-4o, openai/gpt-4.1, openai/gpt-5-chat, meta-llama/llama-3.2-90b-vision-instruct, meta-llama/llama-4-maverick, meta-llama/llama-4-scout | Name of the model to use. Premium models are charged at 10x the rate of standard models; they include: deepseek/deepseek-r1, google/gemini-pro-1.5, openai/gpt-4.1, anthropic/claude-3-5-haiku, openai/gpt-4o, anthropic/claude-3.5-sonnet, openai/o3, meta-llama/llama-3.2-90b-vision-instruct, anthropic/claude-3.7-sonnet, openai/gpt-5-chat. |
| enable_sync_mode | boolean | No | false | - | If set to true, the request waits for the result to be generated and uploaded before returning, so you get the result directly in the response. Only available through the API. |
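
For a single-request flow, enable_sync_mode can be set to true so the response carries the result directly, at the cost of holding the connection open. A sketch (the prompt is a placeholder):

# Synchronous request: the call blocks until the result is ready
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/any-llm" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Write a haiku about the sea.",
    "model": "google/gemini-2.5-flash",
    "enable_sync_mode": true
}'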

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (the task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
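
Put together, an illustrative response body for a freshly submitted task might look like this (the ID and timestamp are placeholders):

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "abc123",
        "model": "google/gemini-2.5-flash",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/abc123/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {}
    }
}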

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the ID of the prediction to get) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
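
For contrast with the just-created response shown earlier, a completed result might be shaped like this (the ID, output URL, timestamp, and timing are illustrative placeholders):

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "abc123",
        "model": "google/gemini-2.5-flash",
        "outputs": ["https://example.com/outputs/abc123.txt"],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/abc123/result"
        },
        "status": "completed",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {
            "inference": 1234
        }
    }
}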