
How to Configure Claude Code with WaveSpeed LLM (OpenAI-Compatible Endpoint)

Point Claude Code at WaveSpeed LLM in under five minutes. Use Claude, GPT, Gemini, and 290+ other models through one OpenAI-compatible endpoint — no Anthropic key required.

4 min read

Claude Code Without an Anthropic Key

Claude Code is Anthropic’s official CLI for coding with Claude — but you don’t have to route requests through Anthropic’s API. If you already have a WaveSpeed account, you can point Claude Code at WaveSpeed LLM and get access to Claude Opus 4.6, GPT-5.2, Gemini 3, DeepSeek, and 290+ other models through a single OpenAI-compatible endpoint.

This guide covers the exact environment variables, base URL, and model ID format you need.

Why Route Claude Code Through WaveSpeed

  • One bill, many models. Claude, GPT, Gemini, Llama, Qwen — all billed through your WaveSpeed account.
  • OpenAI-compatible everywhere. No need to juggle Anthropic API keys, OpenAI keys, and Google keys separately.
  • No cold starts. Consistent latency under load, same as Anthropic’s native API.
  • Model portability. Switch from Claude Opus to GPT-5.2 mid-session by changing one env var.

What You Need

  • Base URL: https://llm.wavespeed.ai/v1
  • API Key: your WaveSpeed API key
  • Protocol: OpenAI Chat Completions
  • Model ID format: vendor/model (e.g., anthropic/claude-opus-4.6)
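
Before touching Claude Code, you can sanity-check the endpoint and your key with curl. This is a sketch: it assumes WaveSpeed exposes the standard OpenAI-compatible /models listing route, and that you keep your key in a WAVESPEED_API_KEY variable (a name chosen here for illustration):

```shell
# Optional sanity check before configuring Claude Code.
# Skips the network call and prints a reminder if no key is set.
BASE_URL="https://llm.wavespeed.ai/v1"

if [ -z "${WAVESPEED_API_KEY:-}" ]; then
  echo "Set WAVESPEED_API_KEY first: export WAVESPEED_API_KEY=<your key>"
else
  # /models is the standard OpenAI-compatible listing route.
  curl -s "$BASE_URL/models" \
    -H "Authorization: Bearer $WAVESPEED_API_KEY"
fi
```

A JSON list of model IDs back means the base URL and key are good to go.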

Step 1: Install Claude Code

If you don’t already have it:

npm install -g @anthropic-ai/claude-code

Step 2: Get Your WaveSpeed API Key

  1. Sign in at wavespeed.ai.
  2. Open the dashboard → API Keys.
  3. Create a new key and copy it.

Step 3: Configure Claude Code to Use WaveSpeed

Claude Code supports custom OpenAI-compatible endpoints via environment variables. Set these before launching:

macOS / Linux

export ANTHROPIC_BASE_URL="https://llm.wavespeed.ai/v1"
export ANTHROPIC_API_KEY="YOUR_WAVESPEED_API_KEY"
export ANTHROPIC_MODEL="anthropic/claude-opus-4.6"

Windows (PowerShell)

$env:ANTHROPIC_BASE_URL = "https://llm.wavespeed.ai/v1"
$env:ANTHROPIC_API_KEY = "YOUR_WAVESPEED_API_KEY"
$env:ANTHROPIC_MODEL = "anthropic/claude-opus-4.6"

To make these persistent, add them to your ~/.zshrc, ~/.bashrc, or Windows environment variables.
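
If you switch providers often, a small wrapper in your shell config keeps the three exports together. claude_ws is a hypothetical helper, not part of Claude Code; it assumes your key lives in a WAVESPEED_API_KEY variable you set yourself:

```shell
# Hypothetical wrapper: export the three overrides, then launch Claude Code.
# Takes an optional model ID argument; defaults to Claude Opus 4.6.
claude_ws() {
  export ANTHROPIC_BASE_URL="https://llm.wavespeed.ai/v1"
  export ANTHROPIC_API_KEY="${WAVESPEED_API_KEY:?set WAVESPEED_API_KEY first}"
  export ANTHROPIC_MODEL="${1:-anthropic/claude-opus-4.6}"
  claude
}
```

With this in ~/.zshrc or ~/.bashrc, `claude_ws openai/gpt-5.2-pro` starts a session on GPT-5.2 instead of the default.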

Step 4: Start Claude Code

claude

Claude Code will route every request through WaveSpeed LLM using the model you set. You’ll see normal streaming responses — no difference from Anthropic’s native API.

Switching Models Mid-Project

Because all 290+ models share one endpoint, switching is a one-line change. Set a different model and relaunch:

export ANTHROPIC_MODEL="openai/gpt-5.2-pro"
claude

Or try an open-source alternative:

export ANTHROPIC_MODEL="deepseek/deepseek-v4"
claude

Useful when you want to compare how different models handle the same codebase.

Not every model is tuned for agentic coding. These work well:

  • anthropic/claude-opus-4.6: default; strongest long-context coding and tool use
  • anthropic/claude-sonnet-4.6: faster, cheaper Claude for day-to-day tasks
  • openai/gpt-5.2-pro: top alternative for reasoning performance
  • openai/gpt-5.2-chat: balanced speed and quality
  • deepseek/deepseek-v4: strong open-source coder, lowest cost

Troubleshooting

“Invalid API key” / 401
Make sure ANTHROPIC_API_KEY is set to your WaveSpeed key, not an Anthropic key. Claude Code sends this value as the bearer token to whatever ANTHROPIC_BASE_URL points at.

“Model not found”
Always include the vendor prefix: anthropic/claude-opus-4.6 works; claude-opus-4.6 does not.

Requests still going to api.anthropic.com
ANTHROPIC_BASE_URL must be set in the same shell you launch claude from. Check with:

echo $ANTHROPIC_BASE_URL

If it’s empty, re-export it or reload your shell config.
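
Checking all three variables at once can be scripted; check_claude_env below is a hypothetical helper for this guide, not a Claude Code command:

```shell
# Report which of the three Claude Code overrides are set in this shell.
check_claude_env() {
  for v in ANTHROPIC_BASE_URL ANTHROPIC_API_KEY ANTHROPIC_MODEL; do
    if eval "[ -n \"\${$v:-}\" ]"; then
      echo "$v is set"
    else
      echo "$v is NOT set"
    fi
  done
}

check_claude_env
```

Any "NOT set" line means Claude Code will fall back to its defaults for that setting.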

Tool use / function calling doesn’t work
Make sure the model you picked supports tool calling. anthropic/claude-opus-4.6 and openai/gpt-5.2-pro both do; some smaller models don’t.

Pricing

You pay WaveSpeed’s per-token rate for the model you select — no additional Claude Code fee. Compare pricing at wavespeed.ai/llm. Claude Opus 4.6 runs $5/M input, $25/M output. Smaller models drop under $1/M.
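
At those rates, session cost is simply input_tokens/1M × $5 plus output_tokens/1M × $25. A quick sketch with made-up token counts:

```shell
# Estimate session cost at the Opus 4.6 rates above ($5/M in, $25/M out).
# The token counts are hypothetical examples.
awk 'BEGIN {
  in_tokens  = 800000    # 800K input tokens
  out_tokens = 120000    # 120K output tokens
  cost = in_tokens / 1e6 * 5 + out_tokens / 1e6 * 25
  printf "Estimated cost: $%.2f\n", cost   # prints: Estimated cost: $7.00
}'
```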

Why This Setup Works

Claude Code’s ANTHROPIC_BASE_URL override was designed so teams could route through proxies, gateways, and enterprise auth layers. WaveSpeed LLM is an OpenAI-compatible gateway, and Claude Code is happy to speak OpenAI Chat Completions when pointed at one — so the override works cleanly.

Start Coding Today

Three env vars and you’re running Claude Code on 290+ models with one bill.

Get your WaveSpeed API key and point Claude Code at https://llm.wavespeed.ai/v1 in minutes.