Connect Coding Agents to WaveSpeedAI LLM

Use this guide when you want a coding assistant or developer agent to run through WaveSpeedAI LLM using an OpenAI-compatible endpoint. It covers Claude Code, OpenAI Codex, OpenClaw, and other custom model clients.

Most coding tools are built around one provider. WaveSpeedAI LLM is usually easiest to test when the tool supports an OpenAI-compatible custom provider:

Coding tool
  -> Custom OpenAI-compatible provider
  -> https://llm.wavespeed.ai/v1
  -> Selected WaveSpeedAI model

Claude Code, OpenAI Codex, OpenClaw, or Custom OpenAI?

Tool            Recommended path
Claude Code     Use an Anthropic-compatible gateway that routes to WaveSpeedAI
OpenAI Codex    Use a custom OpenAI-compatible base URL if your setup supports it
OpenClaw        Add WaveSpeedAI as a custom provider
Other tools     Choose OpenAI-compatible or Custom OpenAI mode

Universal OpenAI-Compatible Settings

Setting     Value
Base URL    https://llm.wavespeed.ai/v1
API key     Your WaveSpeedAI API key
Protocol    OpenAI Chat Completions
Model ID    Provider-prefixed ID, such as anthropic/claude-opus-4.7
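These settings map directly onto a raw Chat Completions request. A minimal sketch in Python using only the standard library; the model ID and key placeholder are the example values from the table above, and build_chat_request is a hypothetical helper name:

```python
import json
import urllib.request

BASE_URL = "https://llm.wavespeed.ai/v1"
API_KEY = "YOUR_WAVESPEED_API_KEY"  # replace with your WaveSpeedAI API key

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style Chat Completions request for the WaveSpeedAI endpoint."""
    payload = {
        "model": model,  # provider-prefixed ID, e.g. "anthropic/claude-opus-4.7"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_chat_request("anthropic/claude-opus-4.7", "Say hello.")
print(req.full_url)
```

Sending the request (for example with urllib.request.urlopen) is left out here; the point is that every coding-tool integration on this page reduces to these same three pieces: base URL, bearer key, and provider-prefixed model ID.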

Verify the Backend First

Before configuring an agent, confirm your key and model work with a small API call.

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_WAVESPEED_API_KEY" \
  -d '{
    "model": "anthropic/claude-opus-4.7",
    "messages": [
      {
        "role": "user",
        "content": "Reply with one sentence confirming you are ready for coding tasks."
      }
    ]
  }'
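If the call succeeds, the reply text sits at a fixed place in the Chat Completions response body. A small parsing helper, assuming the standard OpenAI response schema (extract_reply is a hypothetical name):

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style Chat Completions response."""
    return response["choices"][0]["message"]["content"]

# Shape of a (trimmed) successful response:
sample = {
    "model": "anthropic/claude-opus-4.7",
    "choices": [
        {"message": {"role": "assistant", "content": "Ready for coding tasks."}}
    ],
}
print(extract_reply(sample))  # -> Ready for coding tasks.
```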

Verified WaveSpeedAI Endpoint Checks

The examples on this page use WaveSpeedAI LLM endpoint values that were verified against https://llm.wavespeed.ai/v1/chat/completions.

Check                                          Result
anthropic/claude-opus-4.7 chat completion      Verified
openai/gpt-5.5 chat completion                 Verified
openai/gpt-5.5 streaming response              Verified
qwen/qwen3-coder chat completion               Verified
deepseek/deepseek-chat chat completion         Verified
bytedance-seed/seed-1.6-flash chat completion  Verified

After the endpoint check passes, run a read-only prompt in your coding tool to confirm the selected provider, model, and project permissions before asking it to edit files.

Before a Long Coding Session

Run a small prompt first:

Reply with one sentence confirming which model you are.

This helps confirm the agent is using the intended provider, key, and model before it starts reading or editing a large codebase.
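One way to script this confirmation is to compare the model you configured against the `model` field echoed back in the response. The sketch below assumes some gateways strip the provider prefix before echoing, so it accepts the bare model name too; model_matches is a hypothetical helper:

```python
def model_matches(expected: str, response: dict) -> bool:
    """Check that the model reported in a Chat Completions response matches
    the configured model ID, allowing for a stripped provider prefix."""
    reported = response.get("model", "")
    return reported == expected or reported == expected.split("/", 1)[-1]

print(model_matches("anthropic/claude-opus-4.7", {"model": "claude-opus-4.7"}))  # -> True
```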

Model Selection Tips

Task                        Suggested setup
Large codebase changes      Use a stronger reasoning or coding model
Small edits                 Use a faster, cheaper model
Explaining code             Use a balanced chat model
Long-context repo analysis  Prioritize context window
Cost control                Use cheaper models for exploration, stronger models for final edits
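If you script model selection, the table above reduces to a lookup. The task names and the specific model choices below are illustrative only, not a fixed WaveSpeedAI recommendation:

```python
# Illustrative mapping only -- adjust the model IDs to what your account offers.
MODEL_BY_TASK = {
    "large_change":  "anthropic/claude-opus-4.7",    # stronger reasoning/coding model
    "small_edit":    "bytedance-seed/seed-1.6-flash",# faster, cheaper model
    "explain":       "deepseek/deepseek-chat",       # balanced chat model
    "repo_analysis": "qwen/qwen3-coder",             # prioritize context window
}

def pick_model(task: str, default: str = "openai/gpt-5.5") -> str:
    """Return a model ID for a task category, falling back to a default."""
    return MODEL_BY_TASK.get(task, default)

print(pick_model("small_edit"))  # -> bytedance-seed/seed-1.6-flash
```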

Claude Code

Claude Code uses Anthropic-style client settings. When you want to power Claude Code workflows with WaveSpeedAI LLM, use an Anthropic-compatible gateway that routes requests to the WaveSpeedAI OpenAI-compatible endpoint.

Claude Code
  -> Anthropic-compatible gateway
  -> https://llm.wavespeed.ai/v1

Set the WaveSpeedAI values in the shell that launches Claude Code:

export ANTHROPIC_BASE_URL="https://llm.wavespeed.ai/v1"
export ANTHROPIC_AUTH_TOKEN="YOUR_WAVESPEED_API_KEY"
export ANTHROPIC_MODEL="anthropic/claude-opus-4.7"
claude

Windows PowerShell:

$env:ANTHROPIC_BASE_URL = "https://llm.wavespeed.ai/v1"
$env:ANTHROPIC_AUTH_TOKEN = "YOUR_WAVESPEED_API_KEY"
$env:ANTHROPIC_MODEL = "anthropic/claude-opus-4.7"
claude

If your Claude Code setup loads environment variables from ~/.claude/settings.json, add the WaveSpeedAI values under env.

{
  "env": {
    "DISABLE_AUTOUPDATER": "1",
    "ANTHROPIC_BASE_URL": "https://llm.wavespeed.ai/v1",
    "ANTHROPIC_AUTH_TOKEN": "YOUR_WAVESPEED_API_KEY",
    "ANTHROPIC_MODEL": "anthropic/claude-opus-4.7"
  }
}
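If you script the settings.json update, merge the WaveSpeedAI values into the existing env object rather than replacing it, so entries like DISABLE_AUTOUPDATER above survive. A sketch; merge_wavespeed_env is a hypothetical helper:

```python
import json

WAVESPEED_ENV = {
    "ANTHROPIC_BASE_URL": "https://llm.wavespeed.ai/v1",
    "ANTHROPIC_AUTH_TOKEN": "YOUR_WAVESPEED_API_KEY",
    "ANTHROPIC_MODEL": "anthropic/claude-opus-4.7",
}

def merge_wavespeed_env(settings: dict) -> dict:
    """Return a copy of a Claude Code settings dict with the WaveSpeedAI
    env values merged in, keeping any existing env entries."""
    merged = dict(settings)
    merged["env"] = {**settings.get("env", {}), **WAVESPEED_ENV}
    return merged

existing = {"env": {"DISABLE_AUTOUPDATER": "1"}}
print(json.dumps(merge_wavespeed_env(existing), indent=2))
```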

Set ANTHROPIC_MODEL to the model name accepted by your Claude Code runtime. Use the WaveSpeedAI model ID when your setup passes model names through, or a local alias that maps to a WaveSpeedAI model upstream.

OpenAI Codex

For Codex clients that support custom OpenAI-compatible providers, configure WaveSpeedAI as the provider and use a tested WaveSpeedAI model ID.

model = "openai/gpt-5.5"
model_provider = "wavespeed"
 
[model_providers.wavespeed]
name = "WaveSpeedAI LLM"
base_url = "https://llm.wavespeed.ai/v1"
env_key = "WAVESPEED_API_KEY"
wire_api = "chat"

Then launch Codex with your WaveSpeedAI key in the environment:

export WAVESPEED_API_KEY="YOUR_WAVESPEED_API_KEY"
codex

Windows PowerShell:

$env:WAVESPEED_API_KEY = "YOUR_WAVESPEED_API_KEY"
codex

The OpenAI-compatible endpoint path for Codex-style clients was verified with openai/gpt-5.5, including a streaming request. If your Codex version uses different config field names or a different wire protocol, keep the same WaveSpeedAI base URL, API key, and model ID while adapting the surrounding config to that client version.
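In the OpenAI streaming schema, a streaming response arrives as server-sent events: each `data:` line carries a JSON chunk with text deltas under choices[0].delta.content, and a `data: [DONE]` sentinel ends the stream. A parsing sketch over sample lines (the chunk contents below are illustrative, not captured output):

```python
import json

def collect_stream_text(lines):
    """Join the content deltas from OpenAI-style streaming chunk lines."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":  # sentinel ending the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # the first chunk may carry only the role
            parts.append(delta["content"])
    return "".join(parts)

sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":", world"}}]}',
    "data: [DONE]",
]
print(collect_stream_text(sample))  # -> Hello, world
```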

After configuring Codex, start with a read-only prompt such as:

Summarize this repository in three bullets. Do not edit files.

OpenClaw

For OpenClaw setups with custom OpenAI-compatible providers, add WaveSpeedAI as a provider and test a small prompt first.

{
  agents: {
    defaults: {
      model: { primary: "wavespeed/anthropic/claude-opus-4.7" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      wavespeed: {
        baseUrl: "https://llm.wavespeed.ai/v1",
        apiKey: "${WAVESPEED_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "anthropic/claude-opus-4.7",
            name: "Claude Opus 4.7 via WaveSpeedAI",
            input: ["text"],
          },
          {
            id: "openai/gpt-5.5",
            name: "GPT-5.5 via WaveSpeedAI",
            input: ["text"],
          }
        ],
      },
    },
  },
}

If your OpenClaw version uses a different provider config shape, keep the same base URL, API key, and model IDs while adapting the surrounding fields to that version.
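The ${WAVESPEED_API_KEY} placeholder above assumes your OpenClaw version substitutes environment variables into the config. If it does not, you can pre-process the value yourself; a sketch, where expand_env_refs is a hypothetical helper:

```python
import os
import re

def expand_env_refs(value: str, env=os.environ) -> str:
    """Replace ${VAR} references with environment values,
    leaving unresolved references untouched."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), m.group(0)), value)

print(expand_env_refs("${WAVESPEED_API_KEY}", env={"WAVESPEED_API_KEY": "sk-demo"}))  # -> sk-demo
```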

After selecting the provider in OpenClaw, start with a read-only prompt before asking it to modify a project.

Troubleshooting

Problem                              Likely cause or fix
401 error                            Wrong API key or missing bearer token
Model not found                      Model ID is incomplete or not available
Tool still calls OpenAI              Base URL was not changed
Claude Code request is not routed    Confirm the WaveSpeedAI base URL, API key, and upstream model mapping
Claude Code model is not recognized  Set ANTHROPIC_MODEL to the model name accepted by your Claude Code runtime
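When a failing request returns an HTTP status, the status narrows the search. The mapping below is a rough heuristic, not the endpoint's documented error behavior:

```python
# Heuristic status-to-cause mapping; actual status codes may vary by error.
LIKELY_CAUSE = {
    401: "Wrong API key or missing bearer token",
    404: "Model ID is incomplete or not available",
}

def diagnose(status: int) -> str:
    """Map an HTTP status to a likely cause from the troubleshooting table."""
    return LIKELY_CAUSE.get(status, "Check base URL, API key, and model ID")

print(diagnose(401))  # -> Wrong API key or missing bearer token
```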
© 2025 WaveSpeedAI. All rights reserved.