anthropic/claude-opus-4.7

1,000,000 context · $4.50/M input (was $5.00) · $22.50/M output (was $25.00) · 10% off

Opus 4.7 is Anthropic's latest and most capable Opus model, designed for autonomous agents that operate across long-running, multi-step workflows. It builds on Opus 4.6 with significant gains in agentic coding — scoring 64.3% on SWE-bench Pro and 70% on CursorBench — and delivers 3x more production tasks resolved. The model excels at large codebase navigation, complex refactors, multi-stage debugging, and end-to-end project execution over extended sessions.

Beyond engineering, Opus 4.7 shows strong knowledge work performance — document drafting, presentation building, and data analysis — with near-production-ready output in a single pass. It maintains coherence across very long outputs and supports high-resolution vision up to 3.75 megapixels with 98.5% visual acuity.

Pricing

Pay-as-you-go

No upfront fees; pay only for what you actually use.

Input
$4.50 / M Tokens (was $5.00)
Output
$22.50 / M Tokens (was $25.00)

API Usage

Integrate our API using the following code example:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Model Introduction

Anthropic Claude Opus 4.7

Opus 4.7 is Anthropic's most capable generally available model, released on April 16, 2026. It delivers a 13% lift on coding benchmarks, 3x more production tasks resolved, and near-perfect vision accuracy at 98.5% — all at the same pricing as Opus 4.6. The model is purpose-built for long-horizon agentic work, complex software engineering, and sustained knowledge tasks that require deep reasoning and self-verification.

Beyond coding, Opus 4.7 introduces high-resolution vision support up to 3.75 megapixels (3x the previous limit), a new xhigh effort level for finer quality-cost control, and stricter instruction following that makes it more predictable in production. It scores 64.3% on SWE-bench Pro (up from 53.4%), 70% on CursorBench (up from 58%), and leads on agentic benchmarks ahead of GPT-5.4 and Gemini 3.1 Pro.


Why It Stands Out

  • Large Language Model architecture with adaptive thinking for efficient processing
  • 1,000,000-token context window for long-document handling and multi-session memory
  • Competitive pricing at $5.00/$25.00 per million tokens (same as Opus 4.6)
  • 98.5% visual acuity with 3.75MP image support, a transformational leap from 54.5%
  • 3x more production tasks resolved compared to its predecessor

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 128,000 tokens
  • Vision: Supported (high-resolution up to 3.75 megapixels)
  • Function Calling: Supported
  • Effort Levels: low, medium, high, xhigh (new), max
  • Task Budgets: Supported (public beta)
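As a sketch of how a per-request effort level might be selected, the helper below validates a level and builds the request payload. Note that the `effort` field name and the `build_extra_body` helper are illustrative assumptions for an OpenAI-compatible gateway, not a documented API; check the provider docs for the actual parameter.

```python
# Effort levels supported by Opus 4.7, per the feature list above.
EFFORT_LEVELS = ("low", "medium", "high", "xhigh", "max")

def build_extra_body(effort: str) -> dict:
    """Build an extra_body payload for an OpenAI-compatible request.

    The "effort" field name is an assumption for illustration; the
    provider may expose effort levels under a different parameter.
    """
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {"effort": effort}

print(build_extra_body("xhigh"))  # {'effort': 'xhigh'}
```

The resulting dict could be passed as `extra_body=` to `client.chat.completions.create(...)` in the SDK examples below.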

Benchmarks

Benchmark | Opus 4.6 | Opus 4.7 | GPT-5.4 | Gemini 3.1 Pro
SWE-bench Pro | 53.4% | 64.3% | 57.7% | 54.2%
SWE-bench Verified | 80.8% | 87.6% | 80.6% | -
CursorBench | 58% | 70% | - | -
93-task Coding Benchmark | Baseline | +13% | - | -
Production Tasks (Rakuten-SWE-Bench) | Baseline | 3x | - | -
Visual Acuity (Computer Use) | 54.5% | 98.5% | - | -
OfficeQA Pro (Document Reasoning) | Baseline | 21% fewer errors | - | -
Multi-step Workflows | Baseline | +14%, 1/3 fewer tool errors | - | -

Specifications

Specification | Value
Provider | Anthropic
Model Type | Large Language Model (LLM)
Architecture | Transformer (Adaptive Thinking)
Context Window | 1,000,000 tokens
Max Output | 128,000 tokens
Input | Text, Image
Output | Text
Vision | Supported (up to 3.75MP)
Function Calling | Supported
Effort Levels | low, medium, high, xhigh, max
Release Date | April 16, 2026

Pricing

Token Type | Cost per Million Tokens
Input | $5.00
Output | $25.00

Note: Opus 4.7 ships with an updated tokenizer that may produce 1.0–1.35x more tokens depending on content type. The per-token price is unchanged, but effective cost may increase up to 35% for certain content (structured data, code). Benchmark your actual workloads before migrating at scale.
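The note above can be turned into a quick back-of-the-envelope check. The sketch below estimates effective cost under the tokenizer multiplier, using the undiscounted list prices from the table; the function name and the sample workload are illustrative.

```python
# Undiscounted list prices from the pricing table above.
INPUT_PRICE = 5.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 25.00 / 1_000_000  # USD per output token

def effective_cost(input_tokens: int, output_tokens: int,
                   tokenizer_multiplier: float = 1.0) -> float:
    """Cost in USD after scaling token counts by the tokenizer multiplier.

    Token counts are as measured under the old tokenizer; the multiplier
    (1.0-1.35x per the note above) models the updated tokenizer's inflation.
    """
    scaled_in = input_tokens * tokenizer_multiplier
    scaled_out = output_tokens * tokenizer_multiplier
    return scaled_in * INPUT_PRICE + scaled_out * OUTPUT_PRICE

# Hypothetical workload: 100K input / 10K output tokens per request.
base = effective_cost(100_000, 10_000)         # 0.75
worst = effective_cost(100_000, 10_000, 1.35)  # 1.0125
print(f"${base:.4f} -> ${worst:.4f} worst case (+35%)")
```

As the note advises, measure the multiplier on your own content mix rather than assuming the worst case.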


How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7


API Usage

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

What's New vs Opus 4.6

Aspect | Opus 4.6 | Opus 4.7
Coding (93-task) | Baseline | +13%
Production Tasks | Baseline | 3x more resolved
Visual Acuity | 54.5% | 98.5%
Max Image Resolution | ~1.25MP | 3.75MP (3x)
Effort Levels | low/medium/high/max | + xhigh (new)
Task Budgets | - | Public Beta
Instruction Following | Standard | Stricter, more literal
Tokenizer | v1 | Updated (1.0–1.35x more tokens)
Pricing | $5/$25 | $5/$25 (unchanged)

Notes

  • Model: anthropic/claude-opus-4.7
  • Provider: Anthropic
  • Also available on: Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry
  • The model follows instructions more literally than Opus 4.6 — prompts may need retuning
  • Extended thinking now uses adaptive mode only (manual budget_tokens removed)
  • Sampling parameters (temperature, top_p, top_k) are no longer supported via API
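Since sampling parameters are no longer accepted, existing Opus 4.6 call sites may need a migration pass. A minimal defensive sketch, assuming you hold request arguments in a dict before calling the SDK (the `sanitize_kwargs` helper is illustrative, not part of any SDK):

```python
# Parameters the notes above list as removed or unsupported on Opus 4.7.
UNSUPPORTED = {"temperature", "top_p", "top_k", "budget_tokens"}

def sanitize_kwargs(kwargs: dict) -> dict:
    """Drop request parameters Opus 4.7 no longer accepts.

    Illustrative migration helper: pass existing call-site kwargs
    through this before chat.completions.create().
    """
    dropped = UNSUPPORTED & kwargs.keys()
    if dropped:
        print(f"dropping unsupported params: {sorted(dropped)}")
    return {k: v for k, v in kwargs.items() if k not in UNSUPPORTED}

old_call = {
    "model": "anthropic/claude-opus-4.7",
    "temperature": 0.7,  # no longer supported via API
    "messages": [{"role": "user", "content": "Hello!"}],
}
clean = sanitize_kwargs(old_call)
print(sorted(clean))  # ['messages', 'model']
```

Whether the gateway silently ignores or rejects these parameters is not stated here, so stripping them client-side is the conservative choice.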

Sources: Anthropic official release, felloai.com, nxcode.io, thenextweb.com. Content was rephrased for compliance with licensing restrictions.

Info

Provider: anthropic
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 1,000,000
Max Output: 128,000
Vision: ✓ Supported
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7

Claude Opus 4.7 API

anthropic/claude-opus-4.7


Input

$4.50 /M (was $5)

Output

$22.50 /M (was $25)

Discount

10% off

Context

1000K

Max Output

128K

Vision

Supported

Tool Calling

Supported

Try Claude Opus 4.7 on WaveSpeedAI

Access Claude Opus 4.7 through our unified API: OpenAI-compatible, no cold starts, transparent billing.

Open Playground

Frequently Asked Questions about Claude Opus 4.7

How much does the Claude Opus 4.7 API cost?

WaveSpeedAI pricing: $4.50 per million input tokens and $22.50 per million output tokens. Prompt caching and batch processing are billed separately and can significantly reduce effective costs for long-context, highly repetitive tasks.

How large is Claude Opus 4.7's context window?

Claude Opus 4.7 supports up to 1000K context tokens per request, with up to 128K output tokens.

Is Claude Opus 4.7 OpenAI-compatible?

Yes. WaveSpeedAI serves Claude Opus 4.7 through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL at that address and use your WaveSpeedAI API key; no other code changes are needed.

How do I get started with Claude Opus 4.7?

Sign in to WaveSpeedAI, create an API key under Access Keys, and send requests to https://llm.wavespeed.ai/v1/chat/completions using the model ID shown above. New accounts receive free credits to try Claude Opus 4.7.

Related LLM APIs