anthropic/claude-opus-4.7

1,000,000 context · $4.50/M input (was $5.00) · $22.50/M output (was $25.00) · 10% off

Opus 4.7 is Anthropic's latest and most capable Opus model, designed for autonomous agents that operate across long-running, multi-step workflows. It builds on Opus 4.6 with significant gains in agentic coding — scoring 64.3% on SWE-bench Pro and 70% on CursorBench — and resolves 3x more production tasks. The model excels at large codebase navigation, complex refactors, multi-stage debugging, and end-to-end project execution over extended sessions.

Beyond engineering, Opus 4.7 shows strong knowledge work performance — document drafting, presentation building, and data analysis — with near-production-ready output in a single pass. It maintains coherence across very long outputs and supports high-resolution vision up to 3.75 megapixels with 98.5% visual acuity.

Pricing

Pay-as-you-go

No upfront cost — pay only for what you use

Input
$4.50 / M Tokens (was $5.00)
Output
$22.50 / M Tokens (was $25.00)

API Usage

Use the code example below to integrate with the API:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Model Introduction

Anthropic Claude Opus 4.7

Opus 4.7 is Anthropic's most capable generally available model, released on April 16, 2026. It delivers a 13% lift on coding benchmarks, 3x more production tasks resolved, and near-perfect vision accuracy at 98.5% — all at the same pricing as Opus 4.6. The model is purpose-built for long-horizon agentic work, complex software engineering, and sustained knowledge tasks that require deep reasoning and self-verification.

Beyond coding, Opus 4.7 introduces high-resolution vision support up to 3.75 megapixels (3x the previous limit), a new xhigh effort level for finer quality-cost control, and stricter instruction following that makes it more predictable in production. It scores 64.3% on SWE-bench Pro (up from 53.4%), 70% on CursorBench (up from 58%), and leads on agentic benchmarks ahead of GPT-5.4 and Gemini 3.1 Pro.


Why It Stands Out

  • Large Language Model architecture with adaptive thinking for efficient processing
  • 1,000,000-token context window for long document handling and multi-session memory
  • Competitive pricing at $5.00/$25.00 per million tokens (same as Opus 4.6)
  • 98.5% visual acuity with 3.75MP image support — a transformational leap from 54.5%
  • 3x more production tasks solved compared to its predecessor

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 128,000 tokens
  • Vision: Supported (high-resolution up to 3.75 megapixels)
  • Function Calling: Supported
  • Effort Levels: low, medium, high, xhigh (new), max
  • Task Budgets: Supported (public beta)
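This page lists the available effort levels but does not document how to select one per request. As a minimal sketch only, the snippet below assumes a hypothetical `effort` field passed through the OpenAI SDK's `extra_body` passthrough — verify the actual parameter name against the provider's documentation before use.

```python
# Hypothetical request options for selecting an effort level.
# NOTE: "effort" is an assumed field name, not confirmed by this page;
# extra_body is the OpenAI SDK's standard passthrough for extra fields.
VALID_EFFORT_LEVELS = {"low", "medium", "high", "xhigh", "max"}

request_kwargs = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Refactor this module."}],
    "extra_body": {"effort": "xhigh"},  # new xhigh level from this release
}

assert request_kwargs["extra_body"]["effort"] in VALID_EFFORT_LEVELS
```

These kwargs would be unpacked into `client.chat.completions.create(**request_kwargs)`.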

Benchmarks

Benchmark                              Opus 4.6   Opus 4.7                      GPT-5.4   Gemini 3.1 Pro
SWE-bench Pro                          53.4%      64.3%                         57.7%     54.2%
SWE-bench Verified                     80.8%      87.6%                         80.6%
CursorBench                            58%        70%
93-task Coding Benchmark               Baseline   +13%
Production Tasks (Rakuten-SWE-Bench)   Baseline   3x
Visual Acuity (Computer Use)           54.5%      98.5%
OfficeQA Pro (Document Reasoning)      Baseline   21% fewer errors
Multi-step Workflows                   Baseline   +14%, 1/3 fewer tool errors

Specifications

Specification      Value
Provider           Anthropic
Model Type         Large Language Model (LLM)
Architecture       Transformer (Adaptive Thinking)
Context Window     1,000,000 tokens
Max Output         128,000 tokens
Input              Text, Image
Output             Text
Vision             Supported (up to 3.75MP)
Function Calling   Supported
Effort Levels      low, medium, high, xhigh, max
Release Date       April 16, 2026

Pricing

Token Type   Cost per Million Tokens
Input        $5.00
Output       $25.00

Note: Opus 4.7 ships with an updated tokenizer that may produce 1.0–1.35x more tokens depending on content type. The per-token price is unchanged, but effective cost may increase up to 35% for certain content (structured data, code). Benchmark your actual workloads before migrating at scale.
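The tokenizer note above can be turned into a quick back-of-the-envelope check. The sketch below uses the list prices from this page and hypothetical token counts; `token_multiplier` models the 1.0–1.35x inflation for certain content types.

```python
def effective_cost_usd(input_tokens: int, output_tokens: int,
                       token_multiplier: float = 1.0,
                       input_price: float = 5.00,
                       output_price: float = 25.00) -> float:
    """Estimate request cost in USD after the tokenizer change.

    Prices are USD per million tokens; token_multiplier scales the
    token counts to model the updated tokenizer's 1.0-1.35x inflation.
    """
    scaled_in = input_tokens * token_multiplier
    scaled_out = output_tokens * token_multiplier
    return (scaled_in * input_price + scaled_out * output_price) / 1_000_000

# Hypothetical workload: 200K input tokens, 20K output tokens.
base = effective_cost_usd(200_000, 20_000)                        # $1.00 + $0.50 = $1.50
worst = effective_cost_usd(200_000, 20_000, token_multiplier=1.35)  # $2.025 at the 1.35x worst case
```

Here the worst-case multiplier raises effective cost 35% even though per-token prices are unchanged, which is why benchmarking real workloads matters.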


How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
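The two steps above can be sketched as a small helper that assembles the task, context, and output-format parts into a chat `messages` list (the helper name and structure are illustrative, not part of the API):

```python
def build_messages(task: str, context: str = "", output_format: str = "") -> list[dict]:
    """Assemble a chat `messages` list from a task, optional context,
    and an optional desired output format."""
    parts = [task]
    if context:
        parts.append(f"Context:\n{context}")
    if output_format:
        parts.append(f"Respond in this format: {output_format}")
    return [{"role": "user", "content": "\n\n".join(parts)}]

msgs = build_messages("Summarize the release notes.",
                      output_format="three bullet points")
```

The resulting list is passed directly as the `messages` argument in the API examples below.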

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7


API Usage

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

What's New vs Opus 4.6

Aspect                  Opus 4.6               Opus 4.7
Coding (93-task)        Baseline               +13%
Production Tasks        Baseline               3x more resolved
Visual Acuity           54.5%                  98.5%
Max Image Resolution    ~1.25MP                3.75MP (3x)
Effort Levels           low/medium/high/max    + xhigh (new)
Task Budgets                                   Public Beta
Instruction Following   Standard               Stricter, more literal
Tokenizer               v1                     Updated (1.0–1.35x more tokens)
Pricing                 $5/$25                 $5/$25 (unchanged)

Notes

  • Model: anthropic/claude-opus-4.7
  • Provider: Anthropic
  • Also available on: Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry
  • The model follows instructions more literally than Opus 4.6 — prompts may need retuning
  • Extended thinking now uses adaptive mode only (manual budget_tokens removed)
  • Sampling parameters (temperature, top_p, top_k) are no longer supported via API

Sources: Anthropic official release, felloai.com, nxcode.io, thenextweb.com. Content was rephrased for compliance with licensing restrictions.

Information

Provider: anthropic
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 1,000,000
Max Output: 128,000
Vision: ✓ Supported
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7

Claude Opus 4.7 API

anthropic/claude-opus-4.7


Input

$4.50 /M (was $5)

Output

$22.50 /M (was $25)

Discount

10% off

Context

1000K

Max Output

128K

Vision

Supported

Tool Use

Supported

Try Claude Opus 4.7 on WaveSpeedAI

Access Claude Opus 4.7 through a unified API: OpenAI-compatible, no cold starts, transparent pricing.

Open the Playground

Frequently Asked Questions about Claude Opus 4.7

How much does the Claude Opus 4.7 API cost?

WaveSpeedAI pricing: $4.50 per million input tokens and $22.50 per million output tokens. Prompt caching and batch processing are priced separately and can lower effective costs for long, repetitive workloads.
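As a quick sanity check of the discounted rates, here is the arithmetic for a hypothetical request:

```python
# Discounted WaveSpeedAI rates, USD per million tokens.
INPUT_PRICE, OUTPUT_PRICE = 4.50, 22.50

# Hypothetical request: 100K input tokens, 10K output tokens.
cost = (100_000 * INPUT_PRICE + 10_000 * OUTPUT_PRICE) / 1_000_000
# 0.45 + 0.225 = 0.675 USD
```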

How large is Claude Opus 4.7's context window?

Claude Opus 4.7 supports up to 1000K context tokens and up to 128K output tokens per request.

Is Claude Opus 4.7 OpenAI-compatible?

Yes. WaveSpeedAI serves Claude Opus 4.7 through the OpenAI-compatible endpoint https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL there and set your WaveSpeedAI API key, and it works as-is.

How do I get started with Claude Opus 4.7?

Sign in to WaveSpeedAI, create an API key under Access Keys, and send requests to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to try Claude Opus 4.7.

Related LLM APIs