
anthropic/claude-opus-4.7

1,000,000-token context · $5.00/M input tokens · $25.00/M output tokens

Opus 4.7 is Anthropic's latest and most capable Opus model, designed for autonomous agents that operate across long-running, multi-step workflows. It builds on Opus 4.6 with significant gains in agentic coding — scoring 64.3% on SWE-bench Pro and 70% on CursorBench — and resolves 3x more production tasks. The model excels at large-codebase navigation, complex refactors, multi-stage debugging, and end-to-end project execution over extended sessions.

Beyond engineering, Opus 4.7 performs strongly on knowledge work — document drafting, presentation building, and data analysis — with near-production-ready output in a single pass. It maintains coherence across very long outputs and supports high-resolution vision up to 3.75 megapixels with 98.5% visual acuity.

Pricing

Pay as you go

No upfront fees, pay only for what you use

Input: $5.00 / M tokens
Output: $25.00 / M tokens

API Usage

Use the following code example to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Pengenalan Model

Anthropic Claude Opus 4.7

Opus 4.7 is Anthropic's most capable generally available model, released on April 16, 2026. It delivers a 13% lift on coding benchmarks, 3x more production tasks resolved, and near-perfect vision accuracy at 98.5% — all at the same pricing as Opus 4.6. The model is purpose-built for long-horizon agentic work, complex software engineering, and sustained knowledge tasks that require deep reasoning and self-verification.

Beyond coding, Opus 4.7 introduces high-resolution vision support up to 3.75 megapixels (3x the previous limit), a new xhigh effort level for finer quality-cost control, and stricter instruction following that makes it more predictable in production. It scores 64.3% on SWE-bench Pro (up from 53.4%), 70% on CursorBench (up from 58%), and leads on agentic benchmarks ahead of GPT-5.4 and Gemini 3.1 Pro.


Why It Stands Out

  • Large Language Model architecture with adaptive thinking for efficient processing
  • 1,000,000-token context window for long document handling and multi-session memory
  • Competitive pricing at $5.00/$25.00 per million tokens (same as Opus 4.6)
  • 98.5% visual acuity with 3.75MP image support — a transformational leap from 54.5%
  • 3x more production tasks solved compared to its predecessor

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 128,000 tokens
  • Vision: Supported (high-resolution up to 3.75 megapixels)
  • Function Calling: Supported
  • Effort Levels: low, medium, high, xhigh (new), max
  • Task Budgets: Supported (public beta)
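
Effort levels are typically chosen per request. Since this page does not document the exact parameter name, the sketch below just shows where such a field would sit in an OpenAI-compatible request body — the `effort` key is an assumption, so check the gateway docs before relying on it:

```python
import json

# Sketch of a request body with a per-call effort level.
# The "effort" field name is an assumption, not confirmed by this page.
payload = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Summarize this design doc."}],
    "effort": "xhigh",  # one of: low, medium, high, xhigh, max
}
print(json.dumps(payload))
```

With the OpenAI Python SDK, undocumented fields like this can usually be passed through via `extra_body` on `client.chat.completions.create(...)`.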

Benchmarks

Benchmark | Opus 4.6 | Opus 4.7 | GPT-5.4 | Gemini 3.1 Pro
SWE-bench Pro | 53.4% | 64.3% | 57.7% | 54.2%
SWE-bench Verified | 80.8% | 87.6% | 80.6% | —
CursorBench | 58% | 70% | — | —
93-task Coding Benchmark | Baseline | +13% | — | —
Production Tasks (Rakuten-SWE-Bench) | Baseline | 3x | — | —
Visual Acuity (Computer Use) | 54.5% | 98.5% | — | —
OfficeQA Pro (Document Reasoning) | Baseline | 21% fewer errors | — | —
Multi-step Workflows | Baseline | +14%, 1/3 fewer tool errors | — | —

Specifications

Specification | Value
Provider | Anthropic
Model Type | Large Language Model (LLM)
Architecture | Transformer (Adaptive Thinking)
Context Window | 1,000,000 tokens
Max Output | 128,000 tokens
Input | Text, Image
Output | Text
Vision | Supported (up to 3.75MP)
Function Calling | Supported
Effort Levels | low, medium, high, xhigh, max
Release Date | April 16, 2026

Pricing

Token Type | Cost per Million Tokens
Input | $5.00
Output | $25.00

Note: Opus 4.7 ships with an updated tokenizer that may produce 1.0–1.35x more tokens depending on content type. The per-token price is unchanged, but effective cost may increase up to 35% for certain content (structured data, code). Benchmark your actual workloads before migrating at scale.
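
The effect of the tokenizer change on spend can be estimated directly. A small helper, using the prices from the table above and the 1.0–1.35x inflation range from the note (the example workload sizes are illustrative):

```python
def effective_cost(input_tokens: int, output_tokens: int,
                   inflation: float = 1.0) -> float:
    """Estimated cost in USD at $5.00/M input and $25.00/M output,
    scaled by the tokenizer inflation factor (1.0-1.35x)."""
    in_cost = input_tokens * inflation * 5.00 / 1_000_000
    out_cost = output_tokens * inflation * 25.00 / 1_000_000
    return in_cost + out_cost

# 100K input + 10K output at the 1.35x worst case for code-heavy content:
print(round(effective_cost(100_000, 10_000, inflation=1.35), 4))
```

At the 1.35x worst case, the same nominal token counts cost 35% more than under the old tokenizer, which is why benchmarking real workloads before migrating is worthwhile.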


How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7


API Usage

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
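
Since the model supports function calling through the same OpenAI-compatible endpoint, a request can also declare tools. The sketch below shows the request body shape; the `get_weather` tool is purely illustrative:

```python
import json

# Illustrative tool schema in the OpenAI-compatible "tools" format.
# If the model decides to call it, the response carries a tool_calls
# entry instead of plain text content.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": tools,
}
print(json.dumps(payload)[:60])
```

The same `tools` list can be passed directly as the `tools=` argument of `client.chat.completions.create(...)` in the Python SDK example above.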

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
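
Because the endpoint is OpenAI-compatible and the model accepts image input, images should go in as standard multi-part message content. A sketch of the body (the image URL is a placeholder; whether base64 data URLs are also accepted is not stated on this page):

```python
import json

# Vision request sketch: the user message carries a list of content
# parts, mixing text with an OpenAI-style image_url part.
payload = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this screenshot."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/screenshot.png"}},
        ],
    }],
}
print(len(json.dumps(payload)))
```

Images up to 3.75 megapixels are supported, per the specifications above; larger inputs may be downscaled by the provider.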

What's New vs Opus 4.6

Aspect | Opus 4.6 | Opus 4.7
Coding (93-task) | Baseline | +13%
Production Tasks | Baseline | 3x more resolved
Visual Acuity | 54.5% | 98.5%
Max Image Resolution | ~1.25MP | 3.75MP (3x)
Effort Levels | low/medium/high/max | + xhigh (new)
Task Budgets | — | Public Beta
Instruction Following | Standard | Stricter, more literal
Tokenizer | v1 | Updated (1.0–1.35x more tokens)
Pricing | $5/$25 | $5/$25 (unchanged)

Notes

  • Model: anthropic/claude-opus-4.7
  • Provider: Anthropic
  • Also available on: Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry
  • The model follows instructions more literally than Opus 4.6 — prompts may need retuning
  • Extended thinking now uses adaptive mode only (manual budget_tokens removed)
  • Sampling parameters (temperature, top_p, top_k) are no longer supported via API
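
Since `temperature`, `top_p`, and `top_k` are no longer accepted, call sites written for Opus 4.6 may need a small migration shim that drops them before sending. A defensive sketch (the helper is illustrative, not part of any SDK):

```python
# Sampling parameters the Opus 4.7 endpoint no longer accepts,
# per the note above.
UNSUPPORTED = {"temperature", "top_p", "top_k"}

def strip_sampling_params(request: dict) -> dict:
    """Return a copy of the request with unsupported sampling keys removed."""
    return {k: v for k, v in request.items() if k not in UNSUPPORTED}

legacy = {"model": "anthropic/claude-opus-4.7",
          "messages": [{"role": "user", "content": "Hi"}],
          "temperature": 0.7, "top_p": 0.9}
print(sorted(strip_sampling_params(legacy)))
```

This keeps legacy request builders working unchanged while guaranteeing the rejected fields never reach the API.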

Sources: Anthropic official release, felloai.com, nxcode.io, thenextweb.com. Content was rephrased for compliance with licensing restrictions.

Info

Provider: anthropic
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 1,000,000
Max Output: 128,000
Vision: ✓ Supported
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7

anthropic/claude-opus-4.7


Input: $5 /M
Output: $25 /M
Context: 1000K
Max Output: 128K
Vision: Supported
Tool Use: Supported

Try anthropic/claude-opus-4.7 on WaveSpeedAI

Access anthropic/claude-opus-4.7 through our unified API — OpenAI-compatible, no cold starts, transparent pricing.

Open Playground