anthropic/claude-opus-4.7

1,000,000 context · $5.00/M input (now $4.50/M) · $25.00/M output (now $22.50/M) · 10% off

Opus 4.7 is Anthropic's latest and most capable Opus model, designed for autonomous agents that operate across long-running, multi-step workflows. It builds on Opus 4.6 with significant gains in agentic coding — scoring 64.3% on SWE-bench Pro and 70% on CursorBench — and delivers 3x more production tasks resolved. The model excels at large codebase navigation, complex refactors, multi-stage debugging, and end-to-end project execution over extended sessions.

Beyond engineering, Opus 4.7 shows strong knowledge work performance — document drafting, presentation building, and data analysis — with near-production-ready output in a single pass. It maintains coherence across very long outputs and supports high-resolution vision up to 3.75 megapixels with 98.5% visual acuity.

Pricing

Usage-based billing

No upfront commitment; pay only for what you use.

Input
$4.50 / M Tokens (10% off the $5.00 list price)
Output
$22.50 / M Tokens (10% off the $25.00 list price)


Model Overview

Anthropic Claude Opus 4.7

Opus 4.7 is Anthropic's most capable generally available model, released on April 16, 2026. It delivers a 13% lift on coding benchmarks, 3x more production tasks resolved, and near-perfect vision accuracy at 98.5% — all at the same pricing as Opus 4.6. The model is purpose-built for long-horizon agentic work, complex software engineering, and sustained knowledge tasks that require deep reasoning and self-verification.

Beyond coding, Opus 4.7 introduces high-resolution vision support up to 3.75 megapixels (3x the previous limit), a new xhigh effort level for finer quality-cost control, and stricter instruction following that makes it more predictable in production. It scores 64.3% on SWE-bench Pro (up from 53.4%), 70% on CursorBench (up from 58%), and leads on agentic benchmarks ahead of GPT-5.4 and Gemini 3.1 Pro.


Why It Stands Out

  • Large Language Model architecture with adaptive thinking for efficient processing
  • 1,000,000-token context window for long document handling and multi-session memory
  • Competitive pricing at $5.00/$25.00 per million tokens (same as Opus 4.6)
  • 98.5% visual acuity with 3.75 MP image support — a transformational leap from 54.5%
  • 3x more production tasks resolved compared to its predecessor

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 128,000 tokens
  • Vision: Supported (high-resolution up to 3.75 megapixels)
  • Function Calling: Supported
  • Effort Levels: low, medium, high, xhigh (new), max
  • Task Budgets: Supported (public beta)
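
The effort levels listed above are selected per request. As a rough sketch of validating one and attaching it to an OpenAI-style request payload (the "effort" field name is an assumption; this page does not document the actual parameter name):

```python
# Build an OpenAI-style request dict with an effort level attached.
# NOTE: the "effort" field name is hypothetical; check the provider docs
# for the real parameter before using this in production.
EFFORT_LEVELS = {"low", "medium", "high", "xhigh", "max"}

def build_request(prompt: str, effort: str = "high") -> dict:
    """Return request kwargs for chat completions with an effort level set."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "anthropic/claude-opus-4.7",
        "messages": [{"role": "user", "content": prompt}],
        "effort": effort,  # hypothetical field name
    }

req = build_request("Summarize this diff.", effort="xhigh")
print(req["effort"])  # xhigh
```

Validating the level client-side keeps a typo like "extreme" from silently being ignored by the server.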

Benchmarks

Benchmark | Opus 4.6 | Opus 4.7 | GPT-5.4 | Gemini 3.1 Pro
SWE-bench Pro | 53.4% | 64.3% | 57.7% | 54.2%
SWE-bench Verified | 80.8% | 87.6% | 80.6% | —
CursorBench | 58% | 70% | — | —
93-task Coding Benchmark | Baseline | +13% | — | —
Production Tasks (Rakuten-SWE-Bench) | Baseline | 3x | — | —
Visual Acuity (Computer Use) | 54.5% | 98.5% | — | —
OfficeQA Pro (Document Reasoning) | Baseline | 21% fewer errors | — | —
Multi-step Workflows | Baseline | +14%, 1/3 fewer tool errors | — | —

Specifications

Specification | Value
Provider | Anthropic
Model Type | Large Language Model (LLM)
Architecture | Transformer (Adaptive Thinking)
Context Window | 1,000,000 tokens
Max Output | 128,000 tokens
Input | Text, Image
Output | Text
Vision | Supported (up to 3.75 MP)
Function Calling | Supported
Effort Levels | low, medium, high, xhigh, max
Release Date | April 16, 2026

Pricing

Token Type | Cost per Million Tokens
Input | $5.00
Output | $25.00

Note: Opus 4.7 ships with an updated tokenizer that may produce 1.0–1.35x more tokens depending on content type. The per-token price is unchanged, but effective cost may increase up to 35% for certain content (structured data, code). Benchmark your actual workloads before migrating at scale.
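
A quick arithmetic sketch of the worst case in that note (1M old-tokenizer input tokens of structured data expanding 1.35x at the unchanged $5.00/M input price):

```python
# Effective input cost when the updated tokenizer expands token counts.
# The per-token price is unchanged; only the token count grows.
PRICE_PER_M_INPUT = 5.00  # USD per million input tokens

def effective_cost(old_millions: float, multiplier: float) -> float:
    """USD to process content that was `old_millions` M tokens under the
    old tokenizer, expanded by `multiplier` under the updated one."""
    return old_millions * multiplier * PRICE_PER_M_INPUT

print(effective_cost(1.0, 1.00))  # plain prose, no expansion: $5.00
print(effective_cost(1.0, 1.35))  # worst-case code/structured data: $6.75
```

The same scaling applies to output tokens at $25.00/M, which is why benchmarking your own content mix matters before migrating.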


How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7


API Usage

Python SDK

from openai import OpenAI

# Point the official OpenAI SDK at the OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",  # your WaveSpeedAI API key
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="anthropic/claude-opus-4.7",
    messages=[{"role": "user", "content": "Hello!"}]
)

# The assistant's reply is in the first choice.
print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

What's New vs Opus 4.6

Aspect | Opus 4.6 | Opus 4.7
Coding (93-task) | Baseline | +13%
Production Tasks | Baseline | 3x more resolved
Visual Acuity | 54.5% | 98.5%
Max Image Resolution | ~1.25 MP | 3.75 MP (3x)
Effort Levels | low/medium/high/max | + xhigh (new)
Task Budgets | — | Public Beta
Instruction Following | Standard | Stricter, more literal
Tokenizer | v1 | Updated (1.0–1.35x more tokens)
Pricing | $5/$25 | $5/$25 (unchanged)

Notes

  • Model: anthropic/claude-opus-4.7
  • Provider: Anthropic
  • Also available on: Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry
  • The model follows instructions more literally than Opus 4.6 — prompts may need retuning
  • Extended thinking now uses adaptive mode only (manual budget_tokens removed)
  • Sampling parameters (temperature, top_p, top_k) are no longer supported via API
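
The last two notes have migration implications for stored request configurations. A minimal sketch (the helper name is my own) of dropping the retired options from an Opus 4.6-era request dict:

```python
# Options retired per the notes above: manual thinking budgets and the
# sampling parameters are no longer accepted via the API.
RETIRED_OPTIONS = {"temperature", "top_p", "top_k", "budget_tokens"}

def migrate_request(params: dict) -> dict:
    """Return a copy of a request dict with retired options removed."""
    return {k: v for k, v in params.items() if k not in RETIRED_OPTIONS}

old = {
    "model": "anthropic/claude-opus-4.7",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "top_p": 0.9,
}
print(sorted(migrate_request(old)))  # ['messages', 'model']
```

Stripping these client-side avoids request rejections; prompts tuned for Opus 4.6's looser instruction following may still need separate retuning.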

Sources: Anthropic official release, felloai.com, nxcode.io, thenextweb.com. Content was rephrased for compliance with licensing restrictions.

Info

Provider: anthropic
Type: llm

Supported Features

Input: Text, Image
Output: Text
Context: 1,000,000
Max Output: 128,000
Vision: ✓ Supported
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: anthropic/claude-opus-4.7

Claude Opus 4.7 API

anthropic/claude-opus-4.7


Input

$4.50 /M (10% off $5)

Output

$22.50 /M (10% off $25)

Discount

10% off

Context

1000K

Max Output

128K

Vision

Supported

Tool Use

Supported

Try Claude Opus 4.7 on WaveSpeedAI

Access Claude Opus 4.7 through a unified API: OpenAI-compatible, no cold starts, transparent pricing.

Frequently Asked Questions about Claude Opus 4.7

How much does the Claude Opus 4.7 API cost?

WaveSpeedAI pricing: $4.50 per million input tokens and $22.50 per million output tokens. Prompt caching and batch processing are billed separately and reduce effective costs for long, repetitive workloads.

What is Claude Opus 4.7's context window?

Claude Opus 4.7 supports up to 1000K context tokens and up to 128K output tokens per request.

Is Claude Opus 4.7 OpenAI-compatible?

Yes. WaveSpeedAI serves Claude Opus 4.7 through the OpenAI-compatible endpoint https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL at this address and use your WaveSpeedAI API key; no other code changes are needed.

How do I get started with Claude Opus 4.7?

Log in to WaveSpeedAI, create an API key under Access Keys, then send requests to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to evaluate Claude Opus 4.7.
