deepseek
deepseek/deepseek-v4-pro

1,048,576-token context · $1.84/M input tokens · $3.66/M output tokens

DeepSeek V4 Pro is DeepSeek's flagship open-source model released in April 2026, featuring a 1.6T-parameter Mixture-of-Experts architecture with 49B parameters active per token. It supports a 1M-token context window through a novel hybrid attention mechanism combining Compressed Sparse Attention and DeepSeek Sparse Attention, cutting inference FLOPs to 27% and the KV cache to 10% of V3.2's at million-token scale. Pre-trained on 33T tokens with post-training via GRPO reinforcement learning and on-policy distillation, V4 Pro delivers frontier-level performance in coding (LiveCodeBench 93.5, Codeforces 3206), math (IMOAnswerBench 89.8), and agentic tasks (SWE-bench Verified 80.6) — competitive with GPT-5.4 and Claude Opus 4.6 at a fraction of the cost. It natively supports thinking and non-thinking modes with configurable reasoning effort, function calling, and JSON output, and has been specifically optimized for mainstream agent frameworks including Claude Code, OpenClaw, and OpenCode.

Pricing

Pay-as-you-go

No upfront cost; pay only for what you use

Input: $1.84 / M tokens
Output: $3.66 / M tokens

API Usage

Use the following code examples to integrate our API:

from openai import OpenAI

# Point the OpenAI SDK at the WaveSpeedAI OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Send a single-turn chat completion request to DeepSeek V4 Pro.
response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
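
A second minimal sketch, showing how to carry a multi-turn conversation with the same endpoint: the Chat Completions API is stateless, so the full message history is resent on each call. The system prompt and follow-up question below are purely illustrative.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# The API is stateless: resend the full history on every turn.
messages = [
    {"role": "system", "content": "You are a concise coding assistant."},  # illustrative
    {"role": "user", "content": "Write a Python one-liner to reverse a string."},
]

first = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=messages
)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-up turn that depends on the previous answer.
messages.append({"role": "user", "content": "Now do the same for a list."})
second = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=messages
)

print(second.choices[0].message.content)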

Model Introduction

DeepSeek deepseek-v4-pro

DeepSeek-V4-Pro is DeepSeek's most powerful open-source model, released on April 24, 2026. It is a 1.6 trillion parameter Mixture-of-Experts (MoE) language model with 49B active parameters, pre-trained on 33T tokens, supporting a context length of one million tokens. V4-Pro achieves performance on par with top closed-source models like GPT-5.4 and Claude Opus 4.6 across coding, reasoning, and agentic benchmarks — at a fraction of the cost.


Why It Stands Out

  • Mixture-of-Experts architecture with 1.6T total parameters and only 49B active for efficient inference
  • 1M-token context window powered by Compressed Sparse Attention (CSA) and DeepSeek Sparse Attention (DSA)
  • World-class agentic capabilities — optimized for Claude Code, OpenClaw, OpenCode, and CodeBuddy

Key Features

  • Context Window: 1,000,000 tokens
  • Max Output: 384,000 tokens
  • Vision: Not Supported
  • Function Calling: Supported (minimal example after this list)
  • Thinking Mode: Supported (non-thinking / high / max)
  • JSON Output: Supported
  • FIM Completion: Supported (non-thinking mode only)
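
Function calling is listed as supported and the endpoint follows the OpenAI Chat Completions format, so a minimal sketch looks like the following. The get_weather tool schema is purely illustrative, and treating the standard tools / tool_calls fields as fully honored here is an assumption about this endpoint.

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

# Hypothetical tool schema, for illustration only; define your own functions here.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decides to call the tool, the call arrives as a structured tool_call.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)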

Benchmarks

Benchmark          | V4-Pro | Claude Opus 4.6 | GPT-5.4 | Gemini 3.1 Pro
SWE-bench Verified | 80.6   | 80.8            | 80.6    | -
LiveCodeBench      | 93.5   | 88.8            | 91.7    | 91.7
Codeforces Rating  | 3206   | 3168            | 3052    | -
MMLU-Pro           | 87.5   | 89.1            | 87.5    | 91.0
IMOAnswerBench     | 89.8   | 75.3            | 91.4    | 81.0
Terminal Bench 2.0 | 67.9   | 65.4            | 75.1    | 68.5
Toolathlon         | 51.8   | 47.2            | 54.6    | 48.8
BrowseComp         | 83.4   | 83.7            | 85.9    | -

Specifications

Specification    | Value
Provider         | DeepSeek
Model Type       | Large Language Model (LLM)
Architecture     | Mixture-of-Experts (MoE)
Total Parameters | 1.6T (49B active)
Context Window   | 1,000,000 tokens
Max Output       | 384,000 tokens
Input            | Text
Output           | Text
Vision           | Not Supported
Function Calling | Supported
Thinking Mode    | Supported (high / max)
Release Date     | April 24, 2026

How to Use

  1. Write your prompt — describe the task, provide context, and specify the desired output format (see the worked example after these steps).
  2. Submit — the model processes your request and returns the response.
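
As a concrete illustration of step 1, the sketch below spells out the task and asks for a fixed JSON shape. It assumes the endpoint honors the standard OpenAI response_format parameter, which matches the "JSON Output: Supported" feature above; the field names in the prompt are purely illustrative.

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

# Task, context, and desired output format spelled out in the prompt (step 1).
prompt = (
    "Summarize the release notes below as JSON with keys "
    "'title' (string) and 'highlights' (list of strings).\n\n"
    "Release notes: Added sparse attention; reduced KV cache usage."
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # assumes OpenAI-style JSON mode
)

print(response.choices[0].message.content)  # a JSON string matching the requested shape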

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: deepseek/deepseek-v4-pro


API Usage

Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "deepseek/deepseek-v4-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
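
Streaming (Python)

For long generations you will usually want to stream tokens as they are produced. A minimal sketch, assuming the endpoint supports the standard OpenAI stream parameter (the page documents OpenAI API compatibility, but streaming is not explicitly listed, so treat this as an assumption):

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

# stream=True yields incremental chunks instead of one final response.
stream = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": "Write a haiku about sparse attention."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()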

Notes

  • Model: deepseek/deepseek-v4-pro
  • Provider: DeepSeek
  • Open-source weights available on Hugging Face and ModelScope
  • Supports both OpenAI and Anthropic API formats
  • For complex agent scenarios, use thinking mode with reasoning_effort set to max (see the sketch below)
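
A minimal sketch of that last point. Exactly how reasoning_effort is surfaced is provider-specific; the example below assumes it can be passed through on an OpenAI-compatible Chat Completions request via the SDK's extra_body, and the "max" value is taken from the thinking-mode levels listed above rather than from a documented parameter spec.

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://llm.wavespeed.ai/v1")

response = client.chat.completions.create(
    model="deepseek/deepseek-v4-pro",
    messages=[{"role": "user", "content": "Plan the steps to migrate a monolith to microservices."}],
    # Assumed pass-through for this endpoint's thinking mode; not a standard OpenAI field.
    extra_body={"reasoning_effort": "max"},
)

print(response.choices[0].message.content)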

Info

Provider: deepseek
Type: llm

Supported Features

Input: Text
Output: Text
Context: 1,048,576
Max Output: 384,000
Vision: -
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: deepseek/deepseek-v4-pro

DeepSeek V4 Pro API

deepseek/deepseek-v4-pro


Input: $1.84 /M
Output: $3.66 /M
Context: 1049K
Max Output: 384K
Tool Use: Supported

Try DeepSeek V4 Pro on WaveSpeedAI

Access DeepSeek V4 Pro through our unified API: OpenAI-compatible, no cold starts, transparent pricing.

Open the Playground

Frequently asked questions about DeepSeek V4 Pro

How much does the DeepSeek V4 Pro API cost?

On WaveSpeedAI, pricing is $1.84 per million input tokens and $3.66 per million output tokens. Prompt caching and batch processing are billed separately and lower the effective cost on long or repetitive workloads.

What is DeepSeek V4 Pro's context window?

DeepSeek V4 Pro supports up to 1049K tokens of context and up to 384K output tokens per request.

Is DeepSeek V4 Pro compatible with OpenAI?

Yes. WaveSpeedAI exposes DeepSeek V4 Pro through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.

How do I get started with DeepSeek V4 Pro?

Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to evaluate DeepSeek V4 Pro.

Related LLM APIs