qwen
qwen/qwen3-max-thinking

262,144 context · $1.20/M input tokens · $6.00/M output tokens

Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly scaling model capacity and reinforcement learning compute, it delivers major gains in factual accuracy, complex reasoning, instruction following, alignment with human preferences, and agentic behavior.

Pricing

Pay-as-you-go

No upfront commitment; pay only for what you actually use.

Input
  ≤ 32K: $1.20 / M tokens
  32K–128K: $2.40 / M tokens
  > 128K: $3.00 / M tokens
Output
  ≤ 32K: $6.00 / M tokens
  32K–128K: $12.00 / M tokens
  > 128K: $15.00 / M tokens
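As a rough illustration of the tiered rates above, the helper below estimates per-request cost. Two assumptions are mine, not confirmed billing behavior: that "32K"/"128K" mean 32,768/131,072 tokens, and that the billing tier is selected by the input token count.

```python
# Tier tables from the pricing section above: (upper token bound, $ per M tokens).
# Boundary values 32_768 / 131_072 are an assumption about what "32K"/"128K" mean.
INPUT_TIERS = [(32_768, 1.20), (131_072, 2.40), (float("inf"), 3.00)]
OUTPUT_TIERS = [(32_768, 6.00), (131_072, 12.00), (float("inf"), 15.00)]


def rate_for(tokens: int, tiers: list) -> float:
    """Return the $/M-token rate for the tier the token count falls into."""
    for limit, rate in tiers:
        if tokens <= limit:
            return rate
    raise ValueError("unreachable: last tier is unbounded")


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD, assuming (my assumption) that both the
    input and output rates are chosen by the input token count."""
    in_rate = rate_for(input_tokens, INPUT_TIERS)
    out_rate = rate_for(input_tokens, OUTPUT_TIERS)
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

For example, a request with 10,000 input and 2,000 output tokens lands in the base tier and costs about $0.024 under these assumptions.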

API Usage

Integrate our API using the following code example:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="qwen/qwen3-max-thinking",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Model Introduction

Qwen qwen3-max-thinking

Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly scaling model capacity and reinforcement learning compute, it delivers major gains in factual accuracy, complex reasoning, instruction following, alignment with human preferences, and agentic behavior.


Why It Looks Great

  • Large language model architecture for efficient processing
  • 262,144-token context window for long-document handling
  • Competitive pricing at $1.20/$6.00 per million input/output tokens

Key Features

  • Context Window: 262,144 tokens
  • Max Output: 32,768 tokens
  • Vision: Supported
  • Function Calling: Supported
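Since the endpoint is OpenAI-compatible, function calling presumably follows the OpenAI `tools` schema. A minimal sketch of the local side of that flow; the `get_weather` tool and the dispatch registry are hypothetical examples, not part of the API:

```python
import json

# Hypothetical tool definition in the OpenAI "tools" schema, which the
# OpenAI-compatible endpoint is assumed to accept via the `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]


def get_weather(city: str) -> str:
    """Stub handler standing in for a real weather lookup."""
    return f"(stub) weather for {city}"


# Local registry mapping tool names to plain Python handlers.
HANDLERS = {"get_weather": get_weather}


def dispatch(tool_call: dict) -> str:
    """Execute one tool call shaped like the entries in
    response.choices[0].message.tool_calls (arguments arrive as a JSON string)."""
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])
    return HANDLERS[fn["name"]](**args)
```

In a real loop you would pass `tools` in the chat-completions request, run `dispatch` on each returned tool call, and send the results back as `tool` role messages.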

Specifications

Provider: Qwen
Model Type: Large Language Model (LLM)
Architecture: N/A
Context Window: 262,144 tokens
Max Output: 32,768 tokens
Input: Text
Output: Text
Vision: Supported
Function Calling: Supported

Pricing

Input: $1.20 per million tokens
Output: $6.00 per million tokens

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
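The two steps above can be sketched as a small payload builder for the chat/completions endpoint. The prompt layout (task, context, desired output format) follows step 1; the helper name and wording are illustrative:

```python
def build_request(task: str, context: str, output_format: str) -> dict:
    """Assemble a chat/completions request body from the three prompt parts
    described above: the task, supporting context, and the output format."""
    prompt = f"{task}\n\nContext:\n{context}\n\nRespond as {output_format}."
    return {
        "model": "qwen/qwen3-max-thinking",
        "messages": [{"role": "user", "content": prompt}],
    }
```

The resulting dict can be passed directly as keyword arguments to `client.chat.completions.create(**payload)` in the SDK example below.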

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: qwen/qwen3-max-thinking


API Usage

cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "qwen/qwen3-max-thinking",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Notes

  • Model: qwen/qwen3-max-thinking
  • Provider: Qwen

Info

Provider: qwen
Type: llm

Supported Features

Input: Text
Output: Text
Context: 262,144
Max Output: 32,768
Vision: -
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: qwen/qwen3-max-thinking

Qwen3 Max Thinking API

qwen/qwen3-max-thinking


Input: $1.20 /M
Output: $6.00 /M
Context: 262K
Max Output: 33K
Tool Calling: Supported

Try Qwen3 Max Thinking on WaveSpeedAI

Access Qwen3 Max Thinking through our unified API: OpenAI-compatible, no cold starts, transparent billing.

Open Playground

Frequently Asked Questions about Qwen3 Max Thinking

How much does the Qwen3 Max Thinking API cost?

WaveSpeedAI pricing: $1.20 per million input tokens and $6.00 per million output tokens. Prompt caching and batch processing are billed separately and can significantly reduce the effective cost of long-context, highly repetitive workloads.

How large is Qwen3 Max Thinking's context window?

Qwen3 Max Thinking supports up to 262K context tokens per request, with up to 33K output tokens.
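As a quick client-side sanity check against those limits, you can budget prompt tokens before sending a long request. The sketch below assumes roughly 4 characters per token, which is a crude heuristic, not the model's actual tokenizer:

```python
# Limits from the specifications above.
CONTEXT_WINDOW = 262_144
MAX_OUTPUT = 32_768


def rough_token_count(text: str) -> int:
    """Crude estimate: about 4 characters per token. A heuristic only;
    the model's real tokenizer will count differently."""
    return max(1, len(text) // 4)


def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the estimated prompt tokens plus the tokens reserved for the
    response fit inside the model's context window."""
    return rough_token_count(prompt) + reserved_output <= CONTEXT_WINDOW
```

For production use, replace the heuristic with a real tokenizer count and only reserve as much output as your request actually needs.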

Is Qwen3 Max Thinking OpenAI-compatible?

Yes. WaveSpeedAI serves Qwen3 Max Thinking through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK's base URL at that address and use a WaveSpeedAI API key; no other code changes are needed.

How do I get started with Qwen3 Max Thinking?

Log in to WaveSpeedAI, create an API key under Access Keys, and send requests to https://llm.wavespeed.ai/v1/chat/completions using the model ID shown above. New accounts receive free credits to try out Qwen3 Max Thinking.

Related LLM APIs