qwen/qwen3-max-thinking

262,144 context · $1.20/M input tokens · $6.00/M output tokens

Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly scaling model capacity and reinforcement learning compute, it delivers major gains in factual accuracy, complex reasoning, instruction following, alignment with human preferences, and agentic behavior.

Pricing

Pay as you go

No upfront costs; pay only for what you use. Rates are tiered by token count, in USD per million (M) tokens:

Input
  ≤ 32K: $1.20 / M tokens
  > 32K and ≤ 128K: $2.40 / M tokens
  > 128K: $3.00 / M tokens
Output
  ≤ 32K: $6.00 / M tokens
  > 32K and ≤ 128K: $12.00 / M tokens
  > 128K: $15.00 / M tokens
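
The tiered rates above can be folded into a small cost estimator. This is a sketch under two assumptions: each side's tier is chosen by its own token count, and the 32K/128K boundaries are binary (32,768 / 131,072 tokens); actual billing may differ, e.g. by keying both rates off the prompt length.

```python
# Tiered per-million-token rates from the table above (USD).
INPUT_RATES = (1.20, 2.40, 3.00)    # <=32K, <=128K, >128K
OUTPUT_RATES = (6.00, 12.00, 15.00)


def tier_rate(tokens: int, rates: tuple) -> float:
    """Pick the per-million rate for a token count (boundaries assumed binary K)."""
    if tokens <= 32_768:
        return rates[0]
    if tokens <= 131_072:
        return rates[1]
    return rates[2]


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in USD under the assumptions above."""
    return (input_tokens / 1e6 * tier_rate(input_tokens, INPUT_RATES)
            + output_tokens / 1e6 * tier_rate(output_tokens, OUTPUT_RATES))
```

For example, a 10,000-token prompt with a 1,000-token reply lands entirely in the base tier: 10,000/1e6 × $1.20 + 1,000/1e6 × $6.00 = $0.018.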

API Usage

Use the following code example to integrate with our API:

from openai import OpenAI

# Point the OpenAI SDK at the WaveSpeedAI OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="qwen/qwen3-max-thinking",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

Model Introduction

Qwen qwen3-max-thinking


Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly scaling model capacity and reinforcement learning compute, it delivers major gains in factual accuracy, complex reasoning, instruction following, alignment with human preferences, and agentic behavior.


Why It Looks Great

  • Large Language Model architecture for efficient processing
  • 262,144-token context window for long-document handling
  • Competitive pricing starting at $1.20/$6.00 per million input/output tokens

Key Features

  • Context Window: 262,144 tokens
  • Max Output: 32,768 tokens
  • Vision: Not supported (text input only)
  • Function Calling: Supported
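
Since function calling is supported, a request can carry a tool schema in the OpenAI-compatible `tools` field. A minimal sketch of the request body (the `get_weather` tool and its `city` parameter are illustrative, not part of the model's API):

```python
import json

# Illustrative tool schema in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# The same JSON body works with the cURL example on this page.
payload = {
    "model": "qwen/qwen3-max-thinking",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```

With the Python SDK, the same structure is passed as the `tools=` argument to `client.chat.completions.create(...)`; if the model decides to call the tool, the call appears in `response.choices[0].message.tool_calls`.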

Specifications

Provider: Qwen
Model Type: Large Language Model (LLM)
Architecture: N/A
Context Window: 262,144 tokens
Max Output: 32,768 tokens
Input: Text
Output: Text
Vision: Not supported
Function Calling: Supported
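
The context-window and max-output limits above imply a simple budget check before submitting a request. This is a sketch that assumes the prompt and the completion share the 262,144-token window; token counts would come from your tokenizer of choice.

```python
CONTEXT_WINDOW = 262_144  # total tokens the model can attend to
MAX_OUTPUT = 32_768       # cap on generated tokens per request


def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if the prompt plus the requested output stays inside the model limits."""
    return (requested_output <= MAX_OUTPUT
            and prompt_tokens + requested_output <= CONTEXT_WINDOW)
```

For instance, a 200,000-token prompt still leaves room for a 30,000-token reply, but a full 32,768-token reply no longer fits once the prompt exceeds about 229K tokens.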

Pricing

Token Type: Cost per Million Tokens
Input: $1.20 (base tier; see the tiered rates above)
Output: $6.00 (base tier; see the tiered rates above)

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
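
The two steps above can be sketched as a messages list for the chat endpoint (the task, context, and format strings are illustrative):

```python
# Step 1: write the prompt — task, context, and desired output format.
task = "Summarize the incident report in exactly three bullet points."
context = "Incident report: the build pipeline failed twice on 2024-05-01 after a dependency upgrade."
output_format = "Respond in plain text, one bullet per line, each starting with '- '."

messages = [
    {"role": "system", "content": output_format},
    {"role": "user", "content": f"{task}\n\n{context}"},
]

# Step 2: submit — pass `messages` to the chat/completions endpoint
# (see the Python and cURL examples on this page).
```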

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: qwen/qwen3-max-thinking


API Usage


cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "qwen/qwen3-max-thinking",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Notes

  • Model: qwen/qwen3-max-thinking
  • Provider: Qwen

Information

Provider: qwen
Type: llm

Supported Capabilities

Input: Text
Output: Text
Context: 262,144
Max Output: 32,768
Vision: –
Function Calling: ✓ Supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: qwen/qwen3-max-thinking

Qwen3 Max Thinking API

qwen/qwen3-max-thinking


Input

$1.20 /M

Output

$6.00 /M

Context

262K

Max Output

33K

Tool Use

Supported

Try Qwen3 Max Thinking on WaveSpeedAI

Access Qwen3 Max Thinking through our unified API: OpenAI-compatible, no cold starts, transparent pricing.

Open Playground

Frequently Asked Questions about Qwen3 Max Thinking

How much does Qwen3 Max Thinking cost through the API?

Pricing on WaveSpeedAI: $1.20 per million input tokens and $6.00 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost for long, repetitive workloads.

What is Qwen3 Max Thinking's context window?

Qwen3 Max Thinking supports up to 262K tokens of context and up to 33K output tokens per request.

Is Qwen3 Max Thinking OpenAI-compatible?

Yes. WaveSpeedAI exposes Qwen3 Max Thinking through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no further code changes are needed.

How do I get started with Qwen3 Max Thinking?

Sign in to WaveSpeedAI, create an API key under Access Keys, and send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to evaluate Qwen3 Max Thinking before paying per token.
