
deepseek/deepseek-r1-distill-qwen-32b

32,768 context · $0.29/M input tokens · $0.29/M output tokens

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Pricing

Pay-per-use

No upfront costs, pay only for what you use

Input: $0.29 / M tokens
Output: $0.29 / M tokens


API Usage

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-qwen-32b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
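Hosted endpoints can return transient failures such as rate limits and timeouts. Below is a minimal, SDK-agnostic retry sketch; the `with_retries` helper is our own illustration, not part of the OpenAI SDK:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Invoke call() and retry on exception with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))
```

For example, wrap the request above as `with_retries(lambda: client.chat.completions.create(...))`. Production code would narrow the caught exception types to retryable ones.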

Model Introduction

DeepSeek deepseek/deepseek-r1-distill-qwen-32b

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Benchmark results include:

  • AIME 2024 pass@1: 72.6
  • MATH-500 pass@1: 94.3
  • CodeForces Rating: 1691

The model is fine-tuned on DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models.


Why It Stands Out

  • Large language model architecture for efficient processing
  • 32,768-token context window for long-document handling
  • Competitive pricing at $0.29/$0.29 per million tokens (input/output)

Key Features

  • Context Window: 32,768 tokens
  • Max Output: 32,768 tokens
  • Vision: Not supported (text-only input and output)
  • Function Calling: Not supported
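With a 32,768-token window shared between prompt and completion, it can help to sanity-check prompt size before sending. A rough sketch using the common ~4-characters-per-token heuristic for English text (the model's real tokenizer will differ):

```python
CONTEXT_WINDOW = 32_768  # tokens, shared by prompt and completion

def fits_in_context(prompt: str, reserved_output: int = 1_024) -> bool:
    """Rough pre-flight check: estimate tokens as len(prompt) / 4."""
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + reserved_output <= CONTEXT_WINDOW
```

Treat this only as a cheap guard before a request; exact counts depend on the tokenizer.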

Specifications

Provider: DeepSeek
Model Type: Large Language Model (LLM)
Architecture: N/A
Context Window: 32,768 tokens
Max Output: 32,768 tokens
Input: Text
Output: Text
Vision: Not supported
Function Calling: Not supported

Pricing

Input: $0.29 per million tokens
Output: $0.29 per million tokens

How to Use

  1. Write your prompt — describe the task, provide context, and specify desired output format.
  2. Submit — the model processes your request and returns the response.
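The two steps above can be sketched in code. The `make_messages` helper below is purely illustrative (not part of any SDK); it folds task, context, and format instructions into one user message:

```python
def make_messages(task: str, context: str = "", output_format: str = "") -> list:
    """Compose a chat `messages` list from task, context, and desired format."""
    parts = [task]
    if context:
        parts.append("Context:\n" + context)
    if output_format:
        parts.append("Respond in this format: " + output_format)
    return [{"role": "user", "content": "\n\n".join(parts)}]

messages = make_messages(
    task="Summarize the report below.",
    context="Q3 revenue grew 12% year over year...",
    output_format="three bullet points",
)
```

Pass the result as the `messages` argument of `chat.completions.create`.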

API Integration

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: deepseek/deepseek-r1-distill-qwen-32b
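Putting those three values together, here is a sketch of assembling the raw HTTP request using only the standard library; the `build_chat_request` helper is our own illustration:

```python
import json

BASE_URL = "https://llm.wavespeed.ai/v1"
MODEL_ID = "deepseek/deepseek-r1-distill-qwen-32b"

def build_chat_request(prompt: str, api_key: str = "YOUR_API_KEY"):
    """Return (url, headers, body) for a chat/completions POST."""
    url = BASE_URL + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }
    body = json.dumps({
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request("Hello!")
# Send with urllib.request, httpx, or requests; the response is JSON.
```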


API Usage


cURL

curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "deepseek/deepseek-r1-distill-qwen-32b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
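Both clients return JSON in the standard OpenAI chat-completions shape. A sketch of pulling the reply and token usage out of a trimmed sample response (field names assume that standard shape):

```python
import json

# A trimmed sample of the chat/completions response shape.
sample = """
{
  "choices": [
    {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 8, "total_tokens": 17}
}
"""

data = json.loads(sample)
reply = data["choices"][0]["message"]["content"]
total = data["usage"]["total_tokens"]
print(reply)  # -> Hello! How can I help?
print(total)  # -> 17
```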

Notes

  • Model: deepseek/deepseek-r1-distill-qwen-32b
  • Provider: Deepseek

Info

Provider: deepseek
Type: LLM

Supported Functionality

Input: Text
Output: Text
Context: 32,768 tokens
Max Output: 32,768 tokens
Vision: Not supported
Function Calling: Not supported

API Access Guide

Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: deepseek/deepseek-r1-distill-qwen-32b

DeepSeek R1 Distill Qwen 32B API

deepseek/deepseek-r1-distill-qwen-32b

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Input: $0.29 / M tokens
Output: $0.29 / M tokens
Context: 32,768 tokens
Max Output: 32,768 tokens

Try DeepSeek R1 Distill Qwen 32B on WaveSpeedAI

Access DeepSeek R1 Distill Qwen 32B through our unified API — OpenAI-compatible, no cold starts, transparent pricing.

Frequently Asked Questions about DeepSeek R1 Distill Qwen 32B

How much does DeepSeek R1 Distill Qwen 32B cost via the API?

Pricing on WaveSpeedAI: $0.29 per million input tokens and $0.29 per million output tokens. Prompt caching and batch processing are billed separately and reduce effective cost on long, repetitive workloads.
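At flat per-token rates, estimating spend is simple arithmetic. A quick sketch at the list prices above, excluding any caching or batch discounts:

```python
INPUT_PRICE = 0.29   # USD per million input tokens
OUTPUT_PRICE = 0.29  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single request at list prices."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# A 2,000-token prompt with a 500-token reply:
print(f"${estimate_cost(2_000, 500):.6f}")  # -> $0.000725
```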

What is the context window of DeepSeek R1 Distill Qwen 32B?

DeepSeek R1 Distill Qwen 32B supports up to 32,768 tokens of context with up to 32,768 tokens of output per request.

Is DeepSeek R1 Distill Qwen 32B OpenAI-compatible?

Yes. WaveSpeedAI exposes DeepSeek R1 Distill Qwen 32B through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key — no other code changes required.

How do I get started with DeepSeek R1 Distill Qwen 32B?

Sign in to WaveSpeedAI, create an API key in Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model ID set to the value shown above. New accounts receive free credits to evaluate DeepSeek R1 Distill Qwen 32B before paying per token.
