openai/o4-mini-deep-research
200,000 context · $2.00/M input tokens · $8.00/M output tokens
o4-mini-deep-research is OpenAI's faster, more affordable deep research model—ideal for tackling complex, multi-step research tasks. Note: This model always uses the 'web_search' tool, which incurs additional cost.
Pay as you go
No upfront fees; pay only for what you use.
Use the following code example to integrate with our API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="openai/o4-mini-deep-research",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

OpenAI o4-mini-deep-research
| Specification | Value |
|---|---|
| Provider | OpenAI |
| Model Type | Large Language Model (LLM) |
| Architecture | N/A |
| Context Window | 200,000 tokens |
| Max Output | 100,000 tokens |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |

| Token Type | Cost per Million Tokens |
|---|---|
| Input | $2.00 |
| Output | $8.00 |
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: openai/o4-mini-deep-research
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="openai/o4-mini-deep-research",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```
```shell
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "openai/o4-mini-deep-research",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
openai/o4-mini-deep-research
Input: $2.00 /M
Output: $8.00 /M
Context: 200K
Max Output: 100K
Vision: Supported
Tool Use: Supported
Access o4 Mini Deep Research through our unified API: OpenAI-compatible, with no cold starts and transparent pricing.
Pricing on WaveSpeedAI: $2.00 per million input tokens and $8.00 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost of long, repetitive workloads.
o4 Mini Deep Research supports up to 200K tokens of context with up to 100K output tokens per request.
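As a rough sketch of how these limits might be budgeted client-side, the helper below clamps a requested output budget against the advertised caps. It is hypothetical (not part of the WaveSpeedAI API) and assumes the 200K window covers prompt plus completion, as is typical for OpenAI models; counting the prompt's tokens is left to a tokenizer.

```python
# Advertised limits for openai/o4-mini-deep-research (from the table above).
CONTEXT_WINDOW = 200_000   # total tokens: prompt + completion (assumed)
MAX_OUTPUT = 100_000       # max output tokens per request

def clamp_max_tokens(prompt_tokens: int, requested_output: int) -> int:
    """Clamp a requested output budget so prompt + output fits the window.

    `prompt_tokens` is a pre-computed token count for the prompt
    (e.g. from a tokenizer); this helper only does the arithmetic.
    """
    if prompt_tokens >= CONTEXT_WINDOW:
        raise ValueError("prompt alone exceeds the 200K context window")
    remaining = CONTEXT_WINDOW - prompt_tokens
    return min(requested_output, MAX_OUTPUT, remaining)

print(clamp_max_tokens(150_000, 100_000))  # 50000: limited by remaining context
print(clamp_max_tokens(10_000, 120_000))   # 100000: limited by the per-request cap
```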
Yes. WaveSpeedAI serves o4 Mini Deep Research through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.
Sign in to WaveSpeedAI, create an API key under Access Keys, then send requests to https://llm.wavespeed.ai/v1/chat/completions with the model ID shown above. New accounts receive free credits to try o4 Mini Deep Research.
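The request in the steps above can also be sketched without the SDK. The snippet below builds the same headers and JSON payload the curl example sends; the actual network call is left commented out so it only runs once you substitute a real key.

```python
import json

API_KEY = "YOUR_API_KEY"  # replace with a key created under Access Keys
BASE_URL = "https://llm.wavespeed.ai/v1"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
payload = {
    "model": "openai/o4-mini-deep-research",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Uncomment to send the request with a real key:
# import urllib.request
# req = urllib.request.Request(
#     f"{BASE_URL}/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers=headers,
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```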