openai/o1-pro
200,000-token context · $150.00/M input tokens · $600.00/M output tokens
The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide...
Pay-per-use
No upfront costs; pay only for what you use.
Use the following code examples to integrate with our API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="openai/o1-pro",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```

openai/o1-pro
| Specification | Value |
|---|---|
| Provider | OpenAI |
| Model Type | Large Language Model (LLM) |
| Architecture | N/A |
| Context Window | 200000 tokens |
| Max Output | 100000 tokens |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $165.00 |
| Output | $660.00 |
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: openai/o1-pro
```bash
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "openai/o1-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
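The specification table below lists function calling as supported. As a sketch of how that would look through this endpoint, the following uses the standard OpenAI `tools` format; the `get_weather` function and its schema are hypothetical, made up for illustration, and tool-call passthrough should be confirmed against the live endpoint.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Hypothetical tool schema for illustration; any JSON-schema function works.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/o1-pro",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the call arrives here instead of text.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
else:
    print(response.choices[0].message.content)
```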
openai/o1-pro
| Specification | Value |
|---|---|
| Input | $150.00 /M tokens |
| Output | $600.00 /M tokens |
| Context | 200K tokens |
| Max Output | 100K tokens |
| Vision | Supported |
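Since vision input is listed as supported, here is a minimal sketch of attaching an image, assuming WaveSpeedAI forwards the standard chat-completions multimodal message format; the image URL is a placeholder and this has not been verified against the live endpoint.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

# Standard OpenAI multimodal message shape: a text part plus an image_url part.
response = client.chat.completions.create(
    model="openai/o1-pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```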
Access o1 Pro through our unified API — OpenAI-compatible, no cold starts, transparent pricing.
How much does o1 Pro cost on WaveSpeedAI?
Pricing on WaveSpeedAI: $150.00 per million input tokens and $600.00 per million output tokens. Prompt caching and batch processing are billed separately and reduce the effective cost of long, repetitive workloads.
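As a quick sanity check on these rates, the helper below computes the cost of a single request from its token counts. The counts are example values, and caching and batch discounts are not modeled.

```python
# Per-million-token rates for openai/o1-pro on WaveSpeedAI.
INPUT_PER_M = 150.00
OUTPUT_PER_M = 600.00

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD for one request, ignoring caching/batch discounts."""
    return (prompt_tokens * INPUT_PER_M + completion_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 10,000-token answer (for o-series
# models, reasoning tokens bill as output).
print(f"${request_cost(2_000, 10_000):.2f}")  # $6.30
```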
What context window and output length does o1 Pro support?
o1 Pro supports up to 200K tokens of context with up to 100K tokens of output per request.
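To stay under the 100K output ceiling (or simply to cap spend), a request can set `max_completion_tokens`, the parameter the o-series expects in place of the older `max_tokens`. This sketch assumes the WaveSpeedAI proxy forwards it unchanged.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="openai/o1-pro",
    messages=[{"role": "user", "content": "Hello!"}],
    # Caps completion tokens (reasoning included); must be <= 100,000.
    max_completion_tokens=4096,
)
print(response.choices[0].message.content)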
Is the WaveSpeedAI API OpenAI-compatible?
Yes. WaveSpeedAI exposes o1 Pro through an OpenAI-compatible endpoint at https://llm.wavespeed.ai/v1. Point the official OpenAI SDK at this base URL with your WaveSpeedAI API key; no other code changes are required.
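One way to switch with literally no code changes is to configure the SDK through the environment variables the official `openai` Python package reads at client construction (`OPENAI_API_KEY` and `OPENAI_BASE_URL`); a sketch:

```python
# Shell, before running:
#   export OPENAI_API_KEY="YOUR_WAVESPEED_API_KEY"
#   export OPENAI_BASE_URL="https://llm.wavespeed.ai/v1"
from openai import OpenAI

# No arguments: the SDK picks up both values from the environment.
client = OpenAI()

response = client.chat.completions.create(
    model="openai/o1-pro",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```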
How do I get started?
Sign in to WaveSpeedAI, create an API key under Access Keys, then send a request to https://llm.wavespeed.ai/v1/chat/completions with the model id set to the value shown above. New accounts receive free credits to evaluate o1 Pro before paying per token.