x-ai/grok-4
256,000 context · $3.00/M input tokens · $15.00/M output tokens
Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not...
Pay as you go
No upfront costs; pay only for what you use.
Use the following code example to integrate with our API:
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://llm.wavespeed.ai/v1"
)

response = client.chat.completions.create(
    model="x-ai/grok-4",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

x-ai/grok-4
| Specification | Value |
|---|---|
| Provider | xAI |
| Model Type | Large Language Model (LLM) |
| Architecture | N/A |
| Context Window | 256,000 tokens |
| Max Output | 4096 tokens |
| Input | Text |
| Output | Text |
| Vision | Supported |
| Function Calling | Supported |
| Token Type | Cost per Million Tokens |
|---|---|
| Input | $3.30 |
| Output | $16.50 |
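Applied to the per-million rates in the table above, request cost is simple arithmetic. A minimal sketch (`estimate_cost` is a helper name of our own, not part of the API):

```python
# Estimate the USD cost of a request from token counts, using the
# gateway rates listed above: $3.30/M input, $16.50/M output.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 3.30, output_rate: float = 16.50) -> float:
    """Return the request cost in USD; rates are per million tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 10,000-token prompt with a 2,000-token completion:
print(round(estimate_cost(10_000, 2_000), 4))  # → 0.066
```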
Base URL: https://llm.wavespeed.ai/v1
API Endpoint: chat/completions
Model ID: x-ai/grok-4
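The model also advertises structured outputs. A sketch of a request body using the OpenAI-style `json_schema` response format, assuming the gateway passes it through unchanged (the schema name and fields here are illustrative):

```python
# Illustrative response_format payload for structured outputs; the
# "city_info" schema and its fields are examples, not a fixed contract.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["city", "population"],
        },
    },
}

# The full request body sent to the chat/completions endpoint:
request_body = {
    "model": "x-ai/grok-4",
    "messages": [{"role": "user", "content": "Facts about Tokyo as JSON."}],
    "response_format": response_format,
}
print(sorted(request_body))  # → ['messages', 'model', 'response_format']
```

With this in place, the model's reply is constrained to JSON matching the schema, so it can be parsed directly instead of scraped from free text.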
```shell
curl https://llm.wavespeed.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "x-ai/grok-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
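Since the model supports parallel tool calling, a single response may contain several tool calls at once. A minimal sketch of the tool schema and a local dispatcher, assuming the standard OpenAI-compatible shape; `get_weather` is a hypothetical tool, and the sample calls below are fabricated to show the shape of a response, not a live one:

```python
import json

# Illustrative tool schema for the "tools" parameter of chat/completions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a tool call returned by the model to a local implementation."""
    if tool_call["function"]["name"] == "get_weather":
        args = json.loads(tool_call["function"]["arguments"])
        return f"Sunny in {args['city']}"  # stand-in for a real lookup
    raise ValueError("unknown tool")

# Fabricated sample of the parallel tool calls a single reply might carry:
sample_calls = [
    {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
    {"function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}},
]
print([dispatch(c) for c in sample_calls])  # → ['Sunny in Paris', 'Sunny in Tokyo']
```

In a real integration, `tools` is passed to `client.chat.completions.create(...)` and the loop runs over `response.choices[0].message.tool_calls`, sending each result back as a `"tool"`-role message.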
Access x-ai/grok-4 through our unified API — OpenAI-compatible, no cold starts, transparent pricing.
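Because the model accepts image inputs, a user message can mix text and image content parts. A sketch of building such a message in the OpenAI-compatible format; the helper name and the placeholder bytes are our own:

```python
import base64

# Build a multimodal user message with a base64 data URL for the image.
def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# Placeholder bytes stand in for a real image file read from disk.
msg = image_message("Describe this image.", b"\x89PNG...")
print(msg["content"][1]["image_url"]["url"][:22])  # → data:image/png;base64,
```

The resulting message is passed as-is in the `messages` list of `client.chat.completions.create(model="x-ai/grok-4", ...)`.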