# Python SDK
The official Python SDK for WaveSpeedAI. Run AI models with just a few lines of code.
## Installation

```shell
pip install wavespeed
```

## Quick Start
```python
import wavespeed

output = wavespeed.run(
    "wavespeed-ai/z-image/turbo",
    {"prompt": "Cat in space, cinematic lighting"},
)
print(output["outputs"][0])  # Output URL
```

## Authentication
Get your API key from wavespeed.ai/accesskey.
### Option 1: Environment variable (recommended)

```shell
export WAVESPEED_API_KEY="your-api-key"
```

### Option 2: Pass directly
```python
from wavespeed import Client

client = Client(api_key="your-api-key")
result = client.run("wavespeed-ai/z-image/turbo", {"prompt": "Cat"})
```

## Configuration
### Timeout & Polling

```python
output = wavespeed.run(
    "wavespeed-ai/z-image/turbo",
    {"prompt": "Cat"},
    timeout=36000.0,    # Max wait time in seconds (default: 36000)
    poll_interval=1.0,  # Status check interval (default: 1.0)
)
```

### Sync Mode
Some models support single-request execution without polling:
```python
output = wavespeed.run(
    "wavespeed-ai/z-image/turbo",
    {"prompt": "Cat"},
    enable_sync_mode=True,  # No polling, faster for supported models
)
```

Note: Not all models support sync mode.
### Retry Configuration
```python
from wavespeed import Client

client = Client(
    api_key="your-api-key",
    max_retries=3,             # Task-level retries (default: 0)
    max_connection_retries=5,  # HTTP connection retries (default: 5)
    retry_interval=1.0,        # Delay between retries (default: 1.0)
)
```

## File Upload
Upload images, videos, or audio files to use as model inputs:
```python
import wavespeed

url = wavespeed.upload("/path/to/image.png")
print(url)  # Use this URL as input for models
```
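The URL returned by `wavespeed.upload` plugs straight into a model's inputs. Below is a minimal sketch of the combined flow; the model ID and the `image` input key are assumptions for illustration, not part of the SDK, so check the target model's schema on wavespeed.ai.

```python
def image_to_image_request(image_url, prompt):
    # Build the (model_id, inputs) pair to pass to wavespeed.run().
    # Both the model ID and the "image" input key are hypothetical --
    # consult the target model's input schema.
    model_id = "wavespeed-ai/example/image-to-image"  # hypothetical ID
    return model_id, {"image": image_url, "prompt": prompt}

# Typical usage (requires the wavespeed package and an API key):
# import wavespeed
# url = wavespeed.upload("/path/to/image.png")
# model_id, inputs = image_to_image_request(url, "Restyle as watercolor")
# output = wavespeed.run(model_id, inputs)
```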
## Serverless Workers

Build and deploy custom AI workers on the WaveSpeed platform.
```python
import wavespeed.serverless as serverless

def handler(job):
    prompt = job["input"].get("prompt", "")
    return {"output": prompt.upper()}

serverless.start({"handler": handler})
```

Supports sync, async, and generator (streaming) handlers. See the GitHub repository for advanced examples, including concurrency control and input validation.
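A generator (streaming) handler can be sketched as follows. The handler signature matches the sync example above; that the platform delivers each yielded value to the caller as a partial result is an assumption based on the streaming-handler description.

```python
def stream_handler(job):
    # A generator handler yields partial results one at a time instead
    # of returning a single payload.
    prompt = job["input"].get("prompt", "")
    for word in prompt.split():
        yield {"chunk": word.upper()}

# Registration is the same as for a sync handler (requires the
# wavespeed package and a worker environment):
# import wavespeed.serverless as serverless
# serverless.start({"handler": stream_handler})
```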
## Environment Variables

| Variable | Description |
|---|---|
| `WAVESPEED_API_KEY` | Your WaveSpeedAI API key |