Z Image Turbo Controlnet
Z-Image-Turbo ControlNet generates images guided by structural control signals (depth, canny edge, pose) for precise composition control. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
Z-Image Turbo ControlNet
Z-Image Turbo ControlNet is a powerful image generation model that gives you precise control over composition through structural guidance signals. Unlike standard text-to-image models that interpret prompts freely, ControlNet lets you define the exact structure, edges, depth, or pose of your output by analyzing a reference image.
Think of it as a blueprint system: you provide a reference image, choose how to analyze it (depth map, edge detection, or pose estimation), and the model generates a new image that follows that structural blueprint while matching your text prompt.
Why Choose This?
- Precise composition control: define exact layouts, poses, and spatial relationships instead of hoping the model interprets your prompt correctly.
- Multiple control modes: choose depth mapping for 3D structure, canny edge detection for outlines, pose estimation for human figures, or none for standard generation.
- Reference-guided generation: use existing images as structural templates while completely changing style, content, and appearance.
- Flexible strength control: adjust how strictly the model follows the control signal, from loose inspiration to exact replication.
- Fast and affordable: Turbo-optimized for quick generation at a simple flat per-image rate (see Pricing below).
Understanding Control Modes
The mode parameter determines how the model analyzes your reference image:
| Mode | What It Extracts | Best For |
|---|---|---|
| depth | 3D depth information (near/far relationships) | Architectural scenes, landscapes, maintaining spatial depth |
| canny | Edge outlines and contours | Line art, sketches, preserving shapes and boundaries |
| pose | Human body keypoints and skeleton | Character poses, figure drawing, action scenes |
| none | No control signal (standard generation) | When you don’t need structural guidance |
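For instance, a request that swaps the default depth guidance for edge guidance only needs mode set to canny. The sketch below is illustrative; the prompt and reference image URL are placeholders.
# Illustrative request using canny edge guidance instead of the default depth mode
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "watercolor illustration of a narrow European street",
"image": "https://example.com/sketch.jpg",
"mode": "canny"
}'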
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of the image you want to generate |
| image | Yes | Reference image URL for ControlNet to analyze |
| mode | No | Control mode: depth, canny, pose, or none (default: depth) |
| size | No | Output size in pixels as width*height (default: 1024*1024) |
| strength | No | Control signal strength 0-1 (default: 0.6) |
| seed | No | Random seed for reproducibility (-1 for random) |
| output_format | No | Output format: jpeg, png, or webp (default: jpeg) |
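As a sketch of how these fit together, the request below exercises the optional parameters alongside the required ones; the prompt and image URL are placeholders, and the values simply illustrate the documented ranges and defaults.
# Illustrative request combining the optional parameters
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "studio product shot of a ceramic vase, soft lighting",
"image": "https://example.com/reference.jpg",
"mode": "depth",
"size": "1024*1024",
"strength": 0.6,
"seed": 42,
"output_format": "png"
}'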
How to Use
- Upload reference image — The image that defines your composition structure.
- Choose control mode — Select depth, canny, pose, or none based on what you want to preserve.
- Write your prompt — Describe the style, content, and appearance you want.
- Adjust strength — Higher values follow the control signal more strictly.
- Set output size — Define your target dimensions.
- Run — Submit and download your controlled generation.
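Putting these steps together, the following is a minimal command-line sketch of the submit-then-poll flow. It assumes WAVESPEED_API_KEY is set and the jq JSON tool is installed; the prompt and reference image URL are placeholders.
# Submit a depth-guided generation and capture the task id
REQUEST_ID=$(curl --silent --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "modern glass house at dusk, photorealistic",
"image": "https://example.com/reference.jpg",
"mode": "depth",
"strength": 0.6
}' | jq -r '.data.id')
# Poll the result endpoint until the task completes or fails
while true; do
RESULT=$(curl --silent --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${REQUEST_ID}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}")
STATUS=$(echo "$RESULT" | jq -r '.data.status')
if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then break; fi
sleep 1
done
# Print the generated image URL(s) once the task has completed
echo "$RESULT" | jq -r '.data.outputs[]'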
Pricing
$0.012 per image. Simple flat-rate pricing regardless of control mode or image size.
Best Use Cases
- Architectural Visualization (depth mode) — Maintain spatial relationships while changing style or materials.
- Character Posing (pose mode) — Generate characters in specific poses from reference photos.
- Style Transfer with Structure (canny mode) — Apply new styles while preserving exact outlines and shapes.
- Product Photography (depth mode) — Generate product images with consistent composition across variations.
- Comic and Illustration (canny/pose modes) — Convert sketches or poses into fully rendered artwork.
Pro Tips
- Depth mode works best with images that have clear foreground/background separation.
- Canny mode is ideal when you have line art, sketches, or want to preserve exact shapes.
- Pose mode requires images with visible human figures — it won’t work on landscapes or objects.
- Start with strength 0.6 and adjust: lower for loose interpretation, higher for strict adherence.
- The prompt matters more at lower strength values; at high strength, structure dominates.
- Use the same seed to compare different control modes on the same reference image.
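To act on the last tip, a small loop can submit the same reference image with an identical seed under each control mode so the results are directly comparable. This is a sketch; the prompt and image URL are placeholders, and jq is used only to print the task ids.
# Submit one task per control mode with the same seed for a side-by-side comparison
for MODE in depth canny pose; do
curl --silent --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw "{
\"prompt\": \"oil painting of a dancer mid-leap\",
\"image\": \"https://example.com/dancer.jpg\",
\"mode\": \"${MODE}\",
\"seed\": 12345
}" | jq -r '.data.id'
done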
Notes
- Reference image quality affects control accuracy — clear, well-lit images work best.
- Pose mode only detects human poses; it won’t extract structure from other subjects.
- At strength 0, the control signal has minimal effect (similar to standard generation).
- At strength 1, output will closely match the reference structure regardless of prompt.
Related Models
- Z-Image Turbo Text-to-Image — Standard text-to-image without structural control.
- Z-Image Turbo Image-to-Image — Transform existing images with variable strength.
- Z-Image Turbo Inpaint — Edit specific regions of images with masks.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "",
"image": "",
"mode": "depth",
"size": "1024*1024",
"strength": 1,
"seed": -1,
"output_format": "jpeg",
"enable_sync_mode": false,
"enable_base64_output": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Text description of the image you want to generate. |
| image | string | Yes | - | - | Reference image URL for ControlNet to extract structural guidance from. |
| mode | string | No | depth | depth, canny, pose, none | ControlNet mode: 'depth' for depth map guidance, 'canny' for edge detection, 'pose' for human pose estimation, 'none' for no control. |
| size | string | No | 1024*1024 | 256 ~ 1536 per dimension | Output image size in pixels (width*height). |
| strength | number | No | 1 | 0.00 ~ 1.00 | Controls how strongly the ControlNet guidance affects the output (0-1). Higher values follow the control signal more strictly. |
| seed | integer | No | -1 | -1 ~ 2147483647 | Random seed for reproducible generation. Use -1 for random seed. |
| output_format | string | No | jpeg | jpeg, png, webp | Output image format. |
| enable_sync_mode | boolean | No | false | - | If true, waits for generation to complete before returning. API only. |
| enable_base64_output | boolean | No | false | - | If true, returns BASE64 encoded image instead of URL. API only. |
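If you would rather not poll, enable_sync_mode makes the submission call block until generation finishes and return the completed prediction directly. A minimal sketch (the prompt and image URL are placeholders):
# Synchronous request: the response already contains the finished outputs
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"prompt": "isometric illustration of a cozy cabin",
"image": "https://example.com/cabin.jpg",
"mode": "depth",
"enable_sync_mode": true
}'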
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
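In practice, the two fields most clients read from this response are data.id (to poll for the result) and data.urls.get (the ready-made result URL). A short jq sketch, assuming the submission response was saved to response.json (a hypothetical file name):
# Extract the task id and the result URL from a saved submission response
jq -r '.data.id' response.json
jq -r '.data.urls.get' response.json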
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID supplied in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
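Once data.status reports completed, the URLs in data.outputs can be downloaded directly. A short sketch, assuming the result response was saved to result.json (a hypothetical file name):
# Download the first generated image once the task has completed
if [ "$(jq -r '.data.status' result.json)" = "completed" ]; then
curl --silent --location --output output.jpeg "$(jq -r '.data.outputs[0]' result.json)"
fi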