
Z-Image Turbo ControlNet

Z-Image-Turbo ControlNet generates images guided by structural control signals (depth, canny edge, pose) for precise composition control. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

Features

Z-Image Turbo ControlNet

Z-Image Turbo ControlNet is a powerful image generation model that gives you precise control over composition through structural guidance signals. Unlike standard text-to-image models that interpret prompts freely, ControlNet lets you define the exact structure, edges, depth, or pose of your output by analyzing a reference image.

Think of it as a blueprint system: you provide a reference image, choose how to analyze it (depth map, edge detection, or pose estimation), and the model generates a new image that follows that structural blueprint while matching your text prompt.


Why Choose This?

  • Precise composition control: Define exact layouts, poses, and spatial relationships instead of hoping the model interprets your prompt correctly.

  • Multiple control modes: Choose depth mapping for 3D structure, canny edge detection for outlines, pose estimation for human figures, or none for standard generation.

  • Reference-guided generation: Use existing images as structural templates while completely changing style, content, and appearance.

  • Flexible strength control: Adjust how strictly the model follows the control signal — from loose inspiration to exact replication.

  • Fast and affordable: Turbo-optimized for quick generation at just $0.012 per image.


Understanding Control Modes

The mode parameter determines how the model analyzes your reference image:

| Mode | What It Extracts | Best For |
| --- | --- | --- |
| depth | 3D depth information (near/far relationships) | Architectural scenes, landscapes, maintaining spatial depth |
| canny | Edge outlines and contours | Line art, sketches, preserving shapes and boundaries |
| pose | Human body keypoints and skeleton | Character poses, figure drawing, action scenes |
| none | No control signal (standard generation) | When you don't need structural guidance |

Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| prompt | Yes | Text description of the image you want to generate |
| image | Yes | Reference image URL for ControlNet to analyze |
| mode | No | Control mode: depth, canny, pose, or none (default: depth) |
| size | No | Output size in pixels as width*height (default: 1024*1024) |
| strength | No | Control signal strength 0-1 (default: 0.6) |
| seed | No | Random seed for reproducibility (-1 for random) |
| output_format | No | Output format: jpeg, png, or webp (default: jpeg) |

How to Use

  1. Upload reference image — The image that defines your composition structure.
  2. Choose control mode — Select depth, canny, pose, or none based on what you want to preserve.
  3. Write your prompt — Describe the style, content, and appearance you want.
  4. Adjust strength — Higher values follow the control signal more strictly.
  5. Set output size — Define your target dimensions.
  6. Run — Submit and download your controlled generation.
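The steps above can be sketched as a small request-building helper. This is an illustrative Python sketch, not an official SDK: the function name and validation rules are this page's assumptions, while the field names and ranges come from the parameter tables on this page.

```python
# Illustrative helper: assemble and sanity-check a request body for the
# z-image-turbo/controlnet endpoint. Names and checks are assumptions,
# not part of an official SDK.

VALID_MODES = {"depth", "canny", "pose", "none"}
VALID_FORMATS = {"jpeg", "png", "webp"}

def build_payload(prompt, image, mode="depth", size="1024*1024",
                  strength=0.6, seed=-1, output_format="jpeg"):
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    if output_format not in VALID_FORMATS:
        raise ValueError(f"output_format must be one of {sorted(VALID_FORMATS)}")
    return {
        "prompt": prompt,
        "image": image,
        "mode": mode,
        "size": size,
        "strength": strength,
        "seed": seed,
        "output_format": output_format,
    }
```

The resulting dict can be serialized to JSON and sent as the request body shown in the API Endpoints section below.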

Pricing

$0.012 per image. Simple flat-rate pricing regardless of control mode or image size.


Best Use Cases

  • Architectural Visualization (depth mode) — Maintain spatial relationships while changing style or materials.
  • Character Posing (pose mode) — Generate characters in specific poses from reference photos.
  • Style Transfer with Structure (canny mode) — Apply new styles while preserving exact outlines and shapes.
  • Product Photography (depth mode) — Generate product images with consistent composition across variations.
  • Comic and Illustration (canny/pose modes) — Convert sketches or poses into fully rendered artwork.

Pro Tips

  • Depth mode works best with images that have clear foreground/background separation.
  • Canny mode is ideal when you have line art, sketches, or want to preserve exact shapes.
  • Pose mode requires images with visible human figures — it won’t work on landscapes or objects.
  • Start with strength 0.6 and adjust: lower for loose interpretation, higher for strict adherence.
  • The prompt matters more at lower strength values; at high strength, structure dominates.
  • Use the same seed to compare different control modes on the same reference image.
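The last tip can be scripted by holding the seed (and everything else) fixed while varying only the mode. A minimal Python sketch; the prompt, image URL, and seed value are arbitrary placeholders, and the payload shape follows the parameter table above:

```python
# Build one request body per control mode, keeping all other fields fixed
# so differences in output come only from the mode. All values here are
# illustrative placeholders.
base = {
    "prompt": "watercolor village at dusk",
    "image": "https://example.com/reference.jpg",
    "size": "1024*1024",
    "strength": 0.6,
    "seed": 42,
    "output_format": "png",
}
payloads = [dict(base, mode=m) for m in ("depth", "canny", "pose")]
```

Submitting each payload yields directly comparable outputs for the three control modes.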

Notes

  • Reference image quality affects control accuracy — clear, well-lit images work best.
  • Pose mode only detects human poses; it won’t extract structure from other subjects.
  • At strength 0, the control signal has minimal effect (similar to standard generation).
  • At strength 1, output will closely match the reference structure regardless of prompt.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "your text description here",
    "image": "https://example.com/reference.jpg",
    "mode": "depth",
    "size": "1024*1024",
    "strength": 1,
    "seed": -1,
    "output_format": "jpeg",
    "enable_sync_mode": false,
    "enable_base64_output": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
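The two curl calls above map to a submit-then-poll pattern. A minimal Python sketch of the polling half, assuming the response shape documented below; `fetch_result` stands in for the GET request, and the retry cadence is an arbitrary choice:

```python
import time

def wait_for_result(request_id, fetch_result, interval=1.0, max_attempts=30):
    """Poll the result endpoint until the task completes or fails.

    fetch_result(request_id) should return the parsed JSON response body,
    i.e. a dict with data.status and data.outputs as documented below.
    """
    for _ in range(max_attempts):
        body = fetch_result(request_id)
        status = body["data"]["status"]
        if status == "completed":
            return body["data"]["outputs"]
        if status == "failed":
            raise RuntimeError(body["data"].get("error") or "task failed")
        time.sleep(interval)
    raise TimeoutError(f"task {request_id} did not finish in time")
```

With `enable_sync_mode` set to true, the submit call itself waits for completion and no polling loop is needed.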

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| prompt | string | Yes | - | - | Text description of the image you want to generate. |
| image | string | Yes | - | - | Reference image URL for ControlNet to extract structural guidance from. |
| mode | string | No | depth | depth, canny, pose, none | ControlNet mode: 'depth' for depth map guidance, 'canny' for edge detection, 'pose' for human pose estimation, 'none' for no control. |
| size | string | No | 1024*1024 | 256 ~ 1536 per dimension | Output image size in pixels (width*height). |
| strength | number | No | 1 | 0.00 ~ 1.00 | Controls how strongly the ControlNet guidance affects the output (0-1). Higher values follow the control signal more strictly. |
| seed | integer | No | -1 | -1 ~ 2147483647 | Random seed for reproducible generation. Use -1 for a random seed. |
| output_format | string | No | jpeg | jpeg, png, webp | Output image format. |
| enable_sync_mode | boolean | No | false | - | If true, waits for generation to complete before returning. API only. |
| enable_base64_output | boolean | No | false | - | If true, returns a Base64-encoded image instead of a URL. API only. |

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
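To show how the fields above fit together, here is a hand-written sample submission response and the fields a client typically reads right after submitting. Every value is an invented placeholder, not real API output:

```python
import json

# Hand-written sample response matching the documented shape; all values
# are illustrative placeholders.
sample = json.loads("""
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "task-123",
    "model": "wavespeed-ai/z-image-turbo/controlnet",
    "outputs": [],
    "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/task-123/result"},
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {"inference": 0}
  }
}
""")

task_id = sample["data"]["id"]            # pass this to the result endpoint
poll_url = sample["data"]["urls"]["get"]  # or poll this URL directly
```

Note that `outputs` is empty until `status` reaches `completed`.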

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the ID of the prediction to get) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.