WaveSpeed.ai
text-to-image

Z-Image Turbo ControlNet

wavespeed-ai/z-image-turbo/controlnet

Z-Image-Turbo ControlNet generates images guided by structural control signals (depth, canny edge, pose) for precise composition control. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

Input

Upload a reference image (drag and drop or click to upload), write your prompt, and set the output width and height. Two API-only options are available: a flag that waits for generation to complete before returning, and a flag that returns a BASE64-encoded image instead of a URL.


Each run costs $0.012, so $1 covers approximately 83 runs.


Examples

Change the background to Times Square.
Change the background to a western street
A luxury fashion editorial poster style, high-contrast lighting, elegant color grading, subtle film grain. Follow the exact outlines and composition from the reference. No extra text.
A cinematic cyberpunk heroine in a neon city, dramatic rim light, wet reflective streets. Keep the exact same body pose and framing as the reference. High detail, realistic motion feel.
turn into an oil painting style

README

Z-Image Turbo ControlNet

Z-Image Turbo ControlNet is a powerful image generation model that gives you precise control over composition through structural guidance signals. Unlike standard text-to-image models that interpret prompts freely, ControlNet lets you define the exact structure, edges, depth, or pose of your output by analyzing a reference image.

Think of it as a blueprint system: you provide a reference image, choose how to analyze it (depth map, edge detection, or pose estimation), and the model generates a new image that follows that structural blueprint while matching your text prompt.
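As a concrete sketch of that blueprint flow, the snippet below submits a prompt, a reference image, and a control mode in a single request. Only the prompt, image, and mode fields come from this page; the endpoint path (the model path appended to an assumed api.wavespeed.ai base) and the Bearer-token header are assumptions modeled on typical REST inference APIs, so check the API reference for the exact details.

```python
import requests

API_KEY = "your-api-key"  # assumption: Bearer-token auth
# Assumed endpoint: the model path from this page appended to the API base.
URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet"

payload = {
    "prompt": "turn into an oil painting style",   # style/content to generate
    "image": "https://example.com/reference.jpg",  # structural blueprint
    "mode": "canny",                               # preserve outlines and shapes
}

resp = requests.post(URL, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
print(resp.json())  # response shape depends on the API: typically a task id or image URL
```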

Why Choose This?

  • Precise composition control: Define exact layouts, poses, and spatial relationships instead of hoping the model interprets your prompt correctly.

  • Multiple control modes: Choose depth mapping for 3D structure, canny edge detection for outlines, pose estimation for human figures, or none for standard generation.

  • Reference-guided generation: Use existing images as structural templates while completely changing style, content, and appearance.

  • Flexible strength control: Adjust how strictly the model follows the control signal — from loose inspiration to exact replication.

  • Fast and affordable: Turbo-optimized for quick generation at just $0.012 per image.

Understanding Control Modes

The mode parameter determines how the model analyzes your reference image:

| Mode | What It Extracts | Best For |
|------|------------------|----------|
| depth | 3D depth information (near/far relationships) | Architectural scenes, landscapes, maintaining spatial depth |
| canny | Edge outlines and contours | Line art, sketches, preserving shapes and boundaries |
| pose | Human body keypoints and skeleton | Character poses, figure drawing, action scenes |
| none | No control signal (standard generation) | When you don't need structural guidance |

Parameters

| Parameter | Required | Description |
|-----------|----------|-------------|
| prompt | Yes | Text description of the image you want to generate |
| image | Yes | Reference image URL for ControlNet to analyze |
| mode | No | Control mode: depth, canny, pose, or none (default: depth) |
| size | No | Output size in pixels as width*height (default: 1024*1024) |
| strength | No | Control signal strength 0-1 (default: 0.6) |
| seed | No | Random seed for reproducibility (-1 for random) |
| output_format | No | Output format: jpeg, png, or webp (default: jpeg) |
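Pulling the table together, a fully specified request body might look like the sketch below. The field names and defaults are taken from the table above; the exact string format of the size value is an assumption.

```python
payload = {
    "prompt": "A cinematic cyberpunk heroine in a neon city",  # required
    "image": "https://example.com/pose-reference.jpg",         # required
    "mode": "pose",            # depth | canny | pose | none (default: depth)
    "size": "1024*1024",       # width*height in pixels (format assumed)
    "strength": 0.6,           # 0-1; how strictly to follow the control signal
    "seed": 42,                # fixed seed for reproducibility (-1 for random)
    "output_format": "png",    # jpeg | png | webp
}
```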

How to Use

  1. Upload reference image — The image that defines your composition structure.
  2. Choose control mode — Select depth, canny, pose, or none based on what you want to preserve.
  3. Write your prompt — Describe the style, content, and appearance you want.
  4. Adjust strength — Higher values follow the control signal more strictly.
  5. Set output size — Define your target dimensions.
  6. Run — Submit and download your controlled generation; a minimal end-to-end sketch follows this list.
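Here is a minimal end-to-end sketch of those six steps, assuming an asynchronous submit-then-poll API. The submission route, the polling route, and the JSON response fields (data.id, data.status, data.outputs) are all assumptions to verify against the real API reference; the payload fields are the documented parameters.

```python
import time
import requests

API_KEY = "your-api-key"
BASE = "https://api.wavespeed.ai/api/v3"  # assumed API base
MODEL = "wavespeed-ai/z-image-turbo/controlnet"

# Steps 1-5: reference image, control mode, prompt, strength, and size.
payload = {
    "prompt": "A luxury fashion editorial poster style, high-contrast lighting",
    "image": "https://example.com/reference.jpg",
    "mode": "depth",
    "strength": 0.6,
    "size": "1024*1024",
}

# Step 6: submit, then poll until the generation finishes.
task = requests.post(f"{BASE}/{MODEL}", json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"}).json()
task_id = task["data"]["id"]  # assumed response shape

while True:
    result = requests.get(f"{BASE}/predictions/{task_id}/result",  # assumed polling route
                          headers={"Authorization": f"Bearer {API_KEY}"}).json()
    status = result["data"]["status"]
    if status == "completed":
        image_url = result["data"]["outputs"][0]  # assumed output field
        break
    if status == "failed":
        raise RuntimeError(result["data"].get("error", "generation failed"))
    time.sleep(1)

# Download the finished image (jpeg is the default output_format).
open("output.jpg", "wb").write(requests.get(image_url).content)
```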

Pricing

$0.012 per image. Simple flat-rate pricing regardless of control mode or image size.

Best Use Cases

  • Architectural Visualization (depth mode) — Maintain spatial relationships while changing style or materials.
  • Character Posing (pose mode) — Generate characters in specific poses from reference photos.
  • Style Transfer with Structure (canny mode) — Apply new styles while preserving exact outlines and shapes.
  • Product Photography (depth mode) — Generate product images with consistent composition across variations.
  • Comic and Illustration (canny/pose modes) — Convert sketches or poses into fully rendered artwork.

Pro Tips

  • Depth mode works best with images that have clear foreground/background separation.
  • Canny mode is ideal when you have line art, sketches, or want to preserve exact shapes.
  • Pose mode requires images with visible human figures — it won't work on landscapes or objects.
  • Start with strength 0.6 and adjust: lower for loose interpretation, higher for strict adherence.
  • The prompt matters more at lower strength values; at high strength, structure dominates.
  • Use the same seed to compare different control modes on the same reference image (see the sketch after this list).
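That last tip is easy to script: pin the seed and prompt, vary only mode, and the four outputs become directly comparable. The endpoint and auth here are the same assumptions as in the earlier sketches.

```python
import requests

API_KEY = "your-api-key"
URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet"  # assumed endpoint

base = {
    "prompt": "turn into an oil painting style",
    "image": "https://example.com/reference.jpg",
    "seed": 1234,        # same seed across runs isolates the effect of the mode
    "strength": 0.6,
}

for mode in ("depth", "canny", "pose", "none"):
    resp = requests.post(URL, json={**base, "mode": mode},
                         headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    print(mode, resp.json())  # compare the resulting images side by side
```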

Notes

  • Reference image quality affects control accuracy — clear, well-lit images work best.
  • Pose mode only detects human poses; it won't extract structure from other subjects.
  • At strength 0, the control signal has minimal effect (similar to standard generation).
  • At strength 1, output will closely match the reference structure regardless of prompt; the sweep sketch below makes the contrast easy to see.
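To see that continuum in practice, the sweep below pins everything except strength, moving from prompt-dominated output near 0 to structure-dominated output near 1. Endpoint and auth remain assumptions, as above.

```python
import requests

API_KEY = "your-api-key"
URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/z-image-turbo/controlnet"  # assumed endpoint

base = {
    "prompt": "A cinematic cyberpunk heroine in a neon city",
    "image": "https://example.com/reference.jpg",
    "mode": "canny",
    "seed": 1234,  # pin the seed so only strength changes between runs
}

for strength in (0.0, 0.3, 0.6, 1.0):
    resp = requests.post(URL, json={**base, "strength": strength},
                         headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    print(f"strength={strength}:", resp.json())
```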
