video-to-video

LTX-2 19B ControlNet

wavespeed-ai/ltx-2-19b/control

LTX-2 19B ControlNet generates synchronized audio-video (up to 20s) from an input video with pose, depth, or canny edge guidance. It supports preserving, generating, or removing audio for flexible video transformation, and is available through a ready-to-use REST inference API with no cold starts and affordable per-run pricing.


LTX-2 19B ControlNet

LTX-2 ControlNet is a video-to-video transformation model that applies pose, depth, or edge guidance to generate new video content while preserving motion structure from the input. Built on the 19B DiT architecture, it supports synchronized audio handling with options to preserve original audio, generate new audio, or output silent video.

Why Choose This?

  • ControlNet guidance modes: Choose from pose, depth, or canny edge detection to guide video generation while preserving motion structure.

  • Flexible audio handling: Preserve original audio, generate new synchronized audio, or create silent output.

  • High-fidelity output: Leverages the 19B-parameter DiT architecture for detailed, temporally consistent video.

  • Character-driven transformation: Use a reference image to drive the appearance while the input video controls motion.

  • Prompt Enhancer: Built-in tool to automatically improve your prompts for better results.

Parameters

Parameter   | Required | Description
video       | Yes      | Input video providing motion and structure
image       | No       | Reference image for appearance guidance
prompt      | No       | Text description of desired output
mode        | No       | Control mode: pose (default), depth, or canny
audio_mode  | No       | Audio handling: preserve (default), generate, or none
resolution  | No       | Output resolution: 480p, 720p (default), or 1080p
seed        | No       | Random seed for reproducibility (-1 for random)
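
As a rough illustration, the parameters above map onto a request payload along these lines. This is a minimal sketch in Python: the endpoint path, authentication header, and response shape are assumptions rather than documented values, so confirm them against the WaveSpeedAI API docs before use.

```python
import os
import requests

# Hypothetical endpoint for this model (assumed URL; check the API docs).
API_URL = "https://api.wavespeed.ai/wavespeed-ai/ltx-2-19b/control"

payload = {
    "video": "https://example.com/input.mp4",      # required: motion/structure source
    "image": "https://example.com/character.png",  # optional: appearance reference
    "prompt": "a silver-armored knight walking through a misty forest",
    "mode": "pose",            # pose (default) | depth | canny
    "audio_mode": "preserve",  # preserve (default) | generate | none
    "resolution": "720p",      # 480p | 720p (default) | 1080p
    "seed": -1,                # -1 picks a random seed
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}"},  # assumed auth scheme
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # response format is an assumption; typically includes a request/job id
```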

Control Modes

Mode  | Description
pose  | Skeleton/pose guidance for human motion (default)
depth | Depth map guidance for scene structure
canny | Edge detection guidance for shape preservation

Audio Modes

Mode     | Description
preserve | Keep original audio from input video (default)
generate | Create new synchronized audio
none     | Output video without audio

How to Use

  1. Upload your video — the input video providing motion structure.
  2. Upload your image (optional) — reference image that defines appearance.
  3. Write your prompt — describe the desired transformation.
  4. Select control mode — pose, depth, or canny based on your needs.
  5. Select audio mode — preserve, generate, or none.
  6. Set resolution — 480p for speed, 720p for balance, 1080p for quality.
  7. Run — submit and download the transformed video.
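
Driving the same steps from code usually becomes a submit, poll, and download loop. The sketch below continues the payload example from the Parameters section; the result endpoint, status values, and response fields are assumptions, so treat it as a pattern rather than a reference implementation.

```python
import time
import requests

def wait_for_video(request_id: str, api_key: str, out_path: str = "output.mp4") -> str:
    """Poll a (hypothetical) result endpoint until the job finishes, then download the video."""
    result_url = f"https://api.wavespeed.ai/requests/{request_id}/result"  # assumed path
    headers = {"Authorization": f"Bearer {api_key}"}
    while True:
        data = requests.get(result_url, headers=headers, timeout=30).json()
        status = data.get("status")
        if status == "completed":
            video_url = data["output"]["video"]  # assumed response field
            with open(out_path, "wb") as f:
                f.write(requests.get(video_url, timeout=300).content)
            return out_path
        if status == "failed":
            raise RuntimeError(data.get("error", "generation failed"))
        time.sleep(5)  # video jobs take a while; poll every few seconds
```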

Pricing

Resolution | 5s    | 10s   | 15s   | 20s (max)
480p       | $0.15 | $0.30 | $0.45 | $0.60
720p       | $0.20 | $0.40 | $0.60 | $0.80
1080p      | $0.30 | $0.60 | $0.90 | $1.20

Billing Rules

  • Base price: $0.20 (720p, 5 seconds)
  • Resolution multiplier: 480p = 0.75×, 720p = 1×, 1080p = 1.5×
  • Minimum charge: 5 seconds
  • Maximum billed duration: 20 seconds
  • Total cost = (duration / 5) × $0.20 × resolution_multiplier (see the sketch below)
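
A minimal sketch of the billing rules, assuming the billed duration is clamped to the 5-20 second range and cost scales linearly with duration as in the formula above (the helper name is illustrative, not part of the API):

```python
RESOLUTION_MULTIPLIER = {"480p": 0.75, "720p": 1.0, "1080p": 1.5}
BASE_PRICE = 0.20  # USD for 5 seconds at 720p

def estimate_cost(duration_s: float, resolution: str = "720p") -> float:
    """Estimate the charge for one run from the stated billing rules."""
    billed = min(max(duration_s, 5.0), 20.0)  # minimum 5s charge, 20s maximum billed
    return round((billed / 5.0) * BASE_PRICE * RESOLUTION_MULTIPLIER[resolution], 2)

# Spot checks against the pricing table:
assert estimate_cost(10, "480p") == 0.30
assert estimate_cost(20, "1080p") == 1.20
```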

Best Use Cases

  • Character Animation — Animate a character image with motion from reference video.
  • Style Transfer — Apply a reference style to existing video content.
  • Motion Preservation — Maintain motion structure while changing appearance.
  • Video Editing — Transform video subjects while keeping movement.
  • Dance Transfer — Apply dance moves to different characters.

Pro Tips

  • Match the subject pose in your image to the starting pose in the video.
  • Use pose mode for human/character motion, depth for scene structure, canny for edge-based guidance.
  • Preserve audio for lip-sync scenarios, generate for new content.
  • Iterate at 480p, then render final at 720p or 1080p.

Notes

  • Maximum video duration is 20 seconds per job.
  • Videos shorter than 5 seconds are billed as 5 seconds.
  • For best results, ensure the reference image matches the subject type in the video.
