video-to-video

Sync Lipsync 2

sync/lipsync-2

Sync Lipsync-2 synchronizes lip movements in any video to supplied audio, enabling realistic mouth alignment for films, podcasts, games, and animations. It is served through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.





Lipsync 2.0 — Video Re-Dub (Video + Audio to Lipsynced Video)

Lipsync 2.0 is a zero-shot lipsync model that takes an existing video and a separate audio track, then re-animates the mouth so lip movements match the speech. No training or fine-tuning is required, and it preserves the speaker’s style across languages, dubbing scenarios, and character types.

🔍 What it does

  • Zero-shot lipsync – Drop in any talking-face clip plus new audio; the model directly outputs a re-synced video.
  • Style preservation – Keeps the original speaker’s facial mannerisms and timing as much as possible, even for translated or re-recorded lines.
  • Cross-domain support – Works with live-action footage, stylised 3D characters, and AI-generated avatars.
  • Flexible editing workflows – Use it for dubbing, dialogue fixes in post, or re-animating entire performances while keeping the original shot.

🧩 Parameters

  • video* Source video to be re-dubbed (URL or upload). Use clips where the face is clearly visible and not heavily occluded.

  • audio* Target speech audio (URL or upload). The lips will be synced to this track.

  • sync_mode Strategy for matching video and audio durations when they differ:

    • bounce – Ping-pong the video to cover a longer audio span.
    • loop – Loop the video until the audio finishes.
    • cut_off – Truncate to the shorter of video/audio.
    • silence – Pad with silence or frozen frames where needed.
    • remap – Time-remap to better align audio and video over the full clip.

Output: a re-synced MP4 video with lips matching the provided audio.
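
Choosing a sync_mode mostly comes down to how the clip and track durations compare. The helper below is a minimal sketch of those heuristics in Python; the thresholds are illustrative suggestions, not rules enforced by the model:

```python
def pick_sync_mode(video_s: float, audio_s: float) -> str:
    """Suggest a sync_mode from the clip and audio durations (in seconds).

    These heuristics mirror the parameter descriptions above; they are a
    starting point, not behavior baked into Lipsync 2.0.
    """
    if abs(video_s - audio_s) < 0.5:
        return "cut_off"   # near-equal lengths: just trim the small overhang
    if audio_s > 2 * video_s:
        return "loop"      # much longer audio: repeat the clip
    if audio_s > video_s:
        return "bounce"    # moderately longer audio: ping-pong looks smoother
    return "remap"         # longer video: time-remap (or "silence" to pad/freeze)
```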

💰 Pricing

Pricing is linear in video length:

  • Effective rate: $0.05 per second of input video

Examples:

Video length   Price
5 s            $0.25
10 s           $0.50
30 s           $1.50
60 s           $3.00
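
Because the rate is a straight multiple of duration, estimating a job's cost is one line of arithmetic. A tiny illustrative helper that reproduces the table above:

```python
RATE_PER_SECOND = 0.05  # $0.05 per second of input video

def estimate_cost(video_seconds: float) -> float:
    """Return the estimated price in USD for a clip of the given length."""
    return round(video_seconds * RATE_PER_SECOND, 2)

for length in (5, 10, 30, 60):
    print(f"{length:>2} s -> ${estimate_cost(length):.2f}")
# 5 s -> $0.25, 10 s -> $0.50, 30 s -> $1.50, 60 s -> $3.00
```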

🚀 How to use

  1. Upload video under video (clear face, stable framing works best).
  2. Upload audio under audio (clean speech, minimal background noise).
  3. Choose a sync_mode depending on how you want to handle duration mismatches.
  4. Click Run and download the re-dubbed clip once processing completes.
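
If you are driving the model programmatically rather than through the web UI, the same steps map to a submit-then-poll flow. Everything in the sketch below (base URL, auth scheme, and the id/status/output field names) is an assumption for illustration; consult the WaveSpeedAI API reference for the actual endpoint and response format:

```python
import time

import requests

API_BASE = "https://api.wavespeed.ai/api/v3"        # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme

# Steps 1-3: send video, audio, and sync_mode in a single request.
job = requests.post(
    f"{API_BASE}/sync/lipsync-2",
    json={
        "video": "https://example.com/interview.mp4",
        "audio": "https://example.com/dub_track.wav",
        "sync_mode": "remap",
    },
    headers=HEADERS,
)
job.raise_for_status()
job_id = job.json()["id"]  # "id" is an assumed field name

# Step 4: poll until processing completes, then download the clip.
while True:
    status = requests.get(f"{API_BASE}/predictions/{job_id}", headers=HEADERS)
    status.raise_for_status()
    body = status.json()
    if body["status"] == "completed":        # assumed status value
        clip = requests.get(body["output"])  # assumed output field (a URL)
        open("redubbed.mp4", "wb").write(clip.content)
        break
    time.sleep(2)  # back off between polls
```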

🎛 Practical tips

  • Aim for good lighting and frontal or three-quarter views for more accurate mouth motion.
  • If you just need simple dubbing, start with cut_off; for longer audio over short clips, try loop or remap.
  • Keep audio free of strong music or over-compressed noise for cleaner alignment.
  • For multi-shot edits, process each shot separately, then assemble in your NLE (Premiere, DaVinci, etc.) for full control.
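
For that multi-shot workflow, a scripted alternative to an NLE is ffmpeg's concat demuxer. The sketch below assumes each re-dubbed shot was saved as shot_01.mp4, shot_02.mp4, and so on (the filenames are hypothetical):

```python
import glob
import subprocess

# Collect the per-shot outputs (filename pattern is an assumption).
shots = sorted(glob.glob("shot_*.mp4"))

# Write the file list the concat demuxer expects.
with open("shots.txt", "w") as f:
    for shot in shots:
        f.write(f"file '{shot}'\n")

# Stream-copy concat: fast and lossless, but every shot must share the
# same codec, resolution, and frame rate -- re-encode otherwise.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", "shots.txt", "-c", "copy", "final_cut.mp4"],
    check=True,
)
```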

More Models to Try

  • WaveSpeedAI / InfiniteTalk – WaveSpeedAI’s single-avatar talking-head model that turns one photo plus audio into smooth, lip-synced digital presenter videos for tutorials, marketing, and social content.

  • WaveSpeedAI / InfiniteTalk Multi – Multi-avatar version of InfiniteTalk that drives several characters in one scene from separate audio tracks, ideal for dialog-style explainers, interviews, and role-play videos.

  • Kwaivgi / Kling V2 AI Avatar Standard – Cost-effective Kling-based AI avatar model that generates natural talking-face videos from a single reference image and voice track, suitable for everyday content and customer support.

  • Kwaivgi / Kling V2 AI Avatar Pro – Higher-fidelity Kling V2 avatar model for premium digital humans, offering smoother motion, better lip-sync, and more stable faces for commercials, brand spokespeople, and product demos.