
InfiniteTalk Fast Video-to-Video API


Audio-driven infinitetalk-fast turns a single video plus an audio track into a realistic talking or singing video with accurate lip-sync. Ready-to-use REST inference API with high performance, no cold starts, and affordable pricing.


$0.075 per run · ~13 runs / $1


README

infinitetalk-fast Video-to-Video

What is InfiniteTalk?

InfiniteTalk creates new videos by combining an input silent video and an audio track. It ensures precise lip synchronization while aligning head, face, and body movements with the audio. With optional masking and prompting, you can control which areas move and how the scene appears. The model also maintains visual identity for natural and consistent results.

Why it looks great

  • Accurate lip synchronization: matches lip motion precisely to the audio.
  • Full-body coherence: aligns head pose, facial expressions, and posture with speech.
  • Mask control: optional mask images let you define which regions can move.
  • Instruction following: prompts can guide style, pose, or behavior.
  • Identity preservation: ensures consistent visual identity across all frames.

How to Use

  1. Upload the audio file.
  2. Upload a video as the base.
  3. (Optional) Upload a mask image to control which regions can move.
  4. (Optional) Write a prompt to guide the style, pose, or expressions.
  5. Set the seed if you want reproducibility.
  6. Submit the job and download the generated video.
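The steps above can be sketched as a single request-building helper. The endpoint URL and the field names (`video`, `audio`, `mask_image`, `prompt`, `seed`) are assumptions for illustration; check the WaveSpeedAI API reference for the exact schema before using this.

```python
import json
import urllib.request

# Hypothetical endpoint -- verify against the official API docs.
API_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/infinitetalk-fast"

def build_payload(video_url, audio_url, mask_url=None, prompt=None, seed=None):
    """Assemble the request body for one job (steps 1-6 above)."""
    payload = {"video": video_url, "audio": audio_url}
    if mask_url is not None:
        payload["mask_image"] = mask_url  # optional: restrict animated regions
    if prompt is not None:
        payload["prompt"] = prompt        # optional: style/pose guidance
    if seed is not None:
        payload["seed"] = seed            # optional: reproducible output
    return payload

def submit_job(api_key, **kwargs):
    """POST the job and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(**kwargs)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Optional fields are simply omitted from the payload when unset, so the request stays minimal for the basic audio-plus-video case.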

Note

  • Max clip length per job: 10 minutes
  • Processing speed: ~10–30 seconds of wall time per 1 second of video (varies by resolution and queue load)
  • Mask safety tip: do not upload the full frame as mask_image. The mask should cover only the regions you want to animate; otherwise the result may render as fully black.
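Given the ~10–30 seconds of wall time per second of video quoted above, a small helper can turn clip length into a rough turnaround estimate. The rates are taken from the note; actual times vary with resolution and queue load.

```python
def estimate_wall_time(video_seconds, sec_per_video_sec=(10, 30)):
    """Rough (low, high) wall-time range in seconds for a clip,
    using the ~10-30x processing rate from the note above."""
    if video_seconds > 600:  # 10-minute max clip length per job
        raise ValueError("clip exceeds the 10-minute per-job limit")
    low_rate, high_rate = sec_per_video_sec
    return video_seconds * low_rate, video_seconds * high_rate

# A 60-second clip lands somewhere between 10 and 30 minutes of wall time.
low, high = estimate_wall_time(60)
```

This is only a planning aid; budget toward the high end when the queue is busy.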

