
InfiniteTalk Fast Multi API

wavespeed-ai

InfiniteTalk fast multi converts a single image and two audio inputs into a multi-character talking or singing video. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

digital-human

$0.075 per run · ~13 runs per $1



README

InfiniteTalk Fast Multi

What is InfiniteTalk?

InfiniteTalk creates videos with accurate lip sync while matching head, face, and body movements to the audio. It maintains identity across unlimited-length videos and also allows image-to-video creation, transforming static photos into lively speaking or singing videos.

Why it looks great

  • Accurate lip synchronization: aligns lip motion precisely with audio, preserving natural rhythm and pronunciation.

  • Full-body coherence: captures head movements, facial expressions, and posture changes beyond the lips.

  • Identity preservation: maintains consistent facial identity and visual style across frames.

  • Image-to-video capability: turns static photos into realistic speaking or singing videos.

  • Instruction following: accepts text prompts to control scene, pose, or behavior while syncing to audio.

How to Use

  1. Upload the left and right audio files.
  2. Upload your image (it should clearly show two people).
  3. Select the speaking order (left to right, right to left, or meanwhile, i.e. both speaking at once).
  4. Write the prompt if needed.
  5. Submit the job and download the results once they're ready.
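The steps above can also be driven programmatically through the REST API. The sketch below shows one way to assemble and submit such a job; the endpoint path, field names, and option values are assumptions for illustration — consult the provider's API reference for the actual schema.

```python
# Hedged sketch of submitting an InfiniteTalk fast multi job over REST.
# API_BASE, MODEL_PATH, and all payload field names are assumptions,
# not the documented schema.
import json
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"        # assumed base URL
MODEL_PATH = "wavespeed-ai/infinitetalk-fast-multi"  # assumed model path


def build_payload(image_url, left_audio_url, right_audio_url,
                  order="left_right", prompt=""):
    """Assemble the request body matching the upload steps above.

    Field names here are illustrative; `order` mirrors the page's
    speaking-order choices (left to right, right to left, meanwhile).
    """
    return {
        "image": image_url,
        "left_audio": left_audio_url,
        "right_audio": right_audio_url,
        "audio_order": order,   # "left_right", "right_left", or "meanwhile"
        "prompt": prompt,       # optional text prompt
    }


def submit(api_key, payload):
    """POST the job and return the provider's JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/{MODEL_PATH}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A typical flow would call `build_payload(...)`, pass the result to `submit(...)`, then poll the returned task until the output video URL is ready for download.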

Note

  • Max clip length per job: up to 10 minutes

  • Processing speed: approximately 10–30 seconds of wall time per 1 second of video (varies by resolution and queue load)
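The quoted 10–30× ratio makes it easy to budget wall time before submitting. A tiny helper, using only the numbers stated above:

```python
# Rough wall-time estimate from the 10-30x processing ratio quoted above.
def estimated_wall_time(video_seconds, factor_low=10, factor_high=30):
    """Return the (min, max) processing time in seconds for a clip."""
    return video_seconds * factor_low, video_seconds * factor_high


# A full 10-minute clip (the per-job maximum) works out to
# roughly 100-300 minutes of processing:
lo, hi = estimated_wall_time(10 * 60)
```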

More Versions

Reference

Accessibility: the AI models used on this site are provided by third parties.