Kwaivgi Kling Lipsync Audio To Video
Playground
Try it on WavespeedAI! Kling LipSync converts audio into talking-head video by generating lifelike lip movements perfectly synced to the input audio. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
Kling LipSync — Audio-to-Video Lip Synchronization
Kling LipSync is an advanced audio-to-video model that drives natural, lifelike lip movements to match any input audio. Give it a clean voice or singing track plus a video, and it reanimates the mouth region so your character looks like they are truly speaking or singing those lines.
What Kling LipSync does
- Synchronizes lip movements in a video to match provided speech or singing.
- Works with videos generated by Kling AI as well as footage from other sources.
- Preserves the original background, body motion, and camera work.
- Produces realistic, production-ready talking or singing footage.
Key Features
- Natural, highly matched lip motion: Mouth shapes line up closely with phonemes in the audio, while respecting each character's facial structure. This yields expressive, believable speech and singing instead of robotic mouth flapping.
- Accurate facial muscle response: Lip animation also drives cheeks, jawline, and surrounding muscles. Subtle stretches and contractions are reflected in real time, greatly improving realism and immersion.
- Non-destructive background and body: Only the face region is re-rendered. Clothing, hands, environment, and lighting outside the face stay consistent with the original video, preserving continuity and avoiding unwanted artifacts.
Inputs and Outputs
Required inputs
- audio: The target voice or singing track (locally recorded or generated). The duration of this audio determines billing and should roughly match the video length.
- video: The source video whose character(s) will be lip-synced to the audio.
Output
- A new video where the character’s lip movements and facial micro-motions are synchronized with the provided audio while the rest of the frame remains unchanged.
Pricing
Billing is based on the audio duration, at a listed rate of $0.15 per 5 seconds, with a 5-second minimum and a cap at 600 seconds.
From this:
- Effective price per second: $0.15 / 5 s = $0.03 per second
- Minimum billed duration: 5 seconds → minimum charge $0.15
- Maximum billed duration: 600 seconds → maximum charge $18.00
Summary table:
| Metric | Value |
|---|---|
| Price per second | $0.030 |
| Minimum billed duration | 5 seconds |
| Minimum total price | $0.15 |
| Maximum billed duration | 600 seconds |
| Maximum total price per run | $18.00 |
Example costs:
| Audio length | Billed seconds | Total price |
|---|---|---|
| 4 s | 5 s | $0.15 |
| 10 s | 10 s | $0.30 |
| 60 s | 60 s | $1.80 |
| 180 s | 180 s | $5.40 |
| 600 s | 600 s | $18.00 |
| 900 s | 600 s (capped) | $18.00 |
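The rule behind these numbers is simply to clamp the audio duration to the 5–600 second range and multiply by $0.03. A small shell sketch of that calculation follows; the `cost` helper name and the per-second rounding of fractional durations are assumptions, not part of the official billing logic.

```bash
# Sketch of the pricing rule: clamp the audio duration to [5 s, 600 s]
# and multiply by $0.03/s. Exact handling of fractional seconds is an
# assumption; your billing dashboard is the authoritative source.
cost() {
  awk -v d="$1" 'BEGIN {
    if (d < 5)   d = 5      # minimum billed duration
    if (d > 600) d = 600    # cap on billed duration
    printf "$%.2f\n", d * 0.03
  }'
}

cost 4     # $0.15 (billed as 5 s)
cost 180   # $5.40
cost 900   # $18.00 (capped at 600 s)
```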
How to Use
- Upload or paste the URL of your audio track.
- Upload or paste the URL of the video to be lip-synced.
- Ensure the audio length is close to the video duration for best results.
- Click Run.
- Preview the output video, then download it for editing or publishing.
Tips for Best Quality
- Use clean, high-quality audio without heavy background noise or music.
- Keep the audio and video durations aligned; large mismatches can cause timing drift.
- Choose videos with clear views of the speaker’s face and minimal occlusions.
- Avoid extreme motion blur or rapid cuts during key speaking moments.
Note
- Please note that the input audio and video must each be at least 2 seconds long; otherwise the task will fail with an error. A quick pre-flight check is sketched below.
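If you want to catch too-short inputs before submitting, a local check with ffprobe (part of the FFmpeg toolset, assumed to be installed) can verify the duration; the filename below is a placeholder.

```bash
# Pre-flight check: confirm a local copy of the source video is at least
# 2 s long (and at most 10 s, per the video parameter limits).
dur=$(ffprobe -v error -show_entries format=duration \
      -of default=noprint_wrappers=1:nokey=1 speaker.mp4)
awk -v d="$dur" 'BEGIN { exit !(d >= 2 && d <= 10) }' \
  && echo "video duration OK (${dur}s)" \
  || echo "video must be between 2 s and 10 s (got ${dur}s)"
```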
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/kwaivgi/kling-lipsync/audio-to-video" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{"audio": "<audio-url>", "video": "<video-url>"}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| audio | string | Yes | - | - | The URL pointing to the audio file that will be used for generating synchronized lip movements. Supported audio file formats: .mp3/.wav/.m4a/.aac, with a maximum file size of 5MB. |
| video | string | Yes | - | - | The URL of the video file used for generating synchronized lip movements. Supported video formats: .mp4/.mov, with a maximum file size of 100MB. Video length must be between 2s and 10s. Only 720p and 1080p are supported; width and height must both be between 720px and 1920px. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
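As a quick illustration of this structure, the fields most callers need next can be pulled out with jq; this assumes the submission response body has been saved to a file named response.json (a placeholder).

```bash
# Fields most callers need from the submission response.
jq -r '.data.id' response.json         # task ID to poll with
jq -r '.data.urls.get' response.json   # ready-made URL for fetching the result
jq -r '.data.status' response.json     # created / processing / completed / failed
```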
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID supplied in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
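For completeness, a small sketch of handling this result payload; it assumes jq is installed and that result.json (a placeholder filename) holds the body of the GET call above.

```bash
# Inspect a fetched result and act on its status.
status=$(jq -r '.data.status' result.json)
case "$status" in
  completed)
    echo "inference took $(jq -r '.data.timings.inference' result.json) ms"
    jq -r '.data.outputs[]' result.json | while read -r url; do
      curl --silent --location -O "$url"    # save each generated video
    done ;;
  failed)
    echo "error: $(jq -r '.data.error' result.json)" ;;
  *)
    echo "still ${status}, try again shortly" ;;
esac
```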