Kwaivgi Kling Lipsync Audio To Video

Playground

Try it on WavespeedAI!

Kling LipSync converts audio into talking-head video by generating lifelike lip movements synced to the input audio. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

Features

Kling LipSync — Audio-to-Video Lip Synchronization

Kling LipSync is an advanced audio-to-video model that drives natural, lifelike lip movements to match any input audio. Give it a clean voice or singing track plus a video, and it reanimates the mouth region so your character looks like they are truly speaking or singing those lines.


What Kling LipSync does

  • Synchronizes lip movements in a video to match provided speech or singing.
  • Works with videos generated by Kling AI as well as footage from other sources.
  • Preserves the original background, body motion, and camera work.
  • Produces realistic, production-ready talking or singing footage.

Key Features

  • Natural, highly matched lip motion: Mouth shapes line up closely with phonemes in the audio, while respecting each character’s facial structure. This yields expressive, believable speech and singing instead of robotic mouth flapping.

  • Accurate facial muscle response: Lip animation also drives the cheeks, jawline, and surrounding muscles. Subtle stretches and contractions are reflected in real time, greatly improving realism and immersion.

  • Non-destructive background and body: Only the face region is re-rendered. Clothing, hands, environment, and lighting outside the face stay consistent with the original video, preserving continuity and avoiding unwanted artifacts.


Inputs and Outputs

Required inputs

  • audio: The target voice or singing track (locally recorded or generated). Duration of this audio controls billing and should roughly match the video length.

  • video: The source video whose character(s) will be lip-synced to the audio.

Output

  • A new video where the character’s lip movements and facial micro-motions are synchronized with the provided audio while the rest of the frame remains unchanged.

Pricing

Billing is based on the audio duration, at a rate of $0.15 per 5 seconds, with a 5-second minimum and a cap at 600 seconds.

From this:

  • Effective price per second: $0.15 / 5 = $0.03 per second
  • Minimum billed duration: 5 seconds → minimum charge $0.15
  • Maximum billed duration: 600 seconds → maximum charge $18.00

Summary table:

| Metric | Value |
|---|---|
| Price per second | $0.030 |
| Minimum billed duration | 5 seconds |
| Minimum total price | $0.15 |
| Maximum billed duration | 600 seconds |
| Maximum total price per run | $18.00 |

Example costs:

| Audio length | Billed seconds | Total price |
|---|---|---|
| 4 s | 5 s | $0.15 |
| 10 s | 10 s | $0.30 |
| 60 s | 60 s | $1.80 |
| 180 s | 180 s | $5.40 |
| 600 s | 600 s | $18.00 |
| 900 s | 600 s (capped) | $18.00 |
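
As a quick sanity check, the billing rule above can be written as a small shell sketch (illustrative only; the variable names are not part of the API):

# Estimate the cost for a given audio duration: clamp to [5, 600] seconds, then multiply by $0.03/s.
audio_seconds=47   # example input duration

billed=$(( audio_seconds < 5 ? 5 : audio_seconds ))
billed=$(( billed > 600 ? 600 : billed ))

cost=$(awk -v s="$billed" 'BEGIN { printf "%.2f", s * 0.03 }')
echo "Billed seconds: ${billed}, estimated cost: \$${cost}"
# -> Billed seconds: 47, estimated cost: $1.41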

How to Use

  1. Upload or paste the URL of your audio track.
  2. Upload or paste the URL of the video to be lip-synced.
  3. Ensure the audio length is close to the video duration for best results.
  4. Click Run.
  5. Preview the output video, then download it for editing or publishing.

Tips for Best Quality

  • Use clean, high-quality audio without heavy background noise or music.
  • Keep the audio and video durations aligned; large mismatches can cause timing drift.
  • Choose videos with clear views of the speaker’s face and minimal occlusions.
  • Avoid extreme motion blur or rapid cuts during key speaking moments.

Note

  • Please note that both the input audio and video must be at least 2 seconds long; otherwise the task will fail with an error.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task (the audio/video URLs below are placeholders; replace them with your own publicly accessible files)
curl --location --request POST "https://api.wavespeed.ai/api/v3/kwaivgi/kling-lipsync/audio-to-video" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "audio": "https://example.com/voiceover.mp3",
    "video": "https://example.com/talking-head.mp4"
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| audio | string | Yes | - | - | The URL pointing to the audio file that will be used for generating synchronized lip movements. Supported audio file formats: .mp3/.wav/.m4a/.aac, with a maximum file size of 5MB. |
| video | string | Yes | - | - | The URL of the video file for generating synchronized lip movements. Video files support .mp4/.mov, the file size must not exceed 100MB, the video length must be between 2s and 10s, only 720p and 1080p are supported, and both width and height should be between 720px and 1920px. |
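
Since these constraints are enforced server-side, a rough local pre-flight check can save a failed submission. A sketch assuming ffprobe (from FFmpeg) and GNU stat are installed (use stat -f%z on macOS):

# Rough pre-flight check of a local video against the documented constraints (illustrative only).
video_file="talking-head.mp4"

duration=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 "$video_file")
size_mb=$(( $(stat -c%s "$video_file") / 1024 / 1024 ))

awk -v d="$duration" 'BEGIN { exit !(d >= 2 && d <= 10) }' \
  || echo "Warning: video length must be between 2s and 10s (got ${duration}s)"
[ "$size_mb" -le 100 ] || echo "Warning: video must be 100MB or smaller (got ${size_mb}MB)"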

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the ID of the prediction to get) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
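
Once data.status reports completed, the first entry of data.outputs holds the finished video. A short sketch (assuming jq is installed) that downloads it and prints the reported inference time:

# Fetch the result and download the finished video once the task has completed.
result=$(curl -s "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
  -H "Authorization: Bearer ${WAVESPEED_API_KEY}")

if [ "$(echo "$result" | jq -r '.data.status')" = "completed" ]; then
  output_url=$(echo "$result" | jq -r '.data.outputs[0]')
  curl -sL "$output_url" -o lipsync-output.mp4
  echo "Saved lipsync-output.mp4 (inference: $(echo "$result" | jq -r '.data.timings.inference') ms)"
fi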