Bytedance Lipsync Audio To Video
Playground
Try it on WavespeedAI!
Features
Bytedance LipSync is an advanced audio-to-video model that generates lifelike lip movements synchronized with a given audio input. The lip-sync feature matches the lip movements of characters in videos generated by Bytedance AI to locally recorded or online-generated dubbing or singing files, so the characters appear to be genuinely speaking or singing and the video looks lifelike.
Natural, Closely Matched Lip Movements:
The lip movements of characters in the generated video not only synchronize precisely with the audio but also create unique movement trajectories based on individual facial features and physiological structures, significantly enhancing the video’s naturalness and realism.
Clear Facial Muscle Texture:
Lip movements drive the surrounding facial muscles in real time, rendering the stretching and contraction of the muscles in detail. The result is a highly coordinated visual effect that further enhances the overall realism and immersive quality of the video.
Vivid and Lifelike Imagery:
Regions outside the face remain consistent with the original video, preserving the integrity and continuity of the footage, avoiding interference with non-target areas during generation, and restoring the original appearance of the video as faithfully as possible.
LatentSync: An End-to-End Lip-Sync Method Based on Audio-Conditioned Latent Diffusion Models
We present LatentSync, an end-to-end lip-sync method based on audio-conditioned latent diffusion models without any intermediate motion representation, diverging from previous diffusion-based lip-sync methods based on pixel-space diffusion or two-stage generation. Our framework can leverage the powerful capabilities of Stable Diffusion to directly model complex audio-visual correlations.
LatentSync uses Whisper to convert the mel spectrogram into audio embeddings, which are then integrated into the U-Net via cross-attention layers. The reference and masked frames are channel-wise concatenated with the noised latents as the input to the U-Net. During training, we use a one-step method to obtain estimated clean latents from the predicted noise, which are then decoded into estimated clean frames. The TREPA, LPIPS, and SyncNet losses are applied in pixel space.
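As a rough illustration of the input layout described above, the sketch below (PyTorch-style, with hypothetical tensor names, shapes, and channel counts) shows how reference frames, masked frames, and noised latents could be concatenated channel-wise before being fed to the U-Net, with the Whisper audio embeddings injected separately via cross-attention. This is a conceptual sketch, not the actual LatentSync implementation.

```python
import torch

# Hypothetical shapes: 16 video frames encoded into a 4-channel latent space.
B, C, H, W = 16, 4, 64, 64
noised_latents   = torch.randn(B, C, H, W)  # noisy latents at the current diffusion step
masked_frames    = torch.randn(B, C, H, W)  # latents of frames with the mouth region masked out
reference_frames = torch.randn(B, C, H, W)  # latents of unmasked reference frames

# Channel-wise concatenation -> the U-Net input has 3 * C channels.
unet_input = torch.cat([noised_latents, masked_frames, reference_frames], dim=1)

# Whisper-derived audio embeddings (hypothetical dimensions) are not concatenated here;
# they enter the U-Net through cross-attention layers, e.g.:
audio_embeddings = torch.randn(B, 50, 384)  # (frames, audio tokens, embedding dim)
# predicted_noise = unet(unet_input, timestep, encoder_hidden_states=audio_embeddings)
```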
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/bytedance/lipsync/audio-to-video" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"audio": "https://replicate.delivery/pbxt/MGZuENopzAwWcpFsZ7SwoZ7itP4gvqasswPeEJwbRHTxtkwF/demo2_audio.wav",
"video": "https://replicate.delivery/pbxt/MGZuEgzJZh6avv1LDEMppJZXLP9avGXqRuH7iAb7MBAz0Wu4/demo2_video.mp4"
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
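The same calls can be made from Python. The sketch below assumes the `requests` library and a `WAVESPEED_API_KEY` environment variable; the endpoint and payload mirror the curl example above.

```python
import os
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]
SUBMIT_URL = "https://api.wavespeed.ai/api/v3/bytedance/lipsync/audio-to-video"

payload = {
    "audio": "https://replicate.delivery/pbxt/MGZuENopzAwWcpFsZ7SwoZ7itP4gvqasswPeEJwbRHTxtkwF/demo2_audio.wav",
    "video": "https://replicate.delivery/pbxt/MGZuEgzJZh6avv1LDEMppJZXLP9avGXqRuH7iAb7MBAz0Wu4/demo2_video.mp4",
}

# Submit the task; the response body follows the "Response Parameters" table below.
response = requests.post(
    SUBMIT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
result = response.json()
request_id = result["data"]["id"]
print("Task submitted, request id:", request_id)
```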
Parameters
Task Submission Parameters
Request Parameters
Parameter | Type | Required | Default | Range | Description |
---|---|---|---|---|---|
audio | string | Yes | https://replicate.delivery/pbxt/MGZuENopzAwWcpFsZ7SwoZ7itP4gvqasswPeEJwbRHTxtkwF/demo2_audio.wav | - | The URL of the audio file used to generate synchronized lip movements. |
video | string | Yes | - | - | The URL of the video file whose lip movements will be synchronized with the audio. |
Response Parameters
Parameter | Type | Description |
---|---|---|
code | integer | HTTP status code (e.g., 200 for success) |
message | string | Status message (e.g., “success”) |
data.id | string | Unique identifier for the prediction (task ID) |
data.model | string | Model ID used for the prediction |
data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
data.urls | object | Object containing related API endpoints |
data.urls.get | string | URL to retrieve the prediction result |
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
data.status | string | Status of the task: created, processing, completed, or failed |
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
data.error | string | Error message (empty if no error occurred) |
data.timings | object | Object containing timing details |
data.timings.inference | integer | Inference time in milliseconds |
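Continuing the Python sketch above, the fields in this table can be read directly from the parsed submission response; the field names follow the table, while the variable names are illustrative.

```python
data = result["data"]

print("status:", data["status"])            # created / processing / completed / failed
print("model:", data["model"])
print("result endpoint:", data["urls"]["get"])

if data["error"]:
    # Non-empty only when the task has failed.
    print("error:", data["error"])
```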
Result Query Parameters
Result Request Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
id | string | Yes | - | Task ID |
Result Response Parameters
Parameter | Type | Description |
---|---|---|
code | integer | HTTP status code (e.g., 200 for success) |
message | string | Status message (e.g., “success”) |
data | object | The prediction data object containing all details |
data.id | string | Unique identifier for the prediction (the requested task ID) |
data.model | string | Model ID used for the prediction |
data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
data.urls | object | Object containing related API endpoints |
data.urls.get | string | URL to retrieve the prediction result |
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
data.status | string | Status of the task: created, processing, completed, or failed |
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
data.error | string | Error message (empty if no error occurred) |
data.timings | object | Object containing timing details |
data.timings.inference | integer | Inference time in milliseconds |
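Continuing the Python sketch from the submission example above, a simple polling loop against the result endpoint might look like the following; the polling interval is an arbitrary choice, not an API requirement.

```python
import time

RESULT_URL = f"https://api.wavespeed.ai/api/v3/predictions/{request_id}/result"

while True:
    poll = requests.get(RESULT_URL, headers={"Authorization": f"Bearer {API_KEY}"})
    poll.raise_for_status()
    data = poll.json()["data"]

    if data["status"] == "completed":
        print("Output URLs:", data["outputs"])
        print("Inference time (ms):", data["timings"]["inference"])
        break
    if data["status"] == "failed":
        raise RuntimeError(f"Task failed: {data['error']}")

    time.sleep(2)  # arbitrary polling interval
```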