Vidu Start End to Video 2.0
Create dynamic videos using just the first and last frame images, enhanced with text descriptions for seamless storytelling.
Features
Vidu Start End to Video 2.0 creates a coherent video by adding motion between the start and end frames, making it an effective tool for scene transitions and storytelling.
Key Features
- Bi-frame guided synthesis
- Strong narrative continuity
- Object-aware and human-aware motion interpolation
- Adaptive to camera movement and layout shifts
ComfyUI
Vidu Start End to Video 2.0 is available in ComfyUI, providing local inference through a node-based workflow for flexible and efficient video generation on your system.
Use Cases
- Storyboarding and concept animation
- Scene interpolation in long-form content
- Instructional visual sequences
- Film previsualization
Accelerated Inference
Our accelerated inference approach leverages advanced optimization technology from WavespeedAI. This innovative fusion technique significantly reduces computational overhead and latency, enabling rapid video generation without compromising quality. The system is designed to efficiently handle large-scale inference tasks while ensuring that real-time applications achieve an optimal balance between speed and accuracy. For further details, please refer to the blog post.
Authentication
For authentication details, please refer to the Authentication Guide.
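The curl examples in the next section reference a WAVESPEED_API_KEY environment variable. A minimal setup, assuming you export the key in your shell before running them:
# Export your API key so that ${WAVESPEED_API_KEY} resolves in the requests below
export WAVESPEED_API_KEY="your-api-key"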
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/vidu/start-end-to-video-2.0" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"images": [
"https://d2g64w682n9w0w.cloudfront.net/media/images/1745494594983907143_liqlhd9u.jpg",
"https://d2g64w682n9w0w.cloudfront.net/media/images/1745494607637805608_31gIDzwr.jpg"
],
"prompt": "the iron man transform into the sport car ",
"duration": 4,
"movement_amplitude": "auto",
"seed": 0
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
Parameter | Type | Required | Default | Range | Description |
---|---|---|---|---|---|
images | array | Yes | https://d2g64w682n9w0w.cloudfront.net/media/images/1745494594983907143_liqlhd9u.jpg, https://d2g64w682n9w0w.cloudfront.net/media/images/1745494607637805608_31gIDzwr.jpg | - | Supports input of two images: the first uploaded image is treated as the start frame and the second as the end frame. The model uses these images to generate the video. Constraints: only 2 images are accepted; the pixel density of the start and end frames should be similar (the pixel count of the start frame divided by that of the end frame should be between 0.8 and 1.25); image assets can be provided via URLs or Base64 encoding; supported formats are PNG, JPEG, JPG, and WebP; the aspect ratio of each image must be less than 1:4 or 4:1; each image is limited to 50MB; decoded Base64 data must be under 50MB and must include an appropriate content type string. |
prompt | string | Yes | the iron man transform into the sport car | - | Text prompt: A textual description for video generation, with a maximum length of 1500 characters. |
duration | integer | No | 4 | 4, 8 | Duration of the output video in seconds. Defaults to 4; accepted values: 4, 8. |
movement_amplitude | string | No | auto | auto, small, medium, large | The movement amplitude of objects in the frame. Defaults to auto; accepted values: auto, small, medium, large. |
seed | integer | No | - | -1 ~ 9999999999 | Random seed. Defaults to a random value; a manually set value overrides the default random seed. |
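As noted above, image assets may also be supplied as Base64 data with a content type string. The following is a minimal sketch, assuming data-URI-style values are accepted and using hypothetical local files start.jpg and end.jpg:
# Hypothetical example: submit local files as Base64 data URIs instead of URLs
START=$(base64 -w 0 start.jpg)   # on macOS, use: base64 -i start.jpg
END=$(base64 -w 0 end.jpg)
curl --location --request POST "https://api.wavespeed.ai/api/v3/vidu/start-end-to-video-2.0" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  --data-raw "{
    \"images\": [
      \"data:image/jpeg;base64,${START}\",
      \"data:image/jpeg;base64,${END}\"
    ],
    \"prompt\": \"the iron man transform into the sport car\",
    \"duration\": 4
  }"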
Response Parameters
Parameter | Type | Description |
---|---|---|
code | integer | HTTP status code (e.g., 200 for success) |
message | string | Status message (e.g., “success”) |
data.id | string | Unique identifier for the prediction (task ID) |
data.model | string | Model ID used for the prediction |
data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
data.urls | object | Object containing related API endpoints |
data.urls.get | string | URL to retrieve the prediction result |
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
data.status | string | Status of the task: created, processing, completed, or failed |
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
data.error | string | Error message (empty if no error occurred) |
data.timings | object | Object containing timing details |
data.timings.inference | integer | Inference time in milliseconds |
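For orientation, a newly submitted task returns a body shaped roughly like the following; all values are illustrative placeholders.
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "<model-id>",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {}
  }
}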
Result Query Parameters
Result Request Parameters
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
id | string | Yes | - | Task ID |
Result Response Parameters
Parameter | Type | Description |
---|---|---|
code | integer | HTTP status code (e.g., 200 for success) |
message | string | Status message (e.g., “success”) |
data | object | The prediction data object containing all details |
data.id | string | Unique identifier for the prediction |
data.model | string | Model ID used for the prediction |
data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
data.urls | object | Object containing related API endpoints |
data.urls.get | string | URL to retrieve the prediction result |
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
data.status | string | Status of the task: created, processing, completed, or failed |
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
data.error | string | Error message (empty if no error occurred) |
data.timings | object | Object containing timing details |
data.timings.inference | integer | Inference time in milliseconds |
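Once the task reaches completed, the same structure is returned with the output URL populated, roughly as follows (placeholder values):
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task-id>",
    "model": "<model-id>",
    "outputs": [
      "https://example.com/path/to/generated-video.mp4"
    ],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
    },
    "has_nsfw_contents": [false],
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 12345
    }
  }
}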