Vidu Start End To Video 2.0

Playground

Try it on WavespeedAI!

Vidu Start-End to Video 2.0 generates smooth transition videos by interpolating between a given start image and end image for natural morphing effects. Ready-to-use REST API, best performance, no cold starts, affordable pricing.

Features

Vidu Start-End-to-Video 2.0 — vidu/start-end-to-video-2.0

Vidu Start-End-to-Video 2.0 generates a short video that transitions from a start image to an end image, guided by a text prompt. You provide the first frame (image) and the final target frame (last_image), then describe the transformation story in the prompt. The model fills in the in-between motion and visual continuity, making it ideal for morphs, product reveals, character transformations, and “before → after” storytelling. You can control how dramatic the motion is with movement_amplitude.

Key capabilities

  • Start-to-end guided video generation (image → last_image)
  • Prompt-driven transformation with controllable motion intensity
  • Designed for smooth visual transitions and narrative continuity
  • Movement amplitude control: auto / small / medium / large
  • Seed control for reproducible variations

Use cases

  • Character transformation and morph sequences (human → creature, outfit swaps, style shifts)
  • Product reveals (closed box → unboxed product, sketch → final render)
  • Before/after transitions (room makeover, color grading shift, weather change)
  • Logo or brand animations (static mark → dynamic finish frame)
  • Short cinematic transitions for ads and social clips

Pricing

Output | Price per run
Start-End-to-Video (per generation) | $0.30

Inputs

  • prompt (required): describe the transition and what happens in-between
  • image (required): start image (first frame)
  • last_image (required): end image (final frame)

Parameters

  • movement_amplitude: motion intensity (auto, small, medium, large)
  • seed: random seed (-1 for random; fixed for reproducible results)
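
These inputs and parameters map directly onto the JSON request body. As a quick Python sketch (the image URLs and prompt text are placeholders, not real assets):

# Illustrative request body; field names follow the lists above.
payload = {
    "prompt": "A closed box unfolds into a fully revealed product, smooth mechanical motion, fixed camera.",
    "image": "https://example.com/start-frame.jpg",      # start image (first frame), placeholder URL
    "last_image": "https://example.com/end-frame.jpg",   # end image (final frame), placeholder URL
    "movement_amplitude": "auto",                        # auto | small | medium | large
    "seed": -1,                                          # -1 = random seed
}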

Prompting guide (start → end)

Write prompts that explain how the start becomes the end:

  • Describe the transformation mechanism (morphing, unfolding, dissolving, assembling)
  • Call out key beats (sparks, parts shifting, panels opening, smoke/energy)
  • Mention camera intent (fixed shot, slow push-in) if you need it
  • Keep continuity constraints clear (same framing, stable background, consistent lighting)

Template: Fixed camera. Start as [start]. The subject transforms by [mechanism] into [end]. Smooth transition, coherent motion, consistent lighting, no flicker.
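
If you assemble prompts programmatically, the template can be filled in with simple string formatting; a minimal sketch (the start, mechanism, and end descriptions are illustrative):

start = "a rough pencil sketch of a city skyline"
mechanism = "color and detail flooding in stroke by stroke"
end = "a polished, fully rendered illustration"

# Follows the template above: Fixed camera. Start as [start]. ... into [end].
prompt = (
    f"Fixed camera. Start as {start}. The subject transforms by {mechanism} "
    f"into {end}. Smooth transition, coherent motion, consistent lighting, no flicker."
)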

Example prompts

  • A metal suit transforms into a sleek sports car through mechanical folding panels, glowing arc energy, parts assembling smoothly, fixed camera, cinematic lighting, seamless morph from start to end.
  • A sketch drawing gradually fills with color and detail, turning into a polished illustration, gentle dust particles, slow push-in, smooth transition.
  • A winter landscape melts into spring: snow dissolves, grass emerges, flowers bloom, warm sunlight grows stronger, smooth continuous motion.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task (replace the image URLs with your own start and end frames)
curl --location --request POST "https://api.wavespeed.ai/api/v3/vidu/start-end-to-video-2.0" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "A sketch drawing gradually fills with color and detail, turning into a polished illustration, slow push-in, smooth transition.",
    "image": "https://example.com/start-frame.jpg",
    "last_image": "https://example.com/end-frame.jpg",
    "movement_amplitude": "auto",
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
prompt | string | Yes | - | - | The positive prompt for the generation.
image | string | Yes | - | - | The start image for generating the output.
last_image | string | Yes | - | - | The end image for generating the output.
movement_amplitude | string | No | auto | auto, small, medium, large | The movement amplitude of objects in the frame.
seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used.

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data.id | string | Unique identifier for the prediction (Task Id)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
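
As an illustration, the submission response can be consumed like this (a sketch, not an official client; resp is the POST response from the Python example above):

body = resp.json()
assert body["code"] == 200 and body["message"] == "success"

task_id = body["data"]["id"]               # Task Id to query later
result_url = body["data"]["urls"]["get"]   # ready-made URL for retrieving the result
print(task_id, body["data"]["status"], result_url)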

Result Request Parameters

Parameter | Type | Required | Default | Description
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the ID of the prediction to get)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
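
Once data.status is completed, the URLs in data.outputs can be downloaded directly; a minimal sketch (data is the result object from the polling example above, and the output filename and .mp4 extension are arbitrary choices):

import requests

video_bytes = requests.get(data["outputs"][0]).content
with open("vidu_start_end_output.mp4", "wb") as f:   # local filename is an assumption
    f.write(video_bytes)
print("Inference time (ms):", data["timings"]["inference"])
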
© 2025 WaveSpeedAI. All rights reserved.