
Bytedance Seedance 2.0 Video Edit


Seedance 2.0 (Video-Edit) edits an input video from a natural-language prompt. The reference video drives subject identity, composition, and motion while the model rewrites lighting, style, weather, environment, or specific elements as instructed. Built on ByteDance Seed’s unified multimodal architecture for cinematic, motion-stable output.



Key Features

  • Conversational video editing — Describe the change in plain language; the model rewrites the scene while keeping the original motion intact.
  • Subject and motion preservation — Faces, objects, and camera movement from the input video stay consistent through the edit.
  • Multi-reference support — Optionally guide style, character identity, or audio with reference images and audio clips.
  • Native audio synchronization — Generates synchronized audio in a single pass.
  • Cinematic output quality — Director-level lighting, framing, and motion stability inherited from Seedance 2.0.

Parameters

| Parameter | Required | Description |
|---|---|---|
| `prompt` | Yes | Describe the edit. The prefix `Edit the input video.` is added automatically. |
| `video` | Yes | Input video URL. Videos longer than 15 s are trimmed to 15 s. |
| `reference_images` | No | Optional reference images for style or character guidance. |
| `reference_audios` | No | Optional reference audio for audio guidance. |
| `duration` | No | Output length in seconds (4-15). Auto-detected from the input video if not specified. |
| `aspect_ratio` | No | `16:9`, `9:16`, `4:3`, `3:4`, `1:1`, `21:9`. Adapts to the input if not specified. |
| `resolution` | No | `480p`, `720p` (default), or `1080p`. |
| `enable_web_search` | No | Enable web search for real-time context. |

How to Use

  1. Upload the input video. Anything longer than 15 s is trimmed to the first 15 s automatically.
  2. Write the edit prompt. Describe the change you want — the prefix Edit the input video. is added for you.
  3. (Optional) Add references. Reference images can constrain style or identity; reference audio can constrain the soundtrack.
  4. (Optional) Set a duration. Auto-detected from the input video length if not provided.
  5. Run. Receive the edited video with synchronized audio.
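The steps above map onto a single JSON request body. A minimal sketch in Python of assembling that body (the prompt text and video URL are placeholders, and `build_edit_request` is an illustrative helper, not part of any SDK):

```python
import json

def build_edit_request(prompt, video_url, duration=None, resolution="720p",
                       reference_images=None, reference_audios=None):
    """Assemble the request body for the video-edit endpoint.

    Only `prompt` and `video` are required; optional fields are omitted
    entirely so the API applies its documented defaults (auto-detected
    duration, adaptive aspect ratio, 720p resolution).
    """
    body = {"prompt": prompt, "video": video_url, "resolution": resolution}
    if duration is not None:
        body["duration"] = duration
    if reference_images:
        body["reference_images"] = reference_images
    if reference_audios:
        body["reference_audios"] = reference_audios
    return json.dumps(body)

payload = build_edit_request("Make it snow heavily", "https://example.com/clip.mp4")
```

Leaving unset fields out of the payload, rather than sending nulls, lets the service fall back to its own defaults.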

Pricing

Billing is per second, applied to input duration plus output duration. Input duration is capped at 15 s.

| Resolution | Per second |
|---|---|
| 480p | $0.075 |
| 720p | $0.15 |
| 1080p | $0.375 |

Examples (input 5 s, output 5 s = 10 billed seconds):

| Resolution | Cost |
|---|---|
| 480p | $0.75 |
| 720p | $1.50 |
| 1080p | $3.75 |

Examples (input 12 s, output 12 s = 24 billed seconds):

| Resolution | Cost |
|---|---|
| 480p | $1.80 |
| 720p | $3.60 |
| 1080p | $9.00 |
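The billing rule above is simple enough to check in a few lines. A sketch with the rates copied from the table (not an official billing client):

```python
RATES = {"480p": 0.075, "720p": 0.15, "1080p": 0.375}  # USD per billed second

def edit_cost(input_seconds, output_seconds, resolution="720p"):
    """Cost = (input + output) seconds x per-second rate.

    Input duration is capped at 15 s, matching the trimming behavior.
    """
    billed = min(input_seconds, 15) + output_seconds
    return round(billed * RATES[resolution], 4)

edit_cost(5, 5, "720p")     # -> 1.5, matching the first example table
edit_cost(12, 12, "1080p")  # -> 9.0, matching the second example table
```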

Best Use Cases

  • Style and look transfer — Re-grade footage into a cinematic, vintage, animated, or stylized look.
  • Lighting and weather edits — Change time of day, add rain or snow, swap golden hour for blue hour.
  • Object or background swaps — Replace clothing, props, or environments while keeping motion intact.
  • Marketing variants — Generate quick variations of an existing ad clip without reshooting.

Pro Tips

  • Be specific about what should change and what should stay the same.
  • Mention lighting, mood, color palette, and camera intent for stronger output.
  • Use reference images when you need to lock in a particular character or style.
  • Trim your source video to the most relevant 4-15 s before uploading for the strongest edits.

Notes

  • Inputs longer than 15 s are trimmed to 15 s before editing; billing reflects the trimmed length, not the uploaded length.
  • Auto-detected duration matches the input length (rounded up to a whole second), clamped to the 4-15 s range.
  • Native audio generation is included.
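The auto-detection rule in the notes (round up, clamp to 4-15 s) can be written out directly. This is an illustration of the stated rule, not the service's actual code:

```python
import math

def auto_duration(input_seconds):
    """Round the input length up to a whole second, then clamp to [4, 15]."""
    return max(4, min(15, math.ceil(input_seconds)))

auto_duration(2.3)  # -> 4  (short clips are padded up to the 4 s minimum)
auto_duration(9.2)  # -> 10
auto_duration(30)   # -> 15 (matches the 15 s input trim)
```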

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task ("prompt" and "video" are required; the URL below is a placeholder)
curl --location --request POST "https://api.wavespeed.ai/api/v3/bytedance/seedance-2.0/video-edit" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "Change the scene to golden-hour lighting",
    "video": "https://example.com/input.mp4",
    "resolution": "720p",
    "enable_web_search": false
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
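After submission, you poll the result endpoint until `data.status` reaches `completed` or `failed`. A minimal polling helper, written against an injected `fetch` callable (which stands in for an HTTP GET of `/api/v3/predictions/{id}/result` returning parsed JSON) so the retry logic itself is self-contained:

```python
import time

def wait_for_result(fetch, request_id, interval=2.0, timeout=120.0):
    """Poll fetch(request_id) until the task completes, fails, or times out.

    Returns data.outputs on success; raises on failure or timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fetch(request_id)["data"]
        if data["status"] == "completed":
            return data["outputs"]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "task failed")
        time.sleep(interval)
    raise TimeoutError(f"task {request_id} did not finish in {timeout}s")
```

Injecting `fetch` keeps the loop independent of any particular HTTP library; in practice it would wrap the GET request shown above.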

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| `prompt` | string | Yes | - | - | Describe the edit you want applied to the input video. The prefix "Edit the input video." is added automatically. |
| `video` | string | Yes | - | - | URL of the input video to edit. Videos longer than 15 s are trimmed to 15 s. |
| `reference_images` | array | No | - | - | Optional reference image URLs to guide the edit (subject identity, style, etc.). |
| `reference_audios` | array | No | - | - | Optional reference audio URLs to guide audio generation. |
| `aspect_ratio` | string | No | - | 16:9, 9:16, 4:3, 3:4, 1:1, 21:9 | Aspect ratio of the output video. Adapts to the input if not specified. |
| `resolution` | string | No | 720p | 480p, 720p, 1080p | Output video resolution. |
| `duration` | integer | No | - | 4 ~ 15 | Output video length in seconds. Auto-detected from the input video if not specified. |
| `enable_web_search` | boolean | No | false | - | Enable web search for real-time information. |

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| `code` | integer | HTTP status code (e.g., 200 for success) |
| `message` | string | Status message (e.g., "success") |
| `data.id` | string | Unique identifier for the prediction (task ID) |
| `data.model` | string | Model ID used for the prediction |
| `data.outputs` | array | Array of URLs to the generated content (empty when status is not `completed`) |
| `data.urls` | object | Object containing related API endpoints |
| `data.urls.get` | string | URL to retrieve the prediction result |
| `data.has_nsfw_contents` | array | Array of boolean values indicating NSFW detection for each output |
| `data.status` | string | Status of the task: `created`, `processing`, `completed`, or `failed` |
| `data.created_at` | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| `data.error` | string | Error message (empty if no error occurred) |
| `data.timings` | object | Object containing timing details |
| `data.timings.inference` | integer | Inference time in milliseconds |

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `id` | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| `code` | integer | HTTP status code (e.g., 200 for success) |
| `message` | string | Status message (e.g., "success") |
| `data` | object | The prediction data object containing all details |
| `data.id` | string | Unique identifier for the prediction (the task ID that was queried) |
| `data.model` | string | Model ID used for the prediction |
| `data.outputs` | array | Array of URLs to the generated content (empty when status is not `completed`) |
| `data.urls` | object | Object containing related API endpoints |
| `data.urls.get` | string | URL to retrieve the prediction result |
| `data.status` | string | Status of the task: `created`, `processing`, `completed`, or `failed` |
| `data.created_at` | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| `data.error` | string | Error message (empty if no error occurred) |
| `data.timings` | object | Object containing timing details |
| `data.timings.inference` | integer | Inference time in milliseconds |
© 2025 WaveSpeedAI. All rights reserved.