ByteDance Seedance V1 Lite Reference-to-Video
Playground
Try it on WavespeedAI! ByteDance Seedance V1 Lite converts 1 to 4 reference images into high-quality videos with reference-to-video generation. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
ByteDance Seedance V1 Lite — Reference-to-Video
Seedance V1 Lite Reference-to-Video is an advanced multi-reference video generation model from ByteDance that creates videos featuring subjects from your reference images. Upload up to 4 reference images of people, animals, or objects, and the model generates a coherent video with those subjects interacting based on your prompt.
Why It Stands Out
- Multi-reference support: Add up to 4 reference images to include multiple subjects in your video.
- Subject preservation: Maintains the identity and appearance of subjects from reference images.
- Prompt-guided action: Describe how subjects should interact or move in the generated video.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
- Camera control: Option to fix camera position for stable, static shots.
- Flexible duration: Generate videos of varying lengths based on your needs.
- Reproducibility: Use the seed parameter to recreate exact results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of the video action and scene. |
| reference_images | Yes | 1 to 4 reference images of subjects to include (upload or URL). |
| duration | No | Video length in seconds (default: 5). |
| camera_fixed | No | Fix camera position for stable shots. |
| seed | No | Set for reproducibility; leave empty for random. |
How to Use
- Write a prompt describing the action and scene you want. Use the Prompt Enhancer for AI-assisted optimization.
- Add reference images — upload up to 4 images of subjects (people, animals, objects) to include in the video.
- Set duration — choose how long you want the video to be.
- Enable camera_fixed (optional) — check this for a stable, static camera shot.
- Set a seed (optional) for reproducible results.
- Click Run and wait for your video to generate.
- Preview and download the result.
Best Use Cases
- Character Animation — Animate portraits or character images in custom scenes.
- Pet Videos — Create fun videos featuring your pets in imaginative scenarios.
- Social Media Content — Generate engaging videos with multiple subjects interacting.
- Storytelling — Bring characters together in narrative video clips.
- Marketing & Advertising — Create product videos featuring specific items or mascots.
- Creative Projects — Combine different subjects in unique, creative compositions.
Pricing
| Duration | Price |
|---|---|
| 5 seconds | $0.18 |
| 10 seconds | $0.36 |
| 15 seconds | $0.54 |
| 20 seconds | $0.72 |
Total cost = $0.18 × (duration / 5)
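As a check against the table: a 15-second video costs $0.18 × (15 / 5) = $0.54. Assuming the same formula applies to any supported duration, an 8-second video would come to $0.18 × (8 / 5) = $0.288.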
Pro Tips for Best Quality
- Use clear, high-quality reference images in which the subjects are fully visible.
- For best results, use front-facing or 3/4 angle reference photos.
- Be specific in your prompt about how subjects should interact.
- Enable camera_fixed for professional-looking static shots.
- When using multiple subjects, describe their relative positions and actions clearly.
- Fix the seed when iterating to compare different prompt variations.
Notes
- Maximum of 4 reference images per generation.
- Ensure uploaded image URLs are publicly accessible.
- Processing time varies based on duration and current queue load.
- Please ensure your content complies with usage guidelines.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/bytedance/seedance-v1-lite/reference-to-video" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "A description of the video action and scene",
    "reference_images": ["https://example.com/subject-1.jpg"],
    "duration": 5,
    "camera_fixed": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| reference_images | array | Yes | - | 1 ~ 4 items | A list of reference images for the subjects to include. At least 1 image is required; maximum 4 images. |
| duration | integer | No | 5 | 2 ~ 12 | The duration of the generated media in seconds. |
| camera_fixed | boolean | No | false | - | Whether to fix the camera position. |
| seed | integer | No | - | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
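As an illustration, a request body that exercises every parameter above (the image URLs and seed are placeholder values, and the duration stays inside the 2 ~ 12 second range):
{
  "prompt": "Two friends walking a golden retriever along a beach at sunset",
  "reference_images": [
    "https://example.com/person-1.jpg",
    "https://example.com/person-2.jpg",
    "https://example.com/dog.jpg"
  ],
  "duration": 8,
  "camera_fixed": true,
  "seed": 42
}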
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
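For orientation, an illustrative submission response with placeholder values (the id, model string, and timestamp are not from a live request):
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "0123456789abcdef",
    "model": "bytedance/seedance-v1-lite/reference-to-video",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/0123456789abcdef/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {}
  }
}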
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used to fetch this result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
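And an illustrative result response for a completed task (the output URL, id, and timing value are placeholders):
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "0123456789abcdef",
    "model": "bytedance/seedance-v1-lite/reference-to-video",
    "outputs": [
      "https://example.com/outputs/0123456789abcdef.mp4"
    ],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/0123456789abcdef/result"
    },
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {
      "inference": 42000
    }
  }
}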