Wan 2.1 I2V 720p LoRA
Playground
Try it on WavespeedAI! Wan 2.1 i2v-720p generates image-to-video outputs at 720p and supports custom LoRA adapters for personalized styles and fine-tuning. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
Wan 2.1 Image-to-Video 720p LoRA
Wan 2.1 Image-to-Video 720p LoRA is a powerful image-to-video generation model that transforms static images into dynamic 720p HD videos. With full LoRA support, apply custom styles, artistic effects, or consistent character appearances to create unique animated content.
Why It Stands Out
- Image-driven generation: Animate any image while preserving its original style and composition.
- LoRA support: Apply custom LoRA models for specific styles, characters, or aesthetics.
- Prompt-guided motion: Describe camera movements, actions, and atmospheric effects.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
- Negative prompt support: Exclude unwanted elements for cleaner outputs.
- HD 720p output: Generate crisp 1280×720 videos with rich detail.
- Fine-tuned control: Adjust guidance scale and flow shift for precise results.
- Reproducibility: Use the seed parameter to recreate exact results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| image | Yes | Source image to animate (upload or public URL). |
| prompt | Yes | Text description of desired motion and style. |
| negative_prompt | No | Elements to avoid in the output. |
| loras | No | LoRA models to apply (path and scale). |
| size | No | Output resolution (default: 1280×720). |
| num_inference_steps | No | Quality/speed trade-off (default: 30). |
| duration | No | Video length: 5 or 10 seconds (default: 5). |
| guidance_scale | No | Prompt adherence strength (default: 5). |
| flow_shift | No | Motion flow control (default: 5). |
| seed | No | Set for reproducibility; -1 for random. |
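As a sketch, the defaults in the table above can be merged with a caller's required fields before submission. Note that `apply_defaults` is a hypothetical helper for illustration, not part of any official SDK; the parameter names and default values come from the table.

```python
# Documented defaults from the parameters table above.
DEFAULTS = {
    "size": "1280*720",
    "num_inference_steps": 30,
    "duration": 5,
    "guidance_scale": 5,
    "flow_shift": 5,
    "seed": -1,  # -1 requests a random seed
}

def apply_defaults(request: dict) -> dict:
    """Merge caller-supplied parameters over the documented defaults.

    Hypothetical helper: raises if the two required fields are missing.
    """
    if "image" not in request or "prompt" not in request:
        raise ValueError("image and prompt are required")
    return {**DEFAULTS, **request}
```

For example, `apply_defaults({"image": "https://example.com/cat.jpg", "prompt": "slow zoom in"})` yields a full request body with the documented defaults filled in.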
How to Use
- Upload your source image — drag and drop a file or paste a public URL.
- Write a prompt describing the motion and action you want. Use the Prompt Enhancer for AI-assisted optimization.
- Add LoRAs (optional) — select LoRA models and adjust their scale.
- Add a negative prompt (optional) — specify elements to exclude.
- Adjust parameters — set duration, guidance scale, and other settings as needed.
- Click Run and wait for your video to generate.
- Preview and download the result.
How to Use LoRA
LoRA (Low-Rank Adaptation) lets you apply custom styles without retraining the full model.
- Add LoRA: Enter the LoRA path and adjust the scale (the API accepts 0.0–4.0; a good starting point is 0.7–1.0).
- Recommended LoRAs: Check the interface for suggested LoRAs with preview images.
- Scale adjustment: Higher scale means stronger style effect.
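The `loras` parameter is a list of up to three `{path, scale}` objects. A minimal sketch of assembling one, assuming the 0.0–4.0 scale range from the API parameter table (`make_lora` is a hypothetical helper, not part of the API):

```python
# Hypothetical helper illustrating the loras parameter shape.
def make_lora(path: str, scale: float = 1.0) -> dict:
    """Build one entry of the loras array, validating the documented range."""
    if not 0.0 <= scale <= 4.0:
        raise ValueError("scale must be between 0.0 and 4.0")
    return {"path": path, "scale": scale}

# e.g. one style LoRA at full strength (path taken from the curl example below)
loras = [make_lora("Remade-AI/Crush", 1.0)]
```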
Best Use Cases
- Style Transfer — Convert images to anime, cartoon, or artistic video styles.
- Creative Animation — Apply unique visual effects like crush, melt, or transform.
- Social Media Content — Turn photos into engaging video posts.
- Marketing & Advertising — Animate product images with custom brand styles.
- Artistic Projects — Create unique animated content with specific aesthetics.
Pricing
| Duration | Price |
|---|---|
| 5 seconds | $0.30 |
| 10 seconds | $0.45 |
Pro Tips for Best Quality
- Use high-resolution, well-lit source images for optimal results.
- Be specific in your prompt — describe the action, motion, and effects you want.
- Start with LoRA scale around 0.7–1.0 and adjust based on results.
- Use negative prompts to reduce artifacts like blur, distortion, or unwanted motion.
- Check recommended LoRAs for inspiration and proven style effects.
- Fix the seed when iterating to compare different parameter settings.
Notes
- Ensure uploaded image URLs are publicly accessible.
- Higher num_inference_steps produces better quality but increases generation time.
- Processing time varies based on parameters and current queue load.
- Please ensure your content complies with usage guidelines.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/i2v-720p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/source-image.jpg",
    "prompt": "The subject slowly turns toward the camera",
    "loras": [
        {
            "path": "Remade-AI/Crush",
            "scale": 1
        }
    ],
    "size": "1280*720",
    "num_inference_steps": 30,
    "duration": 5,
    "guidance_scale": 5,
    "flow_shift": 5,
    "seed": -1
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
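The submit-and-poll flow shown in the curl commands above can be sketched in Python using only the standard library. The endpoint paths and response fields come from this page; the helper names and the 2-second poll interval are illustrative choices, not prescribed by the API.

```python
import json
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"

def build_request(image_url, prompt, loras=None):
    """Assemble the JSON body for the submit call (required fields plus optional loras)."""
    body = {"image": image_url, "prompt": prompt}
    if loras:
        body["loras"] = loras
    return body

def _call(url, api_key, data=None):
    """POST json if data is given, otherwise GET; return the parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(data).encode() if data is not None else None,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def generate(image_url, prompt, api_key):
    """Submit a task, then poll the result endpoint until it completes or fails."""
    submit = _call(f"{API_BASE}/wavespeed-ai/wan-2.1/i2v-720p-lora",
                   api_key, build_request(image_url, prompt))
    task_id = submit["data"]["id"]
    while True:
        result = _call(f"{API_BASE}/predictions/{task_id}/result", api_key)
        status = result["data"]["status"]
        if status == "completed":
            return result["data"]["outputs"][0]  # URL of the generated video
        if status == "failed":
            raise RuntimeError(result["data"]["error"])
        time.sleep(2)  # illustrative poll interval
```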
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | Yes | - | - | The image for generating the output. |
| prompt | string | Yes | - | - | Text description of the desired motion and style. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| loras | array | No | - | max 3 items | The LoRA weights for generating the output. |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| size | string | No | 1280*720 | 1280*720, 720*1280 | The size of the generated media in pixels (width*height). |
| num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps to perform. |
| duration | integer | No | 5 | 5 ~ 10 | The duration of the generated media in seconds. |
| guidance_scale | number | No | 5 | 0.00 ~ 20.00 | The guidance scale to use for the generation. |
| flow_shift | number | No | 5 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
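A sketch of reading a response shaped like the table above. The `sample` values are placeholders for illustration, and `extract_output` is a hypothetical helper; only the field names and the status values (`created`, `processing`, `completed`, `failed`) come from this page.

```python
# Illustrative response shaped like the table above (placeholder values).
sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "task-123",  # placeholder task ID
        "status": "completed",
        "outputs": ["https://example.com/video.mp4"],
        "has_nsfw_contents": [False],
        "error": "",
        "timings": {"inference": 42000},  # milliseconds
    },
}

def extract_output(response):
    """Return the first output URL, or None while the task is still running."""
    data = response["data"]
    if data["status"] == "failed":
        raise RuntimeError(data["error"])
    if data["status"] != "completed":
        return None  # still created/processing; keep polling
    return data["outputs"][0]
```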
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID passed in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |