Wan 2.1 V2V 720p LoRA
Playground
Try it on WavespeedAI! Wan 2.1 V2V 720p LoRA converts source videos into 720p AI-enhanced video-to-video edits, with support for custom LoRA personalization. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
Wan 2.1 Video-to-Video 720p LoRA
Wan 2.1 Video-to-Video 720p LoRA is a powerful video transformation model that converts existing videos into new styles, aesthetics, or visual treatments. With full LoRA support, apply custom styles, artistic effects, or consistent character appearances to transform your footage.
Why It Stands Out
- Video-to-video transformation: Convert existing videos into different styles while preserving motion.
- LoRA support: Apply up to 3 custom LoRA models for specific styles, characters, or aesthetics.
- Prompt-guided transformation: Describe the visual style you want to achieve.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
- Negative prompt support: Exclude unwanted elements for cleaner outputs.
- Fine-tuned control: Adjust strength, guidance scale, and flow shift for precise results.
- Reproducibility: Use the seed parameter to recreate exact results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| video | Yes | Source video to transform (upload or public URL). |
| prompt | Yes | Text description of the desired visual style. |
| loras | No | LoRA models to apply (path and scale). Up to 3 LoRAs. |
| negative_prompt | No | Elements to avoid in the output. |
| num_inference_steps | No | Quality/speed trade-off (default: 30). |
| duration | No | Output video length: 5 or 10 seconds (default: 5). |
| strength | No | Transformation intensity (0.1–1.0, default: 0.9). |
| guidance_scale | No | Prompt adherence strength (default: 5). |
| flow_shift | No | Motion flow control (default: 3). |
| seed | No | Set for reproducibility; -1 for random. |
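For reference, a minimal request body using the required fields plus a few common options might look like this (the video URL and prompt are placeholders):

```json
{
  "video": "https://example.com/source-video.mp4",
  "prompt": "soft watercolor painting style, pastel colors",
  "strength": 0.8,
  "duration": 5,
  "seed": 42
}
```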
How to Use
- Upload your source video — drag and drop a file or paste a public URL.
- Write a prompt describing the visual style you want. Use the Prompt Enhancer for AI-assisted optimization.
- Add LoRAs (optional) — select up to 3 LoRA models and adjust their scale.
- Add a negative prompt (optional) — specify elements to exclude.
- Adjust parameters — set strength, guidance scale, and other settings as needed.
- Click Run and wait for your video to generate.
- Preview and download the result.
How to Use LoRA
LoRA (Low-Rank Adaptation) lets you apply custom styles without retraining the full model.
- Add LoRA: Enter the LoRA path and adjust the scale (0.0–4.0).
- Multiple LoRAs: Combine up to 3 LoRAs for layered effects (see the example after this list).
- Recommended LoRAs: Check the interface for suggested LoRAs with preview images.
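For example, a loras array layering two styles might look like this (the second path is a hypothetical placeholder for your own model):

```json
"loras": [
  { "path": "motimalu/wan-flat-color-v2", "scale": 1.0 },
  { "path": "your-username/your-custom-lora", "scale": 0.6 }
]
```

Lowering one LoRA's scale softens its influence relative to the others, which makes layered effects easier to balance.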
Best Use Cases
- Style Transfer — Convert videos to anime, cartoon, cinematic, or artistic styles.
- Visual Effects — Apply consistent effects like flat color, watercolor, or sketch styles.
- Content Repurposing — Transform footage into different visual aesthetics.
- Brand Consistency — Apply branded visual styles across multiple videos.
- Creative Projects — Experiment with unique visual treatments.
Pricing
| Duration | Price |
|---|---|
| 5 seconds | $0.30 |
| 10 seconds | $0.45 |
Pro Tips for Best Quality
- Use lower strength (0.5–0.7) to preserve more of the original video.
- Use higher strength (0.8–0.95) for more dramatic style transformations.
- Start with one LoRA and add more gradually to understand their combined effect.
- Adjust LoRA scale based on how strongly you want the style applied.
- Use negative prompts to reduce artifacts like blur, distortion, or unwanted elements.
- Fix the seed when iterating to compare different parameter settings.
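One way to apply the fixed-seed tip is to submit the same request twice, changing only strength, so the two outputs differ in a single variable. A minimal sketch using the endpoint documented below (the video URL and prompt are placeholders):

```bash
# Submit two runs that differ only in strength; the fixed seed keeps everything else comparable
for STRENGTH in 0.6 0.9; do
  curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-720p-lora" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw "{
      \"video\": \"https://example.com/source-video.mp4\",
      \"prompt\": \"watercolor painting style\",
      \"strength\": ${STRENGTH},
      \"seed\": 12345
    }"
done
```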
Notes
- Maximum of 3 LoRAs per generation.
- Ensure uploaded video URLs are publicly accessible.
- Processing time varies based on duration and current queue load.
- Please ensure your content complies with usage guidelines.
Authentication
For authentication details, please refer to the Authentication Guide.
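The API examples below assume your key is available as an environment variable, for example:

```bash
# Make the WaveSpeedAI API key available to the curl examples
export WAVESPEED_API_KEY="your-api-key-here"
```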
API Endpoints
Submit Task & Query Result
# Submit the task ("video" and "prompt" are required; the values below are placeholders)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-720p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "https://example.com/source-video.mp4",
    "prompt": "flat color anime style, clean line art",
    "loras": [
        {
            "path": "motimalu/wan-flat-color-v2",
            "scale": 1
        }
    ],
    "num_inference_steps": 30,
    "duration": 5,
    "strength": 0.9,
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| video | string | Yes | - | - | The video for generating the output. |
| prompt | string | Yes | - | - | Text description of the desired visual style. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps to perform. |
| duration | integer | No | 5 | 5 ~ 10 | The duration of the generated media in seconds. |
| strength | number | No | 0.9 | 0.10 ~ 1.00 | Transformation intensity applied to the source video. |
| guidance_scale | number | No | 5 | 0.00 ~ 20.00 | The guidance scale to use for the generation. |
| flow_shift | number | No | 3 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
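Putting these fields together, a freshly submitted task's response might look like the following illustrative example (the ID and timestamp are made up):

```json
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abc123example",
    "model": "wavespeed-ai/wan-2.1/v2v-720p-lora",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abc123example/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {}
  }
}
```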
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID passed in the request) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
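For a completed task, the result response carries the output URL(s); an illustrative example with placeholder values:

```json
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abc123example",
    "model": "wavespeed-ai/wan-2.1/v2v-720p-lora",
    "outputs": ["https://example.com/outputs/result.mp4"],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abc123example/result"
    },
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": { "inference": 45000 }
  }
}
```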