Wan 2.1 V2V 720p LoRA

Wan 2.1 V2V 720p LoRA transforms source videos into AI-restyled 720p output, with support for custom LoRA personalization. It is served through a ready-to-use REST inference API with no cold starts and affordable pricing.

Features

Wan 2.1 Video-to-Video 720p LoRA

Wan 2.1 Video-to-Video 720p LoRA is a powerful video transformation model that converts existing videos into new styles, aesthetics, or visual treatments. With full LoRA support, apply custom styles, artistic effects, or consistent character appearances to transform your footage.


Why It Stands Out

  • Video-to-video transformation: Convert existing videos into different styles while preserving motion.
  • LoRA support: Apply up to 3 custom LoRA models for specific styles, characters, or aesthetics.
  • Prompt-guided transformation: Describe the visual style you want to achieve.
  • Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
  • Negative prompt support: Exclude unwanted elements for cleaner outputs.
  • Fine-tuned control: Adjust strength, guidance scale, and flow shift for precise results.
  • Reproducibility: Use the seed parameter to recreate exact results.

Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| video | Yes | Source video to transform (upload or public URL). |
| prompt | Yes | Text description of the desired visual style. |
| loras | No | LoRA models to apply (path and scale). Up to 3 LoRAs. |
| negative_prompt | No | Elements to avoid in the output. |
| num_inference_steps | No | Quality/speed trade-off (default: 30). |
| duration | No | Output video length: 5 or 10 seconds (default: 5). |
| strength | No | Transformation intensity (0.1–1.0, default: 0.9). |
| guidance_scale | No | Prompt adherence strength (default: 5). |
| flow_shift | No | Motion flow control (default: 3). |
| seed | No | Set for reproducibility; -1 for random. |
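A request body combining these parameters might look like the following sketch; the video URL and prompt wording are illustrative placeholders, and the LoRA path is the one used in the API example below.

{
    "video": "https://example.com/source.mp4",
    "prompt": "flat color anime style, bold outlines, soft pastel palette",
    "loras": [
        { "path": "motimalu/wan-flat-color-v2", "scale": 1.0 }
    ],
    "negative_prompt": "blur, distortion, watermark",
    "num_inference_steps": 30,
    "duration": 5,
    "strength": 0.9,
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1
}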

How to Use

  1. Upload your source video — drag and drop a file or paste a public URL.
  2. Write a prompt describing the visual style you want (an example pairing is shown after this list). Use the Prompt Enhancer for AI-assisted optimization.
  3. Add LoRAs (optional) — select up to 3 LoRA models and adjust their scale.
  4. Add a negative prompt (optional) — specify elements to exclude.
  5. Adjust parameters — set strength, guidance scale, and other settings as needed.
  6. Click Run and wait for your video to generate.
  7. Preview and download the result.
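For step 2, a style prompt paired with a cleanup negative prompt might look like this (illustrative wording, not from the model card):

{
    "prompt": "watercolor painting style, soft brush strokes, muted earth tones",
    "negative_prompt": "blur, flicker, distortion, watermark, low quality"
}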

How to Use LoRA

LoRA (Low-Rank Adaptation) lets you apply custom styles without retraining the full model.

  • Add LoRA: Enter the LoRA path and adjust the scale (0.0–1.0).
  • Multiple LoRAs: Combine up to 3 LoRAs for layered effects (see the sketch after this list).
  • Recommended LoRAs: Check the interface for suggested LoRAs with preview images.
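As a sketch, stacking two LoRAs at different scales might look like this in the request body. The first path appears in the API example below; the second is a hypothetical placeholder.

{
    "loras": [
        { "path": "motimalu/wan-flat-color-v2", "scale": 1.0 },
        { "path": "some-author/example-sketch-style", "scale": 0.6 }
    ]
}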

Best Use Cases

  • Style Transfer — Convert videos to anime, cartoon, cinematic, or artistic styles.
  • Visual Effects — Apply consistent effects like flat color, watercolor, or sketch styles.
  • Content Repurposing — Transform footage into different visual aesthetics.
  • Brand Consistency — Apply branded visual styles across multiple videos.
  • Creative Projects — Experiment with unique visual treatments.

Pricing

| Duration | Price |
| --- | --- |
| 5 seconds | $0.30 |
| 10 seconds | $0.45 |

Pro Tips for Best Quality

  • Use lower strength (0.5–0.7) to preserve more of the original video.
  • Use higher strength (0.8–0.95) for more dramatic style transformations.
  • Start with one LoRA and add more gradually to understand their combined effect.
  • Adjust LoRA scale based on how strongly you want the style applied.
  • Use negative prompts to reduce artifacts like blur, distortion, or unwanted elements.
  • Fix the seed when iterating to compare different parameter settings (see the sketch after this list).
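A minimal sketch of that last tip, assuming the API endpoint shown below and jq for JSON parsing; the video URL, prompt, seed, and strength values are arbitrary placeholders:

# Submit the same job twice with a fixed seed, varying only strength,
# so the two outputs differ only in transformation intensity.
for STRENGTH in 0.6 0.9; do
  curl -s -X POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-720p-lora" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    -d "{
      \"video\": \"https://example.com/source.mp4\",
      \"prompt\": \"watercolor painting style\",
      \"strength\": ${STRENGTH},
      \"seed\": 42
    }" | jq -r '.data.id'   # prints one task ID per run
done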

Notes

  • Maximum of 3 LoRAs per generation.
  • Ensure uploaded video URLs are publicly accessible.
  • Processing time varies based on duration and current queue load.
  • Please ensure your content complies with usage guidelines.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task (the video URL and prompt are placeholders; both fields are required)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-720p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "https://example.com/source.mp4",
    "prompt": "flat color anime style",
    "loras": [
        {
            "path": "motimalu/wan-flat-color-v2",
            "scale": 1
        }
    ],
    "num_inference_steps": 30,
    "duration": 5,
    "strength": 0.9,
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| video | string | Yes | - | - | The video for generating the output. |
| prompt | string | Yes | - | - | Text description of the desired visual style. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps to perform. |
| duration | integer | No | 5 | 5 ~ 10 | The duration of the generated media in seconds. |
| strength | number | No | 0.9 | 0.10 ~ 1.00 | Transformation intensity; lower values preserve more of the source video. |
| guidance_scale | number | No | 5 | 0.00 ~ 20.00 | The guidance scale to use for the generation. |
| flow_shift | number | No | 3 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
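For example, the fields above can be read straight off the submit response with jq (a sketch; the request body is a placeholder):

# Capture the submit response and inspect the documented fields
response=$(curl -s -X POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-720p-lora" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  -d '{"video": "https://example.com/source.mp4", "prompt": "flat color anime style"}')
echo "$response" | jq '.code'            # 200 on success
echo "$response" | jq -r '.data.id'      # task ID for the result endpoint
echo "$response" | jq -r '.data.status'  # created, processing, completed, or failed
echo "$response" | jq -r '.data.error'   # empty if no error occurred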

Result Request Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
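Once data.status reads completed, the first entry of data.outputs can be downloaded directly; a sketch assuming jq and the ${requestId} captured at submit time:

# Fetch the result and save the first generated video locally
url=$(curl -s "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
  -H "Authorization: Bearer ${WAVESPEED_API_KEY}" | jq -r '.data.outputs[0]')
curl -s -o output.mp4 "$url"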