Wan 2.1 V2V 480p LoRA

Wan 2.1 V2V 480p LoRA generates unlimited 480p video-to-video edits with custom LoRA support for tailored styles. It is served through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

Wan 2.1 V2V 480p LoRA — wavespeed-ai/wan-2.1/v2v-480p-lora

Wan 2.1 V2V 480p LoRA is a video-to-video model designed for prompt-guided edits while preserving the original motion and timing of an input video. Upload a source video, describe what should change, and tune the edit strength to control how closely the output follows the original footage. It supports up to 3 LoRAs to enforce a consistent style, character look, or branded aesthetic across the transformed clip.

Key capabilities

  • Video-to-video transformation anchored to an input video (480p output)
  • Prompt-guided edits while keeping motion continuity and pacing
  • Strength control to balance preservation vs. transformation
  • LoRA support (up to 3) for stable style/identity steering
  • Fine control over motion behavior via flow_shift

Use cases

  • Restyle a video while preserving the original motion (cinematic, anime, illustration looks)
  • Apply a “house style” consistently across multiple clips using LoRAs
  • Enhance mood and lighting (golden hour, noir, neon) without re-animating motion
  • Brand-safe content refresh: keep composition, change textures/colors/details
  • Creative remixing for social clips and rapid iteration

Pricing

| Duration | Price per video |
|----------|-----------------|
| 5s       | $0.20           |
| 10s      | $0.30           |

Inputs

  • video (required): source video to transform
  • prompt (required): what to change and how the result should look
  • negative_prompt (optional): what to avoid (artifacts, jitter, unwanted elements)
  • loras (optional): up to 3 LoRA items for style/identity steering

Parameters

  • num_inference_steps: sampling steps
  • duration: output duration (seconds)
  • strength: how strongly to transform the input video (lower = preserve more; higher = change more)
  • guidance_scale: prompt adherence strength
  • flow_shift: motion/flow behavior tuning
  • seed: random seed (-1 for random; fixed for reproducible results)

LoRA (up to 3 items):

  • loras: list of LoRA entries (max 3)

    • path: owner/model-name or a direct .safetensors URL
    • scale: LoRA strength
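
Putting these together, a request body might look like the following sketch (shown as a Python dict; the video URL and prompt are placeholders, and the numeric values simply mirror the documented defaults):

# Illustrative request payload; field names come from the Inputs/Parameters lists above.
payload = {
    "video": "https://example.com/source-clip.mp4",  # placeholder source video URL
    "prompt": "Preserve timing and camera movement. Restyle the scene into a clean anime look with crisp edges and consistent shading, no flicker.",
    "negative_prompt": "flicker, warping, artifacts",
    "loras": [
        {"path": "motimalu/wan-flat-color-v2", "scale": 1.0},  # up to 3 entries
    ],
    "num_inference_steps": 30,
    "duration": 5,        # seconds
    "strength": 0.9,      # lower = preserve more of the source
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1,           # -1 = random seed
}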

Prompting guide (V2V)

To get stable edits, write prompts that separate “preserve” from “change”:

Template: Keep the same camera motion and timing from the input video. Change [style/lighting/wardrobe/environment]. Keep faces natural and consistent. Avoid flicker and warping.

Example prompts

  • Keep the original motion and composition. Make the video candid and cinematic with warm sunlight, natural skin texture, gentle film grain, and soft depth of field.
  • Preserve timing and camera movement. Restyle the scene into a clean anime look with crisp edges and consistent shading, no flicker.
  • Keep the same scene and people. Change the color grade to sunset golden hour, add subtle lens flare, maintain realistic shadows.
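
The preserve/change split can also be scripted when generating many prompts. The helper below is hypothetical (not part of the API) and just fills the template above:

def build_v2v_prompt(change: str) -> str:
    """Fill the V2V prompt template: state what to preserve, then what to change."""
    return (
        "Keep the same camera motion and timing from the input video. "
        f"Change {change}. "
        "Keep faces natural and consistent. Avoid flicker and warping."
    )

# Example: a golden-hour regrade.
print(build_v2v_prompt("the color grade to sunset golden hour with subtle lens flare"))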

Authentication

For authentication details, please refer to the Authentication Guide.
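
Every request in this guide is authorized with a Bearer token in the Authorization header, as the curl examples below show. A minimal Python sketch, assuming the key is exported as WAVESPEED_API_KEY:

import os

# Headers reused by the submit and result calls.
API_KEY = os.environ["WAVESPEED_API_KEY"]
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}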

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-480p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "<url-of-your-source-video>",
    "prompt": "Preserve timing and camera movement. Restyle the scene into a clean anime look with crisp edges and consistent shading, no flicker.",
    "loras": [
        {
            "path": "motimalu/wan-flat-color-v2",
            "scale": 1
        }
    ],
    "num_inference_steps": 30,
    "duration": 5,
    "strength": 0.9,
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| video | string | Yes | - | - | The video for generating the output. |
| prompt | string | Yes | - | - | What to change and how the result should look. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps to perform. |
| duration | integer | No | 5 | 5 ~ 10 | The duration of the generated media in seconds. |
| strength | number | No | 0.9 | 0.10 ~ 1.00 | How strongly to transform the input video (lower = preserve more; higher = change more). |
| guidance_scale | number | No | 5 | 0.00 ~ 20.00 | The guidance scale to use for the generation. |
| flow_shift | number | No | 3 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
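
The ranges above can be checked client-side before submitting. The helper below is purely illustrative (it is not part of the API) and simply encodes the documented limits:

def validate_params(p: dict) -> list[str]:
    """Return a list of problems based on the documented ranges; an empty list means OK."""
    ranges = {
        "num_inference_steps": (1, 40),
        "duration": (5, 10),
        "strength": (0.10, 1.00),
        "guidance_scale": (0.00, 20.00),
        "flow_shift": (1.0, 10.0),
        "seed": (-1, 2147483647),
    }
    problems = [
        f"{name}={p[name]} is outside {lo} ~ {hi}"
        for name, (lo, hi) in ranges.items()
        if name in p and not lo <= p[name] <= hi
    ]
    if len(p.get("loras", [])) > 3:
        problems.append("loras: at most 3 entries are allowed")
    problems += [f"{key} is required" for key in ("video", "prompt") if not p.get(key)]
    return problems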

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when the status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID that was queried) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when the status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
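
Putting the result endpoint and the status field together, here is a minimal polling sketch in Python (assuming the requests library and a request_id obtained from a prior submission):

import os
import time
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]
request_id = "<task-id-from-submission>"  # placeholder

# Poll the result endpoint until the task reaches a terminal status.
while True:
    data = requests.get(
        f"https://api.wavespeed.ai/api/v3/predictions/{request_id}/result",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    ).json()["data"]
    if data["status"] == "completed":
        print("outputs:", data["outputs"])  # URLs of the generated video(s)
        break
    if data["status"] == "failed":
        raise RuntimeError(data["error"])
    time.sleep(2)  # still created/processing: wait and retry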