
Wan 2.1 V2V 480p LoRA Ultra Fast


Wan 2.1 V2V 480p is an ultra-fast video-to-video model that generates AI videos without usage limits and supports custom LoRAs for personalization. It ships as a ready-to-use REST inference API with high performance, no cold starts, and affordable pricing.

Features

Wan 2.1 V2V 480p LoRA Ultra Fast — wavespeed-ai/wan-2.1/v2v-480p-lora-ultra-fast

Wan 2.1 V2V 480p LoRA Ultra Fast is a speed-optimized video-to-video model for prompt-guided edits while preserving the original motion and timing of an input video. Upload a source video, describe what should change, and tune strength to control how closely the output follows the original footage. It supports up to 3 LoRAs to enforce a consistent style, character look, or branded aesthetic—now with lower latency for rapid iteration.

Key capabilities

  • Ultra-fast video-to-video transformation anchored to an input video (480p output)
  • Prompt-guided edits while keeping motion continuity and pacing
  • Strength control to balance preservation vs. transformation
  • LoRA support (up to 3) for stable style/identity steering
  • Fine control over motion behavior via flow_shift

Use cases

  • Rapid V2V restyling for social clips and creative iteration
  • Apply a consistent “house style” across multiple clips using LoRAs
  • Lighting/mood changes (cinematic grade, neon, golden hour) without re-animating motion
  • Brand-safe refresh: keep composition and timing, update textures/colors/details
  • Quick A/B testing by changing prompts, LoRAs, or seed

Pricing

Duration | Price per video
5s | $0.125
10s | $0.1875

Inputs

  • video (required): source video to transform
  • prompt (required): what to change and how the result should look
  • negative_prompt (optional): what to avoid (artifacts, jitter, unwanted elements)
  • loras (optional): up to 3 LoRA items for style/identity steering
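These inputs map directly onto the JSON request body. As a minimal sketch with only the required fields (the video URL and prompt below are placeholders, not real assets):

{
    "video": "https://example.com/source-clip.mp4",
    "prompt": "Keep the original motion and composition. Apply a warm cinematic color grade."
}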

Parameters

  • num_inference_steps: sampling steps
  • duration: output duration (seconds)
  • strength: how strongly to transform the input video (lower = preserve more; higher = change more)
  • guidance_scale: prompt adherence strength
  • flow_shift: motion/flow behavior tuning
  • seed: random seed (-1 for random; fixed for reproducible results)

LoRA (up to 3 items):

  • loras: list of LoRA entries (max 3)

    • path: owner/model-name or a direct .safetensors URL
    • scale: LoRA strength
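Both path formats can appear in the same request. The sketch below is illustrative only: the first entry reuses the LoRA shown in the API example later on this page, while the .safetensors URL is a placeholder.

{
    "loras": [
        { "path": "motimalu/wan-flat-color-v2", "scale": 1.0 },
        { "path": "https://example.com/loras/custom-style.safetensors", "scale": 0.8 }
    ]
}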

Prompting guide (V2V)

Write prompts that explicitly separate preservation from transformation:

Template: Keep the same camera motion and timing from the input video. Change [style/lighting/environment]. Keep faces natural and consistent. Avoid flicker and warping.

Example prompts

  • Keep the original motion and composition. Apply a candid, cinematic look with warm sunlight, soft depth of field, and natural skin texture.
  • Preserve timing and camera movement. Restyle into a clean anime look with consistent shading and no flicker.
  • Keep the same scene and people. Change the color grade to sunset golden hour, add subtle lens flare, maintain realistic shadows.

Authentication

For authentication details, please refer to the Authentication Guide.
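In short, every request on this page passes an API key as a Bearer token in the Authorization header; the examples read it from the WAVESPEED_API_KEY environment variable, for example (the value below is a placeholder):

export WAVESPEED_API_KEY="your-api-key"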

API Endpoints

Submit Task & Query Result


# Submit the task ("video" and "prompt" are required; the URL and prompt below are placeholders)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-480p-lora-ultra-fast" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "https://example.com/source-clip.mp4",
    "prompt": "Keep the original motion and composition. Apply a warm cinematic color grade.",
    "loras": [
        {
            "path": "motimalu/wan-flat-color-v2",
            "scale": 1
        }
    ],
    "num_inference_steps": 30,
    "duration": 5,
    "strength": 0.9,
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
video | string | Yes | - | - | The video for generating the output.
prompt | string | Yes | - | - | The prompt describing what to change and how the result should look.
loras | array | No | - | max 3 items | List of LoRAs to apply (max 3).
loras[].path | string | Yes | - | - | Path to the LoRA model (owner/model-name or a direct .safetensors URL).
loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model.
negative_prompt | string | No | - | - | The negative prompt for the generation.
num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps to perform.
duration | integer | No | 5 | 5 ~ 10 | The duration of the generated media in seconds.
strength | number | No | 0.9 | 0.10 ~ 1.00 | How strongly to transform the input video (lower preserves more of the original; higher changes more).
guidance_scale | number | No | 5 | 0.00 ~ 20.00 | The guidance scale to use for the generation.
flow_shift | number | No | 3 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching.
seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used.

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., "success")
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z")
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
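Put together, a submit response has roughly the following shape; the values below are illustrative placeholders, not real output:

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "<task-id>",
        "model": "wavespeed-ai/wan-2.1/v2v-480p-lora-ultra-fast",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": { "inference": 0 }
    }
}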

Result Request Parameters

Parameter | Type | Required | Default | Description
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., "success")
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the task ID passed in the request)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z")
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
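For scripting, the fields most often needed from a finished task are data.status, data.outputs, and data.error. A small sketch, assuming jq is installed:

curl -s --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  | jq '{status: .data.status, outputs: .data.outputs, error: .data.error}'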