
Wan 2.1 V2V 480p LoRA Ultra Fast — wavespeed-ai/wan-2.1/v2v-480p-lora-ultra-fast

Wan 2.1 V2V 480p LoRA Ultra Fast is a speed-optimized video-to-video model for prompt-guided edits that preserve the original motion and timing of an input video. Upload a source video, describe what should change, and tune strength to control how closely the output follows the original footage. It supports up to 3 LoRAs to enforce a consistent style, character look, or branded aesthetic, and its lower latency makes rapid iteration practical. The model is served through a ready-to-use REST inference API with no cold starts.

Key capabilities

  • Ultra-fast video-to-video transformation anchored to an input video (480p output)
  • Prompt-guided edits while keeping motion continuity and pacing
  • Strength control to balance preservation vs. transformation
  • LoRA support (up to 3) for stable style/identity steering
  • Fine control over motion behavior via flow_shift

Use cases

  • Rapid V2V restyling for social clips and creative iteration
  • Apply a consistent “house style” across multiple clips using LoRAs
  • Lighting/mood changes (cinematic grade, neon, golden hour) without re-animating motion
  • Brand-safe refresh: keep composition and timing, update textures/colors/details
  • Quick A/B testing by changing prompts, LoRAs, or seed (see the sketch below)
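
One minimal way to run such A/B sweeps is to vary prompts and seeds while holding everything else fixed. The sketch below assumes request payloads shaped like the Inputs and Parameters documented later in this README; the base values are illustrative placeholders, not documented defaults.

```python
from itertools import product

# A/B variants over prompts and seeds. Field names follow the Inputs and
# Parameters sections below; the base values are illustrative placeholders.
base = {
    "video": "https://example.com/source.mp4",
    "strength": 0.6,
    "guidance_scale": 5.0,
}

prompts = [
    "Keep the original motion. Apply a warm cinematic grade.",
    "Keep the original motion. Restyle into a clean anime look.",
]
seeds = [11, 42, 1234]  # fixed seeds so every variant is reproducible

variants = [{**base, "prompt": p, "seed": s} for p, s in product(prompts, seeds)]

# Submit each variant via the REST API (see the request sketch under
# Inputs) and compare the outputs side by side.
for v in variants:
    print(v["seed"], v["prompt"][:40])
```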

Pricing

Duration   Price per video
5s         $0.125
10s        $0.1875

At the 5s rate, $10 covers approximately 80 runs.

Inputs

  • video (required): source video to transform
  • prompt (required): what to change and how the result should look
  • negative_prompt (optional): what to avoid (artifacts, jitter, unwanted elements)
  • loras (optional): up to 3 LoRA items for style/identity steering
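
As a concrete illustration, a request could look like the following minimal sketch. The endpoint URL, authentication header, and response handling are assumptions inferred from the model slug and typical REST inference APIs, not a documented contract; check the WaveSpeed API docs for the exact details.

```python
import os

import requests

# Hypothetical endpoint derived from the model slug; verify the real URL
# and auth scheme in the WaveSpeed API docs before use.
URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-480p-lora-ultra-fast"

payload = {
    # required
    "video": "https://example.com/source.mp4",  # source video to transform
    "prompt": "Keep the original motion. Apply a warm cinematic look.",
    # optional
    "negative_prompt": "flicker, warping, artifacts",
}

resp = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # exact response shape depends on the API (often a job id to poll)
```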

Parameters

  • num_inference_steps: sampling steps
  • duration: output duration (seconds)
  • strength: how strongly to transform the input video (lower = preserve more; higher = change more)
  • guidance_scale: prompt adherence strength
  • flow_shift: motion/flow behavior tuning
  • seed: random seed (-1 for random; fixed for reproducible results)
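
A sketch of how these parameters might be set. The names come from the list above, but every value below is illustrative rather than a documented default:

```python
# Illustrative values only; parameter names match the list above.
params = {
    "num_inference_steps": 30,  # more steps: higher fidelity, slower
    "duration": 5,              # output length in seconds (see Pricing)
    "strength": 0.55,           # lower preserves more of the input;
                                # higher transforms more aggressively
    "guidance_scale": 5.0,      # higher values follow the prompt more closely
    "flow_shift": 3.0,          # motion/flow behavior tuning
    "seed": -1,                 # -1 for random; fix an integer to reproduce
}
# Merge into the request payload from the Inputs sketch, e.g.:
# payload.update(params)
```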

LoRA (up to 3 items):

  • loras: list of LoRA entries (max 3)

    • path: owner/model-name or a direct .safetensors URL
    • scale: LoRA strength
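
For example, a loras list might be assembled like this sketch; the paths and scales are placeholders, and only the path/scale entry shape comes from the spec above:

```python
# Up to 3 entries; each entry has a path and a scale (LoRA strength).
loras = [
    {"path": "some-owner/anime-style-lora", "scale": 0.8},                 # hub-style reference (placeholder)
    {"path": "https://example.com/brand-look.safetensors", "scale": 0.6},  # direct URL (placeholder)
    # a third slot is available if needed
]
# Attach to the request payload, e.g.: payload["loras"] = loras
```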

Prompting guide (V2V)

Write prompts that explicitly separate preservation from transformation:

Template: Keep the same camera motion and timing from the input video. Change [style/lighting/environment]. Keep faces natural and consistent. Avoid flicker and warping.
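
If you generate many clips, it can help to fill this template programmatically so the preservation clauses stay identical across runs; the helper below is purely illustrative:

```python
def build_v2v_prompt(change: str) -> str:
    """Compose a V2V prompt that separates preservation from transformation.

    Purely illustrative; mirrors the template above.
    """
    return (
        "Keep the same camera motion and timing from the input video. "
        f"Change {change}. "
        "Keep faces natural and consistent. Avoid flicker and warping."
    )

print(build_v2v_prompt("the lighting to golden hour with subtle lens flare"))
```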

Example prompts

  • Keep the original motion and composition. Apply a candid, cinematic look with warm sunlight, soft depth of field, and natural skin texture.
  • Preserve timing and camera movement. Restyle into a clean anime look with consistent shading and no flicker.
  • Keep the same scene and people. Change the color grade to sunset golden hour, add subtle lens flare, maintain realistic shadows.