WaveSpeed.ai
WAN 2.1 V2V 480P LoRA

wavespeed-ai/wan-2.1/v2v-480p-lora

WAN 2.1 V2V 480p LoRA generates unlimited 480p video-to-video edits with custom LoRA support for tailored styles. Ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Your request costs $0.20 per run.

For $10 you can run this model approximately 50 times.


README

Wan 2.1 V2V 480p LoRA — wavespeed-ai/wan-2.1/v2v-480p-lora

Wan 2.1 V2V 480p LoRA is a video-to-video model designed for prompt-guided edits while preserving the original motion and timing of an input video. Upload a source video, describe what should change, and tune the edit strength to control how closely the output follows the original footage. It supports up to 3 LoRAs to enforce a consistent style, character look, or branded aesthetic across the transformed clip.

Key capabilities

  • Video-to-video transformation anchored to an input video (480p output)
  • Prompt-guided edits while keeping motion continuity and pacing
  • Strength control to balance preservation vs. transformation
  • LoRA support (up to 3) for stable style/identity steering
  • Fine control over motion behavior via flow_shift

Use cases

  • Restyle a video while preserving the original motion (cinematic, anime, illustration looks)
  • Apply a “house style” consistently across multiple clips using LoRAs
  • Enhance mood and lighting (golden hour, noir, neon) without re-animating motion
  • Brand-safe content refresh: keep composition, change textures/colors/details
  • Creative remixing for social clips and rapid iteration

Pricing

Duration   Price per video
5s         $0.20
10s        $0.30

Inputs

  • video (required): source video to transform
  • prompt (required): what to change and how the result should look
  • negative_prompt (optional): what to avoid (artifacts, jitter, unwanted elements)
  • loras (optional): up to 3 LoRA items for style/identity steering

Parameters

  • num_inference_steps: sampling steps
  • duration: output duration (seconds)
  • strength: how strongly to transform the input video (lower = preserve more; higher = change more)
  • guidance_scale: prompt adherence strength
  • flow_shift: motion/flow behavior tuning
  • seed: random seed (-1 for random; fixed for reproducible results)
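
The inputs and parameters above combine into a single JSON request body. A minimal Python sketch of assembling that body, assuming the documented field names are the JSON keys (the default values and the example video URL below are illustrative assumptions, not documented defaults):

```python
import json

def build_payload(video_url, prompt, *, negative_prompt="", duration=5,
                  strength=0.8, num_inference_steps=30, guidance_scale=5.0,
                  flow_shift=3.0, seed=-1):
    """Assemble a request body from the documented inputs and parameters.

    The parameter defaults here are assumptions for illustration only.
    """
    return {
        "video": video_url,                         # source video to transform
        "prompt": prompt,                           # what to change
        "negative_prompt": negative_prompt,         # what to avoid
        "duration": duration,                       # output duration in seconds
        "strength": strength,                       # lower = preserve more of the input
        "num_inference_steps": num_inference_steps, # sampling steps
        "guidance_scale": guidance_scale,           # prompt adherence strength
        "flow_shift": flow_shift,                   # motion/flow behavior tuning
        "seed": seed,                               # -1 for random; fix for reproducibility
    }

payload = build_payload(
    "https://example.com/source.mp4",
    "Keep the original motion. Restyle into a clean anime look, no flicker.",
    strength=0.6,
    seed=42,
)
print(json.dumps(payload, indent=2))
```

This body would then be POSTed to the model's REST endpoint with your API key; consult the WaveSpeed API documentation for the exact URL and authentication header.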

LoRA (up to 3 items):

  • loras: list of LoRA entries (max 3)

    • path: owner/model-name or a direct .safetensors URL
    • scale: LoRA strength
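
Each LoRA entry pairs a path with a scale, and both path forms described above can be mixed in one list. A sketch of a full three-item `loras` value (the repository names and URL are hypothetical placeholders):

```python
# Up to 3 LoRA entries; "path" is either owner/model-name or a direct
# .safetensors URL, and "scale" controls that LoRA's strength.
loras = [
    {"path": "acme/anime-style-lora", "scale": 1.0},    # hypothetical repo id
    {"path": "acme/brand-palette-lora", "scale": 0.7},  # hypothetical repo id
    {"path": "https://example.com/character.safetensors", "scale": 0.8},  # direct URL form
]

assert len(loras) <= 3  # the model accepts at most 3 LoRA items
```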

Prompting guide (V2V)

To get stable edits, write prompts that separate “preserve” from “change”:

Template: Keep the same camera motion and timing from the input video. Change [style/lighting/wardrobe/environment]. Keep faces natural and consistent. Avoid flicker and warping.

Example prompts

  • Keep the original motion and composition. Make the video candid and cinematic with warm sunlight, natural skin texture, gentle film grain, and soft depth of field.
  • Preserve timing and camera movement. Restyle the scene into a clean anime look with crisp edges and consistent shading, no flicker.
  • Keep the same scene and people. Change the color grade to sunset golden hour, add subtle lens flare, maintain realistic shadows.