video-to-video

Wan 2.1 V2V 720P LoRA Ultra-Fast Model

wavespeed-ai/wan-2.1/v2v-720p-lora-ultra-fast

Wan 2.1 V2V 720p LoRA Ultra-Fast transforms input videos at 720p with custom LoRA support, so you can generate AI videos at scale. Ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.


Recommended LoRAs:
motimalu/wan-flat-color-v2
shauray/Origami_WanLora


Your request will cost $0.225 per run.

For $10 you can run this model approximately 44 times.



README

Wan 2.1 V2V 720p LoRA Ultra Fast — wavespeed-ai/wan-2.1/v2v-720p-lora-ultra-fast

Wan 2.1 V2V 720p LoRA Ultra Fast is a speed-optimized video-to-video model that transforms an input video using a text prompt while preserving the original motion and timing. Upload a source video, describe the desired style or changes, and tune strength to balance between “keep the original” and “apply the edit.” It supports up to 3 LoRAs for consistent styling, character look, or branded aesthetics—now with faster turnaround for rapid iteration at 720p.

Key capabilities

  • Ultra-fast video-to-video transformation anchored to an input video (720p output)
  • Prompt-guided edits while keeping motion continuity and pacing
  • Strength control to balance preservation vs. transformation
  • LoRA support (up to 3) for stable style/identity steering across clips
  • Fine motion behavior tuning via flow_shift

Use cases

  • Rapid 720p V2V restyling for social, ads, and creative iteration
  • Apply a consistent “house style” across multiple clips using LoRAs
  • Upgrade mood and color grade (cinematic, warm window light, neon, noir)
  • Brand-safe refresh: keep composition and timing, update textures/colors/details
  • Quick A/B testing by changing prompts, LoRAs, or seed

Pricing

  • 5s: $0.225 per video
  • 10s: $0.3375 per video
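Budget planning from the prices above is simple division. A minimal sketch, using the listed per-run prices (the helper function is illustrative, not part of the API):

```python
# Per-run prices copied from the pricing table above (USD).
PRICE_5S = 0.225
PRICE_10S = 0.3375

def runs_for_budget(budget_usd: float, price_per_run: float) -> int:
    """How many full runs a budget covers at a given per-run price."""
    return int(budget_usd // price_per_run)

print(runs_for_budget(10.0, PRICE_5S))   # 44 five-second runs for $10
print(runs_for_budget(10.0, PRICE_10S))  # 29 ten-second runs for $10
```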

Inputs

  • video (required): source video to transform
  • prompt (required): what to change and how the result should look
  • negative_prompt (optional): what to avoid (artifacts, jitter, unwanted elements)
  • loras (optional): up to 3 LoRA items for style/identity steering

Parameters

  • num_inference_steps: sampling steps
  • duration: output duration (seconds)
  • strength: how strongly to transform the input video (lower = preserve more; higher = change more)
  • guidance_scale: prompt adherence strength
  • flow_shift: motion/flow behavior tuning
  • seed: random seed (-1 for random; fixed for reproducible results)

LoRA (up to 3 items):

  • loras: list of LoRA entries (max 3)

    • path: owner/model-name or a direct .safetensors URL
    • scale: LoRA strength
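The inputs and parameters above combine into a single JSON request body. A hedged sketch follows; the endpoint URL, auth scheme, and all parameter values are assumptions for illustration (check the WaveSpeedAI API docs for the real values), and the actual network call is left commented out:

```python
import json
import urllib.request

# ASSUMED endpoint and auth -- verify against the official API reference.
API_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/v2v-720p-lora-ultra-fast"
API_KEY = "YOUR_API_KEY"

payload = {
    "video": "https://example.com/source.mp4",  # required: source video
    "prompt": "Keep the original motion and timing. Apply a warm cinematic look.",
    "negative_prompt": "flicker, warping, jitter",
    "loras": [  # up to 3 entries, each with a path and a scale
        {"path": "motimalu/wan-flat-color-v2", "scale": 1.0},
    ],
    "num_inference_steps": 30,  # illustrative value
    "duration": 5,              # output duration in seconds
    "strength": 0.6,            # lower = preserve more; higher = change more
    "guidance_scale": 5.0,
    "flow_shift": 3.0,
    "seed": -1,                 # -1 = random; fix it for reproducibility
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(req)  # uncomment to actually submit
```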

Prompting guide (V2V + LoRA)

A reliable structure is “preserve + edit + style”:

Template: Keep the original motion and timing. Apply [style/look] and adjust [lighting/colors/textures]. Keep faces natural and stable. Avoid flicker, warping, and jitter.

Example prompts

  • Keep the original motion and composition. Apply a warm, cozy studio look with soft window light, visible dust particles, gentle film grain, and natural skin tones.
  • Preserve camera motion and timing. Restyle the clip into a flat-color illustration look while keeping clean edges and stable shading.
  • Keep the scene and movement. Shift the color grade to golden hour, add subtle bloom and soft shadows, maintain realism.
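The "preserve + edit + style" template above can be filled in programmatically when iterating across many clips. A minimal sketch (the helper name and wording are illustrative, not part of the API):

```python
# Hypothetical prompt builder following the "preserve + edit + style"
# structure: fixed preservation and stability clauses wrap the edit.
def build_v2v_prompt(style: str, adjust: str) -> str:
    return (
        "Keep the original motion and timing. "
        f"Apply {style} and adjust {adjust}. "
        "Keep faces natural and stable. "
        "Avoid flicker, warping, and jitter."
    )

print(build_v2v_prompt(
    "a warm, cozy studio look",
    "soft window light and natural skin tones",
))
```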