Wan-2.2/image-to-video-lora enables unlimited image-to-video generation from a single image, producing smooth, cinematic motion with clean detail. Supports custom LoRAs for style and character consistency. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Ready
$0.20 per run · ~50 runs / $10
Wan 2.2 Image-to-Video LoRA is a 14B-parameter I2V model that turns a single image into a short animated shot. Built on WAN AI’s Mixture of Experts (MoE) architecture, it uses high-noise and low-noise experts across denoising timesteps to preserve your source image while adding cinematic motion and atmosphere. It supports LoRA adapters on both the base model and individual noise experts for fine-grained style and character control.
- image (required): The starting image. This defines the main character, composition, and base style.
- prompt (required): Describe how the scene should evolve: motion, expression, lighting changes, camera work, mood, and story beats.
- resolution: `480p` or `720p`
- duration: 5 or 8 seconds
- negative_prompt (optional): Things you want to avoid (e.g., “blurry”, “text on screen”, “distorted face”).
- last_image (optional): Target image for the final frame; useful if you want the video to end on a specific pose or state.
- seed: `-1` uses a random seed on every run
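A minimal request body built from these parameters might look like the sketch below. The field names mirror the parameter list above, but the exact JSON schema, endpoint URL, and value formats are assumptions, not confirmed API details.

```python
# Hypothetical request payload for Wan 2.2 Image-to-Video LoRA.
# Field names follow the parameter list above; they are illustrative
# assumptions, not a confirmed API schema.
payload = {
    "image": "https://example.com/portrait.png",  # required: source image
    "prompt": "slow dolly-in, warm golden-hour light, gentle smile",  # required
    "resolution": "480p",   # "480p" or "720p"
    "duration": 5,          # 5 or 8 seconds
    "negative_prompt": "blurry, text on screen, distorted face",
    "seed": -1,             # -1 = random seed every run
}
```

Optional fields such as `last_image` can be added the same way when a specific final frame is wanted.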
Wan 2.2 Image-to-Video LoRA supports three kinds of LoRA entry points: the base model, the high-noise expert, and the low-noise expert. Each kind accepts up to three LoRA inputs.
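One way to picture this is a configuration with one list per entry point, each capped at three entries. This is only a sketch: the key names, the `path`/`scale` fields, and the grouping into three lists are assumptions for illustration, not the service's actual schema.

```python
# Sketch of a LoRA configuration: one list per entry point
# (base model, high-noise expert, low-noise expert).
# Key and field names ("path", "scale") are illustrative assumptions.
MAX_LORAS_PER_ENTRY = 3  # "each kind accepts up to three inputs"

lora_config = {
    "base_loras": [
        {"path": "https://example.com/anime-style.safetensors", "scale": 0.8},
    ],
    "high_noise_loras": [],  # applied to the early, high-noise denoising steps
    "low_noise_loras": [],   # applied to the late, low-noise refinement steps
}

# Enforce the per-entry-point cap.
for key, loras in lora_config.items():
    assert len(loras) <= MAX_LORAS_PER_ENTRY, f"too many LoRAs for {key}"
```

Targeting the high-noise expert tends to shape overall motion and composition, while the low-noise expert affects fine detail, since those experts own different ends of the denoising schedule.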
| Resolution | 5 s price | 8 s price |
|---|---|---|
| 480p | $0.20 | $0.32 |
| 720p | $0.35 | $0.56 |
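The pricing summary's "~50 runs / $10" figure follows directly from the cheapest tier in this table; a quick arithmetic check:

```python
# Verify that the 480p / 5 s price of $0.20 per run gives
# roughly 50 runs for $10, as stated in the pricing summary.
price_480p_5s = 0.20
runs_per_10_dollars = 10 / price_480p_5s
print(runs_per_10_dollars)  # 50.0

# Both tiers charge 1.6x for 8 s vs 5 s (0.32/0.20 == 0.56/0.35).
ratio_480p = 0.32 / 0.20
ratio_720p = 0.56 / 0.35
```

At 720p the same $10 covers about 28 five-second runs ($10 / $0.35 ≈ 28.6).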