
Wan 2.2 Image-to-Video LoRA is a 14B-parameter I2V model that turns a single image into a short animated shot. Built on WAN AI’s Mixture of Experts (MoE) architecture, it uses high-noise and low-noise experts across denoising timesteps to preserve your source image while adding cinematic motion and atmosphere. It supports LoRA adapters on both the base model and individual noise experts for fine-grained style and character control.
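As a rough mental model of the MoE design described above, the sketch below routes early (high-noise) denoising timesteps to one expert and later (low-noise) timesteps to another. The expert functions, step count, and switch point are illustrative placeholders, not the model's actual internals.

```python
# Conceptual sketch only: shows timesteps being routed between a high-noise
# and a low-noise expert, as the MoE description above suggests.
# Expert functions, step count, and switch point are made-up placeholders.

def high_noise_expert(latent, t):
    # Placeholder: handles early, noisy timesteps (layout, coarse motion).
    return latent - 0.02 * t

def low_noise_expert(latent, t):
    # Placeholder: handles late, cleaner timesteps (detail, texture).
    return latent - 0.01 * t

def denoise(latent, num_steps=50, switch_step=25):
    """Toy denoising loop that switches experts partway through."""
    for t in reversed(range(num_steps)):
        expert = high_noise_expert if t >= switch_step else low_noise_expert
        latent = expert(latent, t)
    return latent

print(denoise(latent=1.0))
```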
image (required): The starting image. This defines the main character, composition, and base style.
prompt (required): Describe how the scene should evolve: motion, expression, lighting changes, camera work, mood, and story beats.
resolution: Output resolution, either 480p or 720p.
duration: Clip length, either 5 or 8 seconds.
negative_prompt (optional): Things you want to avoid (e.g., “blurry”, “text on screen”, “distorted face”).
last_image (optional): Target image for the final frame; useful if you want the video to end on a specific pose or state.
seed: Random seed. Reuse the same seed with the same inputs to reproduce a result.
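Putting the inputs together, a request body might look like the sketch below. The field names mirror the parameters listed above; the URLs and values are placeholders, and the exact schema on your platform may differ.

```python
# Example input payload using the parameters documented above.
# Field names follow the docs; URLs and values are placeholders.
payload = {
    "image": "https://example.com/portrait.png",        # required: source image
    "prompt": (
        "Slow dolly-in on the subject as golden-hour light fades; "
        "she turns toward the camera and smiles."
    ),                                                    # required: how the scene evolves
    "negative_prompt": "blurry, text on screen, distorted face",  # optional
    "last_image": "https://example.com/final-pose.png",  # optional: target final frame
    "resolution": "720p",                                # 480p or 720p
    "duration": 5,                                       # 5 or 8 seconds
    "seed": 42,                                          # fixed for reproducibility
}
```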
Wan 2.2 Image-to-Video LoRA supports three kinds of LoRA entry points: the base model, the high-noise expert, and the low-noise expert. Each entry point accepts up to three LoRA inputs.
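The exact LoRA parameter names are not documented on this page, so the structure below is a hypothetical illustration of attaching LoRAs to the three entry points; the keys, "path", and "scale" fields are assumptions, not the real schema.

```python
# Hypothetical LoRA configuration: the entry-point keys, "path", and "scale"
# are illustrative only. Each entry point accepts up to three LoRA inputs.
lora_config = {
    "base_model": [
        {"path": "https://example.com/loras/film-grain.safetensors", "scale": 0.8},
    ],
    "high_noise_expert": [
        {"path": "https://example.com/loras/dynamic-motion.safetensors", "scale": 0.6},
    ],
    "low_noise_expert": [
        {"path": "https://example.com/loras/soft-detail.safetensors", "scale": 0.7},
    ],
}
```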
| Resolution | 5 s price | 8 s price |
|---|---|---|
| 480p | $0.20 | $0.32 |
| 720p | $0.35 | $0.56 |
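Per-run cost follows directly from the table. The helper below simply encodes those prices and the runs-per-budget arithmetic; for example, $10 covers about 50 runs at 480p for 5 seconds.

```python
# Prices from the table above, in US cents per run to avoid float rounding.
PRICE_CENTS = {
    ("480p", 5): 20,
    ("480p", 8): 32,
    ("720p", 5): 35,
    ("720p", 8): 56,
}

def runs_per_budget(budget_dollars, resolution, duration_s):
    """Whole runs a budget covers at the given resolution and duration."""
    return (budget_dollars * 100) // PRICE_CENTS[(resolution, duration_s)]

print(runs_per_budget(10, "480p", 5))  # 50
print(runs_per_budget(10, "720p", 8))  # 17
```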
To generate a video:

1. Upload or paste the URL of your source image in the image field.
2. Write a clear prompt describing how the video should start, evolve, and end (camera moves, actions, lighting changes, mood).
3. (Optional) Add a negative_prompt to filter unwanted artifacts.
4. (Optional) Add a last_image if you want the final frame to match a specific target.
5. Choose resolution (480p or 720p) and duration (5 or 8 seconds).
6. (Optional) Attach LoRAs to the base model, the high-noise expert, or the low-noise expert (up to three per entry point).
7. Set a seed if you want a reproducible result.
8. Run the model, preview the result, and download the video. A complete request sketch follows these steps.
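Putting the steps together, here is a minimal end-to-end sketch that submits a request over HTTP and saves the resulting video. The endpoint URL, authentication header, response shape, and LoRA fields are assumptions about a typical hosted API, not this platform's documented interface; substitute your provider's actual client or REST details.

```python
import requests

API_URL = "https://api.example.com/wan-2.2-i2v-lora"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                              # placeholder credential

payload = {
    "image": "https://example.com/portrait.png",
    "prompt": (
        "Handheld camera slowly circles the subject; rain begins to fall and "
        "neon reflections ripple across the street as the mood turns tense."
    ),
    "negative_prompt": "blurry, text on screen, distorted face",
    "resolution": "480p",
    "duration": 5,
    "seed": 1234,
    # Hypothetical LoRA field; the real schema may differ.
    "loras": [
        {"path": "https://example.com/loras/neon-noir.safetensors", "scale": 0.7},
    ],
}

# Submit the job and wait for the result (assumes a synchronous endpoint).
response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=600,
)
response.raise_for_status()
video_url = response.json()["video"]["url"]  # assumed response shape

# Download the finished clip.
with open("output.mp4", "wb") as f:
    f.write(requests.get(video_url, timeout=120).content)
print("Saved output.mp4")
```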