Wan 2.2 Image To Video LoRA
Playground
Try it on WavespeedAI! Wan-2.2/image-to-video-lora enables unlimited image-to-video generation from a single image, producing smooth, cinematic motion with clean detail. It supports custom LoRAs for style and character consistency, and ships with a ready-to-use REST inference API, best-in-class performance, no cold starts, and affordable pricing.
Features
Wan 2.2 — Image-to-Video LoRA
Wan 2.2 Image-to-Video LoRA is a 14B-parameter I2V model that turns a single image into a short animated shot. Built on WAN AI’s Mixture of Experts (MoE) architecture, it uses high-noise and low-noise experts across denoising timesteps to preserve your source image while adding cinematic motion and atmosphere. It supports LoRA adapters on both the base model and individual noise experts for fine-grained style and character control.
Why it looks great
- Cinematic aesthetic: Trained for film-like lighting, color, and composition, ideal for music videos, promos, and hero shots.
- Motion that respects your image: Keeps core character and scene layout from the input image while adding natural camera moves and animation.
- Strong semantic understanding: Aligns motion with your prompt so the transformation (e.g., “the cat slowly deflates on stage”) follows your story rather than random noise.
- Expert-level MoE design: High-noise and low-noise experts allow detailed control via specialized LoRAs, giving more consistent style and finer texture.
Controls and Parameters
- image (required): The starting image. This defines the main character, composition, and base style.
- prompt (required): Describe how the scene should evolve: motion, expression, lighting changes, camera work, mood, and story beats.
- resolution:
  - 480p
  - 720p
- duration:
  - 5 seconds
  - 8 seconds
- negative_prompt (optional): Things you want to avoid (e.g., “blurry”, “text on screen”, “distorted face”).
- last_image (optional): Target image for the final frame; useful if you want the video to end on a specific pose or state.
- seed:
  - -1 → random seed every run
  - any fixed integer → reproducible results across runs
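When calling the model over the API, these controls map one-to-one onto the request body. A minimal submission might look like the sketch below; the image URL and prompt are illustrative placeholders, not values from the documentation.

```bash
# Minimal sketch of a submission using the controls above.
# The image URL and prompt are illustrative placeholders.
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/image-to-video-lora" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  --data-raw '{
    "image": "https://example.com/source.jpg",
    "prompt": "Slow dolly-in on the cat; stage lights dim to a warm amber glow.",
    "negative_prompt": "blurry, text on screen, distorted face",
    "resolution": "720p",
    "duration": 5,
    "seed": -1
  }'
```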
LoRA Support
Wan 2.2 Image-to-Video LoRA exposes three LoRA entry points, each accepting up to three LoRAs:
- loras: Global LoRAs that affect the whole denoising process (common for characters, global style, or film stock).
- high_noise_loras: LoRAs that act mainly at early timesteps, influencing rough layout, pose, motion arcs, and large shapes.
- low_noise_loras: LoRAs that act mainly at late timesteps, shaping fine details like texture, facial features, and micro-lighting.
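In the request body, each entry point is an array of objects with a path and a scale (at most three entries per array, scale between 0.0 and 4.0, per the parameter table below). A hedged sketch follows; the LoRA URLs are placeholders:

```bash
# Illustrative request fragment attaching LoRAs to all three entry points.
# LoRA paths/URLs are placeholders; scale must be in 0.0 ~ 4.0, max 3 per array.
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/image-to-video-lora" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
  --data-raw '{
    "image": "https://example.com/source.jpg",
    "prompt": "The character turns toward the camera as neon signs flicker on.",
    "loras": [
      { "path": "https://example.com/loras/film-stock.safetensors", "scale": 1.0 }
    ],
    "high_noise_loras": [
      { "path": "https://example.com/loras/pose-style.safetensors", "scale": 0.8 }
    ],
    "low_noise_loras": [
      { "path": "https://example.com/loras/skin-texture.safetensors", "scale": 0.6 }
    ]
  }'
```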
Pricing
| Resolution | 5 s price | 8 s price |
|---|---|---|
| 480p | $0.20 | $0.32 |
| 720p | $0.35 | $0.56 |
How to Use
1. Upload or paste the URL of your source image in the image field.
2. Write a clear prompt describing how the video should start, evolve, and end (camera moves, actions, lighting changes, mood).
3. (Optional) Add a negative_prompt to filter unwanted artifacts.
4. (Optional) Add a last_image if you want the final frame to match a specific target.
5. Choose resolution (480p or 720p) and duration (5 or 8 seconds).
6. (Optional) Attach LoRAs:
   - Click Add Item under loras, high_noise_loras, or low_noise_loras.
   - Paste the LoRA path or URL and set the scale.
7. Set seed (see the sketch after these steps):
   - Use -1 to explore different variations.
   - Use a fixed integer to iterate precisely on the same motion.
8. Run the model, preview the result, and download the video.
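If you work through these steps over the API rather than the playground, pinning the seed is the easiest way to iterate: keep everything fixed and change only the prompt between runs, so the underlying motion stays comparable. A minimal sketch, assuming a placeholder image URL, prompts, and an arbitrary seed value:

```bash
# Sketch: reproducible iteration with a fixed seed.
# Only the prompt changes between runs; image, resolution, duration, and seed stay fixed.
SEED=12345
for PROMPT in \
  "The dancer spins once as the spotlight narrows." \
  "The dancer spins twice as the spotlight narrows and fades to blue."
do
  curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/image-to-video-lora" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw "{\"image\": \"https://example.com/dancer.jpg\", \"prompt\": \"${PROMPT}\", \"resolution\": \"720p\", \"duration\": 5, \"seed\": ${SEED}}"
done
```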
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task (image and prompt are required; the URL and prompt below are placeholders)
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/image-to-video-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/source.jpg",
    "prompt": "Slow dolly-in as the stage lights dim to a warm glow.",
    "resolution": "480p",
    "duration": 5,
    "seed": -1
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | Yes | - | - | The image for generating the output. |
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| resolution | string | No | 480p | 480p, 720p | The resolution of the generated media. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| last_image | string | No | - | - | The last image for generating the output. |
| duration | integer | No | 5 | 5, 8 | The duration of the generated media in seconds. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| high_noise_loras | array | No | - | max 3 items | List of high noise LoRAs to apply (max 3). |
| low_noise_loras | array | No | - | max 3 items | List of low noise LoRAs to apply (max 3). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
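For reference, a completed result might look roughly like the following; the field values are illustrative, not taken from a real run.

```json
{
  "code": 200,
  "message": "success",
  "data": {
    "id": "abc123",
    "model": "wavespeed-ai/wan-2.2/image-to-video-lora",
    "outputs": ["https://example.com/outputs/video.mp4"],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/abc123/result"
    },
    "has_nsfw_contents": [false],
    "status": "completed",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": { "inference": 12345 }
  }
}
```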