Wan 2.2 Image To Video LoRA


Wan-2.2/image-to-video-lora enables unlimited image-to-video generation from a single image, producing smooth, cinematic motion with clean detail. Supports custom LoRAs for style and character consistency. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.

Features

Wan 2.2 — Image-to-Video LoRA

Wan 2.2 Image-to-Video LoRA is a 14B-parameter I2V model that turns a single image into a short animated shot. Built on WAN AI’s Mixture of Experts (MoE) architecture, it uses high-noise and low-noise experts across denoising timesteps to preserve your source image while adding cinematic motion and atmosphere. It supports LoRA adapters on both the base model and individual noise experts for fine-grained style and character control.

Why it looks great

  • Cinematic aesthetic: Trained for film-like lighting, color, and composition, ideal for music videos, promos, and hero shots.
  • Motion that respects your image: Keeps core character and scene layout from the input image while adding natural camera moves and animation.
  • Strong semantic understanding: Aligns motion with your prompt so the transformation (e.g., “the cat slowly deflates on stage”) follows your story rather than random noise.
  • Expert-level MoE design: High-noise and low-noise experts allow detailed control via specialized LoRAs, giving more consistent style and finer texture.

Controls and Parameters

  • image (required): The starting image. This defines the main character, composition, and base style.

  • prompt (required): Describe how the scene should evolve: motion, expression, lighting changes, camera work, mood, and story beats.

  • resolution:

    • 480p
    • 720p
  • duration:

    • 5 seconds
    • 8 seconds
  • negative_prompt (optional): Things you want to avoid (e.g., “blurry”, “text on screen”, “distorted face”).

  • last_image (optional): Target image for the final frame; useful if you want the video to end on a specific pose or state.

  • seed:

    • −1 → random seed every run
    • any fixed integer (0 to 2147483647) → reproducible output, useful for iterating on the same motion
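Putting the controls above together, a request body might look like the following. This is an illustrative sketch: the field names come from the parameter list on this page, but the values (image URL, prompt text) are placeholders.

```python
# Illustrative request payload; field names follow this page's parameter list.
payload = {
    "image": "https://example.com/source.png",      # required: starting image
    "prompt": "slow dolly-in, warm stage lighting",  # required: how the scene evolves
    "resolution": "720p",                            # "480p" (default) or "720p"
    "duration": 8,                                   # 5 (default) or 8 seconds
    "negative_prompt": "blurry, text on screen",     # optional: artifacts to avoid
    "seed": -1,                                      # -1 = random seed every run
}
```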

LoRA Support

Wan 2.2 Image-to-Video LoRA supports three kinds of LoRA entry points. Each kind supports up to three entries:

  • loras Global LoRAs that affect the whole denoising process (common for characters, global style, or film stock).
  • high_noise_loras LoRAs that act mainly at early timesteps, influencing rough layout, pose, motion arcs, and large shapes.
  • low_noise_loras LoRAs that act mainly at late timesteps, shaping fine details like texture, facial features, and micro-lighting.
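The three entry points can be combined in one request. Below is an illustrative configuration; the LoRA URLs are placeholders, and each entry takes a `path` and a `scale` (0.0–4.0 per the parameter table, max 3 entries per list).

```python
# Illustrative LoRA configuration; URLs are placeholders, not real adapters.
lora_config = {
    "loras": [  # global: affects the whole denoising process
        {"path": "https://example.com/character.safetensors", "scale": 1.0},
    ],
    "high_noise_loras": [  # early timesteps: layout, pose, motion arcs
        {"path": "https://example.com/motion-style.safetensors", "scale": 0.8},
    ],
    "low_noise_loras": [  # late timesteps: texture, faces, micro-lighting
        {"path": "https://example.com/film-grain.safetensors", "scale": 0.6},
    ],
}
```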

Pricing

| Resolution | 5 s price | 8 s price |
| --- | --- | --- |
| 480p | $0.20 | $0.32 |
| 720p | $0.35 | $0.56 |
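For cost estimation in a batch pipeline, the price table can be encoded directly. A minimal helper (prices in USD per clip, matching the table on this page):

```python
# Per-clip prices (USD) keyed by (resolution, duration), from this page's table.
PRICES = {
    ("480p", 5): 0.20, ("480p", 8): 0.32,
    ("720p", 5): 0.35, ("720p", 8): 0.56,
}

def clip_price(resolution: str, duration: int) -> float:
    """Return the per-clip price for a resolution/duration combination."""
    return PRICES[(resolution, duration)]
```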

How to Use

  1. Upload or paste the URL of your source image in the image field.

  2. Write a clear prompt describing how the video should start, evolve, and end (camera moves, actions, lighting changes, mood).

  3. (Optional) Add a negative_prompt to filter unwanted artifacts.

  4. (Optional) Add a last_image if you want the final frame to match a specific target.

  5. Choose resolution (480p or 720p) and duration (5 or 8 seconds).

  6. (Optional) Attach LoRAs:

    • Click Add Item under loras, high_noise_loras, or low_noise_loras.
    • Paste the LoRA path or URL and set the scale.
  7. Set seed:

    • Use −1 to explore different variations.
    • Use a fixed integer to iterate precisely on the same motion.
  8. Run the model, preview the result, and download the video.
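The submit-then-poll flow described in the steps above can be sketched in Python with only the standard library. The endpoint URLs come from this page's curl examples; the polling interval and function names are our own choices, not part of the API.

```python
import json
import time
import urllib.request

API_BASE = "https://api.wavespeed.ai/api/v3"
MODEL_PATH = "wavespeed-ai/wan-2.2/image-to-video-lora"

def build_submit_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build the task-submission POST from the steps above."""
    return urllib.request.Request(
        f"{API_BASE}/{MODEL_PATH}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def poll_result(request_id: str, api_key: str, interval: float = 2.0) -> dict:
    """Poll the result endpoint until the task completes or fails."""
    url = f"{API_BASE}/predictions/{request_id}/result"
    while True:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if body["data"]["status"] in ("completed", "failed"):
            return body
        time.sleep(interval)
```

In practice you would call `build_submit_request`, send it with `urllib.request.urlopen`, read `data.id` from the response, and pass that id to `poll_result`.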

LoRA Learning Resources

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/image-to-video-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "",
    "prompt": "",
    "resolution": "480p",
    "duration": 5,
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
| --- | --- | --- | --- | --- | --- |
| image | string | Yes | - | - | The image for generating the output. |
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| resolution | string | No | 480p | 480p, 720p | The resolution of the generated media. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| last_image | string | No | - | - | The last image for generating the output. |
| duration | integer | No | 5 | 5, 8 | The duration of the generated media in seconds. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| high_noise_loras | array | No | - | max 3 items | List of high noise LoRAs to apply (max 3). |
| low_noise_loras | array | No | - | max 3 items | List of low noise LoRAs to apply (max 3). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
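A client can check a request body against the ranges in the table before submitting. The sketch below is a rough client-side validation of the documented constraints, not the server's actual validation logic.

```python
def validate_request(p: dict) -> list:
    """Check a request body against the documented parameter ranges.

    Client-side sketch only; the server may enforce additional rules.
    """
    errors = []
    if not p.get("image"):
        errors.append("image is required")
    if not p.get("prompt"):
        errors.append("prompt is required")
    if p.get("resolution", "480p") not in ("480p", "720p"):
        errors.append("resolution must be 480p or 720p")
    if p.get("duration", 5) not in (5, 8):
        errors.append("duration must be 5 or 8")
    if not -1 <= p.get("seed", -1) <= 2147483647:
        errors.append("seed must be in -1 ~ 2147483647")
    for key in ("loras", "high_noise_loras", "low_noise_loras"):
        entries = p.get(key, [])
        if len(entries) > 3:
            errors.append(f"{key}: max 3 items")
        for lora in entries:
            if not 0.0 <= lora.get("scale", 1.0) <= 4.0:
                errors.append(f"{key}: scale must be 0.0 ~ 4.0")
    return errors
```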

Response Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (Task Id) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
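A response with this shape can be unpacked as follows. The sample values are illustrative, not real output; only the structure follows the table above.

```python
# Illustrative completed response; values are made up, structure follows the docs.
sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "abc123",
        "model": "wavespeed-ai/wan-2.2/image-to-video-lora",
        "outputs": ["https://example.com/video.mp4"],
        "urls": {"get": "https://api.wavespeed.ai/api/v3/predictions/abc123/result"},
        "has_nsfw_contents": [False],
        "status": "completed",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {"inference": 87000},
    },
}

def extract_outputs(response: dict) -> list:
    """Return output URLs once the task has completed; empty list otherwise."""
    data = response["data"]
    if data["status"] != "completed":
        return []
    return data["outputs"]
```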

Result Request Parameters

The result endpoint takes no request body: pass the task id (`data.id` from the submission response) in the URL path and authenticate with the same Bearer header.

© 2025 WaveSpeedAI. All rights reserved.