Wan 2.2 I2V 720p LoRA


Wan 2.2 Image-to-Video (I2V) 720p converts still images into 720p videos and supports custom LoRAs for style personalization. It is available through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

Wan 2.2 Image-to-Video 720p LoRA

Generate customized 720p HD videos from images with full LoRA support using Wan 2.2. This premium model delivers high-quality output with three LoRA slots for precise style control — perfect for professional content, cinematic scenes, and custom character animation.

Looking for faster processing? Try Wan 2.2 I2V 720p LoRA Ultra Fast for speed-optimized generation.

Why It Looks Great

  • 720p HD output: Sharp, professional-quality video for final deliverables.
  • LoRA support: Apply up to 3 custom LoRAs in each of the standard, high-noise, and low-noise slots.
  • Premium quality: Standard processing delivers maximum visual fidelity.
  • Start-to-end interpolation: Optionally provide a last frame for smooth transitions.
  • Negative prompt support: Exclude unwanted elements for precise control.
  • Prompt Enhancer: Built-in tool to refine your motion descriptions automatically.
  • Safety Checker: Optional content filtering for appropriate output.

Parameters

Parameter | Required | Description
image | Yes | Source/starting image to animate (upload or public URL).
prompt | Yes | Text description of the motion and action you want.
negative_prompt | No | Elements to avoid in the generated video.
last_image | No | Optional ending frame for start-to-end interpolation (upload or URL).
duration | No | Video length: 5 or 8 seconds. Default: 5.
loras | No | Standard LoRA adapters to apply (up to 3).
high_noise_loras | No | LoRAs applied during high-noise denoising stages (up to 3).
low_noise_loras | No | LoRAs applied during low-noise denoising stages (up to 3).
seed | No | Random seed for reproducibility. Use -1 for random.
Enable Safety Checker | No | Toggle content safety filtering.
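
For reference, a minimal request body sketch using the core parameters above; the image URL and prompt text are placeholders, not real assets:

{
    "image": "https://example.com/source.jpg",
    "prompt": "The subject turns slowly toward the camera, soft natural light",
    "negative_prompt": "blurry, distorted, flickering",
    "duration": 5,
    "seed": -1
}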

How to Use

  1. Upload your starting image — drag and drop or paste a public URL.
  2. Write your prompt — describe the motion, style, and action in detail.
  3. Use Prompt Enhancer (optional) — click to enrich your motion description.
  4. Add negative prompt (optional) — specify elements to exclude.
  5. Upload last image (optional) — add an ending frame for interpolation effects.
  6. Set duration — choose 5 or 8 seconds.
  7. Add LoRAs (optional) — click “+ Add Item” to include custom LoRA adapters.
  8. Set seed (optional) — for reproducible results.
  9. Run — click the button to generate.
  10. Download — preview and save your video.

Pricing

Billing is prorated per 5 seconds of output, based on the selected duration.

Duration | Calculation | Cost
5 seconds | 5 ÷ 5 × $0.35 | $0.35
8 seconds | 8 ÷ 5 × $0.35 | $0.56
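
As a quick sanity check on the figures above, the prorated cost is duration ÷ 5 × $0.35; the one-liner below is only an illustration of that arithmetic, not an official billing tool:

# Illustrative arithmetic only: cost = duration / 5 * $0.35
duration=8
awk -v d="$duration" 'BEGIN { printf "$%.2f\n", d / 5 * 0.35 }'   # prints $0.56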

Understanding LoRA Options

This model provides three different LoRA slots that affect different stages of the generation process:

LoRA Type | When Applied | Best For | Max Count
loras | Throughout generation | General style, character consistency | 3
high_noise_loras | Early denoising (high noise) | Overall composition, major style elements | 3
low_noise_loras | Late denoising (low noise) | Fine details, textures, finishing touches | 3

LoRA Strategy Tips

  • Single style: Use standard loras for consistent style throughout.
  • Layered control: Combine high_noise for composition + low_noise for details.
  • Character + Style: Use one slot for character LoRA, another for style LoRA.
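
As a concrete sketch of the layered approach above, the request below combines a composition-oriented LoRA in the high-noise slot with a detail LoRA in the low-noise slot. The LoRA URLs and scales are placeholders, and it is assumed that high_noise_loras and low_noise_loras entries use the same path/scale shape documented for loras:

{
    "image": "https://example.com/character.jpg",
    "prompt": "The character turns toward the camera, wind in hair, cinematic lighting",
    "duration": 5,
    "high_noise_loras": [
        { "path": "https://example.com/loras/film-look.safetensors", "scale": 1.0 }
    ],
    "low_noise_loras": [
        { "path": "https://example.com/loras/fine-detail.safetensors", "scale": 0.8 }
    ]
}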

Best Use Cases

  • Cinematic Content — Create professional-quality videos with rich detail and atmosphere.
  • Custom Character Animation — Apply character LoRAs for consistent identity across videos.
  • Documentary Style — Generate realistic, immersive scenes with natural motion.
  • Brand Content — Use trained LoRAs for consistent brand visual identity.
  • Final Deliverables — Premium quality suitable for professional distribution.

Example Prompts

  • “A handheld camera walks through a busy Middle Eastern bazaar. People bargain, children play, spices and cloths fill the frame. Rich saturated colors, dynamic shadows, shallow depth of field. Natural dialogue sounds faintly in the background. Organic, documentary feel.”
  • “Cinematic drone shot over misty mountains at sunrise, smooth camera movement, epic scale”
  • “Portrait subject looks up slowly, soft window light shifting across face, intimate mood”
  • “Product showcase with elegant camera orbit, studio lighting, professional commercial style”
  • “Fantasy character casting spell, magical particles swirling, dramatic lighting effects”

Model Comparison

Model | Cost (5s) | Speed | Best For
I2V 720p LoRA | $0.35 | Standard | Maximum quality, final deliverables
I2V 720p LoRA Ultra Fast | $0.15 | Fast | Rapid iteration, testing, high-volume

How to Use LoRAs

For detailed guides on using and training custom LoRAs, see the LoRA guides in the WaveSpeedAI documentation.

Pro Tips for Best Results

  • Use this model for final deliverables where quality matters most.
  • Combine multiple LoRAs strategically — high_noise for style, low_noise for details.
  • Don’t overload with LoRAs — sometimes 1-2 well-chosen LoRAs work better than many.
  • For documentary/cinematic content, describe camera behavior: “handheld”, “tracking”, “shallow depth of field”.
  • Use the last_image feature for controlled transitions between scenes or states (see the sketch after this list).
  • Test concepts with the Ultra Fast variant first, then generate finals with this model.
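
A minimal sketch of a start-to-end interpolation request, as mentioned in the tip above; both image URLs are placeholders:

{
    "image": "https://example.com/scene-day.jpg",
    "last_image": "https://example.com/scene-dusk.jpg",
    "prompt": "The skyline transitions smoothly from daylight to dusk, clouds drifting",
    "duration": 8
}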

Notes

  • If using URLs for images, ensure they are publicly accessible. Preview thumbnails confirm successful loading.
  • Each LoRA slot (loras, high_noise_loras, low_noise_loras) supports up to 3 LoRAs.
  • Enable Safety Checker for content that will be publicly shared.
  • Standard processing takes longer but delivers higher quality than Ultra Fast variants.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
# image and prompt are required; the URL and text below are placeholders
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/i2v-720p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/source.jpg",
    "prompt": "A slow cinematic push-in on the subject, soft natural light",
    "duration": 5,
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
image | string | Yes | - | - | The image for generating the output.
prompt | string | Yes | - | - | The positive prompt for the generation.
negative_prompt | string | No | - | - | The negative prompt for the generation.
last_image | string | No | - | - | The last image for generating the output.
duration | integer | No | 5 | 5, 8 | The duration of the generated media in seconds.
loras | array | No | - | max 3 items | List of LoRAs to apply (max 3).
loras[].path | string | Yes | - | - | Path to the LoRA model.
loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model.
high_noise_loras | array | No | - | - | List of high noise LoRAs to apply (max 3).
low_noise_loras | array | No | - | - | List of low noise LoRAs to apply (max 3).
seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used.

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
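
For orientation, an illustrative submission response shape based on the fields above; the ID, model string, and timestamp are placeholders and actual values will differ:

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "0123456789abcdef",
        "model": "wavespeed-ai/wan-2.2/i2v-720p-lora",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/0123456789abcdef/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2025-01-01T12:34:56.789Z",
        "error": "",
        "timings": { "inference": 0 }
    }
}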

Result Request Parameters

Parameter | Type | Required | Default | Description
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the ID of the prediction to get)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds