Wan 2.2 I2V 720p LoRA Ultra Fast
Playground
Wan 2.2 i2v 720P is an ultra-fast image-to-video model that generates AI videos and supports custom LoRAs for personalized outputs. It ships with a ready-to-use REST inference API, strong performance, no cold starts, and affordable pricing.
Features
Wan 2.2 Image-to-Video 720p LoRA Ultra Fast
Generate customized 720p videos from images with LoRA support using Wan 2.2 Ultra Fast. This powerful model allows you to apply custom LoRA adapters for unique styles, characters, and effects — with three different noise-level options for precise control over how LoRAs influence your output.
Why It Looks Great
- LoRA support: Apply up to 3 custom LoRAs each for standard, high-noise, and low-noise stages.
- Ultra-fast generation: Optimized for speed without sacrificing quality.
- Start-to-end interpolation: Optionally provide a last frame for smooth transitions.
- 720p HD output: Sharp, clean video quality for most digital platforms.
- Negative prompt support: Exclude unwanted elements for precise control.
- Prompt Enhancer: Built-in tool to refine your motion descriptions automatically.
- Safety Checker: Optional content filtering for appropriate output.
Parameters
| Parameter | Required | Description |
|---|---|---|
| image | Yes | Source/starting image to animate (upload or public URL). |
| prompt | Yes | Text description of the motion and action you want. |
| negative_prompt | No | Elements to avoid in the generated video. |
| last_image | No | Optional ending frame for start-to-end interpolation (upload or URL). |
| duration | No | Video length: 5 or 8 seconds. Default: 5. |
| loras | No | Standard LoRA adapters to apply (up to 3). |
| high_noise_loras | No | LoRAs applied during high-noise denoising stages (up to 3). |
| low_noise_loras | No | LoRAs applied during low-noise denoising stages (up to 3). |
| seed | No | Random seed for reproducibility. Use -1 for random. |
| Enable Safety Checker | No | Toggle content safety filtering. |
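The parameters above map directly to a JSON request body. A minimal sketch in Python (the image URL and prompt are placeholder values, not part of the official docs):

```python
# Minimal request body for the image-to-video endpoint.
# The image URL and prompt are placeholders; substitute your own.
payload = {
    "image": "https://example.com/source.jpg",  # required: starting image
    "prompt": "Camera slowly pushes in as autumn leaves drift past",  # required
    "negative_prompt": "blur, distortion",  # optional: elements to avoid
    "duration": 5,   # 5 or 8 seconds; defaults to 5
    "seed": -1,      # -1 means a random seed is chosen
}
```

Optional fields such as `last_image` and the three LoRA slots can be added to the same dictionary as needed.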
How to Use
- Upload your starting image — drag and drop or paste a public URL.
- Write your prompt — describe the motion, style, and action in detail.
- Use Prompt Enhancer (optional) — click to enrich your motion description.
- Add negative prompt (optional) — specify elements to exclude.
- Upload last image (optional) — add an ending frame for interpolation effects.
- Set duration — choose 5 or 8 seconds.
- Add LoRAs (optional) — click "+ Add Item" to include custom LoRA adapters.
- Set seed (optional) — for reproducible results.
- Run — click the button to generate.
- Download — preview and save your video.
Pricing
Billing is prorated in 5-second units: cost = (duration ÷ 5) × $0.15.
| Duration | Calculation | Cost |
|---|---|---|
| 5 seconds | 5 ÷ 5 × $0.15 | $0.15 |
| 8 seconds | 8 ÷ 5 × $0.15 | $0.24 |
Volume Examples
| Videos | Duration | Total Cost |
|---|---|---|
| 10 | 5s | $1.50 |
| 10 | 8s | $2.40 |
| 50 | 5s | $7.50 |
| 50 | 8s | $12.00 |
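The pricing formula above reduces to a one-line calculation. A small sketch that reproduces the per-video and volume figures in the tables:

```python
def video_cost(duration_seconds: int, unit_price: float = 0.15) -> float:
    """Cost = (duration / 5) * $0.15, per the pricing table."""
    return round(duration_seconds / 5 * unit_price, 2)

# Per-video costs: video_cost(5) -> 0.15, video_cost(8) -> 0.24
# Volume example: 50 videos at 8s -> 50 * 0.24 = $12.00
```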
Understanding LoRA Options
This model provides three different LoRA slots that affect different stages of the generation process:
| LoRA Type | When Applied | Best For | Max Count |
|---|---|---|---|
| loras | Throughout generation | General style, character consistency | 3 |
| high_noise_loras | Early denoising (high noise) | Overall composition, major style elements | 3 |
| low_noise_loras | Late denoising (low noise) | Fine details, textures, finishing touches | 3 |
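The three slots can be combined in one request. A sketch of a layered setup (the LoRA paths below are hypothetical placeholders; each slot takes at most 3 entries, each with a `path` and a `scale` from 0.0 to 4.0):

```python
# Hypothetical LoRA paths for illustration only.
lora_payload = {
    "loras": [
        {"path": "user/character-lora", "scale": 1.0},  # applied throughout
    ],
    "high_noise_loras": [
        {"path": "user/style-lora", "scale": 0.8},      # early denoising: composition
    ],
    "low_noise_loras": [
        {"path": "user/detail-lora", "scale": 0.6},     # late denoising: fine detail
    ],
}

# Each slot supports at most 3 LoRAs.
for slot, entries in lora_payload.items():
    assert len(entries) <= 3, f"{slot} supports at most 3 LoRAs"
```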
LoRA Strategy Tips
- Single style: Use standard loras for a consistent style throughout.
- Layered control: Combine high_noise_loras for composition with low_noise_loras for details.
- Character + Style: Use one slot for a character LoRA and another for a style LoRA.
Best Use Cases
- Custom Character Animation — Apply character LoRAs to maintain consistent identity.
- Unique Art Styles — Use style LoRAs for distinctive visual aesthetics.
- Brand-Specific Content — Apply trained LoRAs for consistent brand visuals.
- Creative Experimentation — Combine multiple LoRAs for unique hybrid effects.
- Rapid Style Testing — Ultra-fast generation for quick LoRA iteration.
Example Prompts
- “A street musician plays a violin under a dripping canopy as rain pours around him. Wet pavement reflects neon signs. His soaked clothes cling to him as he sways to the melody. Close-up on fingers and expression. Emotional, rain-soaked urban realism.”
- “Character walks through a fantasy forest, magical particles floating around, ethereal lighting”
- “Subject turns toward camera with dramatic lighting, cinematic color grading”
- “Dynamic action pose with motion blur, stylized animation aesthetic”
- “Gentle portrait movement, soft bokeh background, dreamy atmosphere”
Pro Tips for Best Results
- Start without LoRAs to establish a baseline, then add them incrementally.
- Use high_noise_loras for major style/composition changes, low_noise_loras for subtle refinements.
- Don’t overload with LoRAs — sometimes 1-2 well-chosen LoRAs work better than many.
- Match your prompt language to the LoRA’s training — use trigger words if applicable.
- Test LoRA combinations at 5s duration before committing to longer generations.
- Use negative prompts to counteract unwanted LoRA artifacts.
How to Use LoRAs
For detailed guides on using and training custom LoRAs:
Notes
- If using URLs for images, ensure they are publicly accessible. Preview thumbnails confirm successful loading.
- Each LoRA slot (loras, high_noise_loras, low_noise_loras) supports up to 3 LoRAs.
- LoRA effects are cumulative — adding more LoRAs increases their combined influence.
- Enable Safety Checker for content that will be publicly shared.
- Processing remains ultra-fast even with multiple LoRAs applied.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.2/i2v-720p-lora-ultra-fast" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
"duration": 5,
"seed": -1
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
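The two curl calls above can be mirrored from Python. This sketch only assembles the request URL, headers, and body for the submit call, plus the polling URL (it does not perform the HTTP request), assuming the API key lives in the `WAVESPEED_API_KEY` environment variable as in the curl example:

```python
import json
import os

BASE = "https://api.wavespeed.ai/api/v3"

def build_submit_request(payload: dict) -> tuple[str, dict, bytes]:
    """URL, headers, and encoded JSON body for the task-submission POST."""
    url = f"{BASE}/wavespeed-ai/wan-2.2/i2v-720p-lora-ultra-fast"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('WAVESPEED_API_KEY', '')}",
    }
    return url, headers, json.dumps(payload).encode()

def build_result_url(request_id: str) -> str:
    """URL for polling the prediction result by task ID."""
    return f"{BASE}/predictions/{request_id}/result"
```

Pass the returned pieces to any HTTP client; poll the result URL until `data.status` is `completed` or `failed`.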
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | Yes | - | - | The source image for generating the output. |
| prompt | string | Yes | - | - | The text prompt describing the desired motion. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| last_image | string | No | - | - | The last image for generating the output. |
| duration | integer | No | 5 | 5, 8 | The duration of the generated media in seconds. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| high_noise_loras | array | No | - | max 3 items | List of high-noise LoRAs to apply (max 3). |
| low_noise_loras | array | No | - | max 3 items | List of low-noise LoRAs to apply (max 3). |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
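The constraints in the table above can be checked client-side before submitting. A sketch of such a validator (a hypothetical helper, not part of the official SDK):

```python
def validate_request(req: dict) -> list[str]:
    """Check the documented parameter constraints; return a list of problems."""
    errors = []
    if not req.get("image"):
        errors.append("image is required")
    if not req.get("prompt"):
        errors.append("prompt is required")
    if req.get("duration", 5) not in (5, 8):
        errors.append("duration must be 5 or 8")
    if not (-1 <= req.get("seed", -1) <= 2147483647):
        errors.append("seed must be between -1 and 2147483647")
    for slot in ("loras", "high_noise_loras", "low_noise_loras"):
        entries = req.get(slot, [])
        if len(entries) > 3:
            errors.append(f"{slot}: at most 3 LoRAs")
        for entry in entries:
            if not (0.0 <= entry.get("scale", 1.0) <= 4.0):
                errors.append(f"{slot}: scale must be within 0.0 ~ 4.0")
    return errors
```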
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
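Putting the response fields together, a client typically polls until `data.status` settles and then reads `data.outputs`. A sketch of that handling; the sample response below is constructed from the fields documented above, not captured from the live API:

```python
# Sample result response assembled from the documented fields (illustrative only).
sample = {
    "code": 200,
    "message": "success",
    "data": {
        "id": "task-123",
        "model": "wavespeed-ai/wan-2.2/i2v-720p-lora-ultra-fast",
        "outputs": ["https://example.com/video.mp4"],
        "status": "completed",
        "error": "",
        "timings": {"inference": 4200},
    },
}

def extract_outputs(resp: dict) -> list[str]:
    """Return output URLs once the task has completed; raise on failure."""
    data = resp["data"]
    if data["status"] == "failed":
        raise RuntimeError(data["error"] or "task failed")
    return data["outputs"] if data["status"] == "completed" else []
```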