Wan 2.1 T2V 720p LoRA Ultra Fast
Playground
Try it on WavespeedAI! WAN 2.1 Text-to-Video 720P delivers unlimited, ultra-fast videos from text prompts and supports custom LoRAs for personalized styles. Ready-to-use REST inference API with best-in-class performance, no cold starts, and affordable pricing.
Features
Wan 2.1 Text-to-Video 720p LoRA Ultra Fast
Wan 2.1 Text-to-Video 720p LoRA Ultra Fast is a lightning-fast text-to-video generation model with full LoRA support. Generate HD 720p videos from text descriptions in seconds, with custom styles and effects — perfect for rapid iteration and high-volume content creation.
Why It Stands Out
- Ultra-fast processing: Optimized for speed without sacrificing quality.
- LoRA support: Apply custom LoRA models for specific styles and effects.
- HD 720p output: Generate crisp 1280×720 videos with rich detail.
- Prompt Enhancer: Built-in AI-powered prompt optimization for better results.
- Negative prompt support: Exclude unwanted elements for cleaner outputs.
- Fine-tuned control: Adjust guidance scale and flow shift for precise results.
- Reproducibility: Use the seed parameter to recreate exact results.
Parameters
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | Text description of the video you want to generate. |
| negative_prompt | No | Elements to avoid in the output. |
| loras | No | LoRA models to apply (path and scale). |
| size | No | Output resolution: 1280×720 or 720×1280 (default: 1280×720). |
| num_inference_steps | No | Quality/speed trade-off (default: 30). |
| duration | No | Video length: 5 or 10 seconds (default: 5). |
| guidance_scale | No | Prompt adherence strength (default: 5). |
| flow_shift | No | Motion flow control (default: 5). |
| seed | No | Set for reproducibility; -1 for random. |
How to Use
- Write a prompt describing the scene, action, and style you want. Use the Prompt Enhancer for AI-assisted optimization.
- Add a negative prompt (optional) — specify elements to exclude.
- Add LoRAs (optional) — select LoRA models and adjust their scale. Check recommended LoRAs for inspiration.
- Set duration — choose 5 or 10 seconds.
- Adjust parameters (optional) — fine-tune guidance scale and flow shift.
- Click Run and wait for your video to generate.
- Preview and download the result.
How to Use LoRA
LoRA (Low-Rank Adaptation) lets you apply custom styles without retraining the full model.
- Add LoRA: Enter the LoRA path and adjust the scale (0.0–4.0; 0.7–1.0 is a good starting range).
- Recommended LoRAs: Check the interface for suggested LoRAs with preview images (e.g., Fire effects).
- Scale adjustment: Higher scale means stronger style effect.
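As a minimal sketch of how the `loras` field is assembled (the helper name is mine, not part of the API; the request only needs the list of `path`/`scale` objects), with the scale clamped to the documented 0.0–4.0 range:

```python
# Build one entry for the "loras" array of the request body.
# Helper name is illustrative; the API just takes {path, scale} objects.

def make_lora(path: str, scale: float) -> dict:
    """Return one LoRA entry, clamping scale to the documented 0.0-4.0 range."""
    return {"path": path, "scale": max(0.0, min(4.0, scale))}

# Example: a fire-effect LoRA at a typical starting strength.
loras = [make_lora("Remade-AI/Fire", 0.8)]
```

Higher `scale` values push the style harder; starting near 0.8 and nudging up or down per result is usually faster than guessing.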
Best Use Cases
- Rapid Prototyping — Quickly test video concepts with custom styles.
- Visual Effects — Apply effects like fire, water, smoke with specialized LoRAs.
- Social Media Content — Create stylized videos for TikTok, Reels, and Shorts.
- Batch Processing — Generate multiple videos efficiently at scale.
- Creative Exploration — Experiment with different LoRA combinations.
Pricing
| Duration | Price |
|---|---|
| 5 seconds | $0.225 |
| 10 seconds | $0.3375 |
Pro Tips for Best Quality
- Be detailed in your prompt — describe subject, action, environment, lighting, and mood.
- Use LoRAs to apply specific visual effects like fire, explosions, or weather.
- Start with LoRA scale around 0.7–1.0 and adjust based on results.
- Use negative prompts to reduce artifacts like blur, distortion, or unwanted motion.
- Check recommended LoRAs for proven style effects.
- Fix the seed when iterating to compare different LoRA combinations.
Notes
- Processing time is optimized for speed — expect quick turnaround.
- Higher num_inference_steps produces better quality but increases generation time.
- Please ensure your prompts comply with content guidelines.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/t2v-720p-lora-ultra-fast" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "A blazing campfire on a beach at night, sparks drifting into the sky",
    "loras": [
        {
            "path": "Remade-AI/Fire",
            "scale": 1
        }
    ],
    "size": "1280*720",
    "num_inference_steps": 30,
    "duration": 5,
    "guidance_scale": 5,
    "flow_shift": 5,
    "seed": -1
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
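The same submit-and-poll flow can be sketched in Python with only the standard library (endpoint URLs taken from the curl commands above; the response-parsing helper assumes the response fields documented on this page):

```python
import json
import os
import time
import urllib.request

API = "https://api.wavespeed.ai/api/v3"
API_KEY = os.environ.get("WAVESPEED_API_KEY", "")

def parse_result(body: dict) -> tuple:
    """Extract (status, output URLs) from a result response body."""
    data = body["data"]
    return data["status"], data.get("outputs") or []

def _call(url: str, payload=None) -> dict:
    """POST the JSON payload (or GET if payload is None) and decode the response."""
    req = urllib.request.Request(
        url,
        data=None if payload is None else json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate(payload: dict, poll_seconds: float = 2.0) -> list:
    """Submit a task, poll until it finishes, and return the output URLs."""
    submit = _call(f"{API}/wavespeed-ai/wan-2.1/t2v-720p-lora-ultra-fast", payload)
    task_id = submit["data"]["id"]
    while True:
        status, outputs = parse_result(_call(f"{API}/predictions/{task_id}/result"))
        if status == "completed":
            return outputs
        if status == "failed":
            raise RuntimeError("generation failed")
        time.sleep(poll_seconds)
```

Polling every couple of seconds is a reasonable default here since the model is optimized for fast turnaround.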
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | The positive prompt for the generation. |
| negative_prompt | string | No | - | - | The negative prompt for the generation. |
| loras | array | No | - | max 3 items | List of LoRAs to apply (max 3). |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| size | string | No | 1280*720 | 1280*720, 720*1280 | The size of the generated media in pixels (width*height). |
| num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps to perform. |
| duration | integer | No | 5 | 5, 10 | The duration of the generated media in seconds. |
| guidance_scale | number | No | 5 | 0.00 ~ 20.00 | The guidance scale to use for the generation. |
| flow_shift | number | No | 5 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching. |
| seed | integer | No | -1 | -1 ~ 2147483647 | The random seed to use for the generation. -1 means a random seed will be used. |
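The ranges above can be checked client-side before submitting, which surfaces mistakes without spending an API call. A minimal sketch (the validator is mine, not part of the API):

```python
def validate(payload: dict) -> None:
    """Raise ValueError if a field falls outside the documented ranges."""
    if not payload.get("prompt"):
        raise ValueError("prompt is required")
    loras = payload.get("loras", [])
    if len(loras) > 3:
        raise ValueError("at most 3 LoRAs may be applied")
    for lora in loras:
        if not 0.0 <= lora["scale"] <= 4.0:
            raise ValueError("loras[].scale must be in 0.0-4.0")
    if payload.get("size", "1280*720") not in ("1280*720", "720*1280"):
        raise ValueError("size must be 1280*720 or 720*1280")
    if not 1 <= payload.get("num_inference_steps", 30) <= 40:
        raise ValueError("num_inference_steps must be in 1-40")
    if payload.get("duration", 5) not in (5, 10):
        raise ValueError("duration must be 5 or 10")
    if not -1 <= payload.get("seed", -1) <= 2147483647:
        raise ValueError("seed must be in -1..2147483647")
```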
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used to query the result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |