Wan 2.1 T2V 720p LoRA

Wan 2.1 Text-to-Video 720P creates 720p videos from text prompts and supports custom LoRAs for personalized styles. It comes with a ready-to-use REST inference API: strong performance, no cold starts, and affordable pricing.

Features

Wan 2.1 Text-to-Video 720p LoRA

Generate stunning videos from text descriptions with Wan 2.1 Text-to-Video 720p LoRA. This powerful model transforms your written prompts into high-quality 720p videos with smooth motion and cinematic quality — plus full LoRA support for custom styles, characters, and aesthetics.

Why It Stands Out

  • Pure text-to-video generation: No source image needed — describe your vision and watch it come to life.
  • LoRA support: Load custom LoRA models to apply specific styles, maintain character consistency, or match brand aesthetics.
  • Prompt-guided creation: Control scenes, actions, camera movements, and atmosphere through natural language.
  • Negative prompt support: Exclude unwanted elements for cleaner, more controlled outputs.
  • Flexible duration: Generate 5-second or 10-second clips depending on your needs.
  • Reproducibility: Use the seed parameter to recreate exact results or iterate on variations.

Pricing

| Duration   | Price |
|------------|-------|
| 5 seconds  | $0.30 |
| 10 seconds | $0.45 |

Parameters

| Parameter           | Required | Description                                          |
|---------------------|----------|------------------------------------------------------|
| prompt              | Yes      | Text description of the video you want to generate.  |
| negative_prompt     | No       | Elements to avoid in the generated video.             |
| lora_url            | No       | URL to your custom LoRA model file.                   |
| lora_strength       | No       | LoRA influence strength (typically 0.5–1.0).          |
| size                | No       | Output resolution (default: 1280×720).                |
| num_inference_steps | No       | Quality/speed trade-off (default: 30).                |
| duration            | No       | Video length in seconds: 5 or 10 (default: 5).        |
| guidance_scale      | No       | Prompt adherence strength (default: 5).               |
| flow_shift          | No       | Motion intensity control (default: 5).                |
| seed                | No       | Set for reproducibility; -1 for random.               |

Note: in the REST API, a LoRA is passed as an entry in the loras array ({path, scale}) rather than through lora_url and lora_strength; see the API parameters below.
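
To tie these together, a minimal request body might look like the sketch below. The prompt and values are placeholders; the authoritative endpoint, field names, and ranges appear in the API Endpoints section later on.

{
    "prompt": "A red fox running through a snowy forest at dawn, cinematic lighting, slow tracking shot",
    "negative_prompt": "blur, distortion, watermark",
    "size": "1280*720",
    "num_inference_steps": 30,
    "duration": 5,
    "guidance_scale": 5,
    "seed": 42
}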

How to Use

  1. Write a prompt describing the scene, action, and style you want.
  2. Add a LoRA (optional) — paste the URL to your custom LoRA and set the strength.
  3. Set parameters — adjust duration, guidance scale, and other settings as needed.
  4. Add a negative prompt (optional) to exclude unwanted elements.
  5. Click Run and wait for your video to generate.
  6. Preview and download the result.

How to Use LoRA

LoRA (Low-Rank Adaptation) lets you customize the model’s output style without retraining the full model.

  • Use your LoRA: Host your .safetensors file at a public URL and paste it into the lora_url field.
  • Train your LoRA: Learn how to create custom LoRAs in our guide: Train Your Own LoRA Model

Common LoRA use cases: consistent character appearance, specific art styles, brand-aligned aesthetics, anime/cartoon styles.
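
When calling the REST API, a LoRA goes into the loras array instead of a single URL field. A minimal sketch of that fragment, reusing the sample path from the API example below and the 0.7 starting strength suggested in the tips:

{
    "loras": [
        {
            "path": "Remade-AI/Fire",
            "scale": 0.7
        }
    ]
}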

Best Use Cases

  • Social Media Content — Create scroll-stopping video content from scratch.
  • Marketing & Advertising — Produce concept videos and ad creatives without filming.
  • Storytelling & Animation — Generate scenes for short films, music videos, or narrative projects.
  • Game & App Development — Create promotional trailers and UI animations.
  • Personalized Content — Use custom LoRAs for branded or character-consistent videos.

Pro Tips for Best Quality

  • Be specific in your prompt — describe subject, action, environment, lighting, and camera movement.
  • Use negative prompts to reduce common artifacts: blur, distortion, jitter, or watermarks.
  • Start with lower inference steps (20–25) for quick previews, then increase for final renders.
  • When using LoRA, start with strength around 0.7 and adjust based on results.
  • Fix the seed when iterating to isolate the effect of parameter changes; a sketch follows this list.
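
As one way to apply the fixed-seed tip, the sketch below submits the same prompt at three guidance scales so that only guidance_scale varies between runs. It assumes the submit endpoint from the API Endpoints section and jq for JSON parsing; treat it as an illustration, not an official workflow.

# Sweep guidance_scale with a fixed seed so runs differ only in that parameter.
for gs in 3 5 7; do
  curl --silent --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/t2v-720p-lora" \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
    --data-raw "{\"prompt\": \"A red fox running through a snowy forest\", \"seed\": 42, \"guidance_scale\": ${gs}}" \
    | jq -r '.data.id'    # print each task ID so the results can be fetched later
done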

Notes

  • Ensure any LoRA URLs are publicly accessible.
  • Processing time varies based on duration and current queue load.
  • Please ensure your prompts comply with content guidelines.

Authentication

For authentication details, please refer to the Authentication Guide.
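
The curl examples on this page read the API key from an environment variable. A minimal setup sketch (the variable name matches those samples; obtain the key itself as described in the Authentication Guide):

export WAVESPEED_API_KEY="your-api-key"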

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/t2v-720p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "A majestic dragon breathing fire over a medieval castle at dusk",
    "loras": [
        {
            "path": "Remade-AI/Fire",
            "scale": 1
        }
    ],
    "size": "1280*720",
    "num_inference_steps": 30,
    "duration": 5,
    "guidance_scale": 5,
    "flow_shift": 5,
    "seed": -1
}'

# Get the result (requestId is the data.id value returned by the submit call)
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter           | Type    | Required | Default  | Range              | Description |
|---------------------|---------|----------|----------|--------------------|-------------|
| prompt              | string  | Yes      | -        | -                  | The positive prompt for the generation. |
| negative_prompt     | string  | No       | -        | -                  | The negative prompt for the generation. |
| loras               | array   | No       | -        | max 3 items        | List of LoRAs to apply (max 3). |
| loras[].path        | string  | Yes      | -        | -                  | Path to the LoRA model. |
| loras[].scale       | float   | Yes      | -        | 0.0 ~ 4.0          | Scale of the LoRA model. |
| size                | string  | No       | 1280*720 | 1280*720, 720*1280 | The size of the generated media in pixels (width*height). |
| num_inference_steps | integer | No       | 30       | 1 ~ 40             | The number of inference steps to perform. |
| duration            | integer | No       | 5        | 5 ~ 10             | The duration of the generated media in seconds. |
| guidance_scale      | number  | No       | 5        | 0.00 ~ 20.00       | The guidance scale to use for the generation. |
| flow_shift          | number  | No       | 5        | 1.0 ~ 10.0         | The shift value for the timestep schedule for flow matching. |
| seed                | integer | No       | -1       | -1 ~ 2147483647    | The random seed to use for the generation; -1 means a random seed will be used. |

Response Parameters

| Parameter              | Type    | Description |
|------------------------|---------|-------------|
| code                   | integer | HTTP status code (e.g., 200 for success) |
| message                | string  | Status message (e.g., "success") |
| data.id                | string  | Unique identifier for the prediction (the task ID) |
| data.model             | string  | Model ID used for the prediction |
| data.outputs           | array   | Array of URLs to the generated content (empty when status is not completed) |
| data.urls              | object  | Object containing related API endpoints |
| data.urls.get          | string  | URL to retrieve the prediction result |
| data.has_nsfw_contents | array   | Array of boolean values indicating NSFW detection for each output |
| data.status            | string  | Status of the task: created, processing, completed, or failed |
| data.created_at        | string  | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error             | string  | Error message (empty if no error occurred) |
| data.timings           | object  | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
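
For orientation, an illustrative submission response assembled from the fields above (all values are made up):

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "0123456789abcdef",
        "model": "wavespeed-ai/wan-2.1/t2v-720p-lora",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/0123456789abcdef/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": {}
    }
}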

Result Request Parameters

| Parameter | Type   | Required | Default | Description |
|-----------|--------|----------|---------|-------------|
| id        | string | Yes      | -       | Task ID     |

Result Response Parameters

| Parameter              | Type    | Description |
|------------------------|---------|-------------|
| code                   | integer | HTTP status code (e.g., 200 for success) |
| message                | string  | Status message (e.g., "success") |
| data                   | object  | The prediction data object containing all details |
| data.id                | string  | Unique identifier for the prediction (the task ID) |
| data.model             | string  | Model ID used for the prediction |
| data.outputs           | array   | Array of URLs to the generated content (empty when status is not completed) |
| data.urls              | object  | Object containing related API endpoints |
| data.urls.get          | string  | URL to retrieve the prediction result |
| data.status            | string  | Status of the task: created, processing, completed, or failed |
| data.created_at        | string  | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error             | string  | Error message (empty if no error occurred) |
| data.timings           | object  | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |