
Wan T2V 480p LoRA

Turbo-charged inference for Wan 2.1 14B: 480p text-to-video generation from a cutting-edge suite of video foundation models, with LoRA effects applied.

Features

wan-2.1/t2v-480p-lora is an open-source AI video generation model developed by Alibaba Cloud for text-to-video tasks. It incorporates Low-Rank Adaptation (LoRA), enabling efficient fine-tuning of the pre-trained model to generate videos with specified effects or styles. The 14-billion-parameter professional version excels at generating complex motion and simulating physical dynamics.

Built upon a causal 3D Variational Autoencoder (VAE) and a Video Diffusion Transformer architecture, wan-2.1/t2v-480p-lora efficiently models spatiotemporal dependencies. In the authoritative VBench evaluation, the 14B version achieved a top-ranking score of 86.22%, surpassing models such as Sora, Luma, and Pika. The model is available on WaveSpeedAI, providing convenient access for developers.

Key Features

  • LoRA-Enhanced Adaptability: Incorporates Low-Rank Adaptation to enable efficient fine-tuning, allowing the model to adapt to specific tasks or styles with fewer trainable parameters and reduced GPU memory usage (see the request sketch after this list).
  • High-Quality 480p Video Generation: Capable of producing visually appealing 480p videos from text prompts, maintaining clarity and motion consistency.
  • Efficient Performance on Consumer-Grade GPUs: Optimized to run on widely available hardware, requiring only 8.19 GB of VRAM, and capable of generating a 5-second 480p video on an RTX 4090 in approximately 4 minutes.
  • Multilingual Text Rendering: Supports the generation of videos containing both Chinese and English text, expanding its applicability across different languages and regions.
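
As a sketch of how a LoRA effect plugs into the API (full parameter details are documented under API Endpoints below), the request below attaches a single LoRA through the loras field; only prompt is otherwise required. The endpoint, header, and loras schema come from this page; the prompt text is an illustrative placeholder, and Remade-AI/Zoom-Call is the example LoRA path reused from the full request below.

# Minimal sketch: apply one LoRA effect at scale 1.0; all other
# parameters fall back to their documented defaults.
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/t2v-480p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "a friendly video call, warm lighting",
    "loras": [
        {
            "path": "Remade-AI/Zoom-Call",
            "scale": 1.0
        }
    ]
}'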

ComfyUI

wan-2.1/t2v-480p-lora is also available on ComfyUI, providing local inference capabilities through a node-based workflow, ensuring flexible and efficient video generation on your system.

Limitations

  • Creative Content Focus: Designed primarily for creative video synthesis; not intended for generating factually accurate or reliable content.
  • Potential Biases: As with any data-driven model, outputs may reflect biases present in the training data.
  • Input Sensitivity: The quality and consistency of generated videos depend significantly on the quality of the input text; subtle variations may lead to output variability.
  • Resolution Constraint: Currently optimized for 480p video generation; higher resolutions may not be supported or may result in reduced stability.

Out-of-Scope Use

The model and its derivatives may not be used in any way that violates applicable national, federal, state, local, or international law or regulation, including but not limited to:

  • Exploiting, harming, or attempting to exploit or harm minors, including solicitation, creation, acquisition, or dissemination of child exploitative content.
  • Generating or disseminating verifiably false information with the intent to harm others.
  • Creating or distributing personally identifiable information that could be used to harm an individual.
  • Harassing, abusing, threatening, stalking, or bullying individuals or groups.
  • Producing non-consensual nudity or illegal pornographic content.
  • Making fully automated decisions that adversely affect an individual’s legal rights or create binding obligations.
  • Facilitating large-scale disinformation campaigns.

Accelerated Inference

Our accelerated inference approach leverages advanced optimization technology from WavespeedAI. This innovative fusion technique significantly reduces computational overhead and latency, enabling rapid video generation without compromising quality. The entire system is designed to efficiently handle large-scale inference tasks while ensuring that real-time applications achieve an optimal balance between speed and accuracy. For further details, please refer to the blog post.

Authentication

For authentication details, please refer to the Authentication Guide.
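
As a quick sketch, every request carries the API key as a bearer token in the Authorization header, as the examples below show. Exporting the key once (assuming it is issued through your WaveSpeedAI account) lets the curl snippets reference it as ${WAVESPEED_API_KEY}:

# Export your API key once per shell session; the examples below read it
# from this environment variable.
export WAVESPEED_API_KEY="your-api-key-here"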

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/wan-2.1/t2v-480p-lora" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "prompt": "B0x13ng Boxing video sexy female boxers, athletic women, intense fight, boxing gloves, sweat, slow motion, fierce expressions, toned bodies, action scene, dramatic lighting, cinematic",
    "negative_prompt": "",
    "loras": [
        {
            "path": "Remade-AI/Zoom-Call",
            "scale": 1
        }
    ],
    "size": "832*480",
    "num_inference_steps": 30,
    "duration": 5,
    "guidance_scale": 5,
    "flow_shift": 3,
    "seed": -1,
    "enable_safety_checker": true
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | B0x13ng Boxing video sexy female boxers, athletic women, intense fight, boxing gloves, sweat, slow motion, fierce expressions, toned bodies, action scene, dramatic lighting, cinematic | - | The prompt for generating the output. |
| negative_prompt | string | No | - | - | The negative prompt for generating the output. |
| loras | array | No | [] | max 3 items | The LoRA weights for generating the output. |
| loras[].path | string | Yes | - | - | Path to the LoRA model. |
| loras[].scale | float | Yes | - | 0.0 ~ 4.0 | Scale of the LoRA model. |
| size | string | No | 832*480 | 832*480, 480*832 | The size of the output. |
| num_inference_steps | integer | No | 30 | 1 ~ 40 | The number of inference steps. |
| duration | integer | No | 5 | 5 ~ 10 | Duration of the generated video, in seconds. |
| guidance_scale | number | No | 5 | 1.01 ~ 10.00 | The guidance scale for generation. |
| flow_shift | number | No | 3 | 1.0 ~ 10.0 | The shift value for the timestep schedule for flow matching. |
| seed | integer | No | -1 | -1 ~ 9999999999 | The seed for random number generation. |
| enable_safety_checker | boolean | No | true | - | Whether to enable the safety checker. |
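
Only prompt is required; every other field falls back to the defaults above. As an illustrative sketch of overriding several of them at once (portrait output, a longer clip, more inference steps, and a fixed seed for reproducibility; the prompt is a placeholder), a request body might look like:

{
    "prompt": "a paper boat drifting down a rain-soaked street, macro shot",
    "size": "480*832",
    "num_inference_steps": 40,
    "duration": 10,
    "seed": 42
}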

Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
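
For orientation, a freshly submitted task might return a response shaped like the following (illustrative placeholder values only; outputs and has_nsfw_contents stay empty until the task completes):

{
    "code": 200,
    "message": "success",
    "data": {
        "id": "<task-id>",
        "model": "wavespeed-ai/wan-2.1/t2v-480p-lora",
        "outputs": [],
        "urls": {
            "get": "https://api.wavespeed.ai/api/v3/predictions/<task-id>/result"
        },
        "has_nsfw_contents": [],
        "status": "created",
        "created_at": "2023-04-01T12:34:56.789Z",
        "error": "",
        "timings": { "inference": 0 }
    }
}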

Result Query Parameters

Result Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |

Result Response Parameters

| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., "success") |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., "2023-04-01T12:34:56.789Z") |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
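
Once data.status reports completed, data.outputs contains the video URLs. A minimal sketch for downloading the first output (assuming jq, an exported WAVESPEED_API_KEY, a task id in ${requestId}, and an .mp4 container, which is an assumption rather than a documented fact):

# Fetch the completed result and download the first generated video.
url=$(curl -s --location "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
  --header "Authorization: Bearer ${WAVESPEED_API_KEY}" | jq -r '.data.outputs[0]')
curl -sL "$url" -o output.mp4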