wavespeed-ai/wan-2.1/i2v-720p

Wan2.1 I2V-14B model is capable of generating 720P high-definition videos from images

image-to-video


Each request costs $0.3 per video; for $1 you can run this model approximately 3 times.


README

Wan2.1-i2v-720p is an open-source AI video generation model developed by Alibaba Cloud for image-to-video tasks. The 14-billion-parameter professional version excels at generating complex motion and simulating physical dynamics. Built on a causal 3D Variational Autoencoder (VAE) and a Video Diffusion Transformer architecture, Wan2.1-i2v-720p models spatiotemporal dependencies efficiently. In the VBench evaluation, the 14B version achieved a leading score of 86.22%, surpassing models such as Sora, Luma, and Pika to take the top position. The model is available on WaveSpeed AI, providing convenient access for developers.
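As a rough illustration of how a job for this model might be assembled programmatically, the sketch below builds a request against the WaveSpeed AI API. The endpoint path, parameter names (`image`, `prompt`, `enable_safety_checker`), and authentication scheme are assumptions inferred from the model identifier on this page, not confirmed API documentation; consult the official API reference before use.

```python
import json

# Assumed values -- verify against the official WaveSpeed AI API docs.
API_BASE = "https://api.wavespeed.ai"
MODEL_ID = "wavespeed-ai/wan-2.1/i2v-720p"


def build_request(image_url: str, prompt: str, api_key: str):
    """Assemble the (assumed) HTTP request for one image-to-video job."""
    url = f"{API_BASE}/{MODEL_ID}"  # hypothetical endpoint layout
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "image": image_url,              # source image to animate
        "prompt": prompt,                # optional motion/content description
        "enable_safety_checker": True,   # mirrors the page's safety-checker toggle
    }
    return url, headers, json.dumps(payload)


url, headers, body = build_request(
    "https://example.com/input.webp", "a slow camera pan", "YOUR_API_KEY"
)
# The job itself would then be submitted with an HTTP client, e.g.
# requests.post(url, headers=headers, data=body)
```

Video generation is typically asynchronous, so a real client would submit the job and then poll a result endpoint until the rendered video is ready.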

Key Features

  • High-Resolution Video Output: Specifically engineered to transform images into sharp 720p videos, delivering superior visual quality and enhanced motion diversity.
  • State-of-the-Art Performance: Consistently outperforms existing open-source and commercial solutions across multiple benchmarks, setting a new standard in video generation.
  • Consumer-Grade GPU Compatibility: Optimized for efficiency, the model runs effectively on widely available hardware, ensuring broad accessibility for creative developers.
  • Accelerated Inference: Our innovative acceleration technology drastically reduces latency and computational overhead, enabling very fast video generation without compromising quality.

ComfyUI

wan-2.1/i2v-720p is also available on ComfyUI, enabling local inference through a node-based workflow for flexible and efficient video generation on your own system.

Limitations

  • Creative Focus: wan-2.1/i2v-720p is designed for creative video synthesis from images and is not intended to generate factually reliable content.
  • Inherent Biases: As with any data-driven model, it may reflect biases present in the training dataset.
  • Input Sensitivity: The quality and consistency of the generated videos depend significantly on the quality of the input image; subtle variations in the source material may result in output variability.
  • Task Scope: This model is exclusively built for image-to-video conversion at high resolution and does not support additional video generation tasks such as text-to-video or video editing.

Out-of-Scope Use

The model and its derivatives may not be used in any way that violates applicable national, federal, state, local, or international law or regulation, including but not limited to:

  • Exploiting, harming, or attempting to exploit or harm minors, including solicitation, creation, acquisition, or dissemination of child exploitative content.
  • Generating or disseminating verifiably false information with the intent to harm others.
  • Creating or distributing personal identifiable information that could be used to harm an individual.
  • Harassing, abusing, threatening, stalking, or bullying individuals or groups.
  • Producing non-consensual nudity or illegal pornographic content.
  • Making fully automated decisions that adversely affect an individual’s legal rights or create binding obligations.
  • Facilitating large-scale disinformation campaigns.

Accelerated Inference

Our accelerated inference approach leverages advanced optimization technology from WaveSpeed AI. This fusion technique significantly reduces computational overhead and latency, enabling rapid video generation without compromising quality. The system is designed to handle large-scale inference tasks efficiently while letting real-time applications strike an optimal balance between speed and accuracy. For further details, please refer to the blog post.