
image-to-video

wavespeed-ai/wan-flf2v

Wan-2.1 flf2v generates dynamic videos by intelligently bridging a given first frame to a desired end frame through smooth, coherent motion sequences.


Your request will cost $0.3 per run.

For $10 you can run this model approximately 33 times.
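The per-run pricing above implies a simple budget calculation; a minimal sketch using the listed numbers:

```python
# Cost estimate for wan-flf2v at the listed per-run price.
COST_PER_RUN = 0.30   # USD per generation, from the listing above
BUDGET = 10.00        # USD

runs = int(BUDGET // COST_PER_RUN)
print(f"${BUDGET:.2f} covers about {runs} runs")  # about 33 runs
```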


README

Wan FLF2V

Wan FLF2V (First-Last Frame Video Generation) is an open-source video generation model developed by Alibaba's Tongyi Wanxiang team and released under the Apache 2.0 license. Users provide two images as the starting and ending frames, and the model automatically generates the intermediate transition frames, producing a logically coherent, naturally flowing 720p high-definition video.
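A request to the model needs a first-frame image, a last-frame image, and a text prompt. As an illustrative sketch only (the endpoint URL, field names, and safety-checker flag below are assumptions, not the documented WaveSpeed API; check the provider's docs for the real schema), assembling the request body might look like:

```python
import json

# Hypothetical request builder for wavespeed-ai/wan-flf2v.
# NOTE: the endpoint URL and all field names are illustrative assumptions,
# not the provider's documented API.
API_URL = "https://api.wavespeed.ai/wavespeed-ai/wan-flf2v"  # assumed

def build_request(first_frame_url: str, last_frame_url: str, prompt: str,
                  enable_safety_checker: bool = True) -> dict:
    """Assemble a JSON body for a first/last-frame video generation call."""
    return {
        "image": first_frame_url,        # starting frame (assumed field name)
        "last_image": last_frame_url,    # ending frame (assumed field name)
        "prompt": prompt,
        "enable_safety_checker": enable_safety_checker,
    }

body = build_request(
    "https://example.com/first.png",
    "https://example.com/last.png",
    "a flower blooming in timelapse",
)
payload = json.dumps(body)  # ready to POST with an authenticated HTTP client
```

Sending it would be an ordinary authenticated POST (e.g. with `requests`); such APIs typically return a job ID to poll until the finished 720p video is ready.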

Core Technical Highlights

  • Precise First-Last Frame Control: first/last-frame matching reaches 98%; the starting and ending scenes define the video's boundaries, and the model intelligently fills in the intermediate motion to achieve scene transitions and object-morphing effects.
  • Stable and Smooth Video Generation: using CLIP semantic features and a cross-attention mechanism, video jitter is reduced by 37% compared with similar models, ensuring natural, smooth transitions.
  • Multi-functional Creative Capabilities: supports dynamic embedding of Chinese and English subtitles and generation in anime, realistic, fantasy, and other styles, adapting to different creative needs.
  • 720p HD Output: directly generates 1280×720 videos with no post-processing required, suitable for social media and commercial applications.
  • Open-source Ecosystem Support: model weights, code, and training framework are fully open-sourced, with deployment supported on mainstream AI platforms.

Technical Principles and Architecture

  • DiT Architecture: built on diffusion models with a Diffusion Transformer backbone and a full attention mechanism that models spatiotemporal dependencies, ensuring video coherence.
  • 3D Causal Variational Autoencoder: Wan-VAE compresses HD frames to 1/128 of their original size while retaining subtle motion details, significantly reducing memory requirements.
  • Three-stage Training Strategy: pre-training begins at 480p and is progressively scaled to 720p, balancing generation quality and computational efficiency through phased optimization.
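The full-attention design described above treats every (frame, height, width) patch as one token and attends across all of them jointly, within and across frames. A toy single-head NumPy sketch of that spatiotemporal attention (illustrative dimensions only, not the actual Wan architecture):

```python
import numpy as np

def full_spacetime_attention(x: np.ndarray) -> np.ndarray:
    """Single-head full attention over flattened space-time tokens.

    x: (T, H, W, C) video latents. Every patch attends to every other,
    across frames as well as within them, which is what lets a DiT model
    long-range temporal dependencies.
    """
    T, H, W, C = x.shape
    tokens = x.reshape(T * H * W, C)            # flatten space and time together
    rng = np.random.default_rng(0)              # toy random projections
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(C)               # (THW, THW) joint attention
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return (weights @ v).reshape(T, H, W, C)

out = full_spacetime_attention(np.zeros((4, 2, 2, 8)))  # 4 frames of 2x2 patches
```

The quadratic (THW)×(THW) score matrix is why such models lean on aggressive latent compression (e.g. the Wan-VAE's 1/128 reduction) to keep token counts manageable.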