Wan 2.6: 15s AI videos with cinematic coherence and perfect lip-sync.

Alibaba's next-gen video model — smarter prompts, enhanced audio sync, and unmatched character consistency.

Try it now: choose Text to Video, Image to Video, or Reference to Video, enter a prompt, and click Create.

Key Features

Multi-Shot Narrative Generation

Most open-source video models produce a single continuous clip with little structure or shot-to-shot consistency. Wan 2.6 can generate multi-shot narratives directly from a simple prompt, cutting between shots while keeping the story coherent.


Prompt

The scene unfolds in first-person POV inside a bright, refined modern kitchen. Natural daylight pours across walnut flooring and matte gray cabinetry, giving the space a calm and polished atmosphere. The viewer takes three to four slow, steady steps forward while holding an empty celadon-green porcelain bowl with both black-gloved hands. Ahead stands a built-in double-door refrigerator. The left door features a softly glowing dispenser slot, with faint vapor curling from its edges. When the viewer reaches the refrigerator and lifts the bowl beneath the outlet, a gentle mechanical hum begins. From the small dispenser opening, the plating sequence unfolds with precise, almost ritualistic elegance. First, a smooth stream of deep orange lobster bisque flows into the bowl, circling and rippling as it settles. Moments later, tender pieces of lobster claw and tail meat descend into the center, their pink-red surfaces glistening in the hot broth. A thin ribbon of cream follows, tracing a delicate spiral across the bisque. Finally, micro herbs and tiny gold flakes drift down, completing the dish with a soft visual flourish. The celadon glaze of the bowl reflects the bright natural light, while the warm tones of the bisque shimmer gently on the surface. Subtle sounds fill the space: soft footsteps on the wooden floor, the quiet friction of gloves against the bowl, the rising hum of the refrigerator, the thick pour of bisque hitting the ceramic, the gentle plop of lobster pieces, the light drizzle of cream, and the faint sprinkle of herbs and flakes. Altogether, the moment blends mechanical precision with the warmth and intimacy of fine dining, presented through the calm rhythm of first-person ASMR realism.

Final outcome
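If you script your generations instead of using the web form, a multi-shot prompt can be assembled programmatically before submission. The sketch below is a rough illustration only: the endpoint URL, the model, prompt, and duration fields, and the WAN_API_KEY environment variable are assumptions, not a documented Wan 2.6 API.

```python
import os
import requests

# Hypothetical endpoint and field names -- substitute the actual Wan 2.6 API you use.
API_URL = "https://example.com/v1/video/generations"
API_KEY = os.environ["WAN_API_KEY"]  # assumed environment variable

# Assemble a multi-shot narrative prompt from individual shot descriptions.
shots = [
    "Shot 1: first-person POV walking through a bright modern kitchen with walnut flooring.",
    "Shot 2: a celadon-green bowl held beneath a softly glowing refrigerator dispenser.",
    "Shot 3: a plating sequence of lobster bisque, cream spiral, micro herbs, and gold flakes.",
]
prompt = " ".join(shots)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "wan-2.6", "prompt": prompt, "duration": 15},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```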

Reference-Based Video Generation

Wan 2.6 supports reference-based video generation, allowing users to guide the model with an input video so that the referenced subjects appear consistently in the output.


Prompt

character1 is eating dinner with character2 in a restaurant

Final outcome
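Reference-based generation could be scripted along similar lines, assuming a multipart upload in which each tag used in the prompt (character1, character2) is bound to a reference asset. The endpoint, field names, file paths, and the choice of clips as the asset type are all placeholders, not the actual API.

```python
import os
import requests

# Hypothetical endpoint and field names -- reference-upload details vary by provider.
API_URL = "https://example.com/v1/video/reference-generations"
API_KEY = os.environ["WAN_API_KEY"]  # assumed environment variable

# The prompt refers to tagged characters; each tag is bound to a reference clip below.
prompt = "character1 is eating dinner with character2 in a restaurant"

with open("character1.mp4", "rb") as ref1, open("character2.mp4", "rb") as ref2:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        data={"model": "wan-2.6", "prompt": prompt},
        files={"character1": ref1, "character2": ref2},  # assumed mapping of tags to assets
        timeout=120,
    )
response.raise_for_status()
print(response.json())
```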

15-Second Long Video Generation

Many open-source models are limited to very short clips, typically only 2–5 seconds, which restricts narrative depth. Wan 2.6 raises that ceiling by supporting videos up to 15 seconds long.


Prompt

Generate an approximately 15-second cohesive narrative video. Story: A medieval knight awakens on a storm-swept meadow after a fierce battle. First 5 seconds: A slow circling shot reveals his mud-covered armor, scattered debris, and lingering flashes of lightning in the dark sky. Middle 5 seconds: The knight rises, grasping a sword embedded in the ground. The camera pulls upward from a low angle, emphasizing the determination in his eyes. Final 5 seconds: He begins running toward a distant ruined castle wall as the camera follows in a handheld-style tracking motion, tall grass brushing past the lens to create dynamic depth of field. Maintain scene continuity, natural body motion, and cinematic epic atmosphere throughout.

Final outcome
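Since the example above budgets its 15 seconds into three 5-second beats, a small helper can build such timed prompts and check the arithmetic. The sketch below only constructs prompt text; the Beat class and build_timed_prompt function are our own names, not part of any Wan 2.6 SDK.

```python
from dataclasses import dataclass


@dataclass
class Beat:
    seconds: int
    description: str


def build_timed_prompt(beats: list[Beat], total: int = 15) -> str:
    """Join per-beat descriptions into one prompt and verify the time budget."""
    if sum(b.seconds for b in beats) != total:
        raise ValueError("Beat durations must sum to the requested video length.")
    parts, start = [], 0
    for b in beats:
        parts.append(f"Seconds {start}-{start + b.seconds}: {b.description}")
        start += b.seconds
    return f"Generate an approximately {total}-second cohesive narrative video. " + " ".join(parts)


prompt = build_timed_prompt([
    Beat(5, "A slow circling shot reveals the knight's mud-covered armor under a stormy sky."),
    Beat(5, "The knight rises, grasping a sword embedded in the ground; a low-angle camera pulls upward."),
    Beat(5, "He runs toward a ruined castle wall in a handheld-style tracking shot through tall grass."),
])
print(prompt)
```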


Q & A

What input formats do you support?
Common video formats (e.g., MP4/MOV) are supported. For best results, use a clear, front-facing subject with stable lighting.
Does it preserve identity and background?
It prioritizes identity consistency and scene coherence, while applying the requested facial motion and lip movement.
Can I control emotion and speaking style?
Yes. You can guide intensity (calm/neutral/energetic), tempo, and expression strength via prompts and/or reference audio.
Can it handle multiple people/faces in one frame?
It works best when the speaking subject is clear and consistently visible. Heads-up: crowded scenes or frequent occlusions can cause drift, so consider cropping the video or focusing on the target face.
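One way to act on the cropping advice above is to trim the input video to the largest detected face before uploading it. The sketch below uses OpenCV's bundled Haar cascade; the file names are placeholders and the padding factor is an arbitrary choice.

```python
import cv2

# Detect the largest face in the first frame and crop every frame to that region,
# so the speaking subject stays clear and centered before upload.
cap = cv2.VideoCapture("input.mp4")  # placeholder input path
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read the input video.")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.1, 5)
if len(faces) == 0:
    raise RuntimeError("No face detected in the first frame.")

# Pick the largest detection and pad it for headroom (padding factor is arbitrary).
x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
pad = int(0.3 * max(w, h))
x0, y0 = max(x - pad, 0), max(y - pad, 0)
x1, y1 = min(x + w + pad, frame.shape[1]), min(y + h + pad, frame.shape[0])

fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
writer = cv2.VideoWriter(
    "cropped.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (x1 - x0, y1 - y0)
)
while ok:
    writer.write(frame[y0:y1, x0:x1])
    ok, frame = cap.read()
cap.release()
writer.release()
```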