Wan 2.1 Image-To-Video 720P Ultra-Fast

wavespeed-ai/wan-2.1/i2v-720p-ultra-fast

WAN 2.1 Image-to-Video (i2v) 720P Ultra-Fast converts images into 720P videos with ultra-fast inference and supports unlimited AI video generation for high-throughput workflows. It comes with a ready-to-use REST inference API, strong performance, no cold starts, and affordable pricing.


Your request will cost $0.225 per run.

With $10 you can run this model approximately 44 times.


README

Wan 2.1 I2V 720p Ultra Fast — wavespeed-ai/wan-2.1/i2v-720p-ultra-fast

Wan 2.1 I2V 720p Ultra Fast is a fast image-to-video model that animates a single reference image into a short clip guided by your prompt. It’s optimized for quick turnaround at 720p while keeping the input image as the visual anchor—ideal for rapid storyboarding, motion exploration, and production-friendly iteration.

Key capabilities

  • Image-to-video (I2V) generation at 720p
  • Strong image anchoring for subject consistency
  • Prompt-driven motion and camera direction (follow, dolly, orbit, pan/tilt)
  • Tunable motion behavior with guidance and flow controls
  • Great for quick drafts, fast iterations, and high-throughput generation

Use cases

  • Turn key art into short motion clips for ads, social posts, and previews
  • Animate characters and products while keeping the original look
  • Test different camera moves (push-in, rise, follow, orbit) from the same image
  • Generate multiple motion variants quickly for editing and selection
  • Previsualization for longer or higher-cost renders

Pricing

| Resolution | Duration | Price per run | Effective price per second |
|------------|----------|---------------|----------------------------|
| 720p       | 5s       | $0.225        | $0.045/s                   |
| 720p       | 10s      | $0.338        | $0.034/s                   |
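
The effective per-second figures follow directly from the per-run prices; a minimal sketch of the arithmetic, using only the numbers from the table above:

```python
# Quick sanity check of the pricing table: derive the effective per-second
# price from the per-run price, and estimate how many runs a budget covers.
price_per_run = {5: 0.225, 10: 0.338}  # clip duration (s) -> USD per run

for duration, price in price_per_run.items():
    print(f"{duration}s clip: ${price:.3f}/run ~ ${price / duration:.3f} per second of video")

budget = 10.00  # USD
print(f"${budget:.2f} covers about {int(budget / price_per_run[5])} five-second runs")
```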

Inputs

  • image (required): reference image that anchors subject and composition
  • prompt (required): describe action + camera + environment motion
  • negative_prompt (optional): suppress artifacts like blur, jitter, distortions
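
Taken together, a minimal input payload is just these three fields. The sketch below mirrors the list above; the image URL is a placeholder, not a real asset.

```python
# Minimal input payload mirroring the Inputs list above.
# The image URL is a placeholder, not a real asset.
payload = {
    "image": "https://example.com/key-art.png",     # required reference image
    "prompt": "The product rotates slowly, slow dolly-in, soft studio lighting",
    "negative_prompt": "blur, jitter, distortion",  # optional artifact suppression
}
```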

Parameters

  • size: output resolution preset (e.g., 1280×720)
  • duration: video length (commonly 5s or 10s)
  • num_inference_steps: more steps can improve detail and motion stability
  • guidance_scale: prompt adherence strength (higher = follows prompt more)
  • flow_shift: motion behavior tuning (useful for more/less dynamic motion)
  • seed: set for reproducible results (-1 for random)
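
As a rough sketch of how these inputs and parameters could be sent to the REST inference API from Python: the endpoint path, auth header, and exact field formats below are assumptions made for illustration only, so check the WaveSpeed.ai API documentation for the actual contract.

```python
import os

import requests

# Hedged sketch of a REST submission combining the inputs and parameters above.
# The endpoint layout, auth header, and response shape are assumptions for
# illustration; consult the WaveSpeed.ai API docs for the real contract.
API_URL = (
    "https://api.wavespeed.ai/api/v3/"
    "wavespeed-ai/wan-2.1/i2v-720p-ultra-fast"  # assumed endpoint layout
)
headers = {"Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}"}

body = {
    "image": "https://example.com/key-art.png",   # placeholder reference image
    "prompt": "LEGO minifigure snowboarding, camera follows behind, smooth motion",
    "negative_prompt": "blur, jitter, distortion",
    "size": "1280*720",         # 720p preset; exact string format is an assumption
    "duration": 5,              # seconds (5 or 10, per the pricing table)
    "num_inference_steps": 30,  # illustrative value
    "guidance_scale": 5.0,      # illustrative value
    "flow_shift": 3.0,          # illustrative value
    "seed": -1,                 # -1 = random; fix an integer for reproducibility
}

response = requests.post(API_URL, headers=headers, json=body, timeout=60)
response.raise_for_status()
print(response.json())  # expected to reference a task / result URL to poll
```

Video generation is typically asynchronous, so the response would normally point to a task to poll for the finished clip rather than returning the video inline; the exact polling route depends on the API documentation.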

Prompting guide (I2V)

Write prompts like a shot list:

  • Subject + action: what the subject does over time
  • Camera movement: “camera rises”, “follows behind”, “slow dolly-in”, “orbit”
  • Motion constraints: “smooth motion”, “stable framing”, “no jitter”
  • Environment: weather, particles, crowd, lighting shifts for realism
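
One way to keep prompts in this shot-list shape is a small helper that joins the pieces in a fixed order; the function below is purely illustrative and not part of the API.

```python
# Illustrative prompt builder (not part of the API): assembles an I2V prompt
# in the shot-list order suggested above.
def build_i2v_prompt(subject_action: str, camera: str,
                     constraints: str = "smooth motion, stable framing, no jitter",
                     environment: str = "") -> str:
    parts = [subject_action, camera, constraints, environment]
    return ". ".join(p.strip() for p in parts if p.strip()) + "."

print(build_i2v_prompt(
    subject_action="LEGO minifigure snowboarding on a mountain ridge, carving smooth S-turns",
    camera="the camera starts low in front, then rises and follows the rider across the slope",
    environment="snow spray from the board, wind-blown powder, cinematic winter lighting",
))
```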

Example prompt

LEGO minifigure snowboarding on a mountain ridge, carving smooth S-turns while staying sideways on the board. The camera starts low in front, then rises and follows as the rider moves left and right across the slope. Snow sprays from the board, wind-blown powder in the air, dynamic action, smooth motion, stable framing, cinematic winter lighting.