HunyuanVideo-1.5 Image-To-Video | Lightweight Image To Video Generation | WaveSpeedAI

wavespeed-ai/hunyuan-video-1.5/image-to-video

HunyuanVideo-1.5 (i2v) is a lightweight 8.3B-parameter image-to-video model that generates videos from still images with top-tier visual quality and motion coherence. It is optimized for fast inference on consumer-grade GPUs and served through a ready-to-use REST inference API with no cold starts and affordable pricing.



README

HunyuanVideo-1.5 Image-to-Video

HunyuanVideo-1.5 is Tencent’s lightweight, state-of-the-art video generation model. The image-to-video variant on WaveSpeedAI lets you animate a single still image into a smooth, cinematic clip guided by your text prompt, while keeping the original visual style and character identity stable.

Key features

  • High-quality image-to-video generation with strong motion coherence
  • Lightweight 8.3B-parameter design for fast inference
  • Multiple resolutions: 480p, 720p
  • Video durations: 5 s, 8 s, and 10 s

Limits and performance

  • Input: single image (any reasonable resolution; automatically resized/preprocessed)
  • Output: short video clip at selected resolution, duration, and aspect ratio
  • Recommended duration: up to 10 seconds per clip
  • Best performance with clear, well-lit images and a prompt that specifies motion, camera behavior, and mood

Pricing

  Resolution   Price per second
  480p         $0.02 / s
  720p         $0.04 / s
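With per-second pricing, the cost of a run is just rate × duration. A minimal sketch (rates taken from the table above; the function name is mine, not part of any API):

```python
# Per-second rates from the pricing table above.
RATES_PER_SECOND = {"480p": 0.02, "720p": 0.04}

def estimate_cost(resolution: str, duration_s: int) -> float:
    """Estimate the price of one run: per-second rate times duration."""
    if resolution not in RATES_PER_SECOND:
        raise ValueError(f"unsupported resolution: {resolution}")
    return round(RATES_PER_SECOND[resolution] * duration_s, 2)

print(estimate_cost("480p", 5))   # 0.1
print(estimate_cost("720p", 10))  # 0.4
```

A 5-second 480p clip at $0.02/s works out to $0.10 per run, i.e. roughly ten runs per dollar.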

How to use

  1. Upload your input image (this becomes the starting frame of the video).
  2. Enter a prompt describing the motion, camera movement, environment changes, and overall mood.
  3. Choose the resolution (480p or 720p).
  4. Select the aspect ratio (16:9 for landscape or 9:16 for vertical/mobile).
  5. Choose the duration: 5, 8, or 10 seconds.
  6. Optionally set the seed for reproducibility.
  7. Run the job and wait for processing.
  8. Preview the generated video and download it from the WaveSpeedAI dashboard.
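The steps above map onto a single API request. The sketch below only assembles the request body; the endpoint path and field names are assumptions derived from the model slug, not confirmed schema, so check the WaveSpeedAI API reference for the exact parameters:

```python
import json

# Hypothetical endpoint derived from the model slug; verify against the
# official WaveSpeedAI API docs before use.
ENDPOINT = "https://api.wavespeed.ai/wavespeed-ai/hunyuan-video-1.5/image-to-video"

def build_payload(image_url, prompt, resolution="480p", aspect_ratio="16:9",
                  duration=5, seed=None):
    """Assemble the request body for steps 1-6 above.
    Field names are illustrative assumptions, not confirmed schema."""
    assert resolution in ("480p", "720p")
    assert aspect_ratio in ("16:9", "9:16")
    assert duration in (5, 8, 10)
    payload = {
        "image": image_url,          # step 1: starting frame
        "prompt": prompt,            # step 2: motion / camera / mood
        "resolution": resolution,    # step 3
        "aspect_ratio": aspect_ratio,  # step 4
        "duration": duration,        # step 5
    }
    if seed is not None:
        payload["seed"] = seed       # step 6: fixed seed -> reproducible runs
    return payload

payload = build_payload(
    "https://example.com/keyframe.png",
    "Slow push-in; hair and curtains sway in a light breeze; pose stays fixed.",
    resolution="720p", duration=8, seed=42,
)
print(json.dumps(payload, indent=2))
```

Submitting the payload is then a single authenticated POST (e.g. with `requests`), followed by polling the returned job until processing finishes, as in steps 7-8.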

Tips for best results

  • Use a clean, high-resolution input image; avoid heavy compression and motion blur.
  • In the prompt, specify both what moves (hair, clothes, camera, background elements) and what stays stable (character pose, framing).
  • Mention camera behavior explicitly (e.g., “slow push-in,” “handheld shake,” “static camera with subtle parallax”).
  • Shorter durations (5–8 s) tend to produce the most coherent motion for complex scenes.
  • For a series of related clips, reuse the same seed and similar prompts to keep style and identity consistent.
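The last tip, reusing one seed across related clips, amounts to submitting a batch of jobs that differ only in prompt. A small sketch (field names are illustrative, not confirmed API schema):

```python
# Reusing one seed across related clips keeps style and identity consistent.
SHARED_SEED = 1234

prompts = [
    "Slow push-in, hair drifting in a light breeze.",
    "Slow push-in, candle flames flicker in the background.",
    "Slow push-in, subtle parallax on the foreground.",
]

# One job per prompt; everything except the prompt stays identical.
jobs = [
    {"prompt": p, "seed": SHARED_SEED, "resolution": "480p", "duration": 5}
    for p in prompts
]

# Every job shares the seed, so the clips stay stylistically aligned.
assert len({job["seed"] for job in jobs}) == 1
```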

Notes

HunyuanVideo-1.5 I2V is ideal for creators who want fast, controllable animation from still images without heavyweight hardware. It can be combined with high-end image models on WaveSpeedAI (such as Nano Banana Pro or Seedream v4) for a full pipeline: generate a keyframe with an image model, then bring it to life with HunyuanVideo.
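The keyframe-then-animate pipeline described above can be sketched as two chained calls. Both functions here are placeholders standing in for the respective WaveSpeedAI model endpoints (image model first, then HunyuanVideo-1.5 i2v), not real client code:

```python
def generate_keyframe(prompt: str) -> str:
    """Placeholder for an image-model call (e.g. Seedream v4);
    pretends to return the URL of the generated keyframe."""
    return f"https://example.com/keyframes/{abs(hash(prompt)) % 10000}.png"

def animate(image_url: str, motion_prompt: str, duration: int = 5) -> dict:
    """Placeholder for the HunyuanVideo-1.5 i2v call;
    returns the job spec it would submit."""
    return {"image": image_url, "prompt": motion_prompt, "duration": duration}

# Step 1: generate a still keyframe. Step 2: bring it to life.
keyframe = generate_keyframe("A lighthouse at dusk, cinematic lighting")
job = animate(keyframe, "Waves roll in; the beacon sweeps slowly.", duration=8)
print(job["image"].startswith("https://"))  # True
```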