Steady Dancer


SteadyDancer is a 14B-parameter human image animation framework that transforms static images into coherent dance videos. It preserves the first frame, maintains robust identity consistency, and keeps motion temporally coherent for realistic generation. It is available through a ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing.

Features

wavespeed-ai/steady-dancer — Image-to-Video Motion Transfer

Steady Dancer is WaveSpeedAI’s motion-transfer model: you upload a character image and a driving video, and it generates a new clip where your character follows the motion from the video while keeping a stable face, outfit, and overall identity. Ideal for dance edits, cosplay previews, and social short-form content.


What is SteadyDancer?

SteadyDancer is a 14-billion parameter human image animation framework that converts static images into coherent dance motion videos. Built on diffusion models, it uses an Image-to-Video paradigm with key innovations for high-quality animation.

✨ Highlights

  • Image-driven identity – Uses your uploaded image as the main reference for face, outfit, and body shape.
  • Video-driven motion – Copies camera movement and body motion from the driving video.
  • Stability-focused – Designed to keep faces, limbs, and outfit details consistent across frames.
  • Resolution choices – Output at 480p for quick previews or 720p for higher-quality clips.
  • Prompt-guided style (optional) – Add a short text prompt to nudge colour, atmosphere, or style, or leave blank for neutral transfer.

🧩 Parameters

  • image* – Required. The character / subject image to insert into the motion.
  • video* – Required. Driving video whose motion and camera you want to reuse.
  • prompt – Optional text description for style / mood (e.g. “cinematic lighting, soft film grain, vivid colours”).
  • resolution – Output resolution: 480p or 720p.
  • seed – -1 for random; any other integer for reproducible results.

💰 Pricing

Pricing is based on video length and resolution, billed in 5-second blocks, with:

  • Minimum billable length: 5 seconds
  • Maximum billable length: 120 seconds (anything longer is charged as 120 s)
  • Base price: $0.15 per 5 seconds at 480p

Effective rates:

| Resolution | Effective price per second | 5 s clip | 10 s clip | 60 s clip | 120 s clip (cap) |
|------------|----------------------------|----------|-----------|-----------|------------------|
| 480p       | $0.03 / s                  | $0.15    | $0.30     | $1.80     | $3.60            |
| 720p       | $0.06 / s (×2)             | $0.30    | $0.60     | $3.60     | $7.20            |

Internally, the system:

  • Takes your video duration (capped at 120 s),
  • Rounds it into 5-second blocks,
  • Multiplies by the base price, and
  • Applies a ×2 multiplier for 720p.
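
The steps above can be sketched as a small shell function. This is an illustrative assumption about the exact rounding, not the provider's billing code; prices are computed in cents to avoid floating point.

```shell
#!/bin/sh
# Assumed billing rules: cap duration at 120 s, round up to 5-second
# blocks, charge 15 cents per block at 480p, doubled for 720p.
price_cents() {
  dur=$1    # video duration in whole seconds
  res=$2    # "480p" or "720p"
  [ "$dur" -gt 120 ] && dur=120    # maximum billable length
  [ "$dur" -lt 5 ] && dur=5        # minimum billable length
  blocks=$(( (dur + 4) / 5 ))      # round up to whole 5 s blocks
  cost=$(( blocks * 15 ))          # base rate: $0.15 per block
  if [ "$res" = "720p" ]; then cost=$(( cost * 2 )); fi
  echo "$cost"
}

price_cents 10 480p    # prints 30  ($0.30)
price_cents 120 720p   # prints 720 ($7.20)
```

The results match the effective-rate table above, e.g. a 130-second 720p clip is capped at 120 s and billed $7.20.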

🚀 How to Use

  1. Upload image – choose the face / character you want to animate.
  2. Upload video – select the motion source clip.
  3. (Optional) Enter a prompt to guide overall look and mood.
  4. Choose resolution (start with 480p for fast tests; switch to 720p for final export).
  5. (Optional) Set a fixed seed if you want to reproduce or slightly tweak the same take later.
  6. Click Run and download the generated video once completed.

🎯 Use Cases

  • Dance and performance remixes using a static character or avatar.
  • Cosplay or outfit previews based on a single photo.
  • VTuber / virtual idol short clips for social platforms.
  • Quick pre-viz for ad concepts or character motion tests.

💡 Tips & Notes

  • For best results, keep framing similar between the image and driving video (e.g. both full-body or both mid-shot).
  • Avoid extremely fast motion, strong occlusions, or very busy backgrounds in the driving video for first tests.
  • If faces look unstable, try a clearer input image or reduce extreme camera shake in the driving clip.

Reference


Try other models and see the difference

  • fun-control — A playful motion-remix model built on Alibaba’s Wan 2.2, for controllable character and camera movement from simple prompts.
  • wan-animate — A general animation model powered by Alibaba’s Wan 2.2, turning text or images into smooth, high-quality short videos.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/steady-dancer" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "<IMAGE_URL>",
    "video": "<VIDEO_URL>",
    "resolution": "480p",
    "seed": -1
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
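
A hedged sketch of polling the result endpoint until the task finishes. It assumes `jq` is installed and that `WAVESPEED_API_KEY` and `requestId` are already set in the environment; the field path `.data.status` comes from the Response Parameters table.

```shell
#!/bin/sh
# Extract data.status from a result payload.
extract_status() {
  printf '%s' "$1" | jq -r '.data.status'
}

# Poll every 2 seconds until the task reaches a terminal state,
# then print the full result body.
poll_result() {
  while :; do
    body=$(curl -s "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
      --header "Authorization: Bearer ${WAVESPEED_API_KEY}")
    status=$(extract_status "$body")
    case "$status" in
      completed|failed) printf '%s\n' "$body"; return ;;
      *) sleep 2 ;;
    esac
  done
}
```

Once the task completes, something like `poll_result | jq -r '.data.outputs[0]'` would print the first generated video URL.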

Parameters

Task Submission Parameters

Request Parameters

| Parameter  | Type    | Required | Default | Range             | Description |
|------------|---------|----------|---------|-------------------|-------------|
| image      | string  | Yes      | -       | -                 | The image for generating the output. |
| video      | string  | Yes      | -       | -                 | The video for generating the output. |
| prompt     | string  | No       | -       | -                 | The positive prompt for the generation. |
| resolution | string  | No       | 480p    | 480p, 720p        | The resolution of the output video. |
| seed       | integer | No       | -1      | -1 ~ 2147483647   | The random seed to use for the generation. -1 means a random seed will be used. |
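
As a worked example of the schema above, the snippet below builds a request body that uses every documented parameter. It assumes `jq` is available; the asset URLs are placeholders, not real files.

```shell
#!/bin/sh
# Build a full request body with jq; the image/video URLs are
# illustrative placeholders only.
payload=$(jq -n \
  --arg image "https://example.com/character.jpg" \
  --arg video "https://example.com/dance.mp4" \
  '{image: $image,
    video: $video,
    prompt: "cinematic lighting, soft film grain",
    resolution: "720p",
    seed: 42}')
printf '%s\n' "$payload"
```

The resulting JSON can be passed to the submit endpoint with `--data-raw "$payload"`.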

Response Parameters

| Parameter               | Type    | Description |
|-------------------------|---------|-------------|
| code                    | integer | HTTP status code (e.g., 200 for success) |
| message                 | string  | Status message (e.g., “success”) |
| data.id                 | string  | Unique identifier for the prediction (task ID) |
| data.model              | string  | Model ID used for the prediction |
| data.outputs            | array   | Array of URLs to the generated content (empty when status is not completed) |
| data.urls               | object  | Object containing related API endpoints |
| data.urls.get           | string  | URL to retrieve the prediction result |
| data.has_nsfw_contents  | array   | Array of boolean values indicating NSFW detection for each output |
| data.status             | string  | Status of the task: created, processing, completed, or failed |
| data.created_at         | string  | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error              | string  | Error message (empty if no error occurred) |
| data.timings            | object  | Object containing timing details |
| data.timings.inference  | integer | Inference time in milliseconds |
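
To illustrate these fields, here is an invented completed-response payload and how the output URL and NSFW flag can be read out with `jq` (the ID and URL are made up for the example).

```shell
#!/bin/sh
# Invented sample response matching the field names above; requires jq.
result='{"code":200,"message":"success","data":{"id":"task-123","status":"completed","outputs":["https://example.com/out.mp4"],"has_nsfw_contents":[false],"error":""}}'
url=$(printf '%s' "$result" | jq -r '.data.outputs[0]')
nsfw=$(printf '%s' "$result" | jq -r '.data.has_nsfw_contents[0]')
printf '%s %s\n' "$url" "$nsfw"
```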

Result Request Parameters

The result endpoint takes no request body or query parameters; the task ID returned at submission (data.id) is passed in the URL path.

© 2025 WaveSpeedAI. All rights reserved.