Kwaivgi Kling V2.6 Pro Image To Video


Kling 2.6 Pro delivers top-tier image-to-video generation with smooth motion, cinematic visuals, accurate prompt adherence, and native audio for ready-to-share clips. It is served through a ready-to-use REST inference API with high performance, no cold starts, and affordable pricing.

Features

Kling 2.6 Audio — Image-to-Video

Kling 2.6 Audio Image-to-Video adds audio–video co-generation to Kling’s strong visual pipeline. You start from a still image, write a prompt, and the model produces a short clip where motion, camera, sound effects and voice all feel like one coherent scene.


🌟 Model Highlights

  • Audio + video in one pass – First Kling version that jointly generates visuals and soundtrack.
  • Character-synced voices – Speech and reactions that match the on-screen subject and timing.
  • Scene-aware sound design – Ambient noise and SFX that follow what happens in the frame.
  • Image-driven motion – Uses your input image as the starting frame and builds motion from there.

🧩 Parameters

  • image* – Source frame to animate (URL or upload). Use a sharp, well-lit image.

  • prompt* – Describe scene motion and audio: camera moves, actions, voice style, ambience, SFX.

  • sound – Toggle audio–video co-generation on/off. When off, you get silent video only.

  • duration – Currently supports 5s and 10s clips.

  • negative_prompt – Things to avoid in both visuals and audio, e.g. watermark, logo, text, distortion.

  • cfg_scale – Guidance strength slider (default 0.5):

    • Lower values → Looser, more natural motion, image has more influence.
    • Higher values → Closer adherence to prompt wording, but can look more “forced”.
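
Putting these fields together, a filled-in request might look like the sketch below (the image URL and prompt text are placeholders, not values from this page; the field names mirror the API parameters documented further down):

{
  "image": "https://example.com/portrait.jpg",
  "prompt": "Slow push-in on the speaker as she looks up and greets the viewer in a warm, low voice; soft café ambience, gentle cup clinks",
  "negative_prompt": "watermark, logo, text, distortion",
  "sound": true,
  "duration": 5,
  "cfg_scale": 0.5
}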

🎯 Typical Use Cases

  • Launch / promo videos with native-sounding, character-synced voiceover.
  • Storytelling shorts where camera, action and sound must feel perfectly integrated.
  • Product explainers that need both clear visuals and natural narration.
  • Cinematic social posts with immersive ambience and SFX built in.

💰 Pricing

Mode | Length | Price
--- | --- | ---
No Audio | 5 s | $0.35
No Audio | 10 s | $0.70
With Audio | 5 s | $0.70
With Audio | 10 s | $1.40
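
Audio doubles the per-clip price, and a 10 s clip costs twice a 5 s one. As a minimal sketch (a hypothetical shell helper, not part of the API), the per-clip cost can be looked up like this:

# Hypothetical helper: price in USD for one clip, given sound (true/false) and duration (5/10)
clip_price() {
  case "$1:$2" in
    false:5)  echo "0.35" ;;
    false:10) echo "0.70" ;;
    true:5)   echo "0.70" ;;
    true:10)  echo "1.40" ;;
    *)        echo "unsupported combination" >&2; return 1 ;;
  esac
}

clip_price true 10   # prints 1.40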

🚀 How to Use

  1. Upload the image you want to animate.

  2. Write a prompt describing:

    • how the camera should move,
    • what the characters do,
    • and, if sound is enabled, the voice tone and soundscape (e.g. “low, calm narrator, soft city ambience, subtle whooshes on cuts”).
  3. (Optional) Add a negative_prompt for elements you don’t want (visual or audio).

  4. Adjust cfg_scale: start from the default; increase only if the model is not following your prompt enough.

  5. Choose duration (5s or 10s), then toggle sound as needed.

  6. Click Run to generate; tweak prompt / cfg_scale / seed and re-run for alternates.


🔎 Tips

  • Keep the image and prompt aligned – don’t describe a totally different scene from the uploaded frame.
  • For strong lip-sync and performance, explicitly mention who is speaking and what kind of voice you want.
  • Start with the default cfg_scale; push it up slowly if the motion or sound doesn’t match your description.
  • Use negative_prompt to reduce logos, watermarks, heavy text, or unwanted artefacts in stylised shots.

Authentication

For authentication details, please refer to the Authentication Guide.
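
The examples below read the API key from the WAVESPEED_API_KEY environment variable, so export it once per shell session (the value shown is a placeholder):

export WAVESPEED_API_KEY="your-api-key"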

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/kwaivgi/kling-v2.6-pro/image-to-video" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/your-image.jpg",
    "prompt": "Slow dolly-in, the subject turns toward the camera; calm narrator voice, soft ambience",
    "cfg_scale": 0.5,
    "sound": true,
    "duration": 5
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
--- | --- | --- | --- | --- | ---
prompt | string | Yes | - | - | The positive prompt for the generation.
negative_prompt | string | No | - | - | The negative prompt for the generation.
image | string | Yes | - | - | Supported image formats: .jpg / .jpeg / .png. The image file should not exceed 10 MB, the width and height should each be at least 300 px, and the aspect ratio should be between 1:2.5 and 2.5:1.
cfg_scale | number | No | 0.5 | 0.00 ~ 1.00 | Flexibility in video generation; the higher the value, the lower the model’s flexibility and the stronger the adherence to the user’s prompt.
sound | boolean | No | true | - | Whether audio is generated together with the video.
voice_list | array | No | - | - | List of voices to reference when generating the video. When a voice ID is supplied, the task is billed under the “with voice generation” metric.
duration | integer | No | 5 | 5, 10 | The duration of the generated media in seconds.

Response Parameters

Parameter | Type | Description
--- | --- | ---
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
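
For illustration only (the ID and model values below are placeholders, not real API output), a successful submission response following this schema might look like:

{
  "code": 200,
  "message": "success",
  "data": {
    "id": "<task id>",
    "model": "<model id>",
    "outputs": [],
    "urls": {
      "get": "https://api.wavespeed.ai/api/v3/predictions/<task id>/result"
    },
    "has_nsfw_contents": [],
    "status": "created",
    "created_at": "2023-04-01T12:34:56.789Z",
    "error": "",
    "timings": {}
  }
}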

Result Request Parameters

Parameter | Type | Required | Default | Description
--- | --- | --- | --- | ---
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
--- | --- | ---
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the ID of the prediction to get)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds
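
Once data.status is completed, the generated clip URLs appear in data.outputs. A small follow-up sketch (again assuming jq for JSON parsing) that prints the first output URL:

# Print the first output URL of a completed task
curl --silent --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" | jq -r '.data.outputs[0]'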