Introducing Kuaishou Kling LipSync Text-to-Video on WaveSpeedAI
Kling LipSync Text-to-Video by Kwaivgi creates videos with lifelike lip movements precisely synced to the input text for natural speaking visuals. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
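Each of these models is exposed through WaveSpeedAI's REST inference API. As a minimal sketch of what a client request might look like, the snippet below assembles a text-to-video request payload; the base URL, endpoint path, and parameter names are assumptions for illustration, not the documented API, so consult the actual WaveSpeedAI API reference before use.

```python
import json

API_BASE = "https://api.wavespeed.ai"  # assumed base URL, for illustration only


def build_lipsync_request(text: str, api_key: str) -> dict:
    """Assemble a hypothetical request for the Kling LipSync Text-to-Video model.

    The endpoint path and the "text" parameter name are assumptions;
    check the real API reference for the documented schema.
    """
    return {
        "url": f"{API_BASE}/kwaivgi/kling-lipsync-text-to-video",  # assumed path
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }
```

In practice the returned dict would be passed to an HTTP client (e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`), with the response polled for the finished video.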
Introducing Kuaishou Kling V1.6 Multi I2V Pro on WaveSpeedAI
Kling 1.6 Multi Pro improves image-to-video generation by 195% over Kling 1.5, with better prompt understanding, physics, and visuals. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing Kuaishou Kling V2.6 Create Voice on WaveSpeedAI
Kling 2.6 Create Voice is a model that generates custom voices. Upload an audio file to create a custom voice for use with the voice control feature in V2.6 video generation. The audio should be clean and noise-free, contain a single voice, and be between 5 and 30 seconds long. Built for stable production.
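The duration and channel constraints above can be checked locally before uploading. The sketch below is a hypothetical pre-upload check, not part of the WaveSpeedAI API: it validates only duration and channel count from the WAV header, since noise level and the single-voice requirement cannot be verified this simply.

```python
import io
import wave


def is_valid_voice_sample(wav_bytes: bytes) -> bool:
    """Check a WAV clip against the stated Create Voice constraints:
    5-30 seconds long and a single (mono) channel.

    This is an illustrative client-side check only; it does not detect
    background noise or multiple speakers.
    """
    with wave.open(io.BytesIO(wav_bytes)) as wf:
        duration = wf.getnframes() / wf.getframerate()
        return 5.0 <= duration <= 30.0 and wf.getnchannels() == 1
```

A clip failing this check would be trimmed or re-recorded before being submitted as a custom voice sample.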
Introducing Kuaishou Kling V1.6 Multi I2V Standard on WaveSpeedAI
Kling v1.6 Image-to-Video delivers 195% better results than Kling 1.5, with better prompt understanding, physics, and visual effects. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing Kuaishou Kling V2.1 I2V Pro Start End Frame on WaveSpeedAI
Kling v2.1 I2V Pro Start-End Frame generates cinematic Image-to-Video clips with precise start/end frame control, enhanced visual fidelity, and dynamic camera motion. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing Kuaishou Kling Video O1 Std Text-to-Video on WaveSpeedAI
Kling Omni Video O1 (Standard) is Kuaishou's first unified multi-modal video model with MVL (Multi-modal Visual Language) technology. Text-to-Video mode generates cinematic videos from text prompts with subject consistency, natural physics simulation, and precise semantic understanding. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing Lightricks LTX 2 Retake on WaveSpeedAI
LTX-2 Retake performs targeted retakes on any section of a video, replacing visuals, audio, or both, while preserving timing and continuity, at $0.10 per second of output video. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing Luma Ray 2 Flash T2V on WaveSpeedAI
Luma Ray 2 Flash turns text into high-quality videos with flexible sizes and built-in prompt optimization for precise outputs. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing Luma Ray 2 T2V on WaveSpeedAI
Luma Ray 2 is a Text-to-Video model that creates high-quality videos from text prompts, with advanced prompt optimization and support for various video sizes. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing MiniMax Hailuo 02 Fast on WaveSpeedAI
Hailuo 02 Fast is a MiniMax image-to-video model that creates high-quality 6- and 10-second clips at 512p for creators and marketers. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing WaveSpeedAI Steady Dancer on WaveSpeedAI
SteadyDancer is a 14B-parameter human image animation framework that transforms static images into coherent dance videos. Features first-frame preservation, robust identity consistency, and temporal coherence for realistic motion generation. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
Introducing MiniMax Speech 2.5 Hd Preview on WaveSpeedAI
MiniMax Speech 2.5 HD Preview offers HD TTS with enhanced multilingual expressiveness, accurate voice cloning, and 40-language support. Ready-to-use REST API, best performance, no coldstarts, affordable pricing.