WaveSpeedAI Blog

Latest news on AI image and video generation models

Magi-1 Now Live on WaveSpeedAI: A New Benchmark in Open-Source Video Generation

Magi-1, the groundbreaking open-source video generation model by Sand AI, is now available on WaveSpeedAI for real-time inference and API deployment.

5 min read
Vidu 2.0 Now Live on WaveSpeedAI: A New Generation of Controllable Video Generation Models

We’re excited to introduce the full suite of Vidu 2.0 models—now available on WaveSpeedAI. Designed by Shengshu Technology, these models bring a new level of controllability, visual fidelity, and temporal consistency to open-source video generation.

5 min read
SkyReels-V1 Now Live on WaveSpeedAI: Human-Centric Video Generation for the Cinematic Age

SkyReels V1, the world's first open-source human-centric video foundation model, is now live on WaveSpeedAI.

5 min read
Framepack Now Live on WaveSpeedAI: High-Fidelity Image-to-Video via Autoregressive Generation

Framepack, a cutting-edge autoregressive image-to-video model from lllyasviel (the creator of ControlNet), is now available on WaveSpeedAI. Framepack redefines how static images become videos—by generating each frame based on the one before, it produces smoother motion, greater temporal consistency, and more coherent narratives than traditional methods.

5 min read
WAN-2.1 FLF2V Now Live on WaveSpeedAI: From Keyframes to Cinematic Motion

We’re excited to bring another powerful addition to the WAN-2.1 lineup: WAN-2.1 FLF2V (First-Last Frame-to-Video) is now live on WaveSpeedAI. This model introduces a novel approach to video generation by taking a starting frame and a target end frame, and then generating a smooth, dynamic video that transitions between the two. Instead of interpolating or morphing, WAN-FLF2V creates realistic motion informed by text prompts, style, and visual coherence.

5 min read
Five New FLUX Models Live on WaveSpeedAI: From Creative Variations to Precision Control

This week, we're introducing five new FLUX model endpoints—each tailored for a specific creative or structural task. Whether you're looking to generate consistent image variations, apply advanced style transfers, or control generation with edge or depth maps, these models expand your creative toolkit with precision and speed.

5 min read
MMAudio V2 Now Live on WaveSpeedAI: Multimodal Audio Generation for Video and Text Inputs

MMAudio V2 is a cutting-edge model designed to generate high-quality, synchronized audio from both video and text inputs. This advancement opens new possibilities in multimedia content creation, enhancing the realism and immersion of generated media.

5 min read
Hunyuan3D 2.0 Now Live on WaveSpeedAI: Revolutionizing High-Resolution Textured 3D Asset Generation

WaveSpeedAI is excited to announce support for Hunyuan3D 2.0, a groundbreaking open-source 3D generation model from Tencent. This state-of-the-art model produces high-resolution textured 3D assets, making it ideal for creators, researchers, and enterprises seeking high-performance generative AI solutions.

5 min read
HiDream-I1 Now Live on WaveSpeedAI: The Next-Gen Open-Source Image Generation Model

WaveSpeedAI is excited to announce support for HiDream-I1, a groundbreaking open-source image generative foundation model with 17 billion parameters. This state-of-the-art model delivers exceptional image quality and prompt adherence, making it ideal for creators, researchers, and enterprises seeking high-performance generative AI solutions.

5 min read
Real-ESRGAN Now Live on WaveSpeedAI: Revolutionizing Image Enhancement

Real-ESRGAN, a powerful tool for enhancing image quality, is now available on WaveSpeedAI. Developed by Tencent's ARC Lab, this model excels at transforming low-resolution images into high-resolution masterpieces, even when the exact nature of the image degradation is unknown.

5 min read
WaveSpeedAI X DataCrunch: FLUX Real-Time Image Inference on B200

WaveSpeedAI has teamed up with European GPU cloud provider DataCrunch to achieve a breakthrough in generative image and video model deployment. By optimizing the open-weight FLUX-dev model on DataCrunch’s cutting-edge NVIDIA B200 GPU, our collaboration delivers up to 6× faster image inference compared to industry-standard baselines.

5 min read
Kling V1.6 Now Live on WaveSpeedAI

Discover the groundbreaking Kling V1.6 i2v model on WaveSpeedAI, enabling high-quality video generation with unprecedented ease and efficiency. Explore its features, use cases, and why WaveSpeedAI is the ideal platform for your creative needs.

5 min read