WaveSpeedAI Blog


WaveSpeedAI X DataCrunch: FLUX Real-Time Image Inference on B200

WaveSpeedAI has teamed up with European GPU cloud provider DataCrunch to achieve a breakthrough in generative image and video model deployment. By optimizing the open-weight FLUX-dev model on DataCrunch’s cutting-edge NVIDIA B200 GPU, our collaboration delivers up to 6× faster image inference compared to industry-standard baselines.

2025-04-09

HunyuanCustom Now Live on WaveSpeedAI: Rivals Top Closed-Source Solutions!

We’re excited to announce HunyuanCustom, an open-source, multimodal-driven architecture for customized video generation, now available on WaveSpeedAI.

2025-05-10

SDXL Now Live on WaveSpeedAI: Two Diffusion-Based Text-to-Image Generative Models

We’re excited to introduce two Stable Diffusion XL models now available on WaveSpeedAI: sdxl and sdxl-lora.

2025-05-09

FLUX-dev Arrives on WaveSpeedAI: 12B Parameter AI Model Redefining Text-to-Image Generation

FLUX-dev, the latest model to launch on WaveSpeedAI, opens up new creative possibilities in text-to-image generation with 12B parameters and open-source flexibility. This post will help you learn more about the model and its potential.

2025-05-09

HiDream-E1-Full Now Available on WaveSpeedAI

We’re excited to announce the launch of HiDream-E1-Full, a powerful image editing model from HiDream AI, now available on WaveSpeedAI.

2025-04-30

InstantCharacter Model Launches on WaveSpeed: Ushering in a New Era of Personalized Character Generation

We’re excited to announce that Tencent AI Lab’s latest innovation — InstantCharacter, a state-of-the-art personalized character generation model — is now officially live on the WaveSpeed platform. Built upon a scalable Diffusion Transformer framework, InstantCharacter offers high fidelity, strong generalization, and fine-grained text controllability, setting a new benchmark in character generation technology.

2025-04-28

Step1X-Edit: Setting a New Standard for Open-Source Image Editing

In the field of image editing, users increasingly demand high-quality, user-friendly solutions. While closed-source multimodal models like GPT-4o and Gemini 2.0 Flash deliver strong image editing capabilities, open-source options have often lagged behind in performance.

2025-04-27

Magi-1 Now Live on WaveSpeedAI: A New Benchmark in Open-Source Video Generation

Magi-1, the groundbreaking open-source video generation model by Sand AI, is now available on WaveSpeedAI for real-time inference and API deployment.

2025-04-25

Vidu 2.0 Now Live on WaveSpeedAI: A New Generation of Controllable Video Generation Models

We’re excited to introduce the full suite of Vidu 2.0 models—now available on WaveSpeedAI. Designed by Shengshu Technology, these models bring a new level of controllability, visual fidelity, and temporal consistency to open-source video generation.

2025-04-24

SkyReels-V1 Now Live on WaveSpeedAI: Human-Centric Video Generation for the Cinematic Age

SkyReels V1, the world’s first open-source human-centric video foundation model, is now live on WaveSpeedAI.

2025-04-22

WAN-2.1 FLF2V Now Live on WaveSpeedAI: From Keyframes to Cinematic Motion

We’re excited to bring another powerful addition to the WAN-2.1 lineup: WAN-2.1 FLF2V (First-Last Frame-to-Video) is now live on WaveSpeedAI. This model introduces a novel approach to video generation: it takes a starting frame and a target end frame, then generates a smooth, dynamic video that transitions between the two. Instead of interpolating or morphing, WAN-2.1 FLF2V creates realistic motion informed by text prompts, style, and visual coherence.

2025-04-21

Framepack Now Live on WaveSpeedAI: High-Fidelity Image-to-Video via Autoregressive Generation

Framepack, a cutting-edge autoregressive image-to-video model from lllyasviel (the creator of ControlNet), is now available on WaveSpeedAI. Framepack redefines how static images become videos—by generating each frame based on the one before, it produces smoother motion, greater temporal consistency, and more coherent narratives than traditional methods.

2025-04-21

MMAudio V2 Now Live on WaveSpeedAI: Multimodal Audio Generation for Video and Text Inputs

MMAudio V2 is a cutting-edge model designed to generate high-quality, synchronized audio from both video and text inputs. This advancement opens new possibilities in multimedia content creation, enhancing the realism and immersion of generated media.

2025-04-18

Five New FLUX Models Live on WaveSpeedAI: From Creative Variations to Precision Control

This week, we're introducing five new FLUX model endpoints—each tailored for a specific creative or structural task. Whether you're looking to generate consistent image variations, apply advanced style transfers, or control generation with edge or depth maps, these models expand your creative toolkit with precision and speed.

2025-04-18

Hunyuan3D 2.0 Now Live on WaveSpeedAI: Revolutionizing High-Resolution Textured 3D Asset Generation

WaveSpeedAI is excited to announce support for Hunyuan3D 2.0, a groundbreaking open-source 3D generation model. This state-of-the-art model delivers exceptional high-resolution textured 3D asset generation, making it ideal for creators, researchers, and enterprises seeking high-performance generative AI solutions.

2025-04-12

HiDream-I1 Now Live on WaveSpeedAI: The Next-Gen Open-Source Image Generation Model

WaveSpeedAI is excited to announce support for HiDream-I1, a groundbreaking open-source image generative foundation model with 17 billion parameters. This state-of-the-art model delivers exceptional image quality and prompt adherence, making it ideal for creators, researchers, and enterprises seeking high-performance generative AI solutions.

2025-04-11

Real-ESRGAN Now Live on WaveSpeedAI: Revolutionizing Image Enhancement

Real-ESRGAN, a powerful tool for enhancing image quality, is now available on WaveSpeedAI. Developed by Tencent's ARC Lab, this model excels at transforming low-resolution images into high-resolution masterpieces, even when the exact nature of the image degradation is unknown.

2025-04-10

Kling V1.6 Now Live on WaveSpeedAI

Discover the groundbreaking Kling V1.6 i2v model on WaveSpeedAI, enabling high-quality video generation with unprecedented ease and efficiency. Explore its features, use cases, and why WaveSpeedAI is the ideal platform for your creative needs.

2025-04-06

MiniMax Video-01 Now Live on WaveSpeedAI

Discover the groundbreaking MiniMax Video 01 model on WaveSpeedAI, enabling high-quality video generation with unprecedented ease and efficiency. Explore its features, use cases, and why WaveSpeedAI is the ideal platform for your creative needs.

2025-04-06

Ghibli Now Live on WaveSpeedAI

Discover the groundbreaking Ghibli model on WaveSpeedAI, enabling high-quality video generation with unprecedented ease and efficiency. Explore its features, use cases, and why WaveSpeedAI is the ideal platform for your creative needs.

2025-04-04

WaveSpeedAI Now Supports ComfyUI: AI-Powered Image and Video with Custom Nodes

WaveSpeedAI now fully supports ComfyUI! This integration brings a new level of flexibility and power to AI-driven content creation. With our custom nodes, users can seamlessly call the WaveSpeedAI API directly within ComfyUI.

2025-03-31

WaveSpeed's Solution for Accelerating FLUX.1-dev Image Generation

Discover how WaveSpeed has optimized the FLUX.1-dev model for faster image generation without compromising quality. Explore our innovative techniques and performance metrics.

2025-03-27

Accelerating HunyuanVideo Inference with ParaAttention: A WaveSpeed Breakthrough

Discover how WaveSpeed has optimized the HunyuanVideo model for faster video generation without compromising quality. Explore our innovative techniques and performance metrics.

2025-03-26