How to Use the WaveSpeedAI JavaScript SDK
The WaveSpeedAI JavaScript SDK provides a simple way to integrate AI image and video generation into your Node.js and TypeScript applications. This guide covers everything you need to get started.
Prerequisites
Before you begin, make sure you have:
- Node.js 18+ installed on your machine
- A WaveSpeedAI API key from wavespeed.ai/settings/api-keys
Installation
Install the SDK using npm:
npm install wavespeed
Or with yarn:
yarn add wavespeed
Setting Up Authentication
The SDK needs your API key to authenticate requests. You have two options:
Option 1: Environment Variable (Recommended)
Set the WAVESPEED_API_KEY environment variable:
export WAVESPEED_API_KEY="your-api-key-here"
Then use the SDK directly:
import wavespeed from "wavespeed";
const output = await wavespeed.run("wavespeed-ai/z-image/turbo", { prompt: "Cat" });
Option 2: Pass API Key Directly
Import the Client class and pass your API key to the constructor:
import { Client } from "wavespeed";
const client = new Client("your-api-key-here");
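The client instance exposes the same run() method used throughout this guide, so the two approaches are interchangeable:
import { Client } from "wavespeed";
const client = new Client("your-api-key-here");
const output = await client.run("wavespeed-ai/z-image/turbo", { prompt: "Cat" });
console.log(output["outputs"][0]);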
Generating Your First Image
Here’s a complete example that generates an image using Z-Image Turbo:
import wavespeed from "wavespeed";
const output = await wavespeed.run(
  "wavespeed-ai/z-image/turbo",
  { prompt: "A serene mountain landscape at sunset with golden light" }
);
console.log(output["outputs"][0]); // URL to the generated image
The run() method handles the entire workflow: submitting the request, polling for completion, and returning the result.
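The examples in this guide use top-level await, which requires an ES module context (for example, "type": "module" in package.json or a .mjs file). If your setup doesn't support top-level await, wrap the call in an async function:
import wavespeed from "wavespeed";
async function main() {
  const output = await wavespeed.run(
    "wavespeed-ai/z-image/turbo",
    { prompt: "A serene mountain landscape at sunset with golden light" }
  );
  console.log(output["outputs"][0]); // URL to the generated image
}
main().catch(console.error);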
Uploading Files
For workflows that require input images (like image-to-video), use the upload() method to get a URL that WaveSpeedAI can access:
import wavespeed from "wavespeed";
// Upload a local image file
const imageUrl = await wavespeed.upload("./my-image.png");
// Use the uploaded image for video generation
const video = await wavespeed.run(
  "wavespeed-ai/wan-2.1/image-to-video",
  {
    image: imageUrl,
    prompt: "Camera slowly zooms in while clouds move in the background"
  }
);
console.log(video["outputs"][0]); // URL to the generated video
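The outputs array contains hosted URLs rather than raw file data. Continuing the example above, you can save a local copy using Node's built-in fetch (available in Node 18+); the output filename here is just an example:
import { writeFile } from "node:fs/promises";
// Download the generated video to disk (filename is illustrative)
const response = await fetch(video["outputs"][0]);
const data = Buffer.from(await response.arrayBuffer());
await writeFile("generated-video.mp4", data);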
Configuration Options
Client Options
Configure retry behavior when initializing the client:
import { Client } from "wavespeed";
const client = new Client(process.env.WAVESPEED_API_KEY, {
  maxRetries: 3,            // Max retries for failed requests
  maxConnectionRetries: 5,  // Max retries for connection errors
  retryInterval: 1.0        // Seconds between retries
});
Run Options
Configure individual run() calls:
import wavespeed from "wavespeed";
const output = await wavespeed.run(
  "wavespeed-ai/z-image/turbo",
  { prompt: "A cute orange cat wearing a tiny hat" },
  {
    timeout: 60,          // Max seconds to wait for completion
    pollInterval: 0.5,    // Seconds between status checks
    enableSyncMode: true  // Use synchronous mode if available
  }
);
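Video models generally take longer than image models, so it can make sense to raise the timeout and poll less often. The values below are illustrative, not SDK defaults:
import wavespeed from "wavespeed";
const video = await wavespeed.run(
  "wavespeed-ai/wan-2.1/t2v-480p",
  { prompt: "Waves rolling onto a beach at dusk" },
  {
    timeout: 600,    // Allow up to 10 minutes for video generation
    pollInterval: 2  // Check status every 2 seconds
  }
);
console.log(video["outputs"][0]);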
Working with Different Models
Text-to-Image
Generate images from text descriptions:
import wavespeed from "wavespeed";
const output = await wavespeed.run(
  "wavespeed-ai/z-image/turbo",
  {
    prompt: "A futuristic cityscape with flying cars and neon lights",
    size: "1024x1024"
  }
);
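Because run() returns a promise, you can also generate several variations concurrently with Promise.all:
import wavespeed from "wavespeed";
const prompts = [
  "A futuristic cityscape at dawn",
  "A futuristic cityscape at noon",
  "A futuristic cityscape at night"
];
// Submit all prompts at once and wait for every result
const results = await Promise.all(
  prompts.map((prompt) => wavespeed.run("wavespeed-ai/z-image/turbo", { prompt }))
);
results.forEach((output) => console.log(output["outputs"][0]));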
Image-to-Video
Transform static images into videos:
import wavespeed from "wavespeed";
const imageUrl = await wavespeed.upload("./landscape.jpg");
const video = await wavespeed.run(
"wavespeed-ai/wan-2.1/image-to-video",
{
image: imageUrl,
prompt: "Gentle wind blowing through the trees"
}
);
Text-to-Video
Generate videos directly from text:
import wavespeed from "wavespeed";
const video = await wavespeed.run(
  "wavespeed-ai/wan-2.1/t2v-480p",
  {
    prompt: "A golden retriever running through a field of flowers"
  }
);
TypeScript Usage
The SDK includes TypeScript definitions. Here’s a type-safe example:
import wavespeed from "wavespeed";
interface GenerationOutput {
  outputs: string[];
  timings?: Record<string, number>;
}
const output: GenerationOutput = await wavespeed.run(
  "wavespeed-ai/z-image/turbo",
  { prompt: "An astronaut riding a horse on Mars" }
);
const imageUrl: string = output.outputs[0];
console.log(imageUrl);
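If you call the same model from several places, a small typed helper keeps the shape assumption in one spot. This sketch assumes run() resolves to an object matching GenerationOutput, as in the example above; generateImage() is a hypothetical helper, not part of the SDK:
import wavespeed from "wavespeed";
interface GenerationOutput {
  outputs: string[];
  timings?: Record<string, number>;
}
// Hypothetical helper: returns the first output URL for a text-to-image run
async function generateImage(prompt: string): Promise<string> {
  const output: GenerationOutput = await wavespeed.run(
    "wavespeed-ai/z-image/turbo",
    { prompt }
  );
  return output.outputs[0];
}
console.log(await generateImage("An astronaut riding a horse on Mars"));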
Error Handling
For production applications, configure retries and handle errors gracefully:
import { Client } from "wavespeed";
const client = new Client(process.env.WAVESPEED_API_KEY, {
  maxRetries: 3,
  maxConnectionRetries: 5,
  retryInterval: 1.0
});
try {
  const output = await client.run(
    "wavespeed-ai/z-image/turbo",
    { prompt: "A beautiful sunset over the ocean" },
    { timeout: 120 }
  );
  console.log("Generated:", output.outputs[0]);
} catch (error) {
  console.error("Generation failed:", error.message);
}
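If you compile with strict TypeScript settings, the caught value is typed as unknown, so narrow it before reading message. A variant of the catch block above, reusing the same client:
try {
  const output = await client.run(
    "wavespeed-ai/z-image/turbo",
    { prompt: "A beautiful sunset over the ocean" },
    { timeout: 120 }
  );
  console.log("Generated:", output.outputs[0]);
} catch (error) {
  // Narrow the unknown value before accessing Error properties
  const message = error instanceof Error ? error.message : String(error);
  console.error("Generation failed:", message);
}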
Resources
- GitHub Repository: github.com/WaveSpeedAI/wavespeed-javascript
- npm Package: npmjs.com/package/wavespeed
- API Documentation: docs.wavespeed.ai
- Model Library: wavespeed.ai/models
Start building with WaveSpeedAI today and bring AI-powered image and video generation to your JavaScript applications.