Introducing LTX-2.3 Video Extend on WaveSpeedAI
Extend Your Videos Seamlessly with LTX-2.3 Video Extend
Creating compelling video content often means working with clips that are just a few seconds too short. Maybe your drone shot ended before the landscape fully revealed itself, or your product demo needs a few more beats to land the message. Until now, extending video footage meant reshooting, using clunky editing workarounds, or settling for abrupt cuts.
LTX-2.3 Video Extend changes that entirely. Now available on WaveSpeedAI, this model generates additional frames that naturally continue your existing footage — matching motion, style, and atmosphere so seamlessly that viewers can’t tell where the original ends and the extension begins.
What Is LTX-2.3 Video Extend?
LTX-2.3 Video Extend is built on the LTX-2.3 foundation model, a DiT-based (Diffusion Transformer) architecture designed for high-quality video generation with strong temporal consistency. While the broader LTX-2.3 family includes text-to-video and image-to-video capabilities, the Video Extend variant focuses on a specific and highly practical task: taking an existing video clip and generating new frames that naturally continue the content.
Upload a video, specify how many additional seconds you need (anywhere from 1 to 20), and optionally describe how the scene should evolve. The model analyzes the motion patterns, visual style, lighting, and composition of your source footage, then generates a continuation that flows naturally from the last frame.
The result is extended footage that preserves the original video’s character — camera movement continues on its trajectory, objects maintain their momentum, and environmental details remain consistent.
Key Features
- Temporal consistency: The model doesn’t just generate plausible frames — it generates frames that are coherent continuations of your specific source video. Motion trajectories, lighting changes, and scene dynamics carry forward naturally.
- Flexible duration control: Extend your videos by as little as 1 second or as much as 20 seconds per generation. This granular control lets you add precisely what you need without wasting resources.
- Optional prompt guidance: While the model can extend video without any text input, you can provide prompts describing how the scene should evolve. Want the camera to pan left? A character to start walking? Describe it, and the model incorporates your direction into the continuation.
- Built-in prompt enhancer: Not sure how to describe the continuation you want? The integrated prompt enhancer automatically refines your descriptions for better results.
- No cold starts on WaveSpeedAI: The model is always warm and ready. Submit your request and get results without waiting for infrastructure to spin up — critical when you’re iterating on creative projects and need fast feedback loops.
Real-World Use Cases
Social Media and Short-Form Content
A 4-second clip of a sunset is beautiful but scrolls by too fast on Instagram or TikTok. Extend it to 15 seconds and you have a full post that holds attention. Content creators can transform brief moments captured on a phone into polished, full-length videos without reshooting.
Film and Video Production
B-roll is the backbone of professional video, and there’s never enough of it. Extend establishing shots, landscape footage, or atmospheric clips to fill gaps in your edit. A 3-second drone flyover becomes a 20-second sequence that carries an entire scene transition.
Product Demos and Marketing
Product videos need precise timing to sync with voiceovers and on-screen text. When a clip runs short, extending it by a few seconds maintains visual flow without awkward pauses or jump cuts. This is especially useful for e-commerce product showcases where smooth, continuous footage builds buyer confidence.
Loop Creation
Generate additional frames to bridge the gap between a video’s end and its beginning, creating seamless loops for digital signage, background videos on websites, or ambient displays.
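As a concrete sketch of the stitching step: once you have generated and downloaded the extension clip, the original and the extension can be joined with ffmpeg's concat demuxer. The helper name `write_concat_list` and the file names `original.mp4` and `extension.mp4` below are illustrative, not part of the WaveSpeedAI API.

```python
from pathlib import Path

def write_concat_list(clips, list_path="list.txt"):
    """Write an ffmpeg concat-demuxer playlist for the given clip paths."""
    lines = "".join(f"file '{clip}'\n" for clip in clips)
    Path(list_path).write_text(lines)
    return list_path

# Original footage first, then the generated extension that bridges
# back to the opening frame for a seamless loop.
write_concat_list(["original.mp4", "extension.mp4"])

# Then stitch without re-encoding (run in a shell):
#   ffmpeg -f concat -safe 0 -i list.txt -c copy loop.mp4
```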
Storytelling and Animation
Animators and digital storytellers can extend key scenes, giving moments room to breathe. A character’s reaction shot, a dramatic pause, or a slow reveal can all be stretched to serve the narrative without additional manual frame work.
Getting Started on WaveSpeedAI
Getting up and running takes just a few lines of code with the WaveSpeed Python SDK:
```python
import wavespeed

output = wavespeed.run(
    "wavespeed-ai/ltx-2.3/video-extend",
    {
        "video": "https://your-video-url.com/clip.mp4",
        "duration": 10,
        "prompt": "The camera continues to push forward through the forest as sunlight filters through the canopy",
    },
)
print(output["outputs"][0])
```
That’s it. No infrastructure to configure, no GPU instances to manage, no cold start delays. You provide the video and parameters, WaveSpeedAI handles the rest.
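In practice you will usually want the result on disk rather than just printed. Assuming the returned output entry is a direct URL to the generated video file, a small stdlib-only helper can fetch it; the function name `download_output` is ours, not part of the SDK.

```python
import urllib.request

def download_output(url, dest="extended.mp4"):
    """Save the generated video from the returned URL to a local file."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as f:
        f.write(response.read())
    return dest

# e.g. download_output(output["outputs"][0])
```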
Pricing
LTX-2.3 Video Extend uses straightforward per-second billing:
| Extension Duration | Cost |
|---|---|
| 1 second | $0.10 |
| 5 seconds | $0.50 |
| 10 seconds | $1.00 |
| 20 seconds | $2.00 |
At $0.10 per second, you can experiment freely — test a 1-second extension to validate the approach, then scale up to longer durations once you’re satisfied with the direction.
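Since billing is strictly linear at $0.10 per second, cost estimation is a one-line calculation. A small sketch (the helper name and range check are ours, based on the 1-20 second limit stated above):

```python
RATE_PER_SECOND = 0.10  # USD, per the pricing table above

def extension_cost(seconds):
    """Estimated USD cost for extending a video by the given number of seconds (1-20)."""
    if not 1 <= seconds <= 20:
        raise ValueError("LTX-2.3 Video Extend supports 1 to 20 seconds per generation")
    return round(seconds * RATE_PER_SECOND, 2)

print(extension_cost(10))  # 1.0
```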
Tips for Best Results
- Provide clear source footage: Videos with defined motion direction and consistent lighting give the model the strongest signal for continuation.
- Start short, then go long: Test with a 2-3 second extension before committing to a 20-second generation. This lets you iterate quickly and cheaply.
- Use descriptive prompts: Even though prompts are optional, they significantly improve results. Describe camera movement, subject behavior, and environmental changes.
- Think about motion trajectory: The model excels when there’s a clear direction to continue — a moving camera, a walking subject, flowing water, or drifting clouds.
Push Your Creative Boundaries
LTX-2.3 Video Extend removes one of the most persistent friction points in video production: the constraint of fixed clip length. Whether you’re a solo creator stretching phone footage into polished content or a production team extending carefully captured B-roll, the model delivers natural, coherent continuations at a price point that makes experimentation practical.
Try LTX-2.3 Video Extend on WaveSpeedAI today and start turning your short clips into the footage you actually need.