Google Veo 3.1 Fast Video Extend

Playground

Try it on WaveSpeedAI!

Extend Veo 3.1 videos in 7-second steps with the Fast endpoint: quick, coherent continuations that preserve style and motion, returned as a single merged clip. Ready-to-use REST inference API with top performance, no cold starts, and affordable pricing.

Features

Veo 3.1 — Video Extend (Fast)

Veo 3.1 Video Extend (Fast) lets you quickly append a 7-second continuation to an existing Veo-generated video. You pass the previous output as video plus a new prompt, and the model returns one merged clip containing both the original segment and the new extension—optimised for low latency and faster iteration.


⚡ What makes the Fast endpoint different

  • Optimised for speed: Same continuation logic as standard Video Extend, but tuned for faster turnaround so you can iterate more quickly.
  • True continuation, not regeneration: Works only with Veo 3.1 outputs, preserving look, motion style, and aspect ratio.
  • Fixed extension block: Each call adds +7 seconds, and you can chain extensions up to 20 times.
  • Single merged deliverable: The API always returns one video file (input + extension) with a maximum length of 148 seconds.

🎯 Designed For

  • Rapid story development: Test different “next beats” on top of a strong base shot without re-generating the whole sequence.
  • Ad and social variations: Quickly create extended cuts, alternate endings, or extra moments for trailers, promos, and Shorts/Reels.
  • Fast production feedback loops: Respond to review notes by extending just the part that needs more room.

🧩 Input / Output Constraints

  • Input must be Veo-generated: Upload only videos that were created by Veo 3.1 (text-to-video or image-to-video).

  • Input limits:

    • Maximum input duration: 141 seconds
    • Supported aspect ratios: 16:9 or 9:16
    • Supported resolutions: 720p or 1080p
  • Extension limits:

    • Each run adds +7 seconds
    • Up to 20 extensions in a chain
    • Maximum final length (original + extensions): 148 seconds

Output: a single MP4 video at the same resolution and aspect ratio as the input clip.
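
As a quick sanity check, these limits are easy to validate locally before submitting a job. The sketch below is illustrative Python; the helper name and example durations are hypothetical, and only the numeric limits (7-second blocks, 20 extensions, 141 s input, 148 s total) come from the constraints above.

EXTENSION_SECONDS = 7          # each Fast run appends exactly 7 seconds
MAX_EXTENSIONS = 20            # maximum number of chained extensions
MAX_INPUT_SECONDS = 141        # maximum duration of the input clip
MAX_TOTAL_SECONDS = 148        # maximum length of the merged output
SUPPORTED_ASPECT_RATIOS = {"16:9", "9:16"}
SUPPORTED_RESOLUTIONS = {"720p", "1080p"}

def can_extend(input_seconds: float, extensions_so_far: int,
               aspect_ratio: str, resolution: str) -> bool:
    """Return True if one more 7-second Fast extension fits the documented limits."""
    return (
        aspect_ratio in SUPPORTED_ASPECT_RATIOS
        and resolution in SUPPORTED_RESOLUTIONS
        and extensions_so_far < MAX_EXTENSIONS
        and input_seconds <= MAX_INPUT_SECONDS
        and input_seconds + EXTENSION_SECONDS <= MAX_TOTAL_SECONDS
    )

print(can_extend(8, 0, "16:9", "1080p"))     # True
print(can_extend(141, 19, "16:9", "1080p"))  # True  (141 + 7 = 148, exactly at the cap)
print(can_extend(142, 5, "16:9", "1080p"))   # False (input exceeds the 141-second limit)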


🚀 How to Use

  1. Generate a base clip with Veo 3.1 text-to-video or image-to-video.

  2. Call Veo 3.1 — Video Extend (Fast) and pass that clip as video.

  3. Write an extension prompt describing what should happen next:

    • Action (what characters or objects do)
    • Camera behaviour (pan, zoom, dolly, cut feel)
    • Mood and visual style (lighting, atmosphere, pacing)
  4. Submit the job and download the merged extended video.

  5. To continue the story, feed the new output back into the same endpoint (respecting the 148s cap); see the sketch below.
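
A minimal version of this submit-poll-chain loop in Python: it assumes the requests library and a WAVESPEED_API_KEY environment variable; the helper name, placeholder video URL, and prompts are illustrative, while the endpoint paths and request fields match the API reference further down this page.

import os
import time

import requests

API_BASE = "https://api.wavespeed.ai/api/v3"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}",
}

def extend_once(video_url: str, prompt: str, resolution: str = "1080p") -> str:
    """Submit one Fast extension and return the URL of the merged output clip."""
    submit = requests.post(
        f"{API_BASE}/google/veo3.1-fast/video-extend",
        headers=HEADERS,
        json={"video": video_url, "prompt": prompt, "resolution": resolution},
    )
    submit.raise_for_status()
    task_id = submit.json()["data"]["id"]

    # Poll the result endpoint until the task completes or fails.
    while True:
        result = requests.get(f"{API_BASE}/predictions/{task_id}/result", headers=HEADERS)
        result.raise_for_status()
        data = result.json()["data"]
        if data["status"] == "completed":
            return data["outputs"][0]          # the single merged MP4
        if data["status"] == "failed":
            raise RuntimeError(data["error"])
        time.sleep(5)

# Chain two extensions by feeding each merged output back in (staying under the 148s cap).
clip = "https://example.com/base-veo-3.1-clip.mp4"   # placeholder Veo 3.1 output
clip = extend_once(clip, "Continue from previous shot, camera slowly cranes up.")
clip = extend_once(clip, "The character walks toward the glowing city skyline, warm cinematic lighting.")
print("Final merged clip:", clip)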


✍️ Prompt Tips

  • Think of the prompt as a “next shot” or “next beat”:

    action + camera + mood + continuity anchors

    Example: “Continue from previous shot, camera slowly cranes up, the character walks toward the glowing city skyline, warm cinematic lighting, smooth motion.”

  • Mention what should stay consistent:

    • Main character, wardrobe, time of day, general colour palette.
  • If audio continuity matters, ensure the last second of the input clip has usable audio; the extension will build from that context.

  • Plan your narrative around:

    • 7-second extension blocks, and
    • the 141s input / 148s total hard limits.

💰 Pricing

Flat, simple pricing:

  • $1.05 per run

Each run appends one 7-second fast extension to your clip and returns a new merged video.

That works out to approximately:

  • ≈ $0.15 per second of added footage.
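
For budgeting, the per-second figure and the cost of a maximal chain follow directly from these numbers. A tiny illustrative calculation (variable names are hypothetical):

price_cents_per_run = 105   # $1.05 per Fast extension run
seconds_per_run = 7         # each run adds 7 seconds
max_runs = 20               # documented maximum chain length

print(price_cents_per_run / seconds_per_run)   # 15.0 cents per added second
print(max_runs * price_cents_per_run / 100)    # 21.0 USD for a full 20-run chain (140 s added)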

🔁 More Model Options

  • alibaba/wan-2.5/video-extend: Extend Alibaba WAN 2.5 AI videos with smooth, style-consistent continuations, perfect for story-driven ads, trailers, and long-form generative video workflows.

  • google/veo3.1/video-extend: Google Veo 3.1 Video Extend adds high-quality extra segments to Veo-generated clips, ideal for Shorts, Reels, and other fast-turnaround AI video content.

Authentication

For authentication details, please refer to the Authentication Guide.

API Endpoints

Submit Task & Query Result


# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/google/veo3.1-fast/video-extend" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "https://example.com/your-veo-3.1-clip.mp4",
    "prompt": "Continue from previous shot, camera slowly cranes up toward the glowing city skyline.",
    "resolution": "1080p"
}'

# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"

Parameters

Task Submission Parameters

Request Parameters

Parameter | Type | Required | Default | Range | Description
video | string | Yes | - | - | The video to use for the generation.
prompt | string | No | - | - | The positive prompt for the generation.
resolution | string | No | 1080p | 720p, 1080p | Video resolution.
negative_prompt | string | No | - | - | The negative prompt for the generation.
seed | integer | No | - | -1 ~ 2147483647 | The random seed to use for the generation.

Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data.id | string | Unique identifier for the prediction (task ID)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds

Result Request Parameters

Parameter | Type | Required | Default | Description
id | string | Yes | - | Task ID

Result Response Parameters

Parameter | Type | Description
code | integer | HTTP status code (e.g., 200 for success)
message | string | Status message (e.g., “success”)
data | object | The prediction data object containing all details
data.id | string | Unique identifier for the prediction (the ID of the prediction to get)
data.model | string | Model ID used for the prediction
data.outputs | array | Array of URLs to the generated content (empty when status is not completed)
data.urls | object | Object containing related API endpoints
data.urls.get | string | URL to retrieve the prediction result
data.status | string | Status of the task: created, processing, completed, or failed
data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”)
data.error | string | Error message (empty if no error occurred)
data.timings | object | Object containing timing details
data.timings.inference | integer | Inference time in milliseconds