Void Video Inpainting Mask
Playground
Try it on WavespeedAI! VOID Video Inpainting removes objects from videos using mask-guided inpainting. It supports quadmask or auto-generated SAM-3 masks, optional Pass 2 refinement for temporal consistency, and adjustable denoising steps, guidance scale, and temporal window size. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
VOID Video Inpainting — Object Removal
VOID Video Inpainting removes objects or people from video footage and fills the background with realistic, temporally consistent content. Describe what to remove and what the background should look like — the model handles the rest, with optional mask video input for precise control.
Why Choose This?
- Text-driven object removal: Describe the object or person to remove in plain language, with no manual masking required. The model uses SAM-3 to auto-generate a mask from your text description.
- Custom mask video support: Upload a pre-prepared VOID-style quadmask or simple binary mask video for precise, frame-accurate removal control.
- Background inpainting: Describe the desired background after removal, and the model fills the gap with contextually appropriate, motion-consistent content.
- Pass 2 refinement: Enable enable_pass2_refinement for additional warped-noise refinement that improves temporal consistency on longer clips.
- Fine-grained generation control: Adjust inference steps, guidance scale, denoising strength, and temporal window size for precise output control.
Parameters
| Parameter | Required | Description |
|---|---|---|
| video | Yes | Input video containing the object to remove (URL). |
| prompt | Yes | Text description of the desired background after object removal. |
| mask_video | No | Mask video URL. Supports VOID quadmask (4 grayscale values) or simple binary mask. Auto-generated if omitted. |
| mask_prompt | No | Text description of what to mask/remove. Used to auto-generate a mask when mask_video is not provided. |
| enable_pass2_refinement | No | Run Pass 2 warped-noise refinement for improved temporal consistency. Slower but higher quality. Default: false. |
| negative_prompt | No | Negative prompt to guide generation away from undesired outputs. |
| num_inference_steps | No | Number of denoising steps. Range: 1–50. Default: 30. Higher = better quality, slower. |
| guidance_scale | No | Classifier-free guidance scale. Range: 0–20. Default: 1. |
| strength | No | Denoising strength. Range: 0–1. Default: 1 (full denoising). |
| num_frames | No | Temporal window size. Valid values: 69, 77, 85, …, 197. Default: 85. |
| seed | No | Random seed for reproducible results. |
Mask Video Format
The mask_video supports two formats:
- VOID quadmask (recommended): 4 grayscale values — 0 = object to remove, 63 = overlap region, 127 = affected area, 255 = background to keep.
- Simple binary mask: 0 = remove, 255 = keep.
If mask_video is not provided, a mask is auto-generated from mask_prompt using SAM-3.
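If you already have a simple binary mask, it can be expanded into the quadmask layout above. A minimal sketch in Python with NumPy (the band widths for the overlap and affected regions are illustrative assumptions, not values from the VOID spec):

```python
import numpy as np

def binary_to_quadmask(binary: np.ndarray, overlap_px: int = 4, affected_px: int = 12) -> np.ndarray:
    """Convert a binary mask frame (True = remove) into VOID quadmask grayscale values:
    0 = object to remove, 63 = overlap ring, 127 = affected area, 255 = background.
    Band widths (overlap_px, affected_px) are illustrative assumptions."""
    def dilate(mask: np.ndarray, r: int) -> np.ndarray:
        # Square dilation by shifting; np.roll wraps at edges, acceptable for a sketch
        # as long as the object is not touching the frame border.
        out = mask.copy()
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        return out

    overlap = dilate(binary, overlap_px)
    affected = dilate(binary, affected_px)
    quad = np.full(binary.shape, 255, dtype=np.uint8)  # background to keep
    quad[affected] = 127                               # affected area
    quad[overlap] = 63                                 # overlap ring
    quad[binary] = 0                                   # object to remove
    return quad
```

Applying this per frame and re-encoding the frames as a video yields a mask_video in the recommended quadmask format.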
How to Use
- Upload your video — provide the source clip containing the object to remove.
- Write your prompt — describe what the background should look like after the object is removed.
- Provide mask input — either upload a mask_video for precise control, or provide a mask_prompt to auto-generate the mask.
- Enable Pass 2 (optional) — check enable_pass2_refinement for improved temporal consistency on longer clips.
- Adjust generation settings (optional) — tune inference steps, guidance scale, strength, and num_frames as needed.
- Add negative prompt (optional) — specify elements to avoid in the inpainted output.
- Set seed (optional) — fix the seed to reproduce a specific result.
- Submit — generate, preview, and download your object-removed video.
Pricing
| Pass 2 Refinement | Mask Video | Cost |
|---|---|---|
| No | No (auto) | $0.05 |
| Yes | No (auto) | $0.10 |
| No | Yes | $0.10 |
| Yes | Yes | $0.15 |
Billing Rules
- Base cost: $0.05 (without Pass 2)
- Pass 2 surcharge: +$0.05 (double the base cost) when enabled
- Mask video surcharge: +$0.05 when a mask_video is provided
Best Use Cases
- Film & video post-production — Remove unwanted objects, crew members, or equipment from footage.
- Social media content — Clean up backgrounds by removing distracting elements before publishing.
- Product video cleanup — Remove staging props, logos, or unwanted foreground elements from product footage.
- Content repurposing — Strip specific elements from existing footage to repurpose clips for new contexts.
Pro Tips
- Provide a mask_video for the most accurate, frame-precise removal — especially for fast-moving or partially occluded subjects.
- If using mask_prompt for auto-generation, be specific about the object to remove (e.g. “the person on the left” rather than just “person”).
- Write a detailed background prompt describing texture, lighting, and environment for more coherent fill results.
- Enable Pass 2 refinement for clips longer than a few seconds where temporal consistency matters most.
- Use a fixed seed when iterating on prompt or mask changes to isolate the effect of each adjustment.
Notes
- Both video and prompt are required fields; all other parameters are optional.
- If mask_video is omitted, mask_prompt should be provided to guide automatic mask generation.
- Valid num_frames values are 69, 77, 85, 93, 101, … up to 197 (increments of 8).
- Ensure video and mask_video URLs are publicly accessible.
- Either mask_video or mask_prompt must be provided.
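The num_frames snapping mentioned in the notes above can be approximated as follows (the exact backend rounding rule is not documented, so nearest-valid-value with clamping is an assumption):

```python
def snap_num_frames(n: int) -> int:
    """Snap a requested frame count to the documented valid grid
    69, 77, 85, ..., 197 (increments of 8).
    Out-of-range requests clamp to the nearest endpoint."""
    n = max(69, min(197, n))
    k = round((n - 69) / 8)  # nearest step on the 8-frame grid
    return 69 + 8 * k
```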
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/void-video-inpainting/mask" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "https://example.com/input.mp4",
    "prompt": "an empty street with consistent lighting",
    "enable_pass2_refinement": false,
    "num_inference_steps": 30,
    "guidance_scale": 1,
    "strength": 1,
    "num_frames": 85
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| prompt | string | Yes | - | - | Text description of the desired background after object removal. |
| video | string | Yes | - | - | URL of the input video containing the object to remove. |
| mask_video | string | No | - | - | URL of a mask video for the removal target. For best results this should be a VOID-style quadmask video with 4 grayscale values: 0=object to remove, 63=overlap, 127=affected region, 255=background to keep. A simple binary mask (0=remove, 255=keep) also works. If omitted, the app will generate a temporary mask video from `mask_prompt` using the SAM-3 video endpoint and convert it to a quadmask automatically. |
| mask_prompt | string | No | - | - | Text description of what should be masked in the input video, such as the object or person to remove. Used to generate a temporary mask video with SAM-3 when `mask_video` is not provided. |
| enable_pass2_refinement | boolean | No | false | - | Run VOID Pass 2 warped-noise refinement after Pass 1. This is slower but can improve temporal consistency on longer clips. |
| negative_prompt | string | No | - | - | Negative prompt to guide generation away from undesired outputs. |
| num_inference_steps | integer | No | 30 | 1 ~ 50 | Number of denoising steps. Higher values improve quality but increase latency. |
| guidance_scale | number | No | 1 | 0 ~ 20 | Classifier-free guidance scale. |
| strength | number | No | 1 | 0 ~ 1 | Denoising strength. 1.0 means full denoising. |
| num_frames | integer | No | 85 | 1 ~ 197 | Temporal window size for inference. The backend snaps this to the nearest CogVideoX-safe value that works with temporal compression and patching. Valid outputs are 69, 77, 85, ..., 197. |
| seed | integer | No | - | -1 ~ 2147483647 | Random seed for reproducibility. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the ID used to fetch this result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |