SAM3 Video RLE
Playground
Try it on WavespeedAI!
SAM3 Video RLE is a unified foundation model for prompt-based segmentation in video. Track and segment objects across frames using text, points, or boxes, and receive RLE-encoded masks for efficient processing. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
Features
SAM3 Video Segmentation RLE
SAM3 Video Segmentation RLE is an advanced video segmentation model based on Meta’s Segment Anything Model 3. It tracks and segments objects across video frames and returns masks in RLE (Run-Length Encoding) format — ideal for programmatic processing, automated pipelines, and integration with downstream workflows.
Why Choose This?
- Video object tracking: Segment and track objects consistently across all video frames.
- RLE output format: Returns compact Run-Length Encoded mask data for efficient storage and processing.
- Multiple prompt types: Segment objects using text prompts, point prompts, box prompts, or any combination.
- Multi-object tracking: Track multiple objects using comma-separated prompts (e.g., "person, cloth").
- Prompt Enhancer: Built-in tool to automatically improve your text prompts for better results.
- Optional mask visualization: Toggle apply_mask to preview the segmentation overlaid on the video.
Parameters
| Parameter | Required | Description |
|---|---|---|
| video | Yes | Source video to segment (upload or URL) |
| prompt | Yes | Text description of the object(s) to segment |
| point_prompts | No | Point coordinates to identify the target object |
| box_prompts | No | Bounding box coordinates to identify the target object |
| apply_mask | No | Apply mask overlay to the video output |
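As a rough illustration of the parameters above, a request payload might look like the sketch below; the video URL and prompt are placeholders, and point_prompts/box_prompts are left empty since their exact coordinate format depends on your targets.

# Illustrative payload only; values are placeholders, not a real job
payload = {
    "video": "https://example.com/clip.mp4",  # source video to segment
    "prompt": "person, cloth",                # comma-separated for multiple objects
    "point_prompts": [],                      # optional point coordinates
    "box_prompts": [],                        # optional bounding boxes
    "apply_mask": True,                       # overlay masks on the video for preview
}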
How to Use
- Upload your video — drag and drop or paste a URL.
- Write your prompt — describe the object(s) to track (e.g., “the man”, “person, cloth”).
- Add point/box prompts (optional) — click "+ Add Item" for precise targeting.
- Enable apply_mask (optional) — check to visualize masks on the video.
- Run — submit and receive RLE-encoded segmentation data.
Output Format
The model returns RLE (Run-Length Encoding) data for each frame in JSON format, enabling efficient programmatic processing.
Decoding RLE in Python
from pycocotools import mask as mask_utils

# Per-frame RLE object as returned by the model
rle_data = {"counts": "146301 3 147834 11 ...", "size": [height, width]}
binary_mask = mask_utils.decode(rle_data)  # returns a binary numpy array of shape (height, width)
Pricing
| Duration | Cost |
|---|---|
| Per 5 seconds | $0.05 |
| 1 minute | $0.60 |
| 5 minutes | $3.00 |
| 10 minutes | $6.00 |
Billing Rules
- Rate: $0.05 per 5 seconds
- Minimum charge: 5 seconds
- Maximum duration: 10 minutes (600 seconds)
- Billed duration: Video length rounded up to the next 5-second unit (see the example below)
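As a quick sanity check of these rules, a 47-second video bills as 50 seconds, i.e. 10 units × $0.05 = $0.50. A minimal sketch of the calculation:

import math

# Sketch of the billing rules above: $0.05 per started 5-second unit,
# minimum 5 seconds, maximum 600 seconds per job.
def estimate_cost(duration_seconds):
    if duration_seconds > 600:
        raise ValueError("maximum duration is 10 minutes (600 seconds) per job")
    billed_seconds = max(5, math.ceil(duration_seconds / 5) * 5)
    return (billed_seconds // 5) * 0.05

print(f"${estimate_cost(47):.2f}")   # $0.50 (47 s bills as 50 s)
print(f"${estimate_cost(300):.2f}")  # $3.00 (matches the 5-minute row above)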
Best Use Cases
- Video Annotation — Generate frame-by-frame segmentation masks for training data.
- Object Tracking Pipelines — Integrate tracking data into automated workflows.
- Video Editing Automation — Extract masks for programmatic video processing.
- Computer Vision — Track objects across frames for CV applications.
- VFX Pipelines — Generate rotoscoping masks for visual effects work.
Pro Tips
- Use comma-separated prompts to track multiple objects (e.g., “person, car, dog”).
- Use this model when you need programmatic access to frame-by-frame mask data.
- Use SAM3 Video if you need direct video output.
- RLE format is compatible with pycocotools for easy decoding.
- Combine text prompts with point/box prompts for more accurate tracking.
Notes
- Maximum video duration is 10 minutes per job.
- Output is RLE-encoded JSON data, not a video file.
- For longer videos, split into segments and process separately.
- Use pycocotools or similar libraries to decode RLE data.
Related Models
- SAM3 Video — Same segmentation with direct video output.
- SAM3 Image RLE — Image segmentation with RLE output.
- SAM3 Image — Image segmentation with image output.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/sam3-video-rle" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "video": "<video_url>",
    "prompt": "<text_prompt>",
    "point_prompts": [],
    "box_prompts": [],
    "apply_mask": true
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
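The same submit-and-poll flow can be scripted. The sketch below uses the endpoints above; the requests library, the placeholder video URL and prompt, and the 1-second polling interval are illustrative choices, not part of the API.

import os
import time
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"}

# Submit the task (video URL and prompt are placeholders)
submit = requests.post(
    "https://api.wavespeed.ai/api/v3/wavespeed-ai/sam3-video-rle",
    headers=headers,
    json={"video": "https://example.com/clip.mp4", "prompt": "person", "apply_mask": True},
)
task = submit.json()["data"]

# Poll the result URL until the task completes or fails
while True:
    result = requests.get(task["urls"]["get"], headers={"Authorization": f"Bearer {API_KEY}"}).json()["data"]
    if result["status"] in ("completed", "failed"):
        break
    time.sleep(1)  # polling interval is an arbitrary choice

if result["status"] == "completed":
    print(result["outputs"])  # RLE-encoded mask data (see "Decoding RLE in Python" above)
else:
    print("error:", result["error"])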
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| video | string | Yes | - | - | URL of the source video to segment |
| prompt | string | Yes | - | - | Text prompt for segmentation. Use commas to track multiple objects (e.g., "person, cloth"). |
| point_prompts | array | No | [] | - | List of point coordinates to mark specific locations for segmentation (foreground or background) |
| box_prompts | array | No | [] | - | List of bounding boxes to define rectangular regions for segmentation |
| apply_mask | boolean | No | true | - | Whether to apply mask to video |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used to request the result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of RLE-encoded mask data (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |