SAM3 Video

wavespeed-ai/sam3-video

SAM3 Video is a unified foundation model for prompt-based video segmentation. Provide text, point, box, or mask prompts and the model segments and tracks targets across frames with strong temporal consistency. Supports concept-level (“segment anything with concepts”) and multi-object masks for editing, analytics, and VFX. Ready-to-use REST inference API with fast response, no cold starts, and affordable pricing.


README

WaveSpeedAI SAM3 Video (Video-to-Video)

SAM3 Video (wavespeed-ai/sam3-video) is a prompt-based video segmentation and mask-guided editing model. You provide a video plus a short text instruction (and optionally enable mask application), and the model segments and tracks the requested subject(s) across frames with strong temporal consistency.

It’s a practical fit for object-focused video edits like background cleanup, removing unwanted elements, or isolating subjects for downstream compositing—especially on short-to-medium clips with clear subjects.

Key capabilities

  • Prompt-based target selection (concept prompts): identify what to edit/segment using natural language (e.g., “the woman”, “person”, “red car”) without manually drawing masks frame by frame.

  • Multi-object targeting in one run: track multiple object categories by listing them in the prompt (comma-separated), producing consistent targets across frames.

  • Mask-guided region control via apply_mask: toggle whether the model applies the mask to the video output for tighter, more controllable edits.

  • Temporal consistency for video workflows: designed to keep results stable across frames, reducing flicker/drift compared with per-frame processing.

  • Editing-oriented use cases: works well for object removal and background cleanup when your prompt clearly specifies what should change and what should stay.

Parameters and how to use

  • video: (required) Input video file or a public URL.
  • prompt: (required) Text instruction for segmentation/editing. Use commas to target multiple objects (e.g., person, cloth).
  • apply_mask: Whether to apply the mask to the video (boolean). Default: true.
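As a sketch, the three parameters above map onto a simple JSON request body. The field names follow the parameter list, but treat the overall structure as an assumption for illustration rather than the authoritative schema:

```python
import json

def build_payload(video: str, prompt: str, apply_mask: bool = True) -> dict:
    """Assemble the request body for one sam3-video run.

    video: an uploaded-file reference or a public URL.
    prompt: target description; comma-separate multiple objects.
    apply_mask: whether the mask is applied to the output (default: true).
    """
    return {"video": video, "prompt": prompt, "apply_mask": apply_mask}

# Two comma-separated targets, mask application left at its default.
payload = build_payload("https://example.com/clip.mp4", "person, cloth")
print(json.dumps(payload))
```

Note that `apply_mask` is omitted from the call above, so it falls back to its documented default of true.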

Prompt

Write prompts like you’re describing what to target and (if applicable) what the edit intent is.

Tips:

  • Prefer short, concrete nouns for targeting: person, woman, car, dog, shirt.
  • For multiple targets, use comma-separated labels: person, backpack, bicycle.
  • If you’re doing cleanup/removal, include keep-constraints to preserve look: “remove the person in the background, keep lighting unchanged”

Examples:

  • The woman
  • person, cloth
  • remove the person in the background, keep lighting unchanged

Media (Videos)

  • Provide video as either:

    • an uploaded file, or
    • a public URL the service can fetch.
  • Pricing/processing uses a billed duration clamp of 5–600 seconds, so very short clips are billed as 5s, and very long clips are treated as 600s.

Other parameters

  • apply_mask

    • true: apply the model’s mask to the output video (recommended when you want tighter control over the edited region).
    • false: run without applying the mask (useful when you want the model’s edits without explicit masking).

After you finish configuring the parameters, click Run, preview the result, and iterate if needed.
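Outside the web playground, the same run can be submitted through the REST API. The endpoint path and Bearer-token auth header below are assumptions for illustration; consult the WaveSpeed API reference for the authoritative route and authentication scheme:

```python
import json
import urllib.request

# Assumed endpoint path -- not confirmed by this page; check the API docs.
ENDPOINT = "https://api.wavespeed.ai/api/v3/wavespeed-ai/sam3-video"

def make_run_request(api_key: str, video_url: str, prompt: str,
                     apply_mask: bool = True) -> urllib.request.Request:
    """Build (but do not send) a POST request for one model run."""
    body = json.dumps({
        "video": video_url,
        "prompt": prompt,
        "apply_mask": apply_mask,
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually submit: urllib.request.urlopen(make_run_request(...)),
# which requires a valid API key and network access.
req = make_run_request("sk-demo", "https://example.com/clip.mp4", "the woman")
print(req.get_method())
```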

Pricing

Per-run cost depends on video duration (billed duration is clamped to 5–600 seconds), charged in 5-second units at $0.05 per 5s.

Billed duration   Cost per run
5s                $0.05
10s               $0.10
600s (max)        $6.00
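The rule above can be expressed as a small helper. Clamping to 5–600 seconds comes straight from the pricing description; rounding a partial unit up to the next full 5 seconds is an assumption about how fractional units are billed:

```python
import math

def billed_cost(duration_s: float, unit_s: int = 5,
                unit_price: float = 0.05) -> float:
    """Cost per run: clamp duration to 5-600 s, then charge $0.05 per 5 s unit.

    Rounding partial units up (math.ceil) is an assumption, not stated
    explicitly on this page.
    """
    clamped = min(max(duration_s, 5), 600)     # billed-duration clamp
    units = math.ceil(clamped / unit_s)        # number of 5 s units
    return round(units * unit_price, 2)

print(billed_cost(3))     # very short clip, billed as 5 s
print(billed_cost(10))    # matches the 10 s table row
print(billed_cost(1000))  # very long clip, clamped to 600 s
```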

Notes

  • Best results come from stable footage with clear subject separation and minimal heavy motion blur.
  • Turn on apply_mask when you need more precise, localized control (especially in cluttered scenes).
  • If results drift or pick the wrong target, refine the prompt (more specific noun/descriptor) or reduce to fewer targets per run.
