Molmo2-4B Video Content Moderator

wavespeed-ai/molmo2/video-content-moderator

Molmo2-4B Video Content Moderator analyzes video content for safety, appropriateness, and policy compliance. It detects violence, nudity, gore, and other harmful visual content in videos using an open-source vision-language model, and is available through a ready-to-use REST inference API with no cold starts and affordable per-run pricing.


Molmo2 Video Content Moderator

Automatically screen videos for harmful content with Molmo2 Video Content Moderator. This AI-powered moderation tool analyzes videos and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for content platforms, user-generated content, and compliance workflows.

Need image moderation? Try Molmo2 Image Content Moderator for image analysis.

Why It Works Great

  • Comprehensive detection: Screens for harassment, hate, sexual content, and violence.
  • Child safety: Dedicated detection for content involving minors.
  • Fast processing: Quick analysis with results in seconds.
  • JSON output: Clean, structured results for easy integration.
  • Ultra-affordable: Just $0.003 per video — 333 videos for $1.
  • Custom criteria: Optional text input for additional context.

Parameters

Parameter  Required  Description
video      Yes       Video to analyze (upload or public URL).
text       No        Optional context or custom moderation criteria.
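
For reference, a request body using both parameters might look like the following Python dict; the video URL and the custom criteria string are placeholders:

# Request body for the moderator: "video" is required, "text" is optional.
payload = {
    "video": "https://example.com/uploads/clip.mp4",  # public URL or uploaded file
    "text": "Also treat depictions of weapons as violence",  # optional custom criteria
}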

How to Use

  1. Upload your video — drag and drop or paste a public URL.
  2. Add context (optional) — provide additional text for custom criteria.
  3. Run — click the button to analyze.
  4. Review results — check the JSON output for safety classifications (or script the same flow through the API, as sketched below).
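
For automated pipelines, the same flow can be scripted against the REST API. The sketch below is a minimal Python example; the endpoint path, auth header, and synchronous response shape are assumptions based on this page, so confirm them against the WaveSpeed API documentation:

import os
import requests

# Hypothetical endpoint path, assembled from the model slug on this page.
# Confirm the real path and auth scheme in the WaveSpeed API docs.
API_URL = "https://api.wavespeed.ai/wavespeed-ai/molmo2/video-content-moderator"

def moderate_video(video_url: str, text: str | None = None) -> dict:
    """Submit a video for moderation and return the category flags as a dict."""
    payload = {"video": video_url}
    if text:
        payload["text"] = text  # optional custom moderation criteria
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['WAVESPEED_API_KEY']}"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()

flags = moderate_video("https://example.com/uploads/clip.mp4")
print(flags)

If the API returns a task ID for asynchronous processing rather than the flags directly, add a polling step before reading the result.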

Pricing

Flat rate per video analyzed.

Output        Cost
Per video     $0.003
100 videos    $0.30
1,000 videos  $3.00

Output Format

The model returns a JSON object with boolean flags for each content category:

{
  "harassment": false,
  "hate": false,
  "sexual": false,
  "sexual/minors": false,
  "violence": false
}
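
In an automated workflow you will usually want to turn these flags into an action. Below is a minimal routing sketch, assuming the JSON object above has been parsed into a Python dict of booleans; the block/review policy shown is an example, not something mandated by the API:

# Turn the moderation flags into an action. `result` is the JSON object
# above, parsed into a Python dict of booleans.
BLOCK_CATEGORIES = {"sexual/minors", "violence"}  # example policy, not part of the API

def route(result: dict) -> str:
    flagged = {category for category, detected in result.items() if detected}
    if flagged & BLOCK_CATEGORIES:
        return "block"          # auto-reject the highest-severity categories
    if flagged:
        return "human_review"   # borderline categories go to a moderator
    return "approve"

clean = {"harassment": False, "hate": False, "sexual": False,
         "sexual/minors": False, "violence": False}
print(route(clean))  # -> "approve"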

Detection Categories

Category       Description
harassment     Bullying, intimidation, or targeted abuse
hate           Hate speech, discrimination, or prejudice
sexual         Adult sexual content or nudity
sexual/minors  Any sexual content involving minors
violence       Graphic violence, gore, or harmful acts

Best Use Cases

  • Content Platforms — Screen user-uploaded videos before publishing.
  • Social Media — Moderate video content at scale.
  • E-commerce — Review product videos for policy compliance.
  • Education — Ensure learning content is age-appropriate.
  • Compliance — Meet content moderation requirements efficiently.
  • Pre-screening — Filter content before human review.

Pro Tips for Best Results

  • Use as a first-pass filter before human moderation for edge cases.
  • Integrate via API for automated moderation pipelines.
  • At $0.003 per video, batch processing is extremely cost-effective (see the batch sketch after this list).
  • Combine with image moderation for comprehensive content screening.
  • Use the text parameter to provide context for borderline content.
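
Putting the batch and first-pass tips together, a pre-screening loop might look like the following; it reuses the hypothetical moderate_video() and route() sketches from earlier, and the URLs are placeholders:

# First-pass batch filter: auto-approve clean videos, queue the rest for review.
# Reuses the hypothetical moderate_video() and route() sketches from above.
video_urls = [
    "https://example.com/uploads/clip-001.mp4",  # placeholder URLs
    "https://example.com/uploads/clip-002.mp4",
]

review_queue = []
for url in video_urls:
    decision = route(moderate_video(url))
    if decision != "approve":
        review_queue.append((url, decision))  # hold for human moderation
    print(f"{url}: {decision}")

# At the flat $0.003 rate, screening 1,000 uploads this way costs about $3.00.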

Notes

  • If using a URL, ensure it is publicly accessible.
  • Processing time scales with video length.
  • Returns boolean values — true indicates detected content.
  • Designed for automated workflows with JSON output.
  • Consider human review for flagged content or edge cases.