Molmo2-4B Image Content Moderator

wavespeed-ai/molmo2/image-content-moderator

Molmo2-4B Image Content Moderator analyzes image content for safety, appropriateness, and policy compliance, detecting violence, nudity, gore, and other harmful visual content. Built on an open-source vision-language model and served through a ready-to-use REST API with no cold starts and affordable pricing.


Molmo2 Image Content Moderator

Automatically screen images for harmful content with Molmo2 Image Content Moderator. This AI-powered moderation tool analyzes images and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for content platforms, user-generated content, and compliance workflows.

Need video moderation? Try Molmo2 Video Content Moderator for video analysis.

Why It Works Great

  • Comprehensive detection: Screens for harassment, hate, sexual content, and violence.
  • Child safety: Dedicated detection for content involving minors.
  • Instant results: Fast processing with results in seconds.
  • JSON output: Clean, structured results for easy integration.
  • Ultra-affordable: Just $0.003 per image — 333 images for $1.
  • Custom criteria: Optional text input for additional context.

Parameters

Parameter   Required   Description
image       Yes        Image to analyze (upload or public URL).
text        No         Optional context or custom moderation criteria.

How to Use

  1. Upload your image — drag and drop or paste a public URL.
  2. Add context (optional) — provide additional text for custom criteria.
  3. Run — click the button to analyze.
  4. Review results — check the JSON output for safety classifications.
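
For automated workflows, the same call can be made over the REST API. The sketch below (Python with the requests library) is illustrative only: the endpoint URL, header name, and environment variable are assumptions rather than the documented WaveSpeed contract, so confirm the exact request shape in the official API reference.

import os
import requests  # pip install requests

# Hypothetical endpoint and auth scheme; confirm the path and header
# in the WaveSpeed API reference before using.
API_KEY = os.environ["WAVESPEED_API_KEY"]
ENDPOINT = "https://api.wavespeed.ai/wavespeed-ai/molmo2/image-content-moderator"

payload = {
    "image": "https://example.com/user-upload.jpg",  # public URL or uploaded file
    "text": "Also flag depictions of weapons",       # optional custom criteria
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # one boolean flag per category, as shown under Output Format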

Pricing

Flat rate per image analyzed.

Output         Cost
Per image      $0.003
100 images     $0.30
1,000 images   $3.00

Output Format

The model returns a JSON object with boolean flags for each content category:

{
  "harassment": false,
  "hate": false,
  "sexual": false,
  "sexual/minors": false,
  "violence": false
}
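
Because every category is a plain boolean, downstream handling is a simple dictionary check. The snippet below is a minimal Python sketch, assuming the response has already been parsed into a dict with the keys shown above; the helper name is illustrative, not part of any client library.

def is_flagged(result: dict) -> bool:
    # True if the model detected content in any category.
    return any(bool(value) for value in result.values())

result = {
    "harassment": False,
    "hate": False,
    "sexual": False,
    "sexual/minors": False,
    "violence": True,
}

if is_flagged(result):
    flagged = [category for category, hit in result.items() if hit]
    print("Flagged:", ", ".join(flagged))  # e.g. "Flagged: violence"
else:
    print("Image passed all checks.")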

Detection Categories

Category        Description
harassment      Bullying, intimidation, or targeted abuse
hate            Hate speech, discrimination, or prejudice
sexual          Adult sexual content or nudity
sexual/minors   Any sexual content involving minors
violence        Graphic violence, gore, or harmful imagery

Best Use Cases

  • Content Platforms — Screen user-uploaded images before publishing.
  • Social Media — Moderate image content at scale.
  • E-commerce — Review product images for policy compliance.
  • Dating Apps — Filter inappropriate profile photos.
  • Forums & Communities — Ensure uploaded images meet guidelines.
  • Pre-screening — Filter content before human review.

Pro Tips for Best Results

  • Use as a first-pass filter before human moderation for edge cases (see the sketch after this list).
  • Integrate via API for automated moderation pipelines.
  • At $0.003 per image, batch processing is extremely cost-effective.
  • Combine with video moderation for comprehensive content screening.
  • Use the text parameter to provide context for borderline content.
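
As a concrete version of the first tip, a pre-screening pipeline can auto-approve clean images and queue anything flagged for a human. The sketch below assumes a moderate_image callable that wraps the API request shown earlier; it is a hypothetical helper, not a documented client function.

from typing import Callable, Dict

CATEGORIES = ("harassment", "hate", "sexual", "sexual/minors", "violence")

def route(image_url: str, moderate_image: Callable[[str], Dict[str, bool]]) -> str:
    # First-pass filter: publish clean images, send flagged ones to human review.
    result = moderate_image(image_url)
    if any(result.get(category, False) for category in CATEGORIES):
        return "human_review"
    return "publish"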

Notes

  • If using a URL, ensure it is publicly accessible.
  • Processing is near-instant for most images.
  • Returns boolean values — true indicates detected content.
  • Designed for automated workflows with JSON output.
  • Consider human review for flagged content or edge cases.