Molmo2-4B Text Content Moderator

wavespeed-ai/molmo2/text-content-moderator

Molmo2-4B Text Content Moderator: Analyze text content for safety, appropriateness, and policy compliance. Detects hate speech, violence, sexual content, and other harmful categories. Built on the open-source Molmo2 vision-language model. Ready-to-use REST API with no cold starts and affordable pricing.

README

Molmo2 Text Content Moderator

Automatically screen text for harmful content with Molmo2 Text Content Moderator. This AI-powered moderation tool analyzes text and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for chat platforms, user-generated content, and compliance workflows.

Why It Works Great

  • Comprehensive detection: Screens for harassment, hate, sexual content, and violence.
  • Child safety: Dedicated detection for content involving minors.
  • Instant results: Near-instant processing for real-time moderation.
  • JSON output: Clean, structured results for easy integration.
  • Ultra-affordable: Just $0.003 per request — 333 requests for $1.
  • Simple integration: Single text input for straightforward API calls.

Parameters

Parameter  Required  Description
text       Yes       Text content to analyze for harmful content.

How to Use

  1. Enter your text — paste or type the content to analyze.
  2. Run — click the button to analyze.
  3. Review results — check the JSON output for safety classifications.
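For automated workflows, the same steps map onto a REST call. A minimal sketch in Python, assuming a hypothetical endpoint URL and bearer-token authentication (check the WaveSpeedAI API reference for the actual endpoint path, auth scheme, and response envelope):

```python
import json
import urllib.request

# Assumed endpoint, derived from the model slug; verify against the API docs.
API_URL = "https://api.wavespeed.ai/wavespeed-ai/molmo2/text-content-moderator"

def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Build the moderation request; 'text' is the only documented parameter."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # auth header format assumed
        },
    )

req = build_request("You are a wonderful person.", api_key="YOUR_API_KEY")
print(json.loads(req.data))  # {'text': 'You are a wonderful person.'}
# To actually send it: urllib.request.urlopen(req)
```

The request body carries the single `text` parameter from the table above; everything else (URL, headers) is an assumption to be confirmed against the official docs.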

Pricing

Flat rate per text analyzed.

Output          Cost
Per request     $0.003
100 requests    $0.30
1,000 requests  $3.00

Output Format

The model returns a JSON object with boolean flags for each content category:

{
  "harassment": false,
  "hate": false,
  "sexual": false,
  "sexual/minors": false,
  "violence": false
}
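In code, a response like this reduces to a single safe/unsafe decision. A minimal sketch, assuming the JSON object above is the full response body:

```python
import json

def is_safe(result_json: str) -> bool:
    """Return True only if every category flag is False."""
    flags = json.loads(result_json)
    return not any(flags.values())

response = ('{"harassment": false, "hate": false, "sexual": false, '
            '"sexual/minors": false, "violence": false}')
print(is_safe(response))  # True
```

Because all flags are plain booleans, `any()` over the values is enough; no per-category thresholds or scores are involved.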

Detection Categories

Category       Description
harassment     Bullying, intimidation, or targeted abuse
hate           Hate speech, discrimination, or prejudice
sexual         Adult sexual content or explicit language
sexual/minors  Any sexual content involving minors
violence       Threats, graphic violence descriptions, or harmful content
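For logging or reporting, it is often more useful to know which categories fired than merely that one did. A small helper over the parsed flags (names taken from the table above):

```python
def flagged_categories(flags: dict) -> list:
    """Return the names of all categories the model flagged as true."""
    return sorted(name for name, hit in flags.items() if hit)

print(flagged_categories({"harassment": True, "hate": False, "violence": True}))
# ['harassment', 'violence']
```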

Best Use Cases

  • Chat Platforms — Moderate messages in real-time.
  • Comments & Reviews — Screen user comments before publishing.
  • Forums & Communities — Ensure posts meet community guidelines.
  • Social Media — Filter text content at scale.
  • Customer Support — Flag abusive messages automatically.
  • Content Pipelines — Pre-screen text before human review.

Pro Tips for Best Results

  • Use as a first-pass filter before human moderation for edge cases.
  • Integrate via API for real-time chat moderation.
  • At $0.003 per request, high-volume moderation is extremely cost-effective.
  • Combine with image and video moderation for comprehensive screening.
  • Process messages as they arrive for instant feedback.
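The first-pass-filter tip can be sketched as a small router: auto-approve clean text and queue flagged text for human review. The routing policy below (immediate block on sexual/minors, review for any other hit) is illustrative, not prescribed by the model:

```python
def route_message(flags: dict) -> str:
    """Route a message based on parsed moderation flags.

    Policy (illustrative): 'sexual/minors' blocks immediately; any other
    detected category goes to human review; clean text is approved.
    """
    if flags.get("sexual/minors"):
        return "block"
    if any(flags.values()):
        return "human_review"
    return "approve"

print(route_message({"harassment": False, "hate": False, "sexual": False,
                     "sexual/minors": False, "violence": False}))  # approve
```

This keeps the model as the cheap first pass while humans only see the small fraction of traffic it flags.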

Notes

  • Processing is near-instant for real-time applications.
  • Returns boolean values — true indicates the category was detected.
  • Designed for automated workflows with JSON output.
  • Consider human review for flagged content or edge cases.
  • Works with any language text input.