Molmo2-4B Video Content Moderator analyzes video content for safety, appropriateness, and policy compliance. It detects violence, nudity, gore, and other harmful visual content in videos using an open-source vision-language model, and is available through a ready-to-use REST inference API with no cold starts and flat per-run pricing.
Automatically screen videos for harmful content with Molmo2 Video Content Moderator. This AI-powered moderation tool analyzes videos and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for content platforms, user-generated content, and compliance workflows.
Need image moderation? Try Molmo2 Image Content Moderator for image analysis.
| Parameter | Required | Description |
|---|---|---|
| video | Yes | Video to analyze (upload or public URL). |
| text | No | Optional context or custom moderation criteria. |
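A request sketch using the two parameters above. The endpoint URL, authentication header, and JSON request shape are assumptions for illustration, not taken from official documentation:

```python
# Minimal request sketch. API_URL and the auth scheme are placeholders;
# only the `video` and `text` field names come from the parameter table.
import json
import urllib.request

API_URL = "https://api.example.com/v1/molmo2-video-moderator"  # placeholder

def build_payload(video_url, text=None):
    """Assemble the request body: `video` is required, `text` is optional."""
    payload = {"video": video_url}
    if text is not None:
        payload["text"] = text
    return payload

def moderate(video_url, api_key, text=None):
    """POST the payload and return the parsed JSON classification."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(video_url, text)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Passing `text` lets you add custom moderation criteria, e.g. `moderate(url, key, text="flag any weapons")`.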
Flat rate per video analyzed.
| Output | Cost |
|---|---|
| Per video | $0.003 |
| 100 videos | $0.30 |
| 1,000 videos | $3.00 |
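Because pricing is a flat per-video rate, estimating a batch cost is simple multiplication. A small helper, assuming the $0.003 rate from the table above:

```python
PRICE_PER_VIDEO = 0.003  # flat rate per video analyzed, from the pricing table

def estimate_cost(num_videos):
    """Estimated cost in USD for moderating `num_videos` videos."""
    return num_videos * PRICE_PER_VIDEO
```

For example, a daily batch of 10,000 uploads would cost about $30.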
The model returns a JSON object with boolean flags for each content category:
```json
{
  "harassment": false,
  "hate": false,
  "sexual": false,
  "sexual/minors": false,
  "violence": false
}
```
| Category | Description |
|---|---|
| harassment | Bullying, intimidation, or targeted abuse |
| hate | Hate speech, discrimination, or prejudice |
| sexual | Adult sexual content or nudity |
| sexual/minors | Any sexual content involving minors |
| violence | Graphic violence, gore, or harmful acts |
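Since every category is a boolean flag, downstream handling reduces to checking which flags are set. A sketch of a typical gate for a user-generated-content pipeline (function names are illustrative, not part of the API):

```python
def is_flagged(result):
    """True if any moderation category in the response is flagged."""
    return any(result.values())

def flagged_categories(result):
    """List the categories that were flagged, for logging or review queues."""
    return sorted(category for category, flagged in result.items() if flagged)
```

For example, a response with only `"violence": true` would yield `flagged_categories(result) == ["violence"]`, and the upload could be routed to human review rather than published automatically.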