Molmo2-4B Image Content Moderator: Analyze image content for safety, appropriateness, and policy compliance. Detects violence, nudity, gore, and other harmful visual content. Open-source vision-language model. Ready-to-use REST API, no cold starts, affordable pricing.
Automatically screen images for harmful content with Molmo2 Image Content Moderator. This AI-powered moderation tool analyzes images and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for content platforms, user-generated content, and compliance workflows.
Need video moderation? Try Molmo2 Video Content Moderator for video analysis.
| Parameter | Required | Description |
|---|---|---|
| image | Yes | Image to analyze (upload or public URL). |
| text | No | Optional context or custom moderation criteria. |
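Below is a minimal sketch of calling the moderation endpoint over REST. Only the `image` and `text` parameters are documented above; the endpoint URL, authentication header, and response handling shown here are assumptions and should be replaced with the values from your account dashboard.

```python
import requests

# Hypothetical endpoint and API key -- substitute the real values from your dashboard.
API_URL = "https://api.example.com/v1/molmo2-image-content-moderator"
API_KEY = "YOUR_API_KEY"

def moderate_image(image_url: str, text: str | None = None) -> dict:
    """Submit an image (public URL) for moderation and return the category flags."""
    payload = {"image": image_url}
    if text:
        payload["text"] = text  # optional context or custom moderation criteria
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

flags = moderate_image("https://example.com/uploads/photo.jpg")
print(flags)
```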
Flat rate per image analyzed.
| Images | Cost |
|---|---|
| Per image | $0.003 |
| 100 images | $0.30 |
| 1,000 images | $3.00 |
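Because pricing is a flat per-image rate, batch costs are a simple multiplication. The snippet below just restates the table's arithmetic; the figures come from the pricing above.

```python
# Flat-rate cost estimate: price per image times number of images.
PRICE_PER_IMAGE = 0.003  # USD, from the pricing table above

def estimated_cost(num_images: int) -> float:
    return num_images * PRICE_PER_IMAGE

print(estimated_cost(1_000))        # 3.0 -> $3.00 for 1,000 images
print(int(1 / PRICE_PER_IMAGE))     # ~333 images per $1
```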
The model returns a JSON object with boolean flags for each content category:
{
"harassment": false,
"hate": false,
"sexual": false,
"sexual/minors": false,
"violence": false
}
| Category | Description |
|---|---|
| harassment | Bullying, intimidation, or targeted abuse |
| hate | Hate speech, discrimination, or prejudice |
| sexual | Adult sexual content or nudity |
| sexual/minors | Any sexual content involving minors |
| violence | Graphic violence, gore, or harmful imagery |
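A typical integration reads these boolean flags and decides whether to accept the upload. The response shape below matches the JSON example above; the blocking policy itself (which categories to reject on) is an illustrative choice, not part of the API.

```python
# Categories that should block publication in this example policy.
BLOCK_CATEGORIES = {"harassment", "hate", "sexual", "sexual/minors", "violence"}

def is_allowed(flags: dict[str, bool]) -> bool:
    """Return True if no blocked category was flagged by the moderator."""
    return not any(flags.get(category, False) for category in BLOCK_CATEGORIES)

result = {
    "harassment": False,
    "hate": False,
    "sexual": False,
    "sexual/minors": False,
    "violence": False,
}
print(is_allowed(result))  # True -> safe to publish
```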