Molmo2 Image Content Moderator
Molmo2-4B Image Content Moderator analyzes image content for safety, appropriateness, and policy compliance. It detects violence, nudity, gore, and other harmful visual content. Built on an open-source vision-language model, it ships with a ready-to-use REST API, no cold starts, and affordable pricing.
Features
Automatically screen images for harmful content with Molmo2 Image Content Moderator. This AI-powered moderation tool analyzes images and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for content platforms, user-generated content, and compliance workflows.
Need video moderation? Try Molmo2 Video Content Moderator for video analysis.
Why It Works Great
- Comprehensive detection: Screens for harassment, hate, sexual content, and violence.
- Child safety: Dedicated detection for content involving minors.
- Instant results: Fast processing with results in seconds.
- JSON output: Clean, structured results for easy integration.
- Ultra-affordable: Just $0.003 per image — 333 images for $1.
- Custom criteria: Optional text input for additional context.
Parameters
| Parameter | Required | Description |
|---|---|---|
| image | Yes | Image to analyze (upload or public URL). |
| text | No | Optional context or custom moderation criteria. |
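For example, the optional text field rides alongside the required image in the request body. A minimal sketch in Python; the URL and the criterion below are placeholders, not part of the API:

```python
# Example request body: the required image plus an optional custom criterion.
payload = {
    "image": "https://example.com/user-upload.jpg",   # placeholder URL
    "text": "Also flag images that depict weapons.",  # illustrative custom criterion
}
```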
How to Use
- Upload your image — drag and drop or paste a public URL.
- Add context (optional) — provide additional text for custom criteria.
- Run — click the button to analyze.
- Review results — check the JSON output for safety classifications.
Pricing
Flat rate per image analyzed.
| Output | Cost |
|---|---|
| Per image | $0.003 |
| 100 images | $0.30 |
| 1,000 images | $3.00 |
Output Format
The model returns a JSON object with boolean flags for each content category:
{
"harassment": false,
"hate": false,
"sexual": false,
"sexual/minors": false,
"violence": false
}
Detection Categories
| Category | Description |
|---|---|
| harassment | Bullying, intimidation, or targeted abuse |
| hate | Hate speech, discrimination, or prejudice |
| sexual | Adult sexual content or nudity |
| sexual/minors | Any sexual content involving minors |
| violence | Graphic violence, gore, or harmful imagery |
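In an automated pipeline, these flags map naturally onto allow/block/escalate decisions. A minimal sketch in Python; the routing policy below is an illustrative assumption, not something the API prescribes:

```python
# Illustrative routing for the boolean flags above; the allow/block/escalate
# policy is an assumption, not prescribed by the API.
CATEGORIES = ("harassment", "hate", "sexual", "sexual/minors", "violence")

def route_image(flags: dict) -> str:
    """Map the model's boolean flags to a moderation decision."""
    if flags.get("sexual/minors"):
        return "escalate"  # always route suspected minor-related content to humans
    if any(flags.get(c, False) for c in CATEGORIES):
        return "block"     # any other detected category blocks the image
    return "allow"

print(route_image({"harassment": False, "hate": False, "sexual": False,
                   "sexual/minors": False, "violence": False}))  # -> "allow"
```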
Best Use Cases
- Content Platforms — Screen user-uploaded images before publishing.
- Social Media — Moderate image content at scale.
- E-commerce — Review product images for policy compliance.
- Dating Apps — Filter inappropriate profile photos.
- Forums & Communities — Ensure uploaded images meet guidelines.
- Pre-screening — Filter content before human review.
Pro Tips for Best Results
- Use as a first-pass filter before human moderation for edge cases.
- Integrate via API for automated moderation pipelines.
- At $0.003 per image, batch processing is extremely cost-effective (see the batching sketch after this list).
- Combine with video moderation for comprehensive content screening.
- Use the text parameter to provide context for borderline content.
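For batch work, a small thread pool keeps throughput high without changing the flat per-image cost. A sketch, assuming a moderate_image helper like the client shown under API Endpoints below:

```python
# Batch moderation sketch: fan per-image API calls out over a thread pool.
# moderate_image() is a hypothetical helper that submits one image and returns
# its parsed moderation result (see the client sketch under API Endpoints).
from concurrent.futures import ThreadPoolExecutor

def moderate_batch(image_urls, moderate_image, max_workers=8):
    """Moderate a list of image URLs concurrently; returns {url: result}."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(image_urls, pool.map(moderate_image, image_urls)))
```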
Notes
- If using a URL, ensure it is publicly accessible.
- Processing is near-instant for most images.
- Returns boolean values — true indicates detected content.
- Designed for automated workflows with JSON output.
- Consider human review for flagged content or edge cases.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/molmo2/image-content-moderator" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "https://example.com/your-image.jpg",
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | Yes | - | - | Image URL to moderate and analyze for safety compliance. Supports JPEG, PNG, WebP formats. |
| text | string | No | - | - | Optional text prompt or question about the image content for contextual analysis. |
| enable_sync_mode | boolean | No | false | - | If true, the request blocks until the result is generated and uploaded, so the result is returned directly in the response. This property is only available through the API. |
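With enable_sync_mode set to true, the submission call itself blocks until the result is ready, so the separate result request can be skipped. A sketch reusing the API_BASE and HEADERS setup from the polling client above; reading data.outputs directly from the submission response is an assumption based on the parameter description:

```python
# Synchronous variant: with enable_sync_mode=true the API holds the request
# open until the result is ready, so no separate polling call is needed.
resp = requests.post(
    f"{API_BASE}/wavespeed-ai/molmo2/image-content-moderator",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"image": "https://example.com/your-image.jpg",  # placeholder URL
          "enable_sync_mode": True},
)
resp.raise_for_status()
outputs = resp.json()["data"]["outputs"]  # assumed populated directly in sync mode
```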
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of task outputs (empty when the status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID from the submission response) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of moderation results with safety scores and detected visual categories. |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
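The table describes data.outputs loosely (an array of moderation results), so a client should not hard-code one packaging. A defensive unwrapping sketch in Python, with the packaging assumptions noted in the docstring:

```python
import json

def extract_flags(outputs):
    """Defensively unwrap the category flags from data.outputs.

    How the flag object is packaged (inline object vs. JSON-encoded string,
    single item vs. list) is an assumption here, hence the defensive parse.
    """
    first = outputs[0] if isinstance(outputs, list) and outputs else outputs
    return json.loads(first) if isinstance(first, str) else first
```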