Molmo2 Video Content Moderator
Try it on WavespeedAI! Molmo2-4B Video Content Moderator analyzes video content for safety, appropriateness, and policy compliance, detecting violence, nudity, gore, and other harmful visual content with an open-source vision-language model. Ready-to-use REST inference API with no cold starts and affordable pricing.
Features
Molmo2 Video Content Moderator
Automatically screen videos for harmful content with Molmo2 Video Content Moderator. This AI-powered moderation tool analyzes videos and returns safety classifications for harassment, hate speech, sexual content, and violence — essential for content platforms, user-generated content, and compliance workflows.
Need image moderation? Try Molmo2 Image Content Moderator for image analysis.
Why It Works Great
- Comprehensive detection: Screens for harassment, hate, sexual content, and violence.
- Child safety: Dedicated detection for content involving minors.
- Fast processing: Quick analysis with results in seconds.
- JSON output: Clean, structured results for easy integration.
- Ultra-affordable: Just $0.003 per video — 333 videos for $1.
- Custom criteria: Optional text input for additional context.
Parameters
| Parameter | Required | Description |
|---|---|---|
| video | Yes | Video to analyze (upload or public URL). |
| text | No | Optional context or custom moderation criteria. |
How to Use
- Upload your video — drag and drop or paste a public URL.
- Add context (optional) — provide additional text for custom criteria.
- Run — click the button to analyze.
- Review results — check the JSON output for safety classifications.
Pricing
Flat rate per video analyzed.
| Output | Cost |
|---|---|
| Per video | $0.003 |
| 100 videos | $0.30 |
| 1,000 videos | $3.00 |
Output Format
The model returns a JSON object with boolean flags for each content category:
{
"harassment": false,
"hate": false,
"sexual": false,
"sexual/minors": false,
"violence": false
}
Detection Categories
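A minimal sketch of consuming this JSON in Python (the sample values below are illustrative, not real model output):

```python
import json

# Sample response body in the shape documented above (illustrative values).
raw = """{
  "harassment": false,
  "hate": false,
  "sexual": false,
  "sexual/minors": false,
  "violence": true
}"""

flags = json.loads(raw)

# A video is flagged if any category is true. Note that "sexual/minors"
# is a valid JSON key even though it is not a valid Python identifier,
# so the flags must be read via dict access.
flagged_categories = [category for category, detected in flags.items() if detected]
is_flagged = bool(flagged_categories)
print(is_flagged, flagged_categories)  # True ['violence']
```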
| Category | Description |
|---|---|
| harassment | Bullying, intimidation, or targeted abuse |
| hate | Hate speech, discrimination, or prejudice |
| sexual | Adult sexual content or nudity |
| sexual/minors | Any sexual content involving minors |
| violence | Graphic violence, gore, or harmful acts |
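One way to act on these categories is a severity map. The auto-block / human-review split below is a hypothetical policy for illustration, not part of the API:

```python
# Hypothetical moderation policy: which flagged categories auto-block a video
# and which are queued for human review. Adjust the sets to your own rules.
AUTO_BLOCK = {"sexual/minors", "violence"}
HUMAN_REVIEW = {"harassment", "hate", "sexual"}

def route(flags: dict) -> str:
    """Map the model's boolean flags to an action: block, review, or approve."""
    detected = {category for category, hit in flags.items() if hit}
    if detected & AUTO_BLOCK:
        return "block"
    if detected & HUMAN_REVIEW:
        return "review"
    return "approve"

print(route({"harassment": True, "hate": False, "sexual": False,
             "sexual/minors": False, "violence": False}))  # review
```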
Best Use Cases
- Content Platforms — Screen user-uploaded videos before publishing.
- Social Media — Moderate video content at scale.
- E-commerce — Review product videos for policy compliance.
- Education — Ensure learning content is age-appropriate.
- Compliance — Meet content moderation requirements efficiently.
- Pre-screening — Filter content before human review.
Pro Tips for Best Results
- Use as a first-pass filter before human moderation for edge cases.
- Integrate via API for automated moderation pipelines.
- At $0.003 per video, batch processing is extremely cost-effective.
- Combine with image moderation for comprehensive content screening.
- Use the text parameter to provide context for borderline content.
Notes
- If using a URL, ensure it is publicly accessible.
- Processing time scales with video length.
- Returns boolean values — true indicates detected content.
- Designed for automated workflows with JSON output.
- Consider human review for flagged content or edge cases.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/molmo2/video-content-moderator" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{"video": "<publicly-accessible-video-url>"}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| video | string | Yes | - | - | Video URL to moderate and analyze for safety compliance. Supports MP4, MOV, and WebM formats. |
| text | string | No | - | - | Optional text prompt or question about the video content for contextual analysis. |
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (matches the requested task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of moderation results with safety flags and detected visual categories across video frames. |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |