Content Moderator Image
Playground
Try it on WaveSpeed AI!

Image Content Moderator provides automated image moderation that detects and flags policy-violating or inappropriate images in automated workflows. Ready-to-use REST inference API, strong performance, no cold starts, affordable pricing.
Features
Content Moderator — Image
Ensure your images meet safety and compliance standards with WaveSpeed AI’s Content Moderator. This fast, affordable moderation tool analyzes images for policy violations, inappropriate content, and safety concerns — essential for platforms, applications, and workflows that handle user-generated content.
Why It Works Well
- Fast analysis: Quick moderation results for high-volume workflows.
- Comprehensive detection: Identifies various types of inappropriate or unsafe content.
- Text context support: Optionally include associated text for more accurate moderation decisions.
- Ultra-affordable: At just $0.001 per image, scale moderation without breaking the budget.
- Simple integration: Minimal parameters make it easy to add to any pipeline.
Parameters
| Parameter | Required | Description |
|---|---|---|
| image | Yes | Image to moderate (upload or public URL). |
| text | No | Optional associated text for additional context in moderation. |
How to Use
- Upload your image — drag and drop or paste a public URL.
- Add text context (optional) — include any associated text that should be considered.
- Run — click the button to analyze.
- Review results — check the moderation output for any flagged content.
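Outside the Playground, the same flow can be scripted against the REST API described in the API Endpoints section below. Here is a minimal Python sketch, assuming the third-party `requests` library; the image URL and caption are placeholders, and the field names follow the parameter tables later on this page.

```python
# Minimal sketch of the Playground flow via the REST API (see "API Endpoints" below).
import os
import time
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]
SUBMIT_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/content-moderator/image"
HEADERS = {"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"}

# 1. Submit the image (and optional text context) for moderation.
payload = {
    "image": "https://example.com/user-upload.jpg",  # placeholder: must be publicly accessible
    "text": "caption associated with the image",     # optional context
    "enable_sync_mode": False,
}
submit = requests.post(SUBMIT_URL, headers=HEADERS, json=payload).json()
result_url = submit["data"]["urls"]["get"]  # documented endpoint for this task's result

# 2. Poll until the task reaches a terminal status, then review the output.
while True:
    result = requests.get(result_url, headers={"Authorization": f"Bearer {API_KEY}"}).json()
    status = result["data"]["status"]
    if status in ("completed", "failed"):
        break
    time.sleep(0.5)

if status == "completed":
    print(result["data"]["outputs"])  # moderation output(s) for review
else:
    print("Moderation failed:", result["data"]["error"])
```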
Pricing
Flat rate per moderation request.
| Output | Cost |
|---|---|
| Per image | $0.001 |
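For example, at this rate screening 10,000 images costs $10, and one million images costs $1,000.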
Best Use Cases
- User-Generated Content — Screen uploads before publishing to your platform.
- Social Media & Communities — Maintain safe spaces by filtering inappropriate images.
- E-commerce — Ensure product listings meet marketplace content policies.
- Content Pipelines — Add automated safety checks to media processing workflows.
- AI Output Screening — Verify generated images comply with safety guidelines before delivery.
Pro Tips for Best Results
- Include associated text when available — it helps provide context for more accurate moderation.
- Use in automated pipelines for consistent, scalable content screening.
- Combine with human review for edge cases or appeals.
- Set up batch processing for high-volume moderation needs (see the sketch after this list).
- If using URLs, ensure they are publicly accessible for successful analysis.
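The following is a minimal sketch of the batch-processing tip, again assuming the `requests` library; the image URLs are placeholders, and `submit` is a hypothetical helper rather than part of the API.

```python
# Sketch: submit many images concurrently and collect task IDs for later result retrieval.
import os
from concurrent.futures import ThreadPoolExecutor
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]
SUBMIT_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/content-moderator/image"
HEADERS = {"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"}

def submit(image_url: str) -> str:
    """Submit one image for moderation and return its task ID."""
    resp = requests.post(SUBMIT_URL, headers=HEADERS, json={"image": image_url})
    resp.raise_for_status()
    return resp.json()["data"]["id"]

image_urls = [f"https://example.com/uploads/{i}.jpg" for i in range(100)]  # placeholders
with ThreadPoolExecutor(max_workers=8) as pool:
    task_ids = list(pool.map(submit, image_urls))
# Each task ID can then be polled via /api/v3/predictions/{id}/result as shown above.
```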
Notes
- If using a URL for the image, ensure it is publicly accessible.
- Moderation results should be used as guidance — consider human review for borderline cases.
- Processing is typically very fast, suitable for real-time moderation workflows.
- The text field can provide valuable context for images with ambiguous content.
Authentication
For authentication details, please refer to the Authentication Guide.
API Endpoints
Submit Task & Query Result
# Submit the task
curl --location --request POST "https://api.wavespeed.ai/api/v3/wavespeed-ai/content-moderator/image" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}" \
--data-raw '{
    "image": "<publicly accessible image URL>",
    "enable_sync_mode": false
}'
# Get the result
curl --location --request GET "https://api.wavespeed.ai/api/v3/predictions/${requestId}/result" \
--header "Authorization: Bearer ${WAVESPEED_API_KEY}"
Parameters
Task Submission Parameters
Request Parameters
| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| image | string | No | - | - | Image to be moderated. |
| text | string | No | - | - | Text to be moderated. |
| enable_sync_mode | boolean | No | false | - | If set to true, the function will wait for the result to be generated and uploaded before returning the response. It allows you to get the result directly in the response. This property is only available through the API. |
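As a sketch of synchronous mode (assuming the `requests` library and a placeholder image URL), setting `enable_sync_mode` to `true` makes the submit call wait and return the finished result directly, so no separate result request is needed:

```python
# Synchronous-mode sketch: the submit call blocks until the moderation result is ready.
import os
import requests

API_KEY = os.environ["WAVESPEED_API_KEY"]
SUBMIT_URL = "https://api.wavespeed.ai/api/v3/wavespeed-ai/content-moderator/image"

resp = requests.post(
    SUBMIT_URL,
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
    json={"image": "https://example.com/photo.jpg", "enable_sync_mode": True},  # placeholder URL
)
resp.raise_for_status()
data = resp.json()["data"]
print(data["status"], data["outputs"])  # result is available directly in this response
```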
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data.id | string | Unique identifier for the prediction (task ID) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of URLs to the generated content (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.has_nsfw_contents | array | Array of boolean values indicating NSFW detection for each output |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |
Result Request Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | - | Task ID |
Result Response Parameters
| Parameter | Type | Description |
|---|---|---|
| code | integer | HTTP status code (e.g., 200 for success) |
| message | string | Status message (e.g., “success”) |
| data | object | The prediction data object containing all details |
| data.id | string | Unique identifier for the prediction (the task ID used to request this result) |
| data.model | string | Model ID used for the prediction |
| data.outputs | array | Array of objects containing the moderation outputs (empty when status is not completed) |
| data.urls | object | Object containing related API endpoints |
| data.urls.get | string | URL to retrieve the prediction result |
| data.status | string | Status of the task: created, processing, completed, or failed |
| data.created_at | string | ISO timestamp of when the request was created (e.g., “2023-04-01T12:34:56.789Z”) |
| data.error | string | Error message (empty if no error occurred) |
| data.timings | object | Object containing timing details |
| data.timings.inference | integer | Inference time in milliseconds |