Image Moderation API


Developed by: Sentinel Moderation

Maintained by: Community

Uses advanced AI models to analyze and classify user-generated content in real time. It detects harmful or inappropriate content, providing category-level flags and confidence scores to help you enforce community guidelines and keep your platform safe.

Rating: 0.0 (0 reviews)

Pricing: Pay per usage

Monthly users: 0

Runs succeeded: >99%

Last modified: 13 days ago

🖼️ AI Image Moderation Actor

This Apify Actor leverages Sentinel Moderation's AI-powered API to analyze and flag images containing inappropriate, unsafe, or policy-violating content. It can detect NSFW material, violence, graphic content, and more — helping you maintain a safe and compliant platform.


📥 Input Schema

The Actor expects the following JSON input (a usage sketch follows the field list below):

```json
{
  "apiKey": "sample-api-key",
  "image": "https://example.com/path-to-image.jpg"
}
```
  • apiKey (string, required): Your API key from SentinelModeration.com.
  • image (string, required): A publicly accessible image URL to analyze.
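To call the Actor programmatically, you can send this input through the Apify client. The sketch below assumes the Python apify-client package and uses a placeholder Actor ID ("<username>/image-moderation-api"); replace the placeholders with your own Apify token, Sentinel Moderation key, and the Actor's real ID from the Store.

```python
from apify_client import ApifyClient

# Authenticate with your Apify API token (not the Sentinel Moderation key).
client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "apiKey": "<YOUR_SENTINEL_API_KEY>",  # key from SentinelModeration.com
    "image": "https://example.com/path-to-image.jpg",
}

# "<username>/image-moderation-api" is a placeholder Actor ID.
run = client.actor("<username>/image-moderation-api").call(run_input=run_input)

# The moderation result is pushed to the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```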

📤 Output

The Actor returns a moderation result in the following structure:

```json
[
  {
    "flagged": false,
    "categories": {
      "harassment": false,
      "harassment/threatening": false,
      "sexual": false,
      "hate": false,
      "hate/threatening": false,
      "illicit": false,
      "illicit/violent": false,
      "self-harm/intent": false,
      "self-harm/instructions": false,
      "self-harm": false,
      "sexual/minors": false,
      "violence": false,
      "violence/graphic": false
    },
    "category_scores": {
      "harassment": 0.000048,
      "harassment/threatening": 0.0000066,
      "sexual": 0.000039,
      "hate": 0.0000142,
      "hate/threatening": 0.0000008,
      "illicit": 0.000022,
      "illicit/violent": 0.000019,
      "self-harm/intent": 0.0000011,
      "self-harm/instructions": 0.0000010,
      "self-harm": 0.0000020,
      "sexual/minors": 0.000010,
      "violence": 0.000016,
      "violence/graphic": 0.0000056
    },
    "error": null
  }
]
```
  • flagged: true if any category crosses a moderation threshold.
  • categories: A true/false map indicating which categories were flagged.
  • category_scores: Confidence scores (0.0 to 1.0) for each category.
  • error: null on success; contains an error message when running in test mode or when no valid API key is provided.
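If the built-in flagged field is too strict or too lenient for your platform, you can apply your own thresholds to category_scores. The sketch below is illustrative only; the threshold values are assumptions to tune for your own policy, not defaults of the API.

```python
# Illustrative per-category thresholds; these are NOT API defaults.
THRESHOLDS = {
    "sexual/minors": 0.01,      # near-zero tolerance
    "violence/graphic": 0.50,
    "harassment": 0.70,
}
DEFAULT_THRESHOLD = 0.80


def custom_flags(result: dict) -> dict:
    """Return the categories whose score meets or exceeds our own threshold."""
    return {
        category: score
        for category, score in result["category_scores"].items()
        if score >= THRESHOLDS.get(category, DEFAULT_THRESHOLD)
    }


# With the example output above, every score is far below its threshold,
# so custom_flags(...) returns an empty dict.
```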

🧠 Categories Detected

The image is scanned for content under a wide range of moderation labels:

  • Harassment / Threats
  • Sexual content (including minors)
  • Hate speech (including threats)
  • Illicit activity
  • Self-harm
  • Violence / Graphic content

🔐 Getting an API Key

To receive real moderation results (rather than test output), get an API key from Sentinel Moderation:

  1. Visit sentinelmoderation.com
  2. Sign up and generate your API key
  3. Use the key in the apiKey field of your input
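To avoid hardcoding the key in step 3, you can read it from an environment variable when building the run input. The variable name SENTINEL_API_KEY below is an arbitrary choice for this example, not something the Actor requires:

```python
import os

# SENTINEL_API_KEY is an example name; set it in your environment first.
run_input = {
    "apiKey": os.environ["SENTINEL_API_KEY"],
    "image": "https://example.com/path-to-image.jpg",
}
```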

✅ Example Use Cases

  • Flagging NSFW content in profile photos or uploads
  • Moderating image submissions on forums or marketplaces
  • Pre-screening media in chat apps or social platforms
  • Complying with platform-specific safety guidelines


Pricing

Pricing model: Pay per usage

The Actor itself is free to use; you pay only for the Apify platform usage its runs consume.