
# 🖼️ AI Image Moderation Actor
This Apify Actor leverages Sentinel Moderation's AI-powered API to analyze and flag images containing inappropriate, unsafe, or policy-violating content. It can detect NSFW material, violence, graphic content, and more — helping you maintain a safe and compliant platform.
## 📥 Input Schema

The actor expects the following JSON input:

```json
{
  "apiKey": "sample-api-key",
  "image": "https://example.com/path-to-image.jpg"
}
```

- `apiKey` (string, required): Your API key from SentinelModeration.com.
- `image` (string, required): A publicly accessible image URL to analyze.
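Because both fields are required and the image must be a reachable URL, it can be worth validating the input locally before starting a run. The helper below is an illustrative sketch, not part of the actor itself:

```python
def build_run_input(api_key: str, image_url: str) -> dict:
    """Assemble the actor's run input, enforcing the two required fields."""
    if not api_key:
        raise ValueError("apiKey is required")
    if not image_url.startswith(("http://", "https://")):
        raise ValueError("image must be a publicly accessible URL")
    return {"apiKey": api_key, "image": image_url}
```

The returned dict matches the input schema above and can be passed directly as the actor's run input.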
## 📤 Output

The actor returns a moderation result in the following structure:

```json
[
  {
    "flagged": false,
    "categories": {
      "harassment": false,
      "harassment/threatening": false,
      "sexual": false,
      "hate": false,
      "hate/threatening": false,
      "illicit": false,
      "illicit/violent": false,
      "self-harm/intent": false,
      "self-harm/instructions": false,
      "self-harm": false,
      "sexual/minors": false,
      "violence": false,
      "violence/graphic": false
    },
    "category_scores": {
      "harassment": 0.000048,
      "harassment/threatening": 0.0000066,
      "sexual": 0.000039,
      "hate": 0.0000142,
      "hate/threatening": 0.0000008,
      "illicit": 0.000022,
      "illicit/violent": 0.000019,
      "self-harm/intent": 0.0000011,
      "self-harm/instructions": 0.0000010,
      "self-harm": 0.0000020,
      "sexual/minors": 0.000010,
      "violence": 0.000016,
      "violence/graphic": 0.0000056
    },
    "error": null
  }
]
```

- `flagged`: `true` if any category crosses a moderation threshold.
- `categories`: A true/false map indicating which categories were flagged.
- `category_scores`: Confidence scores (0.0 to 1.0) for each category.
- `error`: Present only in test mode or if no valid API key is provided.
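Since `category_scores` are plain 0.0–1.0 confidences, you can layer your own, stricter threshold on top of the API's `flagged` decision. A minimal sketch (the threshold value and helper name are illustrative, not part of the API):

```python
def categories_over(result: dict, threshold: float = 0.5) -> list[str]:
    """Return the category names whose confidence score meets the threshold."""
    scores = result.get("category_scores", {})
    return sorted(name for name, score in scores.items() if score >= threshold)
```

With the sample output above, every score is far below 0.5, so this returns an empty list.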
## 🧠 Categories Detected
The image is scanned for content under a wide range of moderation labels:
- Harassment / Threats
- Sexual content (including minors)
- Hate speech (including threats)
- Illicit activity
- Self-harm
- Violence / Graphic content
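Platforms rarely treat all labels equally: a common pattern is to hard-block a subset of categories and queue the rest for human review. The blocking set below is an illustrative policy choice, not something the API prescribes:

```python
# Example policy: these categories trigger an automatic block.
# This subset is an illustrative choice, not defined by the API.
BLOCKING_CATEGORIES = {"sexual/minors", "violence/graphic", "self-harm"}

def should_block(result: dict) -> bool:
    """True if any hard-blocking category was flagged in the moderation result."""
    flags = result.get("categories", {})
    return any(flags.get(category, False) for category in BLOCKING_CATEGORIES)
```

Anything flagged outside `BLOCKING_CATEGORIES` can then be routed to a manual review queue instead of being rejected outright.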
## 🔐 Getting an API Key
To receive real results, get your API key from Sentinel Moderation:
- Visit sentinelmoderation.com
- Sign up and generate your API key
- Use the key in the `apiKey` field of your input
## ✅ Example Use Cases
- Flagging NSFW content in profile photos or uploads
- Moderating image submissions on forums or marketplaces
- Pre-screening media in chat apps or social platforms
- Complying with platform-specific safety guidelines