Safe Image Moderation
Pricing
from $10.00 / 1,000 results
Screen images for adult, violent, racy, medical, and spoof content. Returns confidence scores for each category. Bring your own API key, pay only $0.01 per image. Perfect for UGC apps, e-commerce, social platforms, and ad networks.
Developer: Marielise
Last modified: 16 hours ago
Image Content Moderation
Screen images for adult, violent, racy, medical, and spoof content using vision-capable LLMs. Returns confidence scores for each category with token usage tracking.
What it does
Analyzes images for inappropriate content across five categories:
- Adult - Explicit adult content
- Violence - Violent or graphic content
- Racy - Suggestive but not explicit content
- Medical - Medical or graphic imagery
- Spoof - Memes, edited, or manipulated images
Returns a boolean safe verdict plus confidence scores (0-1) for each category.
Input
Provide an image URL or base64 data. API keys are set via environment variables.
```json
{
  "imageUrl": "https://example.com/photo.jpg",
  "model": "openai:gpt-4o"
}
```
Or with base64:
```json
{
  "imageBase64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk...",
  "model": "openai:gpt-4o"
}
```
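For example, the input object can be assembled with a small helper before invoking the actor. This is a sketch, not part of the actor itself; `build_payload` is a hypothetical name, and the actor only sees the resulting JSON object:

```python
import base64

def build_payload(image_bytes=None, image_url=None, model="openai:gpt-4o"):
    """Build the actor input from either a public URL or raw image bytes.

    Hypothetical client-side helper: exactly one of image_bytes / image_url
    must be given, matching the actor's one-of input contract.
    """
    if (image_bytes is None) == (image_url is None):
        raise ValueError("provide exactly one of image_bytes or image_url")
    if image_url is not None:
        return {"imageUrl": image_url, "model": model}
    # Base64-encode raw bytes for the imageBase64 variant.
    return {"imageBase64": base64.b64encode(image_bytes).decode("ascii"),
            "model": model}

payload = build_payload(image_url="https://example.com/photo.jpg")
```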
Model Selection
Use provider:model format to specify any vision-capable model:
```json
{"model": "openai:gpt-4o"}
```
Supported providers and example models:
| Provider | Example Models | Environment Variable |
|---|---|---|
| openai | openai:gpt-4o, openai:gpt-4o-mini, openai:gpt-4-turbo | OPENAI_API_KEY |
| anthropic | anthropic:claude-sonnet-4-20250514, anthropic:claude-3-5-sonnet-20241022 | ANTHROPIC_API_KEY |
| google | google:gemini-2.0-flash, google:gemini-1.5-pro | GOOGLE_API_KEY |
| groq | groq:meta-llama/llama-4-scout-17b-16e-instruct | GROQ_API_KEY |
Default: openai:gpt-4o
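Note that some model names (e.g. Groq's) contain slashes, so a spec should be split on the first colon only. A minimal sketch; `split_model` is an illustrative helper, not part of the actor:

```python
def split_model(spec="openai:gpt-4o"):
    # Split on the first ":" only -- the model part may itself contain
    # slashes, e.g. groq:meta-llama/llama-4-scout-17b-16e-instruct.
    provider, _, model = spec.partition(":")
    return provider, model
```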
Custom Thresholds
Override thresholds per category (0-1 scale, higher = stricter):
```json
{
  "imageUrl": "https://example.com/photo.jpg",
  "model": "openai:gpt-4o-mini",
  "config": {
    "adult": 0.3,
    "violence": 0.5,
    "racy": 0.7
  }
}
```
Defaults:
- adult: 1 (very strict)
- violence: 1 (very strict)
- racy: 1 (very strict)
- medical: 0.5
- spoof: 0.5
Set to 0 to skip that category.
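The merge of defaults and overrides can be sketched in Python. This is illustrative only; `apply_config` and `active_categories` are hypothetical names, and the actor performs the merge server-side, echoing the result back as `configApplied`:

```python
# Documented defaults for the five categories.
DEFAULTS = {"adult": 1, "violence": 1, "racy": 1, "medical": 0.5, "spoof": 0.5}

def apply_config(overrides=None):
    # Merge user overrides onto the defaults; unspecified categories
    # keep their default threshold.
    return {**DEFAULTS, **(overrides or {})}

def active_categories(cfg):
    # A threshold of 0 disables (skips) that category entirely.
    return [name for name, threshold in cfg.items() if threshold > 0]
```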
Output
```json
{
  "safe": true,
  "flagged": [],
  "scores": {
    "adult": 0.1,
    "violence": 0.0,
    "racy": 0.3,
    "medical": 0.0,
    "spoof": 0.1
  },
  "configApplied": {
    "adult": 1,
    "violence": 1,
    "racy": 1,
    "medical": 0.5,
    "spoof": 0.5
  },
  "imageUrl": "https://example.com/photo.jpg",
  "processedAt": "2024-12-21T10:30:00.000Z",
  "usage": {
    "tokens": { "input": 1250, "output": 85, "total": 1335 },
    "cost": 0.003825
  }
}
```
When content is flagged:
```json
{
  "safe": false,
  "flagged": ["adult", "racy"],
  "scores": {
    "adult": 0.9,
    "violence": 0.0,
    "racy": 0.7,
    "medical": 0.0,
    "spoof": 0.1
  }
}
```
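Downstream code typically branches on `safe` and `flagged`. An illustrative consumer, where `summarize` is a hypothetical helper:

```python
def summarize(result):
    # Reduce the actor's output JSON to a one-line verdict (illustrative).
    if result["safe"]:
        return "safe"
    details = ", ".join(f"{c}={result['scores'][c]}" for c in result["flagged"])
    return f"flagged: {details}"

# Sample flagged output from the documentation above.
flagged_result = {
    "safe": False,
    "flagged": ["adult", "racy"],
    "scores": {"adult": 0.9, "violence": 0.0, "racy": 0.7,
               "medical": 0.0, "spoof": 0.1},
}
```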
API Keys
Set API keys as environment variables:
| Provider | Environment Variable |
|---|---|
| openai:* | OPENAI_API_KEY |
| anthropic:* | ANTHROPIC_API_KEY |
| google:* | GOOGLE_API_KEY |
| groq:* | GROQ_API_KEY |
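Before a run, you can verify that the key matching your model's provider is actually set. A sketch; `key_is_set` and the placeholder value are hypothetical, and the mapping mirrors the table above:

```python
import os

# Provider -> environment variable, mirroring the table above.
ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "groq": "GROQ_API_KEY",
}

def key_is_set(model_spec):
    # Pre-flight check: is the env var for this spec's provider non-empty?
    provider = model_spec.partition(":")[0]
    return bool(os.environ.get(ENV_VARS.get(provider, "")))

# Demo with a placeholder value (not a real key).
os.environ.setdefault("OPENAI_API_KEY", "placeholder-not-a-real-key")
openai_ready = key_is_set("openai:gpt-4o")
```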
Get your API keys:
- OpenAI: platform.openai.com/api-keys
- Anthropic: console.anthropic.com
- Google: aistudio.google.com/apikey
- Groq: console.groq.com
Use Cases
- User-Generated Content - Filter uploads before publishing
- E-commerce - Validate product images
- Social Platforms - Automated content screening
- Ad Networks - Ensure ad creative compliance
- Media Libraries - Bulk image classification
API Reference
Input
| Field | Type | Required | Description |
|---|---|---|---|
| imageUrl | string | One of | Public URL of image to analyze |
| imageBase64 | string | One of | Base64-encoded image data |
| model | string | No | LLM model in provider:model format (default: openai:gpt-4o) |
| config | object | No | Override thresholds per category (0-1) |
Output
| Field | Type | Description |
|---|---|---|
| safe | boolean | true if no categories exceed thresholds |
| flagged | string[] | Categories that exceeded thresholds |
| scores | object | Confidence scores (0-1) for each category |
| configApplied | object | Config used (defaults + your overrides) |
| imageUrl | string | Original image URL (if provided) |
| processedAt | string | ISO timestamp |
| usage.tokens | object | Token counts (input, output, total) |
| usage.cost | number | Estimated cost in USD |
Limits
- Images must be under 20MB
- Supported formats: JPEG, PNG, GIF, BMP, WEBP
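These limits can be enforced client-side before upload. A sketch; `precheck` is a hypothetical helper, and checking by file extension is an assumption (the actor may inspect the bytes instead):

```python
import os

MAX_BYTES = 20 * 1024 * 1024  # documented 20 MB cap
ALLOWED_EXTS = {".jpeg", ".jpg", ".png", ".gif", ".bmp", ".webp"}

def precheck(filename, size_bytes):
    # Client-side pre-flight check mirroring the documented limits.
    ext = os.path.splitext(filename.lower())[1]
    return size_bytes <= MAX_BYTES and ext in ALLOWED_EXTS
```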