Profanity Detection
Analyze text content for profanity, offensive language, and inappropriate content with Machine Learning. Detects harmful language, provides risk scoring, and helps maintain safe user environments by filtering inappropriate content.
Pricing: from $0.10 / 1,000 requests
Developer: Greip
Greip's Profanity Detection
A robust Apify Actor that detects offensive and inappropriate language in text using the Greip Profanity Detection API. This tool helps safeguard your website or app by screening user inputs for profanity and other harmful content, maintaining a positive user environment and protecting your brand.
Features
- ML-Powered Detection: Advanced Machine Learning algorithms to analyze text content
- Risk Scoring: Numerical risk scores from 0-3 (0=safe, 1=high-risk, 2=medium-risk, 3=low-risk)
- Bad Words Identification: Detailed list of detected profane words when ML detection is used
- Real-time Analysis: Fast processing with execution time tracking
- Structured Output: Clean, well-formatted JSON results stored in datasets
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | Text content to analyze for profanity |
| userID | string | No | Optional user identifier for tracking |
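For reference, a minimal sketch of the input object you would pass when starting a run (field names come from the table above; the userID value is illustrative):

```python
# Input for one run of the Actor; only "text" is required.
run_input = {
    "text": "This is a sample text to analyze",
    "userID": "user-123",  # illustrative optional identifier for tracking
}
```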
Output Format
The Actor outputs detailed analysis information for each text input:
{"text": "This is a sample text to analyze","isSafe": true,"riskScore": 0,"totalBadWords": 0,"isML": true,"badWords": [],"timestamp": "2026-01-14T10:30:00.000Z"}
Output Field Descriptions
Profanity Analysis Fields:
- text: The original text that was analyzed
- isSafe: Boolean indicating whether the text is safe (true) or contains profanity (false)
- riskScore: Risk assessment score:
  - 0 = Completely safe text
  - 1 = High-risk text (most dangerous profanity)
  - 2 = Medium-risk text (moderate profanity)
  - 3 = Low-risk text (mild profanity)
- totalBadWords: Total number of profane words detected (only available when isML is true)
- isML: Boolean indicating whether Machine Learning detection was used
- badWords: Array of detected profane words (only populated when ML detection finds issues)
- timestamp: ISO 8601 timestamp when the analysis was processed
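As an illustration of how these fields can be combined, here is a small, hypothetical helper that turns one output item into a moderation decision; the labels simply mirror the riskScore values listed above.

```python
RISK_LABELS = {0: "safe", 1: "high risk", 2: "medium risk", 3: "low risk"}

def moderation_decision(result: dict) -> str:
    """Map one Actor output item to a simple approve/review decision."""
    if result["isSafe"]:
        return "approve"
    label = RISK_LABELS.get(result["riskScore"], "unknown")
    # badWords is only populated when ML detection flags specific words.
    flagged = ", ".join(result.get("badWords", []))
    return f"review ({label}): {flagged}" if flagged else f"review ({label})"
```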
Use Cases
- Content Moderation: Screen user comments, posts, and messages before publication
- User Registration: Detect offensive usernames during account creation (see the sketch after this list)
- Chat Systems: Real-time filtering of chat messages and communication
- Review Systems: Moderate product reviews and feedback
- Social Media: Monitor user-generated content for community guidelines compliance
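For the user-registration use case, the following hypothetical helper screens a proposed username by running the Actor on it and checking isSafe. The Actor ID is assumed; adjust it to the one listed in the Apify Store.

```python
from apify_client import ApifyClient

def is_username_allowed(client: ApifyClient, username: str) -> bool:
    """Screen a proposed username at registration time (sketch)."""
    run = client.actor("greip/profanity-detection").call(run_input={"text": username})
    if run is None:
        return False  # be conservative if the run could not be completed
    items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
    return bool(items) and bool(items[0].get("isSafe"))
```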
Your feedback
We're always working on improving the performance of our Actors. If you have any technical feedback for the Profanity Detection Actor or have found a bug, please create an issue in the Issues tab.