Hate Speech & Anti-Semitic Detection API
Developer: Yosef N
Hate Detection API
Detect anti-semitic content and hate speech in short texts. Obfuscation-aware, bilingual (English + Hebrew), with single and batch analysis.
What does Hate Detection API do?
Hate Detection API classifies short texts (up to 280 characters — the length of a social media post) for anti-semitic content and general hate speech. It returns a structured decision (allow, flag, or block) along with confidence scores for both categories.
The classifier is obfuscation-aware, meaning it recognizes common evasion techniques such as leet speak (h4te), homoglyphs (visually similar Unicode characters), and spaced letters (h a t e). It supports English and Hebrew natively, with additional coverage for Arabic, Russian, and German.
Designed for content moderation pipelines, community platforms, and social media monitoring tools.
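For pipelines, a call to the actor can be made through Apify's generic `run-sync-get-dataset-items` REST endpoint. The sketch below uses only the Python standard library; `ACTOR_ID` is a placeholder (take the real ID from this actor's Apify page), and the helper names are ours, not part of the API.

```python
# Hedged sketch: invoking the actor via Apify's run-sync REST endpoint.
import json
import urllib.request

APIFY_BASE = "https://api.apify.com/v2"
ACTOR_ID = "username~hate-detection-api"  # placeholder, not the real ID

def build_payload(text: str, threshold: float = 0.8) -> dict:
    """Input document for a single-text analysis, as described below."""
    return {"action": "analyze", "text": text, "threshold": threshold}

def analyze(text: str, token: str, threshold: float = 0.8):
    """Run the actor synchronously and return its dataset items."""
    url = f"{APIFY_BASE}/acts/{ACTOR_ID}/run-sync-get-dataset-items?token={token}"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(text, threshold)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The official `apify-client` package wraps the same endpoint if you prefer a higher-level interface.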
Actions
| Action | Description |
|---|---|
| analyze | Analyze a single text for hate speech and anti-semitic content |
| analyze_batch | Analyze up to 100 texts in one request |
Input
analyze — single text
```json
{
  "action": "analyze",
  "text": "Text content to analyze here",
  "threshold": 0.8
}
```
| Field | Type | Default | Description |
|---|---|---|---|
| text | string | required | Text to analyze (1–280 characters) |
| threshold | float (0.0–1.0) | 0.8 | Score at or above which the text is blocked; lower values mean stricter moderation |
analyze_batch — batch analysis
```json
{
  "action": "analyze_batch",
  "texts": [
    { "id": "post-001", "text": "First post to analyze" },
    { "id": "post-002", "text": "Second post to analyze" }
  ],
  "threshold": 0.8
}
```
| Field | Type | Default | Description |
|---|---|---|---|
| texts | object[] | required | List of items to analyze (1–100 items) |
| texts[].id | string | required | Unique identifier for this item |
| texts[].text | string | required | Text to analyze (1–280 characters) |
| threshold | float (0.0–1.0) | 0.8 | Blocking threshold applied to all items |
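Because the API rejects out-of-range input, it can be worth sanity-checking a batch before sending it. The helper below mirrors the constraints in the table above (1–100 items, 1–280 characters each, an id per item); the function name is ours, not part of the API.

```python
# Sketch: assemble and validate an analyze_batch payload client-side.
def build_batch_payload(items: list, threshold: float = 0.8) -> dict:
    """items is a list of (id, text) pairs."""
    if not 1 <= len(items) <= 100:
        raise ValueError("batch must contain 1-100 texts")
    texts = []
    for item_id, text in items:
        if not 1 <= len(text) <= 280:
            raise ValueError(f"{item_id}: text must be 1-280 characters")
        texts.append({"id": item_id, "text": text})
    return {"action": "analyze_batch", "texts": texts, "threshold": threshold}
```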
Output
Single analyze
```json
{
  "decision": "block",
  "scores": {
    "anti_semitic": 0.94,
    "hate_speech": 0.87
  },
  "confidence_label": "critical"
}
```
Batch analyze
```json
{
  "results": [
    {
      "id": "post-001",
      "decision": "allow",
      "scores": { "anti_semitic": 0.02, "hate_speech": 0.05 },
      "confidence_label": "low"
    },
    {
      "id": "post-002",
      "decision": "flag",
      "scores": { "anti_semitic": 0.61, "hate_speech": 0.45 },
      "confidence_label": "medium"
    }
  ]
}
```
Decision values
| Decision | Meaning |
|---|---|
| allow | Text is clean; no significant signals detected |
| flag | Borderline content; recommend human review |
| block | Score exceeds threshold; recommend blocking or removal |
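The service computes the decision itself, but for intuition, a local approximation consistent with the sample outputs above might look like the sketch below. The flag band (half the block threshold) is an assumption on our part; the docs only specify the block threshold.

```python
# Illustrative only: how scores and threshold might map to a decision.
def decide(scores: dict, threshold: float = 0.8) -> str:
    top = max(scores.values())
    if top >= threshold:
        return "block"
    if top >= threshold / 2:  # assumed review band, not documented
        return "flag"
    return "allow"
```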
Confidence labels
| Label | Score range |
|---|---|
| low | 0.0 – 0.3 |
| medium | 0.3 – 0.6 |
| high | 0.6 – 0.85 |
| critical | 0.85 – 1.0 |
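The docs don't state which value is bucketed into these ranges; bucketing the mean of the two scores is one reading that matches all three sample outputs above, so the helper below is an inference, not documented behaviour.

```python
# Inferred, not documented: label derived from the mean of the two scores.
def confidence_label(scores: dict) -> str:
    mean = sum(scores.values()) / len(scores)
    if mean < 0.3:
        return "low"
    if mean < 0.6:
        return "medium"
    if mean < 0.85:
        return "high"
    return "critical"
```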
Features
- Three-tier moderation decisions — allow/flag/block for easy pipeline integration
- Anti-semitism and hate speech scores — two independent scores per text
- Obfuscation detection — recognizes leet speak, homoglyphs, and spaced characters
- Bilingual — English and Hebrew, with additional Arabic, Russian, and German support
- Adjustable threshold — tune strictness to your moderation policy
- Batch mode — analyze up to 100 texts per request
- No GPU required — heuristic classifier with fast cold starts
Limitations
- Maximum text length: 280 characters per item
- Maximum batch size: 100 texts per request
- The classifier uses heuristics and lexicon-based methods; it is not a fine-tuned LLM and may produce false positives or false negatives on highly nuanced content
- Best suited for social media post-length content; longer documents should be split into chunks
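Since longer documents must be split into chunks before analysis, a simple word-preserving splitter that keeps each chunk within the 280-character limit might look like this (the chunking strategy is ours, not prescribed by the API):

```python
# Sketch: split a long document into <=280-character chunks on word
# boundaries; words longer than the limit are hard-cut.
def chunk_text(text: str, limit: int = 280) -> list:
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word[:limit]  # hard-cut oversized single words
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as one item of an `analyze_batch` request.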
Usage on Apify
This actor is billed per Apify platform compute units. Analysis is fast — batch runs of 100 texts typically complete in under 10 seconds.