Reddit Keyword Monitor Alerts | Posts + Comments
Pricing
Pay per event
Monitor Reddit keywords, search queries, and subreddits with stateful diffing, new post/comment alerts, and webhook delivery.
Rating: 0.0 (0 reviews)
Developer: 太郎 山田
Actor stats: 0 bookmarked · 2 total users · 1 monthly active user
Last modified: 15 hours ago
🚨 Reddit Keyword Monitor Alerts
Focused Reddit keyword and subreddit monitor built for recurring alerts, snapshot diffing, and webhook handoff.
Store Quickstart
Run this actor with your target input. Results appear in the Apify Dataset and can be piped to webhooks for real-time delivery. Use dryRun to validate before committing to a schedule.
Key Features
- 🔎 Keyword, query, and subreddit monitoring — track plain-text keywords, Reddit search queries, and whole subreddits in a single run
- 🗂️ Stateful snapshot diffing — persisted snapshots (snapshotKey) suppress previously seen items, so only net-new posts and comments trigger alerts
- 💬 Post + comment coverage — scans recent comment streams in addition to posts where public endpoints support it (monitorComments)
- 📡 Webhook push delivery — stream alerts to Slack/Discord/email as soon as new content lands
- 🔒 Public-data only — uses Reddit's public JSON endpoints; no login walls
Use Cases
| Who | Why |
|---|---|
| Developers | Automate recurring data fetches without building custom scrapers |
| Data teams | Pipe structured output into analytics warehouses |
| Ops teams | Monitor changes via webhook alerts |
| Product managers | Track competitor/market signals without engineering time |
Input
| Field | Type | Default | Description |
|---|---|---|---|
| keywords | array | — | Plain-text keywords for global Reddit monitoring. Used for post search plus recent comment scanning. |
| searchQueries | array | — | Reddit search queries to monitor for new posts. Query-only routes do not search comments, because public Reddit JSON does not support searching comments. |
| subreddits | array | — | Subreddits to monitor for new posts and comments (example: javascript). |
| routes | array | — | Optional JSON objects for combined subreddit+keyword or subreddit+query routing. |
| monitorComments | boolean | true | When enabled, scan recent comment streams where public endpoints support it. |
| postLimit | integer | 25 | How many recent posts to inspect per route. |
| commentLimit | integer | 50 | How many recent comments to inspect per route. |
| sort | string | "new" | Sort used for post endpoints. For recurring monitoring, "new" is usually best. |
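As a quick sanity check before calling the actor, the documented fields can be assembled into a run input. The helper below is an illustrative sketch (build_input is not part of the actor or its API), using only fields from the table above:

```python
def build_input(keywords=None, subreddits=None, search_queries=None,
                monitor_comments=True, post_limit=25, comment_limit=50, sort="new"):
    """Assemble a run input from the documented fields, dropping empty targets."""
    run_input = {
        "monitorComments": monitor_comments,
        "postLimit": post_limit,
        "commentLimit": comment_limit,
        "sort": sort,
    }
    if keywords:
        run_input["keywords"] = keywords
    if subreddits:
        run_input["subreddits"] = subreddits
    if search_queries:
        run_input["searchQueries"] = search_queries
    # With no targeting field at all, the run would have nothing to monitor.
    if not any(k in run_input for k in ("keywords", "subreddits", "searchQueries")):
        raise ValueError("provide at least one of keywords, searchQueries, or subreddits")
    return run_input

example = build_input(keywords=["apify"], subreddits=["javascript"])
```

Passing the result straight to the API clients shown below keeps the defaults in one place.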
Input Example
    {
      "monitorComments": true,
      "postLimit": 25,
      "commentLimit": 50,
      "sort": "new",
      "time": "day",
      "timeoutMs": 15000,
      "delayMs": 1200,
      "snapshotKey": "reddit-keyword-monitor-snapshots",
      "maxSnapshotItems": 5000,
      "emitOnFirstRun": false,
      "delivery": "dataset",
      "dryRun": false
    }
Output
| Field | Type | Description |
|---|---|---|
| meta | object | Run metadata: snapshot key, first-run flag, route/item/alert/error counts, and delivery mode. |
| alerts | array | Net-new posts and comments matching your routes since the last snapshot. |
| errors | array | Per-route errors encountered during the run. |
Output Example
    {
      "meta": {
        "generatedAt": "2026-04-10T17:06:25.060Z",
        "snapshotKey": "reddit-keyword-monitor-snapshots",
        "firstRun": true,
        "emitOnFirstRun": false,
        "routeCount": 7,
        "observedItems": 188,
        "alertCount": 0,
        "errorCount": 0,
        "blockedCount": 0,
        "suppressedOnFirstRun": 188,
        "notes": [],
        "delivery": "dataset"
      },
      "alerts": [],
      "errors": []
    }
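A downstream consumer can inspect the meta envelope before acting on a run. Here is a minimal sketch that assumes only the fields shown in the example above (summarize is a hypothetical helper, not part of the actor):

```python
def summarize(result):
    """Summarize the run envelope (meta/alerts/errors) into a compact status dict."""
    meta = result.get("meta", {})
    return {
        # alertCount comes from meta when present; fall back to counting alerts
        "new_alerts": meta.get("alertCount", len(result.get("alerts", []))),
        "errors": len(result.get("errors", [])),
        # on a first run with emitOnFirstRun=false, alerts are suppressed
        "first_run": meta.get("firstRun", False),
    }
```

A scheduler could skip notifications whenever new_alerts is 0.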
API Usage
Run this actor programmatically using the Apify API. Replace YOUR_API_TOKEN with your token from Apify Console → Settings → Integrations.
cURL
    curl -X POST "https://api.apify.com/v2/acts/taroyamada~reddit-keyword-monitor-alerts/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{ "monitorComments": true, "postLimit": 25, "commentLimit": 50, "sort": "new", "time": "day", "timeoutMs": 15000, "delayMs": 1200, "snapshotKey": "reddit-keyword-monitor-snapshots", "maxSnapshotItems": 5000, "emitOnFirstRun": false, "delivery": "dataset", "dryRun": false }'
Python
    from apify_client import ApifyClient

    client = ApifyClient("YOUR_API_TOKEN")

    run = client.actor("taroyamada/reddit-keyword-monitor-alerts").call(run_input={
        "monitorComments": True,
        "postLimit": 25,
        "commentLimit": 50,
        "sort": "new",
        "time": "day",
        "timeoutMs": 15000,
        "delayMs": 1200,
        "snapshotKey": "reddit-keyword-monitor-snapshots",
        "maxSnapshotItems": 5000,
        "emitOnFirstRun": False,
        "delivery": "dataset",
        "dryRun": False,
    })

    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item)
JavaScript / Node.js
    import { ApifyClient } from 'apify-client';

    const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

    const run = await client.actor('taroyamada/reddit-keyword-monitor-alerts').call({
        monitorComments: true,
        postLimit: 25,
        commentLimit: 50,
        sort: 'new',
        time: 'day',
        timeoutMs: 15000,
        delayMs: 1200,
        snapshotKey: 'reddit-keyword-monitor-snapshots',
        maxSnapshotItems: 5000,
        emitOnFirstRun: false,
        delivery: 'dataset',
        dryRun: false,
    });

    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.log(items);
Tips & Limitations
- Use snapshotKey to persist seen-item state across runs so only new items are pushed.
- For high-volume feeds, limit maxItems per run and increase schedule frequency instead.
- Webhook delivery payloads are compact — parse on the receiver side for routing to multiple channels.
- Combine this actor with article-content-extractor for full-text bodies when feeds are title-only.
- Run against your own staging feed first to validate filter keywords before production alerts.
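On the receiver side, routing can be as simple as parsing the envelope and fanning alerts out. The sketch below assumes only the meta/alerts/errors envelope from the Output section; whatever per-alert fields your channel_for callback reads are up to you (route_alerts and channel_for are illustrative names, not part of the actor):

```python
import json

def route_alerts(payload_text, channel_for):
    """Parse a webhook payload and pair each alert with a destination channel.

    channel_for is a caller-supplied function mapping one alert dict to a
    channel identifier (e.g. a Slack channel name).
    """
    payload = json.loads(payload_text)
    return [(channel_for(alert), alert) for alert in payload.get("alerts", [])]
```

For example, route_alerts(body, lambda a: "#" + a["subreddit"]) would group alerts per subreddit, assuming each alert carries a subreddit field.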
FAQ
How often should I run this?
Hourly for breaking-news watchlists, daily for curated digests. Use Apify Schedules.
Does it deduplicate across runs?
Yes, via the snapshotKey persistence. Previously-seen items are skipped unless content changed.
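Conceptually, this works like a seen-item map persisted between runs: an item alerts when it is unseen, or when it was seen before but its content changed. The sketch below illustrates the idea; it is not the actor's actual implementation, and the content-hash pairing is an assumption based on the "unless content changed" behavior:

```python
def diff_against_snapshot(observed, snapshot, max_snapshot_items=5000):
    """Return (new_items, updated_snapshot).

    observed: iterable of (item_id, content_hash) pairs from this run.
    snapshot: dict mapping item_id -> content_hash from the previous run.
    """
    observed = list(observed)
    new_items = [
        (item_id, h) for item_id, h in observed
        if snapshot.get(item_id) != h  # unseen, or seen but content changed
    ]
    updated = dict(snapshot)
    updated.update(observed)
    # Cap the snapshot roughly like maxSnapshotItems by evicting the
    # earliest-inserted keys (dicts preserve insertion order in Python 3.7+).
    while len(updated) > max_snapshot_items:
        updated.pop(next(iter(updated)))
    return new_items, updated
```

On a first run the snapshot is empty, so everything is "new"; that is what emitOnFirstRun=false suppresses.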
Can I export to a database?
Use webhook delivery or pull from Apify Dataset API directly into Postgres/BigQuery/Snowflake.
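For a warehouse load, pulled dataset items can be bulk-inserted. The sketch below uses SQLite as a local stand-in for Postgres; the reddit_alerts table name and (id, payload) columns are illustrative, so adapt the DDL and placeholders for your target database:

```python
import json
import sqlite3

def load_alerts(items, conn):
    """Bulk-insert alert dicts into a local table, one JSON payload per row."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS reddit_alerts (id TEXT PRIMARY KEY, payload TEXT)"
    )
    # Fall back to the item's position when no id field is present.
    rows = [(item.get("id", str(i)), json.dumps(item)) for i, item in enumerate(items)]
    conn.executemany("INSERT OR REPLACE INTO reddit_alerts VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)
```

Storing the raw JSON payload keeps the loader schema-agnostic; views or downstream models can extract typed columns later.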
How do I filter by keyword?
Use the keywords input for global keyword monitoring, searchQueries for Reddit search queries, or routes to pair a subreddit with a keyword or query. Matched items are emitted as alerts.
Can this work with paywalled content?
No — this actor only reads publicly accessible Reddit JSON endpoints. Login-walled, private, or quarantined content is out of scope.
Related Actors
News & Content cluster — explore related Apify tools:
- 📰 Google News Scraper — Scrape Google News articles for any search query via official RSS feed.
- 📰 Article Extractor — Extract clean article content with title, author, publish date, images from news and blog pages.
- 📄 Website Content Extractor — Extract clean main content from any webpage as text, markdown, or HTML.
- 📡 RSS Feed Aggregator — Aggregate multiple RSS and Atom feeds with keyword filtering and deduplication.
- 📰 Hacker News Scraper — Fetch Hacker News top, new, best, ask, show, job stories via official Firebase API.
- 📡 Reddit All-in-One Scraper — Scrape Reddit subreddits, posts, comments, user profiles, and search results via public JSON endpoints.
Cost
Pay Per Event:
- actor-start: $0.01 (flat fee per run)
- dataset-item: $0.003 per output item
Example: 1,000 items = $0.01 + (1,000 × $0.003) = $3.01
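The arithmetic above as a small helper (run_cost is illustrative, not part of any API):

```python
def run_cost(item_count, start_fee=0.01, per_item=0.003):
    """Estimated cost of one run under the pay-per-event pricing above."""
    return round(start_fee + item_count * per_item, 4)
```

So a run emitting 1,000 dataset items costs run_cost(1000) = 3.01 dollars, and an empty run still pays the flat start fee.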
No subscription required — you only pay for what you use.