# ⭐️ FREE Reddit Scraper Pro

Free Reddit scraper that does what the paid ones do, with no API keys and no usage fees. Pairs with ready-made n8n workflow templates for lead gen and content research.
## Reddit Scraper

Scrape posts, comments, and discover subreddits from Reddit — no API credentials required.
## What Can This Actor Do?
| Mode | What You Get | Example Use Case |
|---|---|---|
| 📥 Scrape | Posts & comments from subreddits | Monitor r/startups for pain points |
| 🔍 Discover | Find subreddits by keyword | Find where your audience hangs out |
| 🔎 Search | Posts matching keywords | Track brand mentions |
| 🌐 Domain | Posts linking to websites | Monitor your content sharing |
## ⚡ Quick Start

### Scrape Subreddits

```json
{
  "mode": "scrape",
  "scrape": {
    "subreddits": ["entrepreneur", "startups"],
    "maxPostsPerSubreddit": 100
  }
}
```

### Search Reddit

```json
{
  "mode": "search",
  "search": {
    "queries": ["best CRM software"],
    "maxPostsPerQuery": 100
  }
}
```

### Discover Subreddits

```json
{
  "mode": "discover",
  "discover": {
    "terms": ["saas", "startup tools"],
    "maxSubredditsPerTerm": 25
  }
}
```

### Track Domain Mentions

```json
{
  "mode": "domain",
  "domain": {
    "domains": ["mycompany.com"],
    "maxPostsPerDomain": 100
  }
}
```
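To start runs programmatically, you can use the official `apify-client` Python package. A minimal sketch, assuming the actor ID `spry_wholemeal/reddit-scraper` (taken from the MCP command later in this README); `build_scrape_input` and `run_actor` are hypothetical helper names:

```python
# Minimal sketch: start a run via the Apify API client and iterate the
# default dataset. run_actor() requires network access and an Apify token.
def build_scrape_input(subreddits, max_posts=100):
    """Assemble a 'scrape' mode input matching the Quick Start example."""
    return {
        "mode": "scrape",
        "scrape": {
            "subreddits": list(subreddits),
            "maxPostsPerSubreddit": max_posts,
        },
    }

def run_actor(token, run_input):
    """Start the Actor, wait for it to finish, and yield dataset items."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor("spry_wholemeal/reddit-scraper").call(run_input=run_input)
    return client.dataset(run["defaultDatasetId"]).iterate_items()
```

Usage: `items = run_actor(os.environ["APIFY_TOKEN"], build_scrape_input(["startups"]))`.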
## 📊 Output

Results are available in multiple dataset views:

- All Results — Everything scraped
- Posts Only — Just posts (no comments)
- Comments Only — Just comments
- High Engagement — Posts sorted by engagement metrics
- Discovered Subreddits — Subreddits from Discover mode

Post fields include: `title`, `text`, `author`, `score`, `upvote_ratio`, `num_comments`, `created_utc_iso`, `permalink`, `listing_rank`, `score_per_hour`, `engagement_level`

Comment fields include: `text`, `author`, `score`, `depth`, `parent_id`, `reply_count_direct`, `reply_count_total`

Subreddit fields include: `display_name`, `subscribers`, `active_users`, `estimated_posts_per_day`, `public_description`
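The engagement fields can also be recomputed or re-ranked client-side. A sketch using the post field names above; the sort key used by the High Engagement view is an assumption:

```python
# Sketch: recompute score_per_hour from the documented post fields and
# rank posts by it (the view's exact server-side sort key is assumed).
from datetime import datetime, timezone

def score_per_hour(post, now=None):
    """Score divided by hours elapsed since created_utc_iso."""
    now = now or datetime.now(timezone.utc)
    created = datetime.fromisoformat(post["created_utc_iso"].replace("Z", "+00:00"))
    hours = max((now - created).total_seconds() / 3600.0, 1e-9)
    return post["score"] / hours

def rank_by_engagement(posts):
    """Highest score_per_hour first."""
    return sorted(posts, key=lambda p: p.get("score_per_hour", 0), reverse=True)
```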
## ✨ Key Features
- Parallel execution — All targets run simultaneously
- Deep comments — Nested reply threads with configurable depth
- Engagement metrics — Score/hour, comments/hour, engagement level
- Smart proxies — Automatic rotation to avoid blocks
- No API keys — Uses Reddit's public JSON endpoints
## 🤖 MCP Server (AI Agent Integration)

Connect this Actor to Claude, Cursor, VS Code, or Windsurf as an MCP server. One command for Claude Code:

```shell
claude mcp add reddit-scraper \
  -e APIFY_TOKEN=<YOUR_APIFY_TOKEN> \
  -- npx -y @apify/actors-mcp-server@latest --actors spry_wholemeal/reddit-scraper
```
Full setup guide (all clients) → | Copy-paste agent prompts →
## 🧩 n8n Workflow Templates
Import-ready n8n workflows that use this Actor as the data source:
- Lead Finder — AI buying-intent scanner → Slack + Google Sheets
- Subreddit Discovery — Niche audience map
- Content Machine — Reddit threads → blog post drafts
- Weekly Digest — Email summary + AI trends
Browse templates & setup guides →
## 📋 Input Reference

### Required Fields by Mode

| Mode | Required Field | Format |
|---|---|---|
| `scrape` | `scrape.subreddits` | `["python", "webdev", ...]` |
| `discover` | `discover.terms` | `["saas", "startup", ...]` |
| `search` | `search.queries` | `["best CRM", "project management", ...]` |
| `domain` | `domain.domains` | `["github.com", "mycompany.com", ...]` |
### Sorting & Limits

| Field | Type | Default | Used In | Description |
|---|---|---|---|---|
| `scrape.sort` | string | `hot` | Scrape | `hot`, `new`, `top`, `rising`, `controversial` |
| `scrape.timeframe` | string | `week` | Scrape | For `top`/`controversial`: `hour`, `day`, `week`, `month`, `year`, `all` |
| `scrape.maxPostsPerSubreddit` | number | 100 | Scrape | Max posts per subreddit |
| `discover.maxSubredditsPerTerm` | number | 25 | Discover | Max subreddits per term |
| `search.sort` | string | `relevance` | Search | `relevance`, `hot`, `new`, `top`, `comments` |
| `search.maxPostsPerQuery` | number | 25 | Search | Max posts per query |
| `domain.maxPostsPerDomain` | number | 500 | Domain | Max posts per domain |
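For orientation, the `sort` and `timeframe` options line up with Reddit's public JSON listing endpoints (the "no API keys" endpoints mentioned under Key Features). The Actor's internal request building is not public, so this URL builder is only an illustration:

```python
# Sketch: map sort/timeframe onto Reddit's public listing URLs.
# The `t` parameter only applies to the top and controversial sorts.
def listing_url(subreddit, sort="hot", timeframe="week", limit=100):
    if sort not in {"hot", "new", "top", "rising", "controversial"}:
        raise ValueError(f"unsupported sort: {sort}")
    url = f"https://www.reddit.com/r/{subreddit}/{sort}.json?limit={limit}"
    if sort in {"top", "controversial"}:
        url += f"&t={timeframe}"
    return url
```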
### Comments

| Field | Type | Default | Description |
|---|---|---|---|
| `comments.mode` | string | `none` | `none`, `all`, or `high_engagement` |
| `comments.maxTopLevel` | number | 50 | Top-level comments (0 = max ~500) |
| `comments.maxDepth` | number | 3 | Reply depth (0 = top-level only) |

Comment modes:

- `none` — Skip comments (fastest)
- `all` — Fetch for every post
- `high_engagement` — Only for posts with score ≥ 10 AND comments ≥ 5 (defaults)

Optional: if `comments.highEngagement.filterPosts` is `true`, posts that don't qualify are omitted from the dataset.
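The documented rules above reduce to a small decision function. A sketch with the stated default thresholds; `should_fetch_comments` and `apply_filter_posts` are illustrative names, not Actor internals:

```python
# Sketch of the documented high_engagement behavior: fetch comments only
# for posts with score >= 10 AND num_comments >= 5 (the stated defaults);
# with filterPosts true, non-qualifying posts are dropped entirely.
def should_fetch_comments(post, mode="high_engagement", min_score=10, min_comments=5):
    if mode == "none":
        return False
    if mode == "all":
        return True
    return post["score"] >= min_score and post["num_comments"] >= min_comments

def apply_filter_posts(posts, filter_posts=False):
    if not filter_posts:
        return posts
    return [p for p in posts if should_fetch_comments(p)]
```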
### Search Filters

| Field | Type | Description |
|---|---|---|
| `restrictToSubreddit` | string | Limit search to one subreddit |
| `authorFilter` | string | Only posts by this username |
| `flairFilter` | string | Only posts with this flair |
| `selfPostsOnly` | boolean | Only text posts (no links) |
### Strict Search (Exact Terms)

Search mode supports strict matching using Reddit's boolean search syntax (quoted terms + AND/OR):

```json
{
  "mode": "search",
  "search": {
    "queries": ["best CRM software"],
    "strict": {
      "enabled": true,
      "operator": "AND",
      "terms": ["voice agent", "sales"]
    }
  }
}
```
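Quoted terms joined by AND/OR is standard Reddit search syntax; the exact query string the Actor emits is an assumption, but it plausibly looks like this:

```python
# Sketch: build a Reddit boolean search query from strict terms.
def strict_query(terms, operator="AND"):
    if operator not in ("AND", "OR"):
        raise ValueError("operator must be AND or OR")
    return f" {operator} ".join(f'"{t}"' for t in terms)
```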
### Discover Options

| Field | Type | Default | Description |
|---|---|---|---|
| `minSubscribers` | number | 100 | Filter out small subreddits |
| `estimateActivity` | boolean | `true` | Calculate posts per day/week |
| `includeNsfw` | boolean | `false` | Include adult subreddits |
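The same filters are easy to reapply downstream. A sketch over the documented subreddit fields; the NSFW flag name (`over18`) is an assumption:

```python
# Sketch: apply the Discover filters (minSubscribers, includeNsfw) to
# dataset items; "over18" as the NSFW field name is an assumption.
def filter_subreddits(subs, min_subscribers=100, include_nsfw=False):
    kept = []
    for s in subs:
        if s.get("subscribers", 0) < min_subscribers:
            continue  # minSubscribers filter
        if s.get("over18") and not include_nsfw:
            continue  # includeNsfw filter
        kept.append(s)
    return kept
```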
### Proxy Configuration

| Field | Type | Default | Description |
|---|---|---|---|
| `proxyConfiguration` | object | Residential | Highly recommended. Reddit blocks unproxied requests. |

Default proxy config (works well):

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
### Other Settings

| Field | Type | Default | Description |
|---|---|---|---|
| `tag` | string | — | Label to identify this run's data |
### Advanced (rarely needed)

| Field | Type | Default | Description |
|---|---|---|---|
| `proxyCountry` | string | `US` | ISO country code (`US`, `GB`, `DE`, etc.) |
| `requestDelayMs` | number | 100 | Milliseconds between requests |
| `includeRaw` | boolean | `false` | Include Reddit's raw JSON (debugging) |
## 🔧 Per-Target Overrides

Each mode accepts a simple array of targets plus optional advanced overrides. Example (scrape):

```json
{
  "mode": "scrape",
  "scrape": {
    "subreddits": ["python", "AskReddit"],
    "sort": "hot",
    "maxPostsPerSubreddit": 100,
    "comments": { "mode": "none" },
    "overrides": [
      {
        "subreddit": "AskReddit",
        "sort": "top",
        "timeframe": "day",
        "maxPostsPerSubreddit": 25,
        "comments": { "mode": "all" }
      }
    ]
  }
}
```

Here, r/python uses the defaults (hot, 100 posts, no comments), while r/AskReddit overrides all of them.
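Override resolution can be pictured as a shallow merge: mode-level defaults first, then the matching per-subreddit override on top. The Actor's actual merge semantics are an assumption; this sketch just mirrors the example:

```python
# Sketch: resolve the effective settings for one subreddit from the
# scrape config (shallow merge; assumed, not the Actor's published logic).
def resolve_target(scrape_cfg, subreddit):
    merged = {k: v for k, v in scrape_cfg.items() if k not in ("subreddits", "overrides")}
    for ov in scrape_cfg.get("overrides", []):
        if ov.get("subreddit") == subreddit:
            merged.update({k: v for k, v in ov.items() if k != "subreddit"})
    return merged
```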
## 📈 Run Metadata

Each run stores statistics in the OUTPUT key-value record:

```json
{
  "mode": "scrape",
  "posts_scraped": 150,
  "comments_scraped": 2340,
  "http_stats": {
    "requests_made": 48,
    "requests_succeeded": 48,
    "rate_limits_hit": 0
  }
}
```
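These stats are handy for a quick health check in a downstream script. A sketch over the record shape shown above; the helper names and the 0.95 threshold are arbitrary:

```python
# Sketch: sanity-check a run from its OUTPUT record.
def request_success_rate(run_output):
    """Fraction of HTTP requests that succeeded."""
    stats = run_output["http_stats"]
    made = stats["requests_made"]
    return stats["requests_succeeded"] / made if made else 1.0

def run_is_healthy(run_output, min_rate=0.95):
    """True if nearly all requests succeeded and no rate limits were hit."""
    return (request_success_rate(run_output) >= min_rate
            and run_output["http_stats"]["rate_limits_hit"] == 0)
```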
## ❓ Troubleshooting

### "403 Forbidden" or "429 Too Many Requests"

Cause: Reddit is blocking your requests.

Fix: Enable Apify Proxy (residential works best):

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
### Empty results

Check:

- Subreddit name spelling (no `r/` prefix needed)
- `includeNsfw: true` if searching NSFW content
- The `sort` + `timeframe` combination (e.g., `top` with `hour` may have few results)
- The subreddit may be private or quarantined
### Comments seem incomplete

Why: Reddit returns "load more" placeholders for very long threads. This scraper does not expand those placeholders.

What you get: The `reply_count_direct` and `reply_count_total` fields show fetched replies. The `missing_direct_replies` field shows the lower bound of unfetched replies.

Workaround: Increase `comments.maxTopLevel` (max 500) and `comments.maxDepth` (max 10).
### Slow performance

Tips:

- Set `comments.mode: "none"` if you don't need comments
- Use `comments.mode: "high_engagement"` to only fetch comments on popular posts
- Reduce `scrape.maxPostsPerSubreddit` / `search.maxPostsPerQuery` for faster testing
- All targets run in parallel automatically
### Rate limits despite using proxies

Try:

- Increase `requestDelayMs` to 200–500
- Change `proxyCountry` to a different region (`GB`, `DE`, `CA`)
- The scraper automatically retries with exponential backoff
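The retry schedule is not documented beyond "exponential backoff". An illustrative sketch that treats `requestDelayMs` as the base delay; the doubling factor, cap, and jitter option are assumptions, not the Actor's actual parameters:

```python
import random

def backoff_delay_ms(attempt, base_ms=100, cap_ms=30000, jitter=False):
    """Delay before retry number `attempt` (0-based): base * 2^attempt, capped."""
    delay = min(base_ms * (2 ** attempt), cap_ms)
    return random.uniform(0, delay) if jitter else delay
```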
## License
Apache-2.0