Reddit Scraper Pro

High-reliability Reddit scraper with no artificial limits. Extract posts, comments, users, and subreddit data at scale. Supports JSON API, OAuth, HTML, and Playwright modes. Full comment trees, working filters, NSFW support, multi-account rate-boost.

Pricing

$18.00 / 1,000 results

Rating

0.0 (0 ratings)

Developer

Dennis (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 8
  • Monthly active users: 5
  • Last modified: 20 days ago


Extract posts, comments, subreddits, and user data from Reddit without limitations.

Features

No artificial limits - Scrape 100K+ posts (competitors stop at 1000!)
98%+ Success Rate - Most reliable Reddit scraper
All filters work - Date, upvotes, sort (competitors ignore them!)
Nested comments - Full comment trees with unlimited depth
4 Scraping Modes - JSON API, OAuth API, HTML Scraping, Browser (Playwright)
Multi-Account Support - Add multiple Reddit apps for 100 QPM each
NSFW Support - Works without login (MODE 1, 2 & 4)

Quick Start Examples

Example 1: Scrape Multiple Subreddits

{
  "subreddits": ["python", "webdev", "machinelearning"],
  "scrapeOptions": {
    "posts": true,
    "comments": true,
    "subredditInfo": true
  },
  "limits": {
    "maxPosts": 50,
    "maxCommentsPerPost": 20
  },
  "filters": {
    "sortBy": "hot",
    "minUpvotes": 10
  }
}
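
An input like Example 1 can also be passed to the actor programmatically. Here is a minimal Python sketch using the official apify-client package; the API token and actor ID are placeholders you would substitute with your own values:

```python
def build_input(subreddits, max_posts=50, max_comments=20):
    # Mirrors Example 1 above; field names follow the actor's input schema.
    return {
        "subreddits": subreddits,
        "scrapeOptions": {"posts": True, "comments": True, "subredditInfo": True},
        "limits": {"maxPosts": max_posts, "maxCommentsPerPost": max_comments},
        "filters": {"sortBy": "hot", "minUpvotes": 10},
    }

def run_actor(token, actor_id, run_input):
    # Requires `pip install apify-client`; actor_id is whatever this
    # actor's store page lists (placeholder here, not confirmed).
    from apify_client import ApifyClient
    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    # Scraped items land in the run's default dataset.
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Call `run_actor("<YOUR_APIFY_TOKEN>", "<ACTOR_ID>", build_input(["python"]))` to start a run and collect its results.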

Example 2: Scrape Specific URLs

{
  "startUrls": [
    {"url": "https://www.reddit.com/r/python/"},
    {"url": "https://www.reddit.com/r/webdev/comments/abc123/title/"},
    {"url": "https://www.reddit.com/user/spez/"}
  ],
  "scrapeOptions": {
    "posts": true,
    "comments": true
  }
}

Example 3: r/popular with Country Filter

{
  "specialSubreddit": "popular",
  "countryCode": "US",
  "filters": {
    "sortBy": "top",
    "timeRange": "week"
  },
  "limits": {
    "maxPosts": 100
  }
}

→ Scrapes: https://www.reddit.com/r/popular/top/?geo_filter=US&t=week

Example 4: r/all - All of Reddit

{
  "specialSubreddit": "all",
  "filters": {
    "sortBy": "controversial",
    "timeRange": "month",
    "minUpvotes": 100
  }
}

→ Scrapes: https://www.reddit.com/r/all/controversial/?t=month

Example 5: Search for Posts

{
  "searchKeywords": ["GPT-4", "artificial intelligence"],
  "filters": {
    "timeRange": "week",
    "minUpvotes": 50,
    "sortBy": "top"
  },
  "limits": {
    "maxPosts": 200
  }
}

Example 6: Discover Subreddits (Browser Mode)

{
  "scrapingMode": "browser",
  "searchSubreddits": ["webdev", "python", "datascience"]
}

→ Finds: r/webdev, r/web_design, r/WebdevTutorials, r/python, r/learnpython, etc.

Example 7: Deep Comment Extraction

{
  "subreddits": ["AskReddit"],
  "scrapeOptions": {
    "posts": true,
    "comments": true
  },
  "limits": {
    "maxPosts": 10,
    "maxCommentDepth": 20,
    "maxCommentsPerPost": 100
  }
}

Example 8: User Profile with Activity

{
  "startUrls": [
    {"url": "https://www.reddit.com/user/AutoModerator/"}
  ],
  "scrapeOptions": {
    "users": true
  },
  "scrapingMode": "browser"
}

Scraping Modes

Mode 1: JSON API ⚡

  • No authentication needed
  • ~60 requests/minute
  • ✅ NSFW content supported
  • Fast and reliable (1-2s per page)
  • Best for most use cases
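
For context, Mode 1 reads Reddit's public JSON listings: appending `.json` to most Reddit URLs returns structured data. A small sketch of the URL shape and the listing layout (the payload in the test is illustrative, not real data):

```python
def listing_url(subreddit, sort="hot", limit=25):
    # Reddit exposes JSON by appending .json to a listing URL.
    return f"https://www.reddit.com/r/{subreddit}/{sort}/.json?limit={limit}"

def extract_posts(listing):
    # A listing wraps each post in {"kind": "t3", "data": {...}},
    # where "t3" is Reddit's thing-type for link/post objects.
    return [child["data"] for child in listing["data"]["children"]
            if child["kind"] == "t3"]
```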

Mode 2: Official Reddit API (Power Users 🔑)

  • Requires Reddit App credentials (Create App)
  • 100 requests/minute per account
  • Add multiple accounts for higher limits!
  • ✅ NSFW content supported
  • OAuth-based authentication
  • Highest rate limits
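
Mode 2's OAuth handshake boils down to posting the app credentials to Reddit's token endpoint with HTTP Basic auth. A sketch of the request shape, assuming app-only (`client_credentials`) auth for a script app; the User-Agent string is a placeholder:

```python
import base64

def build_token_request(client_id, client_secret):
    # App-only OAuth: POST to the token endpoint with the app's
    # client_id:client_secret as HTTP Basic credentials.
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": "https://www.reddit.com/api/v1/access_token",
        "headers": {
            "Authorization": f"Basic {creds}",
            "User-Agent": "my-reddit-scraper/0.1",  # placeholder UA
        },
        "data": {"grant_type": "client_credentials"},
    }
```

The returned dict maps directly onto a `requests.post(url, headers=..., data=...)` call; the response JSON carries the bearer token used for subsequent API requests.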

Mode 3: HTML Scraping (Fallback 📄)

  • No authentication needed
  • No JavaScript rendering required
  • Works for SFW content
  • ⚠️ NSFW not available (Reddit limitation)
  • Lightweight and efficient

Mode 4: Browser (Playwright 🌐)

  • Headless Chromium browser
  • Full JavaScript rendering
  • ✅ NSFW content supported
  • Nested comments (unlimited depth!)
  • User profiles with their posts/comments
  • ✅ All media: Images, Videos, External URLs
  • Slower (~2-5s per page) but most robust

What You Can Extract

Posts:

  • Title, text, URL
  • Author, score, comments count
  • Timestamps, awards
  • Image URLs (direct download links!)
  • Video URLs
  • Subreddit info

Comments:

  • Full nested comment trees
  • Author, body, score
  • Timestamps
  • Reply chains (unlimited depth!)

Subreddits:

  • Subscriber count, description
  • Banner & icon images
  • Creation date

Users:

  • Karma scores, account age
  • Profile icons
  • Post/comment history

Input Configuration

Subreddits: List of subreddit names (without r/)
Search Keywords: Search Reddit for specific terms
Start URLs: Direct URLs to posts, subreddits, or users

Filters:

  • Time Range: hour, day, week, month, year, all
  • Min/Max Upvotes
  • Sort: hot, new, top, controversial, rising
  • Exclude NSFW

Limits:

  • Max Posts (null = unlimited!)
  • Max Comments per Post
  • Max Comment Depth

Multi-Account Setup (Optional)

For higher rate limits, add multiple Reddit App credentials:

  1. Create apps at https://www.reddit.com/prefs/apps
  2. Choose "script" type
  3. Add to input:
{
  "scrapingMode": "official_api",
  "redditCredentials": [
    {"client_id": "...", "client_secret": "..."},
    {"client_id": "...", "client_secret": "..."}
  ]
}

Result: 100 QPM per account = 200 QPM total!
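
One way such a multi-account boost can work is simple round-robin rotation over the credential pairs, keeping each app under its own 100-QPM budget. A sketch (the rotation strategy is an assumption for illustration, not documented actor internals):

```python
from itertools import cycle

class CredentialRotator:
    # Cycles through Reddit app credentials so requests spread evenly
    # across accounts, multiplying the effective rate limit.
    def __init__(self, credentials):
        self._pool = cycle(credentials)

    def next(self):
        return next(self._pool)
```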

Why This Scraper?

Feature         Competitors     Reddit Scraper Pro
Post Limits     1000-6000       ✅ Unlimited
Filters         Often broken    ✅ Always work
Success Rate    ~94%            ✅ >98%
Comments        Limited         ✅ Full depth
Multi-Account   ❌ No           ✅ Yes
Modes           1               ✅ 4

Use Cases

  • Research: Sentiment analysis, trend tracking
  • Marketing: Brand monitoring, competitor analysis
  • Data Science: Training data, NLP
  • Journalism: News monitoring, fact checking

Output Example

{
  "id": "abc123",
  "type": "post",
  "subreddit": "Python",
  "title": "...",
  "author": "username",
  "score": 234,
  "num_comments": 89,
  "url": "https://i.redd.it/image.jpg",
  "created_at": "2025-11-03T10:00:00",
  "over_18": false,
  "is_video": false,
  "comments": [...]
}
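
Each dataset item follows the shape above, with nested replies under the comment entries. A sketch of walking the comment tree depth-first to flatten it for analysis (the `replies` key for nesting is an assumption about the output schema):

```python
def flatten_comments(comments, depth=0):
    # Depth-first walk over a nested comment tree; yields (depth, comment)
    # pairs so reply chains of any depth become a flat list.
    flat = []
    for c in comments:
        flat.append((depth, c))
        flat.extend(flatten_comments(c.get("replies", []), depth + 1))
    return flat
```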


Have any issues or feedback?

A crawler can only be as good as its feedback! If you encounter any problems or have feature requests, please don't hesitate to reach out. I'm committed to fixing bugs and implementing your suggestions to make this the best Reddit scraper possible.

Thank you for your support! 🙏

Version

0.1.0 - Initial Release