Reddit Scraper

Pricing: $19.99/month + usage
Developer: SilentFlow
Scrape Reddit posts, comments, communities, and users without login. Extract data from any subreddit, search result, or user profile.
✨ Why use this scraper?
- 🔓 No login required: Scrape all public Reddit data without authentication
- 📦 4 data types: Posts, comments, communities, and users with full metadata
- 🔍 Search & URL modes: Scrape specific URLs or search Reddit by keywords
- 🎛️ Smart filtering: Sort by hot/new/top, filter by date, include/exclude NSFW
- ⚡ High reliability: Automatic retries and residential proxy support
🎯 Use cases
| Industry | Application |
|---|---|
| Market research | Monitor brand mentions and sentiment across subreddits |
| Content analysis | Analyze trending topics and community discussions |
| Academic research | Study online communities, opinions, and user behavior |
| Competitive intelligence | Track competitor discussions and product feedback |
| Trend monitoring | Identify emerging trends before they hit mainstream |
📥 Input parameters
URL scraping
| Parameter | Type | Description |
|---|---|---|
| startUrls | array | Reddit URL(s) to scrape (subreddits, posts, users, search pages) |
Supported URL types:
- Subreddits: https://www.reddit.com/r/programming/
- Subreddit sort: https://www.reddit.com/r/programming/hot
- Posts: https://www.reddit.com/r/learnprogramming/comments/abc123/...
- Users: https://www.reddit.com/user/username
- User comments: https://www.reddit.com/user/username/comments/
- Search: https://www.reddit.com/search/?q=keyword
- Popular: https://www.reddit.com/r/popular/
- Leaderboards: https://www.reddit.com/subreddits/leaderboard/crypto/
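Any mix of these URL types can go into a single `startUrls` array. As a minimal sketch, a helper like the one below (our own illustration, not part of the actor) can assemble the input from plain subreddit, user, and query names:

```python
from urllib.parse import quote

def build_start_urls(subreddits=(), users=(), queries=()):
    """Assemble the startUrls input array from plain names.

    Hypothetical helper for illustration; the actor itself just
    consumes the resulting list of {"url": ...} objects.
    """
    urls = [f"https://www.reddit.com/r/{s}/" for s in subreddits]
    urls += [f"https://www.reddit.com/user/{u}" for u in users]
    # Search queries need URL-encoding (spaces, special characters)
    urls += [f"https://www.reddit.com/search/?q={quote(q)}" for q in queries]
    return [{"url": u} for u in urls]

run_input = {
    "startUrls": build_start_urls(subreddits=["programming"],
                                  queries=["web scraping"]),
    "maxItems": 50,
}
```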
Search
| Parameter | Type | Description |
|---|---|---|
| searches | array | Keywords to search on Reddit |
| searchCommunityName | string | Restrict search to a specific subreddit (e.g. programming) |
| searchTypes | array | Types of results: posts, communities, users (default: posts) |
Sorting & filtering
| Parameter | Type | Default | Description |
|---|---|---|---|
| sort | string | new | Sort by: relevance, hot, top, new, rising, comments |
| time | string | all | Time filter: all, hour, day, week, month, year |
| includeNSFW | boolean | true | Include adult/NSFW content |
| postDateLimit | string | - | Only posts after this date (YYYY-MM-DD) |
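The actor applies `postDateLimit` during the run; the same check can be reproduced client-side when post-processing a dataset, which is useful for verifying results. A sketch, assuming the limit is inclusive and `createdAt` is ISO 8601 as in the output examples below:

```python
from datetime import datetime, timezone

def after_date_limit(item, post_date_limit):
    """Return True if the item's createdAt falls on or after the
    YYYY-MM-DD limit. Mirrors the postDateLimit input filter for
    client-side checking; illustration only, inclusivity assumed."""
    # createdAt uses a trailing "Z"; normalize it for fromisoformat
    created = datetime.fromisoformat(item["createdAt"].replace("Z", "+00:00"))
    limit = datetime.strptime(post_date_limit, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return created >= limit

post = {"createdAt": "2024-06-01T12:00:00Z"}
after_date_limit(post, "2024-05-01")  # True: June 1 is after May 1
```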
Options & limits
| Parameter | Type | Default | Description |
|---|---|---|---|
| includeComments | boolean | true | Also scrape comments when visiting posts |
| maxItems | integer | 50 | Maximum total items to return |
📤 Output data
Post example
{"id": "t3_abc123","parsedId": "abc123","url": "https://www.reddit.com/r/programming/comments/abc123/example_post/","username": "dev_user","userId": "t2_abc123","title": "Example Post Title","communityName": "r/programming","parsedCommunityName": "programming","body": "Post body text...","html": null,"numberOfComments": 42,"upVotes": 256,"upVoteRatio": 0.95,"isVideo": false,"isAd": false,"over18": false,"flair": "Discussion","link": "https://example.com/article","thumbnailUrl": "https://b.thumbs.redditmedia.com/...","videoUrl": "","imageUrls": ["https://i.redd.it/abc123.jpg"],"createdAt": "2024-06-01T12:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "post"}
Comment example
{"id": "t1_xyz789","parsedId": "xyz789","url": "https://www.reddit.com/r/programming/comments/abc123/example_post/xyz789/","parentId": "t3_abc123","postId": "abc123","username": "commenter","userId": "t2_xyz789","category": "programming","communityName": "r/programming","body": "Great post!","html": "<div class=\"md\"><p>Great post!</p></div>","createdAt": "2024-06-01T13:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","upVotes": 15,"numberOfreplies": 3,"dataType": "comment"}
Community example
{"id": "2fwo","name": "t5_2fwo","title": "Programming","url": "https://www.reddit.com/r/programming/","description": "Computer programming","over18": false,"numberOfMembers": 5800000,"createdAt": "2006-01-25T00:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "community"}
User example
{"id": "abc123","url": "https://www.reddit.com/user/dev_user/","username": "dev_user","description": "Software engineer and open source enthusiast","postKarma": 15000,"commentKarma": 42000,"over18": false,"createdAt": "2020-01-15T00:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "user"}
🗂️ Data fields
| Category | Fields |
|---|---|
| Identity | id, parsedId, url, username, userId |
| Content | title, body, html, flair |
| Community | communityName, parsedCommunityName, category |
| Engagement | upVotes, upVoteRatio, numberOfComments, numberOfreplies |
| Media | imageUrls, videoUrl, thumbnailUrl, link |
| Flags | isVideo, isAd, over18 |
| Meta | createdAt, scrapedAt, dataType |
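Because every item carries a `dataType` field, a mixed dataset can be split back into the four record shapes in one pass. A minimal sketch, assuming items shaped like the output examples above:

```python
from collections import defaultdict

def summarize(items):
    """Group mixed dataset items by dataType and total up the post
    engagement. Sketch only; field names are taken from the data
    fields table above."""
    groups = defaultdict(list)
    for item in items:
        groups[item["dataType"]].append(item)
    return {
        "counts": {k: len(v) for k, v in groups.items()},
        "post_upvotes": sum(p["upVotes"] for p in groups.get("post", [])),
    }

items = [
    {"dataType": "post", "upVotes": 256},
    {"dataType": "post", "upVotes": 10},
    {"dataType": "comment", "upVotes": 15},
]
summarize(items)  # {'counts': {'post': 2, 'comment': 1}, 'post_upvotes': 266}
```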
📚 Examples
Scrape a subreddit
{"startUrls": [{"url": "https://www.reddit.com/r/programming/"}],"maxItems": 50,"sort": "hot"}
Search for a keyword
{"searches": ["machine learning"],"searchTypes": ["posts", "communities"],"sort": "top","time": "month","maxItems": 100}
Scrape a post with comments
{"startUrls": [{"url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn/"}],"includeComments": true,"maxItems": 100}
Search within a community
{"searches": ["python"],"searchCommunityName": "programming","sort": "new","maxItems": 50}
Get recent posts only
{"startUrls": [{"url": "https://www.reddit.com/r/technology/"}],"postDateLimit": "2026-03-01","includeComments": false,"maxItems": 200}
💻 Integrations
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("silentflow/reddit-scraper").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 50,
    "sort": "hot",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["dataType"] == "post":
        print(f"[{item['upVotes']}] {item['title']}")
    elif item["dataType"] == "comment":
        print(f"  > {item['body'][:80]}")
```
JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('silentflow/reddit-scraper').call({
    searches: ['web scraping'],
    searchTypes: ['posts'],
    sort: 'top',
    time: 'week',
    maxItems: 100,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    if (item.dataType === 'post') {
        console.log(`[${item.upVotes}] ${item.title}`);
    }
});
```
📊 Performance & limits
| Metric | Value |
|---|---|
| Items per request | up to 100 |
| Average speed | ~50 items/second |
| Max items per run | 10,000 |
| Supported content | Posts, Comments, Communities, Users |
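With a hard cap of 10,000 items per run, larger jobs have to be split across several runs. One way to do that, sketched below, is to pack subreddits into run inputs so each run stays under the cap; the packing strategy is our own illustration, not something the actor provides:

```python
MAX_ITEMS_PER_RUN = 10_000  # cap from the table above

def plan_runs(subreddits, items_per_subreddit):
    """Split a large scrape into run inputs that each stay under
    the per-run item cap. Hypothetical helper for illustration."""
    per_run = max(1, MAX_ITEMS_PER_RUN // items_per_subreddit)
    runs = []
    for i in range(0, len(subreddits), per_run):
        chunk = subreddits[i:i + per_run]
        runs.append({
            "startUrls": [{"url": f"https://www.reddit.com/r/{s}/"} for s in chunk],
            "maxItems": items_per_subreddit * len(chunk),
        })
    return runs

plan_runs(["python", "golang", "rust"], 4000)  # 2 runs: two subreddits, then one
```

Each dict in the returned list is a complete run input that can be passed to `client.actor(...).call` as in the integration examples above.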
💡 Tips for best results
- Target specific subreddits: Focused scraping gives cleaner, more relevant data
- Start small: Test with `maxItems: 10` before running large scrapes
- Use date filters: Combine `postDateLimit` with sort `new` for recent content monitoring
- Disable comments when not needed: Set `includeComments: false` to speed up subreddit scraping
- Combine search types: Use `searchTypes: ["posts", "communities"]` to find both discussions and relevant subreddits
❓ FAQ

Q: Can I scrape private subreddits?
A: No, this scraper only accesses publicly available data.

Q: Why are some posts missing?
A: Reddit may filter certain posts. NSFW content is included by default but can be toggled with `includeNSFW`.

Q: How often can I run the scraper?
A: No limits on run frequency. The scraper handles rate limiting automatically.

Q: What happens if Reddit is temporarily unavailable?
A: The scraper automatically retries. If all attempts fail, try again later.
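The actor retries individual requests internally, but a whole run can still fail when Reddit is down for longer. Callers who want to "try again later" automatically can wrap the run in a generic exponential-backoff retry; this is a standard pattern, not part of the actor:

```python
import time

def call_with_backoff(start_run, attempts=3, base_delay=2.0, sleep=time.sleep):
    """Retry a whole run with exponential backoff. `start_run` is any
    zero-argument callable (e.g. a lambda wrapping
    client.actor(...).call); generic pattern for illustration."""
    for attempt in range(attempts):
        try:
            return start_run()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```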
💬 Support
We're building this scraper for you; your feedback makes it better for everyone!
- 💡 Need a feature? Tell us what's missing and we'll prioritize it
- ✉️ Custom solutions: Contact us for enterprise integrations or high-volume needs

Check out our other scrapers: SilentFlow on Apify