Reddit Scraper
by SilentFlow
Unlimited Reddit web scraper to crawl posts, comments, communities, and users without login. Extract all data from subreddits, search results, and user profiles.
Why use this scraper?
- No login required: Scrape all public Reddit data without authentication
- Comprehensive data: Posts, comments, communities, and users with full metadata
- Flexible input: Scrape by URL or search by keyword
- Advanced filtering: Sort by hot/new/top, filter by date, include/exclude NSFW
- High reliability: Built-in retry logic and residential proxy support
Use cases
| Industry | Application |
|---|---|
| Market research | Monitor brand mentions and sentiment across subreddits |
| Content analysis | Analyze trending topics and community discussions |
| Academic research | Study online communities and user behavior |
| Competitive intelligence | Track competitor discussions and product feedback |
| Trend monitoring | Identify emerging trends and opinions |
Input parameters
URL scraping
| Parameter | Type | Description |
|---|---|---|
| startUrls | array | Reddit URL(s) to scrape (subreddits, posts, users, search pages) |
Supported URL types:
- Subreddits: https://www.reddit.com/r/programming/
- Subreddit channels: https://www.reddit.com/r/programming/hot
- Posts: https://www.reddit.com/r/learnprogramming/comments/abc123/...
- Users: https://www.reddit.com/user/username
- User comments: https://www.reddit.com/user/username/comments/
- Search: https://www.reddit.com/search/?q=keyword
- Popular: https://www.reddit.com/r/popular/
- Leaderboards: https://www.reddit.com/subreddits/leaderboard/crypto/
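As a minimal sketch, several of the URL types above can be listed in one `startUrls` array (the URLs below are placeholders, and combining mixed types in a single run is an assumption worth validating with a small `maxItems` first):

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/programming/" },
    { "url": "https://www.reddit.com/user/username/comments/" },
    { "url": "https://www.reddit.com/search/?q=keyword" }
  ],
  "maxItems": 30
}
```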
Keyword search
| Parameter | Type | Description |
|---|---|---|
| searches | array | Keywords to search on Reddit |
| searchCommunityName | string | Restrict search to a specific community |
| searchPosts | boolean | Search for posts (default: true) |
| searchComments | boolean | Search for comments (default: false) |
| searchCommunities | boolean | Search for communities (default: false) |
| searchUsers | boolean | Search for users (default: false) |
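A sketch of a keyword-search input that also pulls matching comments and communities (the keyword and limits are illustrative; omitted parameters fall back to the defaults listed above):

```json
{
  "searches": ["web scraping"],
  "searchPosts": true,
  "searchComments": true,
  "searchCommunities": true,
  "searchUsers": false,
  "maxItems": 50
}
```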
Sorting & filtering
| Parameter | Type | Default | Options / Description |
|---|---|---|---|
| sort | string | new | relevance, hot, top, new, rising, comments |
| time | string | all | all, hour, day, week, month, year |
| includeNSFW | boolean | true | Include NSFW content |
| postDateLimit | string | - | Only posts after this date (YYYY-MM-DD) |
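These options apply to both URL and keyword input. For example, pairing postDateLimit with sort "new" (as recommended in the tips below) to collect only recent, SFW posts might look like this sketch (the date is illustrative):

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }],
  "sort": "new",
  "includeNSFW": false,
  "postDateLimit": "2024-05-01",
  "maxItems": 100
}
```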
Limits
| Parameter | Type | Default | Description |
|---|---|---|---|
| maxItems | integer | 10 | Maximum total items to save |
| maxPostCount | integer | 10 | Maximum posts per page/subreddit |
| maxComments | integer | 10 | Maximum comments per post |
| maxCommunitiesCount | integer | 2 | Maximum communities to scrape |
| maxUserCount | integer | 2 | Maximum users to scrape |
Skip options
| Parameter | Type | Default | Description |
|---|---|---|---|
| skipComments | boolean | false | Skip comment extraction |
| skipUserPosts | boolean | false | Skip user post extraction |
| skipCommunity | boolean | false | Skip community info extraction |
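Reading the two tables above together: maxItems caps the total number of saved items, the per-type limits cap each page or post, and the skip flags drop whole extraction steps. A posts-only subreddit crawl could therefore be sketched like this (values are illustrative):

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }],
  "maxItems": 200,
  "maxPostCount": 50,
  "skipComments": true,
  "skipUserPosts": true,
  "skipCommunity": true
}
```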
Advanced
| Parameter | Type | Default | Description |
|---|---|---|---|
| scrollTimeout | integer | 40 | Request timeout in seconds |
| debugMode | boolean | false | Enable detailed logging |
| proxy | object | residential | Proxy configuration (useApifyProxy, apifyProxyGroups) |
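The proxy object uses the keys named in the table (useApifyProxy, apifyProxyGroups). A sketch that keeps the residential default explicit and turns on verbose logging, assuming the standard Apify group name RESIDENTIAL:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }],
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  },
  "debugMode": true,
  "scrollTimeout": 60
}
```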
Output data
Post example
{"id": "t3_abc123","parsedId": "abc123","url": "https://www.reddit.com/r/programming/comments/abc123/example_post/","username": "dev_user","userId": "t2_abc123","title": "Example Post Title","communityName": "r/programming","parsedCommunityName": "programming","body": "Post body text...","html": null,"numberOfComments": 42,"upVotes": 256,"upVoteRatio": 0.95,"isVideo": false,"isAd": false,"over18": false,"flair": "Discussion","link": "https://example.com/article","thumbnailUrl": "https://b.thumbs.redditmedia.com/...","videoUrl": "","imageUrls": ["https://i.redd.it/abc123.jpg"],"createdAt": "2024-06-01T12:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "post"}
Comment example
{"id": "t1_xyz789","parsedId": "xyz789","url": "https://www.reddit.com/r/programming/comments/abc123/example_post/xyz789/","parentId": "t3_abc123","postId": "abc123","username": "commenter","userId": "t2_xyz789","category": "programming","communityName": "r/programming","body": "Great post!","html": "<div class=\"md\"><p>Great post!</p></div>","createdAt": "2024-06-01T13:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","upVotes": 15,"numberOfreplies": 3,"dataType": "comment"}
Community example
{"id": "2fwo","name": "t5_2fwo","title": "Programming","url": "https://www.reddit.com/r/programming/","description": "Computer programming","over18": false,"numberOfMembers": 5800000,"createdAt": "2006-01-25T00:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "community"}
User example
{"id": "abc123","url": "https://www.reddit.com/user/dev_user/","username": "dev_user","description": "Software engineer and open source enthusiast","postKarma": 15000,"commentKarma": 42000,"over18": false,"createdAt": "2020-01-15T00:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "user"}
Field descriptions
| Field | Description |
|---|---|
| userId | Reddit internal user ID (author_fullname) |
| html | Raw HTML body (null for posts, populated for comments) |
| flair | Post flair/tag (null if none) |
| link | External link attached to the post (empty if self-post) |
| thumbnailUrl | Reddit-generated thumbnail URL |
| videoUrl | Reddit video URL (v.redd.it) if the post is a video |
| imageUrls | Array of image URLs (empty array if no images) |
| isAd | Whether the post is a promoted/sponsored post |
| postId | Parsed post ID the comment belongs to |
| category | Community name (parsed) for the comment |
| numberOfreplies | Direct reply count for the comment |
Data fields
| Category | Fields |
|---|---|
| Identity | id, parsedId, url, username, userId |
| Content | title, body, html, flair |
| Community | communityName, parsedCommunityName, category |
| Engagement | upVotes, upVoteRatio, numberOfComments, numberOfreplies |
| Media | imageUrls, videoUrl, thumbnailUrl, link |
| Flags | isVideo, isAd, over18 |
| Meta | createdAt, scrapedAt, dataType |
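Because every item carries a dataType field, the dataset is easy to split by record type when post-processing. A minimal Python sketch, assuming the apify-client package and the default dataset ID of a finished run, that exports only posts to CSV using the fields listed above:

```python
import csv

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

# Replace with the default dataset ID of a finished run.
dataset_id = "YOUR_DATASET_ID"

# Columns to keep, taken from the post fields documented above.
fields = ["title", "communityName", "upVotes", "numberOfComments", "createdAt", "url"]

with open("posts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for item in client.dataset(dataset_id).iterate_items():
        # Keep only post records; comments, communities, and users have other dataType values.
        if item.get("dataType") == "post":
            writer.writerow({key: item.get(key) for key in fields})
```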
Examples
Scrape a subreddit
{"startUrls": [{"url": "https://www.reddit.com/r/programming/"}],"maxItems": 50,"maxPostCount": 20,"maxComments": 10,"sort": "hot"}
Search for a keyword
{"searches": ["machine learning"],"searchPosts": true,"searchCommunities": true,"sort": "top","time": "month","maxItems": 100}
Scrape a specific post with comments
{"startUrls": [{"url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn/"}],"maxComments": 50}
Search within a community
{"searches": ["python"],"searchCommunityName": "programming","searchPosts": true,"sort": "new","maxItems": 50}
Integrations
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("silentflow/reddit-scraper").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 50,
    "maxPostCount": 20,
    "maxComments": 10,
    "sort": "hot"
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["dataType"] == "post":
        print(f"[{item['upVotes']}] {item['title']}")
    elif item["dataType"] == "comment":
        print(f" > {item['body'][:80]}")
```
JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('silentflow/reddit-scraper').call({
    searches: ['web scraping'],
    searchPosts: true,
    sort: 'top',
    time: 'week',
    maxItems: 100
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    if (item.dataType === 'post') {
        console.log(`[${item.upVotes}] ${item.title}`);
    }
});
```
Performance & limits
| Metric | Value |
|---|---|
| Items per request | up to 100 |
| Average speed | ~50 items/second |
| Max items per run | 10,000 |
| Supported content | Posts, Comments, Communities, Users |
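For runs that approach the 10,000-item ceiling, it can be more practical to page through the dataset explicitly rather than load everything at once. A hedged Python sketch using the client's list_items offset/limit paging (iterate_items does the same thing automatically; the dataset ID and page size are illustrative):

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
dataset = client.dataset("YOUR_DATASET_ID")

offset = 0
page_size = 1000  # items fetched per request

while True:
    page = dataset.list_items(offset=offset, limit=page_size)
    if not page.items:
        break
    for item in page.items:
        # Process each item here, e.g. filter by dataType.
        pass
    offset += len(page.items)
```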
Tips for best results
- Use specific subreddits: Target specific communities for focused data
- Set realistic limits: Start with `maxItems: 10` to test before large scrapes
- Use date filters: Combine `postDateLimit` with sort "new" for recent content
- Residential proxy: Enabled by default for best reliability
- Skip what you don't need: Use `skipComments` to speed up subreddit scraping
FAQ
Q: Can I scrape private subreddits?
A: No, this scraper only accesses publicly available data.

Q: Why are some posts missing?
A: Reddit may filter certain posts. NSFW content is included by default but can be toggled.

Q: How often can I run the scraper?
A: No limits on frequency. Use residential proxies for best results.

Q: What happens if Reddit blocks the scraper?
A: The scraper automatically rotates proxies and retries. If all attempts fail, try again later.
Support
Need help? We're here for you:
- Bug reports: Open an issue on the actor page
- Questions: Message us via Apify console
- Feature requests: Let us know what you need
- Custom solutions: Contact us for enterprise integrations or high-volume needs
Check out our other scrapers: SilentFlow on Apify