Reddit Universal Scraper (No API/Cookies)
Pricing: from $1.00 / 1,000 results
Rating: 5.0 (1)
Developer: Dead
🤖 Reddit Universal Scraper
A powerful and reliable Reddit scraper that extracts posts, comments, user profiles, and subreddit data without requiring Reddit API keys. Features intelligent sorting, time filtering, and built-in proxy support.
✨ Features
- 🔥 No API Keys Required - Works without Reddit API credentials or cookies
- 📊 Complete Data Extraction - Posts, comments, user profiles, and subreddit metadata
- 🎯 Multiple Sort Options - Hot, New, Top, Rising, Controversial
- ⏰ Time Filters - Hour, Day, Week, Month, Year, All Time
- 📄 Pagination Support - Scrape up to 1,000 posts per run
- 💬 Nested Comments - Extract full comment threads with reply depth
- 👤 User Analysis - Profile data including karma and account age
- 🌐 Proxy Support - Built-in Apify residential proxy integration
- 📦 Organized Output - Clean data structure with type identification
- ⏱ Rate Limiting - Smart delays to avoid Reddit blocking
🚀 Quick Start
Example 1: Scrape Hot Posts

```json
{
  "scrapeType": "subreddit",
  "subredditName": "python",
  "sortType": "hot",
  "maxPosts": 100
}
```

Example 2: Top Posts of the Week

```json
{
  "scrapeType": "subreddit",
  "subredditName": "AskReddit",
  "sortType": "top",
  "timeFilter": "week",
  "maxPosts": 50
}
```

Example 3: Scrape User Posts

```json
{
  "scrapeType": "user",
  "username": "spez",
  "sortType": "top",
  "timeFilter": "all",
  "maxPosts": 100,
  "getUserProfile": true
}
```
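For programmatic runs, the inputs above can be submitted from Python with the official `apify-client` package (`pip install apify-client`). This is a minimal sketch: `build_input` and `run_actor` are illustrative helpers, and `"<ACTOR_ID>"` is a placeholder for this actor's ID from the Apify Console.

```python
# Hedged sketch: running the actor via the official apify-client package.
# build_input/run_actor are illustrative names; "<ACTOR_ID>" is a placeholder.

def build_input(subreddit: str, sort_type: str = "hot", max_posts: int = 100) -> dict:
    """Assemble a run input matching Example 1 above."""
    return {
        "scrapeType": "subreddit",
        "subredditName": subreddit,
        "sortType": sort_type,
        "maxPosts": max_posts,
    }

def run_actor(token: str, run_input: dict, actor_id: str = "<ACTOR_ID>") -> list:
    """Start a run and download every item from its default dataset."""
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    # Scraped items land in the run's default dataset.
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Pass your Apify API token to `run_actor`; `call` blocks until the run finishes, so long runs may take several minutes (see the speed estimates below).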
📋 Input Parameters
Scrape Type
| Value | Description |
|---|---|
| `subreddit` | Scrape posts from a subreddit |
| `user` | Scrape posts from a user's profile |
| `both` | Scrape both the subreddit and the user |
Required Fields
- `subredditName` - Subreddit name without "r/" (e.g., "python", "technology")
- `username` - Username without "u/" (e.g., "spez", "gallowboob")

Note: If `scrapeType` is "subreddit", only `subredditName` is required. If "user", only `username` is required. If "both", both are required.
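The rule above can be expressed as a small validation helper. This is an illustrative sketch, not the actor's own code; `validate_input` is a hypothetical name.

```python
# Illustrative input validation mirroring the note above: which field is
# required depends on scrapeType. Not part of the actor itself.
def validate_input(run_input: dict) -> list:
    """Return a list of error messages; an empty list means the input is valid."""
    errors = []
    scrape_type = run_input.get("scrapeType")
    if scrape_type in ("subreddit", "both") and not run_input.get("subredditName"):
        errors.append("subredditName is required when scrapeType is 'subreddit' or 'both'")
    if scrape_type in ("user", "both") and not run_input.get("username"):
        errors.append("username is required when scrapeType is 'user' or 'both'")
    return errors
```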
Sort Options
| Sort Type | Description | Best For |
|---|---|---|
| `hot` | Currently trending posts | Real-time popular content |
| `new` | Latest posts chronologically | Breaking news, monitoring |
| `top` | Most upvoted posts | Quality content, research |
| `rising` | Posts gaining traction | Early trend detection |
| `controversial` | Most debated posts | Controversial topics |
Time Filters (Top/Controversial only)
| Filter | Range |
|---|---|
| `hour` | Past hour |
| `day` | Today |
| `week` | This week |
| `month` | This month |
| `year` | This year |
| `all` | All time |
⚠️ Important: Time filters only work with `top` and `controversial` sorting.
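This restriction comes from Reddit itself: on Reddit's public JSON listing endpoints, the `t=` (time) parameter is only honoured for top and controversial listings. The sketch below shows the mapping with a hypothetical `build_listing_url` helper; whether the actor builds its URLs exactly this way is an assumption.

```python
# Illustrative sketch of how timeFilter maps onto Reddit's public JSON
# listings. build_listing_url is a hypothetical helper, not the actor's code.
def build_listing_url(subreddit: str, sort_type: str, time_filter: str = "all",
                      limit: int = 100) -> str:
    url = "https://www.reddit.com/r/{}/{}.json?limit={}".format(
        subreddit, sort_type, limit)
    # Reddit only honours the t= parameter for top/controversial listings.
    if sort_type in ("top", "controversial"):
        url += "&t=" + time_filter
    return url
```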
Limits
| Parameter | Min | Max | Default | Description |
|---|---|---|---|---|
| `maxPosts` | 1 | 1000 | 100 | Posts to scrape |
| `maxPostsForComments` | 1 | 100 | 10 | Posts to get comments from |
| `commentsPerPost` | 10 | 500 | 50 | Comments per post |
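The reason `maxPosts` tops out around 1000 is cursor pagination: Reddit's public listings return at most 100 items per page plus an `after` cursor for the next page, and stop serving pages at roughly 1,000 posts. A sketch of that loop, with the page fetcher injected so the control flow is testable (`paginate` and `fetch_page` are illustrative names, not the actor's internals):

```python
# Sketch of Reddit-style cursor pagination: pages of <=100 items chained by
# an "after" cursor. paginate/fetch_page are illustrative, not the actor's code.

def paginate(fetch_page, max_posts: int) -> list:
    """Collect up to max_posts items; fetch_page(after, limit) -> (items, next_after)."""
    posts, after = [], None
    while len(posts) < max_posts:
        items, after = fetch_page(after, min(100, max_posts - len(posts)))
        posts.extend(items)
        if not after:  # no further pages available
            break
    return posts[:max_posts]

def fetch_page(after, limit, subreddit="python", sort_type="hot"):
    """One page from Reddit's public listing endpoint (needs the requests package)."""
    import requests

    params = {"limit": limit, "raw_json": 1}
    if after:
        params["after"] = after
    resp = requests.get(
        "https://www.reddit.com/r/{}/{}.json".format(subreddit, sort_type),
        params=params,
        headers={"User-Agent": "example-scraper/0.1"},  # Reddit rejects default UAs
        timeout=30,
    )
    data = resp.json()["data"]
    return [child["data"] for child in data["children"]], data.get("after")
```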
Optional Features
| Feature | Default | Description |
|---|---|---|
| `scrapeComments` | false | Extract comments (slower) |
| `getUserProfile` | true | Get user profile data |
| `getSubredditInfo` | true | Get subreddit metadata |
| `scrapeUserPosts` | true | Scrape user's posts |
Proxy Configuration
- Enabled by default - Uses Apify residential proxies
- Helps avoid Reddit rate limiting
- Free credits included with Apify
📊 Output Data Structure
All data is saved with a `data_type` field for easy filtering:
Post Data (`data_type`: "post" or "user_post")

```json
{
  "data_type": "post",
  "id": "1jy1wdc",
  "title": "I made an n8n Cheat Sheet!",
  "author": "Superb_Net_7426",
  "subreddit": "n8n",
  "score": 2032,
  "upvote_ratio": 0.99,
  "num_comments": 81,
  "created_utc": "2025-04-13T07:06:07",
  "post_type": "image",
  "url": "https://i.redd.it/w4ult2xaxjue1.png",
  "full_link": "https://reddit.com/r/n8n/comments/1jy1wdc/...",
  "selftext": "",
  "domain": "i.redd.it",
  "link_flair_text": null,
  "thumbnail": "https://...",
  "media_url": "https://i.redd.it/w4ult2xaxjue1.png",
  "preview_images": ["https://..."],
  "is_video": false,
  "is_self": false,
  "over_18": false,
  "spoiler": false,
  "stickied": false,
  "locked": false,
  "gilded": 0,
  "total_awards_received": 0,
  "sort_type": "top",
  "time_filter": "all",
  "scraped_at": "2025-12-29T10:30:00"
}
```
Comment Data (`data_type`: "comment")

```json
{
  "data_type": "comment",
  "comment_id": "abc123",
  "post_id": "1jy1wdc",
  "post_title": "I made an n8n Cheat Sheet!",
  "subreddit": "n8n",
  "author": "user123",
  "body": "This is amazing! Thanks for sharing.",
  "score": 45,
  "depth": 0,
  "parent_id": "",
  "created_utc": "2025-04-13T08:00:00",
  "post_permalink": "/r/n8n/comments/1jy1wdc/...",
  "post_full_link": "https://reddit.com/r/n8n/comments/1jy1wdc/...",
  "gilded": 0,
  "is_submitter": false,
  "scraped_at": "2025-12-29T10:30:00"
}
```
User Profile (`data_type`: "user_profile")

```json
{
  "data_type": "user_profile",
  "username": "spez",
  "link_karma": 123456,
  "comment_karma": 789012,
  "total_karma": 912468,
  "created_utc": "2005-06-06T00:00:00",
  "account_age_days": 7146,
  "is_gold": true,
  "is_mod": true,
  "is_employee": true,
  "verified": true,
  "icon_img": "https://...",
  "scraped_at": "2025-12-29T10:30:00"
}
```
Subreddit Info (`data_type`: "subreddit_info")

```json
{
  "data_type": "subreddit_info",
  "name": "n8n",
  "title": "n8n - Workflow Automation",
  "description": "n8n is a free and source-available workflow automation tool",
  "long_description": "Full description with rules...",
  "subscribers": 50000,
  "active_users": 500,
  "created_utc": "2019-06-01T00:00:00",
  "over_18": false,
  "url": "https://reddit.com/r/n8n",
  "icon_img": "https://...",
  "banner_img": "https://...",
  "scraped_at": "2025-12-29T10:30:00"
}
```
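Because every item carries a `data_type` field, a downloaded dataset can be split into posts, comments, profiles, and subreddit records in a few lines. A minimal sketch, assuming `items` is the list of dicts returned from the Apify dataset (`split_by_type` is an illustrative helper):

```python
# Group dataset items by their data_type field ("post", "comment",
# "user_profile", "subreddit_info"). Illustrative helper, not actor code.
from collections import defaultdict

def split_by_type(items):
    groups = defaultdict(list)
    for item in items:
        groups[item.get("data_type", "unknown")].append(item)
    return dict(groups)
```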
🎯 Common Use Cases
1. News Monitoring

```json
{
  "scrapeType": "subreddit",
  "subredditName": "worldnews",
  "sortType": "new",
  "maxPosts": 200,
  "scrapeComments": false
}
```

Use: Track breaking news in real-time
2. Viral Content Discovery

```json
{
  "scrapeType": "subreddit",
  "subredditName": "memes",
  "sortType": "top",
  "timeFilter": "day",
  "maxPosts": 50
}
```

Use: Find trending memes and viral content
3. Market Research

```json
{
  "scrapeType": "subreddit",
  "subredditName": "startups",
  "sortType": "top",
  "timeFilter": "month",
  "maxPosts": 100,
  "scrapeComments": true,
  "maxPostsForComments": 20
}
```

Use: Research startup trends and discussions
4. Influencer Analysis

```json
{
  "scrapeType": "user",
  "username": "influencer_name",
  "sortType": "top",
  "timeFilter": "year",
  "maxPosts": 200,
  "getUserProfile": true
}
```

Use: Analyze influencer activity and engagement
5. Community Analysis

```json
{
  "scrapeType": "subreddit",
  "subredditName": "your_community",
  "sortType": "controversial",
  "timeFilter": "week",
  "maxPosts": 50,
  "scrapeComments": true,
  "maxPostsForComments": 50
}
```

Use: Monitor community health and controversy
6. Content Research

```json
{
  "scrapeType": "subreddit",
  "subredditName": "writing",
  "sortType": "top",
  "timeFilter": "week",
  "maxPosts": 30,
  "scrapeComments": true
}
```

Use: Find content ideas and trending topics
⚙️ Performance & Limits
Speed Estimates
| Configuration | Approximate Time |
|---|---|
| 50 posts, no comments | 1-2 minutes |
| 100 posts, no comments | 2-3 minutes |
| 100 posts, 10 with comments | 5-7 minutes |
| 500 posts, no comments | 8-12 minutes |
| 100 posts, all with comments | 20-30 minutes |
Reddit Limits
- Maximum ~1000 posts available per subreddit/sort combination
- Stickied posts may appear in results
- Deleted/removed posts are excluded
- `[deleted]` indicates deleted user accounts
Rate Limiting
- Built-in delays (2-4 seconds between requests)
- Automatic retry on rate limit (429 errors)
- Proxy recommended for heavy scraping
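The rate-limiting behaviour above can be sketched as a random 2-4 second pause between requests plus exponential backoff on HTTP 429. This is an illustration of the described behaviour, not the actor's actual internals; `do_get` and `sleep` are injected so the logic is easy to fake:

```python
# Sketch of the described rate limiting: 2-4 s delays between requests and
# exponential backoff on HTTP 429. Illustrative, not the actor's real code.
import random
import time

def polite_delay() -> float:
    """Delay between consecutive requests, in seconds (2-4 s)."""
    return random.uniform(2.0, 4.0)

def get_with_retry(do_get, url: str, max_retries: int = 3, sleep=time.sleep):
    """do_get(url) must return an object with a .status_code attribute."""
    resp = do_get(url)
    for attempt in range(max_retries):
        if resp.status_code != 429:
            break
        sleep(2 ** (attempt + 1))  # back off 2, 4, 8 ... seconds on 429
        resp = do_get(url)
    return resp
```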
💡 Pro Tips
1. Fastest Scraping

```json
{
  "maxPosts": 100,
  "scrapeComments": false,
  "getSubredditInfo": false
}
```

2. Most Complete Data

```json
{
  "maxPosts": 100,
  "scrapeComments": true,
  "maxPostsForComments": 20,
  "commentsPerPost": 100
}
```

3. Real-time Monitoring

```json
{
  "sortType": "new",
  "maxPosts": 50,
  "scrapeComments": false
}
```

4. Quality Content Research

```json
{
  "sortType": "top",
  "timeFilter": "month",
  "maxPosts": 100
}
```

5. Early Trend Detection

```json
{
  "sortType": "rising",
  "maxPosts": 50
}
```
🔒 Privacy & Ethics
- ✅ Scrapes only public Reddit data
- ✅ Respects Reddit's rate limits
- ✅ No authentication required
- ✅ Follows ethical scraping practices
- ⚠️ Always respect user privacy
- ⚠️ Use data responsibly
- ⚠️ Follow Reddit's Content Policy
Need help? Here's how to get support:
- Review example inputs in this README
- Contact via Apify Console - Use the feedback button in your run
- Check logs - Actor logs show detailed progress and errors
