Reddit Scraper ✅ Posts, Comments, Users, Communities | NO LOGIN
Reddit Scraper
Extract posts, comments, communities, and user profiles from Reddit: structured, clean, and ready to use.
What It Does
This scraper lets you pull data from any public Reddit URL or search term. Whether you're tracking discussions, researching communities, or building datasets, it returns organised JSON you can pipe straight into your workflow.
Supported content types:
- Posts and listing pages
- Comment threads
- User profiles
- Subreddit community pages
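Because the actor returns organised JSON, it plugs straight into downstream tools. As a minimal sketch (using made-up sample items modelled on the output format shown later in Sample Output), here is how the post records could be flattened into CSV:

```python
import csv
import io
import json

# Hypothetical sample of actor output: one post and one comment item.
raw = json.dumps([
    {"contentType": "post", "id": "lp1hi4", "author": "SadFrodo401",
     "title": "Is Web-Scraping a good skill to learn as a Beginner?", "upvotes": 4},
    {"contentType": "comments", "id": "nnegxe8", "author": "AutoModerator"},
])

# Keep only post items and write the columns we care about to CSV.
posts = [item for item in json.loads(raw) if item.get("contentType") == "post"]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "author", "title", "upvotes"],
                        extrasaction="ignore")  # drop fields we didn't list
writer.writeheader()
writer.writerows(posts)
print(buf.getvalue())
```

The same pattern works for comments, users, or communities by filtering on a different type and field list.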
Getting Started
- Paste one or more Reddit URLs or enter a search term
- Choose a scrape type — post, comments, user, listing, or community
- Set your result limits
- Run and collect your dataset
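The steps above boil down to assembling a small JSON input. A hypothetical helper (the parameter names come from the Input Parameters table; the function itself is illustrative, not part of the actor) might look like:

```python
import json

def build_input(urls=None, search_term="", scrape_type="post",
                max_posts=1000, max_comments=1000):
    """Assemble a run input following the steps above (hypothetical helper)."""
    payload = {"scrapeType": scrape_type,
               "maxPosts": max_posts, "maxComments": max_comments}
    if urls:
        payload["url"] = list(urls)          # step 1: paste one or more URLs...
    else:
        payload["searchTerm"] = search_term  # ...or enter a search term
    return payload

print(json.dumps(build_input(
    urls=["https://www.reddit.com/r/worldnews/"], scrape_type="listing")))
```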
Input URL Examples
| Content | Example URL |
|---|---|
| Subreddit listing | https://www.reddit.com/r/worldnews/ |
| Post + comments | https://www.reddit.com/r/learnprogramming/comments/lp1hi4/... |
| User profile | https://www.reddit.com/user/lukaskrivka |
| Search — posts | https://www.reddit.com/search/?q=news |
| Search — users & communities | https://www.reddit.com/search/?q=news&type=sr%2Cuser |
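If you are collecting URLs programmatically, you can guess the matching scrape type from the URL path. This heuristic is a sketch only (the actor does its own URL handling internally, and community pages share the same `r/...` shape as listings, so they are left as `listing` here):

```python
from urllib.parse import urlparse

def infer_scrape_type(url):
    """Heuristic mapping from a Reddit URL to a scrapeType value (illustrative)."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    if not parts or parts[0] == "search":
        return "listing"      # assumption: search pages are treated like listings
    if parts[0] == "user":
        return "user"         # e.g. /user/lukaskrivka
    if "comments" in parts:
        return "post"         # e.g. /r/learnprogramming/comments/lp1hi4/...
    return "listing"          # bare subreddit URL, e.g. /r/worldnews/

print(infer_scrape_type("https://www.reddit.com/r/worldnews/"))
```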
Input Parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| `url` | array | One or more Reddit URLs to scrape | — |
| `searchTerm` | string | Keyword to search Reddit | `""` |
| `searchSort` | string | Sort order: relevance, hot, top, new, comments | relevance |
| `searchTime` | string | Time filter: all, hour, day, week, month, year | all |
| `searchSubreddit` | string | Limit search to a specific subreddit (no r/ prefix) | `""` |
| `scrapeType` | string | What to scrape: post, listing, comments, user, community | post |
| `maxPosts` | integer | Max posts to return | 1000 |
| `maxComments` | integer | Max comments to return | 1000 |
| `maxListingPosts` | integer | Max posts from a listing page | 1000 |
| `maxCommunities` | integer | Max communities to return | 1000 |
| `maxUsers` | integer | Max user profiles to return | 1000 |
Use either `url` or `searchTerm`, not both at the same time.
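A quick client-side check can catch invalid inputs before a run starts. This is an illustrative sketch of the rules in the table above, not part of the actor itself:

```python
# Allowed values per the Input Parameters table.
ALLOWED = {
    "searchSort": {"relevance", "hot", "top", "new", "comments"},
    "searchTime": {"all", "hour", "day", "week", "month", "year"},
    "scrapeType": {"post", "listing", "comments", "user", "community"},
}

def validate(run_input):
    """Reject inputs that break the documented rules (illustrative only)."""
    # url and searchTerm are mutually exclusive, and one of them is required.
    if bool(run_input.get("url")) == bool(run_input.get("searchTerm")):
        raise ValueError("Set either url or searchTerm, not both")
    for key, allowed in ALLOWED.items():
        value = run_input.get(key)
        if value is not None and value not in allowed:
            raise ValueError(f"{key} must be one of {sorted(allowed)}")
    return run_input
```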
Sample Input
```json
{
  "url": ["https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn_as_a_beginner/"],
  "scrapeType": "post",
  "maxPosts": 1000,
  "maxComments": 1000
}
```
Sample Output
```json
[
  {
    "contentType": "post",
    "id": "lp1hi4",
    "parseId": "lp1hi4",
    "createdAt": "2021-02-21T17:08:34.853Z",
    "scrapedAt": "2025-11-06T20:39:27.038Z",
    "communityName": "learnprogramming",
    "author": "SadFrodo401",
    "title": "Is Web-Scraping a good skill to learn as a Beginner?",
    "body": "I'm a python beginner...",
    "upvotes": 4,
    "noOfcomments": 12,
    "url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/...",
    "thumbnailUrl": null,
    "isVideo": false,
    "isAd": false,
    "isPinned": false,
    "isOver18": false
  },
  {
    "contentType": "comments",
    "id": "nnegxe8",
    "parseId": "nnegxe8",
    "createdAt": "2025-11-06T11:33:27.000Z",
    "scrapedAt": "2025-11-06T20:46:20.020Z",
    "communityName": "ContagiousLaughter",
    "author": "AutoModerator",
    "body": "Please report this post if...",
    "upvotes": 1,
    "noOfcomments": 0,
    "noOfreplies": 0,
    "url": "https://www.reddit.com/r/ContagiousLaughter/comments/1opwizl/.../nnegxe8/",
    "isVideo": false,
    "isAd": false,
    "isPinned": true,
    "isOver18": false,
    "depth": 0,
    "parentId": "1opwizl",
    "postUrl": "https://www.reddit.com/r/ContagiousLaughter/comments/1opwizl/eddie_murphys_uncle/"
  },
  {
    "id": "worldnews",
    "name": "t5_worldnews",
    "title": "r/worldnews",
    "headerImage": "https://external-preview.redd.it/...",
    "description": "Anyone can view, post, and comment to this community",
    "over18": false,
    "createdAt": "2008-01-24T23:00:00.000Z",
    "scrapedAt": "2025-11-06T21:00:17.080Z",
    "numberOfMembers": 46854095,
    "url": "https://www.reddit.com/r/worldnews/",
    "dataType": "community"
  },
  {
    "id": "4y22fmn1",
    "url": "https://www.reddit.com/user/lukaskrivka/",
    "username": "lukaskrivka",
    "userIcon": "https://styles.redditmedia.com/...",
    "postKarma": 44,
    "commentKarma": 270,
    "over18": false,
    "createdAt": "2020-07-22T10:04:07.000Z",
    "scrapedAt": "2025-11-06T20:39:27.038Z",
    "dataType": "user"
  }
]
```
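Note that post and comment items carry a `contentType` field, while community and user items carry `dataType`. A small sketch (using abridged stand-ins for the sample items above) shows how to summarise a mixed dataset by type:

```python
from collections import Counter

# Abridged stand-ins for the four sample output items above.
items = [
    {"contentType": "post", "id": "lp1hi4", "upvotes": 4},
    {"contentType": "comments", "id": "nnegxe8", "depth": 0},
    {"dataType": "community", "id": "worldnews", "numberOfMembers": 46854095},
    {"dataType": "user", "username": "lukaskrivka"},
]

# Fall back from contentType to dataType to classify every item.
kinds = Counter(i.get("contentType") or i.get("dataType") for i in items)
print(dict(kinds))  # {'post': 1, 'comments': 1, 'community': 1, 'user': 1}
```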
Resumable Runs
If a run is interrupted — whether by a timeout, error, or manual stop — you can restart it and it will pick up from where it left off rather than starting over.
Progress is tracked at the URL level: once a URL has been fully scraped, it is recorded as complete. On restart, those URLs are skipped entirely and the scraper resumes with any that were not yet finished.
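The idea behind URL-level checkpointing can be sketched with a simple persisted set of completed URLs. This is illustrative only (a JSON state file is an assumption here; the actor's real persistence mechanism is internal):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical state file recording which URLs finished on previous runs.
state_file = Path(tempfile.gettempdir()) / "reddit_scraper_state.json"

def load_done():
    """Read the set of completed URLs, or an empty set on a fresh start."""
    return set(json.loads(state_file.read_text())) if state_file.exists() else set()

def mark_done(url, done):
    done.add(url)
    state_file.write_text(json.dumps(sorted(done)))

def run(urls):
    """Scrape each URL unless a previous run already completed it."""
    done = load_done()
    scraped = []
    for url in urls:
        if url in done:
            continue              # skipped entirely on restart
        scraped.append(url)       # stand-in for the actual scraping work
        mark_done(url, done)
    return scraped

state_file.unlink(missing_ok=True)  # start the example from a clean slate
print(run(["u1", "u2"]))            # first run scrapes both
print(run(["u1", "u2", "u3"]))      # "restart" skips u1 and u2
```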
Use Cases
- Market research — monitor brand mentions, product feedback, and community sentiment
- Academic research — collect discussion data for topic modelling, NLP, or social studies
- Trend analysis — track what's gaining traction across subreddits over time
- App development — feed live Reddit data into dashboards, bots, or recommendation engines
- Content strategy — find high-engagement threads and understand what resonates with an audience
Pricing
| Plan | Limit | Cost |
|---|---|---|
| Free | 100 items per month | Free |
Support
Open a ticket on the Issues tab or contact us directly at peakydev00@gmail.com.