Reddit Posts Scraper — (with Comments & Replies)
Pricing
from $2.00 / 1,000 posts scraped
Extract Reddit posts, comments & subreddit data with no login required. Returns title, score, author, flair, body text, and dates. JSON API-powered for 99%+ reliability.
Developer
Khadin Akbar
🔍 Reddit Scraper — Posts & Comments | No Login
What does Reddit Scraper do?
Reddit Scraper extracts posts, comments, and metadata from any subreddit, search query, or direct Reddit URL — with no login, no API key, and no cookies required. It uses Reddit's public JSON API for 99%+ reliability and returns clean, structured data ready for analysis, AI pipelines, or lead generation workflows.
Why use this Reddit Scraper?
- No login or Reddit API key needed — works out of the box on any public subreddit or search
- 99%+ success rate — powered by Reddit's own JSON API, not fragile HTML scraping
- 50% cheaper than the leading competitor — $0.002 per result vs $0.004 elsewhere
- Full MCP/AI compatibility — every output field has semantic names and metadata so Claude and other AI agents understand exactly what data they're getting
- Advanced filtering — filter by minimum score, flair, author, date, comment count, and NSFW status
What data can Reddit Scraper extract?
| Field | Description | Example |
|---|---|---|
| post_id | Reddit post identifier | t3_abc123 |
| title | Full post title | "Best Python resources in 2025?" |
| author | Reddit username | "curious_dev" |
| subreddit | Community name | "learnprogramming" |
| url | Full post URL | "https://reddit.com/r/..." |
| body_text | Post text content | "I've been coding for 2 years..." |
| score | Net upvotes | 482 |
| upvote_ratio | % upvoted | 0.97 |
| num_comments | Total comments | 84 |
| flair | Post flair label | "Question" |
| external_url | Link post URL | "https://github.com/..." |
| thumbnail_url | Thumbnail image | "https://b.thumbs..." |
| is_nsfw | NSFW flag | false |
| is_video | Video post flag | false |
| created_at | Post creation time (UTC) | "2025-11-15T14:32:00Z" |
| scraped_at | When scraped | "2026-04-09T10:00:00Z" |
| data_type | Record type | "post" or "comment" |
How to scrape Reddit
Step 1 — Choose your input source
Option A: By Subreddits
Enter subreddit names (with or without r/ prefix). The scraper fetches posts sorted by your chosen method.
```json
{
  "subreddits": ["programming", "learnpython", "MachineLearning"],
  "sort": "top",
  "time": "week",
  "maxResults": 100
}
```
Option B: By Search Query
Search across all of Reddit for any keyword or phrase.
```json
{
  "searchQueries": ["AI news 2025", "best side hustle"],
  "sort": "relevance",
  "time": "month",
  "maxResults": 200
}
```
Option C: By Direct URL
Pass any Reddit URL and the scraper extracts data from it directly.
```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/datascience/top/" },
    { "url": "https://www.reddit.com/r/programming/search/?q=typescript" }
  ],
  "maxResults": 50
}
```
Step 2 — Optional: Enable comment scraping
Set `includeComments: true` to also pull the top comments for each post.
```json
{
  "subreddits": ["AskReddit"],
  "maxResults": 20,
  "includeComments": true,
  "maxCommentsPerPost": 50
}
```
Comment records are saved alongside posts in the same dataset, with data_type: "comment".
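Because posts and comments land in the same dataset, downstream code typically splits the items on `data_type`. A minimal sketch (the sample records below are illustrative and only show a couple of fields; real runs return the full field set from the table above):

```javascript
// Split a mixed Reddit dataset into posts and comments by data_type.
// Sample records are illustrative, not real scraper output.
const items = [
  { post_id: 't3_abc123', title: 'Best Python resources?', data_type: 'post' },
  { comment_id: 't1_def456', body_text: 'Start with the official tutorial.', data_type: 'comment' },
  { post_id: 't3_ghi789', title: 'Weekly thread', data_type: 'post' },
];

const posts = items.filter((item) => item.data_type === 'post');
const comments = items.filter((item) => item.data_type === 'comment');

console.log(`${posts.length} posts, ${comments.length} comments`); // → 2 posts, 1 comment
```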
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| subreddits | string[] | — | Subreddit names to scrape |
| searchQueries | string[] | — | Keywords to search on Reddit |
| startUrls | URL[] | — | Direct Reddit URLs to scrape |
| sort | string | hot | Sort by: hot, new, top, rising, controversial, relevance |
| time | string | all | Time filter: hour, day, week, month, year, all |
| maxResults | number | 50 | Maximum total posts to save |
| includeComments | boolean | false | Scrape top comments for each post |
| maxCommentsPerPost | number | 20 | Max comments per post |
| includeNSFW | boolean | false | Include NSFW posts |
| minScore | number | — | Minimum upvote score filter |
| maxScore | number | — | Maximum upvote score filter |
| minComments | number | — | Minimum comment count filter |
| flairFilter | string | — | Only include posts with this exact flair |
| authorFilter | string | — | Only include posts from this username |
| postDateLimit | string | — | Exclude posts older than this date (YYYY-MM-DD) |
| proxyConfiguration | object | Residential | Proxy settings |
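Several of the filters above can be combined in a single input. For example (all values here are illustrative):

```json
{
  "subreddits": ["startups"],
  "sort": "top",
  "time": "month",
  "maxResults": 100,
  "minScore": 50,
  "minComments": 10,
  "flairFilter": "Discussion",
  "includeNSFW": false
}
```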
Output Example
```json
{
  "post_id": "t3_1abc23",
  "title": "What's the best way to learn Python in 2025?",
  "author": "curious_dev",
  "subreddit": "learnprogramming",
  "url": "https://www.reddit.com/r/learnprogramming/comments/abc123/",
  "permalink": "/r/learnprogramming/comments/abc123/whats_the_best_way/",
  "body_text": "I've been coding JavaScript for 2 years and want to branch out...",
  "score": 482,
  "upvote_ratio": 0.97,
  "num_comments": 84,
  "flair": "Question",
  "domain": "self.learnprogramming",
  "external_url": null,
  "thumbnail_url": null,
  "is_nsfw": false,
  "is_video": false,
  "is_self": true,
  "created_at": "2025-11-15T14:32:00.000Z",
  "scraped_at": "2026-04-09T10:00:00.000Z",
  "source_url": "https://www.reddit.com/r/learnprogramming/comments/abc123/",
  "data_type": "post"
}
```
Use Cases
Market Research & Consumer Insights
Scrape product-related subreddits to understand what real customers say about your product or competitors. Reddit users are unusually candid — ideal for genuine sentiment analysis.
AI & NLP Training Data
Build large, diverse text datasets for fine-tuning LLMs or sentiment classifiers. Reddit's wide range of topics, writing styles, and community sizes makes it one of the best public text sources.
Brand Monitoring
Set up scheduled runs on keyword searches for your brand name, product, or competitors. Catch PR issues early or spot positive sentiment to amplify.
Content Strategy & Trend Discovery
Track which posts get the most upvotes in your niche each week. Use the `sort: top` + `time: week` combo to find what resonates with your target audience before creating content.
Lead Generation & Community Analysis
Find engaged community members in your niche. Use minScore to filter for only high-signal discussions.
How to run via API
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_API_TOKEN' });

const run = await client.actor('USERNAME/reddit-posts-scraper').call({
  subreddits: ['programming', 'learnpython'],
  sort: 'top',
  time: 'week',
  maxResults: 500,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Export scraped data, run the scraper via API, schedule and monitor runs, or integrate with other tools. Data is available in JSON, CSV, Excel, XML, and HTML formats.
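For exports, Apify's public API serves dataset items in each of those formats via a `format` query parameter. A small helper can build the export URL; the dataset ID below is a placeholder, and this is a sketch of the endpoint shape rather than a full client:

```javascript
// Build an Apify dataset export URL (public API v2 dataset-items route).
// 'MyDatasetId' is a placeholder; use run.defaultDatasetId from a real run.
function datasetExportUrl(datasetId, format = 'json') {
  const allowed = ['json', 'csv', 'xlsx', 'xml', 'html'];
  if (!allowed.includes(format)) {
    throw new Error(`Unsupported format: ${format}`);
  }
  return `https://api.apify.com/v2/datasets/${datasetId}/items?format=${format}`;
}

console.log(datasetExportUrl('MyDatasetId', 'csv'));
// → https://api.apify.com/v2/datasets/MyDatasetId/items?format=csv
```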
Pricing
This actor uses pay-per-event pricing — you only pay for what you actually scrape.
| Plan | Price per Post |
|---|---|
| Free | $0.002 |
| Bronze | $0.0018 |
| Silver | $0.0016 |
| Gold+ | $0.0015 |
Cost examples:
- 100 posts → ~$0.20
- 1,000 posts → ~$2.00
- 10,000 posts → ~$15–20
The Free Apify plan includes $5 in monthly credits — enough to scrape 2,000+ posts for free every month.
FAQ
Does this require a Reddit account or API key? No. This scraper uses Reddit's public JSON API endpoints which are accessible without authentication. No cookies, no login, no Reddit API key needed.
Why is the success rate higher than other Reddit scrapers? Most Reddit scrapers use Playwright (a browser) to render pages. This actor queries Reddit's own JSON API directly using lightweight HTTP requests, which is more reliable, faster, and harder to block than headless browser traffic.
Can I scrape private subreddits? No. This scraper only accesses publicly available Reddit data. Private or quarantined subreddits require authentication and are not supported.
Why do some posts show [deleted] as the author?
Reddit accounts that were deleted after posting will show [deleted]. This is Reddit's own value — the scraper preserves it accurately.
How do I scrape more than 1,000 posts from a subreddit?
Reddit limits browsing to ~1,000 posts per listing view. To collect more, use searchQueries with sort: new and postDateLimit to fetch posts in time-based windows. This breaks the 1,000-post cap.
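One way to sketch that windowing is to generate a series of run inputs that step `postDateLimit` backwards week by week. Since `postDateLimit` is only a lower bound, results from successive runs can overlap, so deduplicate on `post_id` when merging; the query and dates below are illustrative:

```javascript
// Generate weekly run inputs that walk backwards in time via postDateLimit.
// Overlapping results across runs should be deduplicated on post_id.
function weeklyRunInputs(query, startDate, weeks) {
  const inputs = [];
  const cursor = new Date(startDate);
  for (let i = 0; i < weeks; i++) {
    cursor.setUTCDate(cursor.getUTCDate() - 7);
    inputs.push({
      searchQueries: [query],
      sort: 'new',
      maxResults: 1000,
      postDateLimit: cursor.toISOString().slice(0, 10), // YYYY-MM-DD
    });
  }
  return inputs;
}

const runs = weeklyRunInputs('typescript', '2026-04-09', 4);
console.log(runs.map((r) => r.postDateLimit));
// → ['2026-04-02', '2026-03-26', '2026-03-19', '2026-03-12']
```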
Do proxies cost extra? Apify Residential proxies are included in your Apify subscription. The scraper uses them automatically when configured.
Legal Disclaimer
This actor is designed for lawful data collection from publicly available Reddit content. Users are solely responsible for ensuring compliance with Reddit's Terms of Service, applicable laws, data protection regulations (GDPR, CCPA), and any other legal requirements in their jurisdiction. Do not use this tool to collect data in violation of Reddit's Terms of Service or for any unlawful purpose. The actor developer assumes no liability for misuse.