Reddit Scraper
Scrape Reddit posts, comments, search results, and user profiles. Extract structured data from any subreddit with pagination, nested comments, and configurable depth. Export to JSON, CSV, or Excel.
Pricing: pay per event · Rating: 4.3 (2 reviews) · Developer: Stas Persiianenko · 1 bookmark · 203 total users · 103 monthly active users · Last modified: 2 days ago
What does Reddit Scraper do?
Reddit Scraper extracts structured data from Reddit — posts, comments, search results, and user profiles. Just paste any Reddit URL or enter a search query and get clean JSON, CSV, or Excel output. No Reddit account or API key needed.
It supports subreddit listings (hot, new, top, rising), individual posts with nested comments, user submission history, and full-text search across all of Reddit or within a specific subreddit.
Why use Reddit Scraper?
- 4x cheaper than the leading Reddit scraper on Apify ($1/1K posts vs $4/1K)
- Posts + comments in one Actor — no need to run separate scrapers
- All input types — subreddits, posts, users, search queries, or just paste any Reddit URL
- Pure HTTP — no browser, low memory, fast execution
- Clean output — structured fields with consistent naming, not raw API dumps
- Pagination built in — scrape hundreds or thousands of posts automatically
- Pay only for results — pay-per-event pricing, no monthly subscription
What data can you extract?
Post fields:
| Field | Description |
|---|---|
| title | Post title |
| author | Reddit username |
| subreddit | Subreddit name |
| score | Net upvotes |
| upvoteRatio | Upvote percentage (0-1) |
| numComments | Comment count |
| createdAt | ISO 8601 timestamp |
| url | Full Reddit URL |
| selfText | Post body text |
| link | External link (for link posts) |
| domain | Link domain |
| isVideo, isSelf, isNSFW, isSpoiler | Content flags |
| linkFlairText | Post flair |
| totalAwards | Award count |
| subredditSubscribers | Subreddit size |
| imageUrls | Extracted image URLs |
| thumbnail | Thumbnail URL |
Comment fields:
| Field | Description |
|---|---|
| author | Commenter username |
| body | Comment text |
| score | Net upvotes |
| createdAt | ISO 8601 timestamp |
| depth | Nesting level (0 = top-level) |
| isSubmitter | Whether commenter is the post author |
| parentId | Parent comment/post ID |
| replies | Number of direct replies |
| postId | Parent post ID |
| postTitle | Parent post title |
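The depth field can be used to reconstruct the visual nesting of a thread. A minimal sketch (the sample comments are made up; field names match the table above):

```python
# Indent each comment by its depth to rebuild the thread layout.
comments = [
    {"body": "Top-level comment", "depth": 0},
    {"body": "A reply", "depth": 1},
    {"body": "A reply to the reply", "depth": 2},
]

lines = ["  " * c["depth"] + c["body"] for c in comments]
print("\n".join(lines))
```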
How much does it cost to scrape Reddit?
This Actor uses pay-per-event pricing — you pay only for what you scrape. No monthly subscription. All platform costs (compute, proxy, storage) are included.
| Event | Cost |
|---|---|
| Actor start | $0.003 per run |
| Per post | $0.001 |
| Per comment | $0.0005 |
That's $1.00 per 1,000 posts or $0.50 per 1,000 comments.
Real-world cost examples:
| Input | Results | Duration | Cost |
|---|---|---|---|
| 1 subreddit, 100 posts | 100 posts | ~15s | ~$0.10 |
| 5 subreddits, 50 posts each | 250 posts | ~30s | ~$0.25 |
| 1 post + 200 comments | 201 items | ~5s | ~$0.10 |
| Search "AI", 100 results | 100 posts | ~15s | ~$0.10 |
| 1 subreddit, 5 posts + 3 comments each | 20 items | ~12s | ~$0.02 |
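The totals above follow directly from the event prices. A small sketch of the arithmetic (constants copied from the pricing table):

```python
# Event prices from the pricing table above.
ACTOR_START = 0.003   # $ per run
PER_POST = 0.001      # $ per post
PER_COMMENT = 0.0005  # $ per comment

def estimate_cost(posts, comments=0):
    """Estimated cost in dollars for a single run."""
    return ACTOR_START + posts * PER_POST + comments * PER_COMMENT

print(f"${estimate_cost(100):.3f}")     # 1 subreddit, 100 posts
print(f"${estimate_cost(1, 200):.3f}")  # 1 post + 200 comments
```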
How to scrape Reddit posts
- Go to the Reddit Scraper input page
- Add Reddit URLs to the Reddit URLs field — any of these formats work:
  - https://www.reddit.com/r/technology/
  - https://www.reddit.com/r/AskReddit/comments/abc123/post-title/
  - https://www.reddit.com/user/spez/
  - r/technology or just technology
- Or enter a Search Query to search across Reddit
- Set Max Posts per Source to control how many posts to scrape
- Enable Include Comments if you also want comment data
- Click Start and wait for results
Example input:
```json
{
  "urls": ["https://www.reddit.com/r/technology/"],
  "maxPostsPerSource": 100,
  "sort": "hot",
  "includeComments": false
}
```
Input parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| urls | string[] | — | Reddit URLs to scrape (subreddits, posts, users, search URLs) |
| searchQuery | string | — | Search Reddit for this query |
| searchSubreddit | string | — | Limit search to a specific subreddit |
| sort | enum | hot | Sort order: hot, new, top, rising, relevance |
| timeFilter | enum | week | Time filter for top/relevance: hour, day, week, month, year, all |
| maxPostsPerSource | integer | 100 | Max posts per subreddit/search/user. 0 = unlimited |
| includeComments | boolean | false | Also scrape comments for each post |
| maxCommentsPerPost | integer | 100 | Max comments per post |
| commentDepth | integer | 3 | Max reply nesting depth |
| maxRequestRetries | integer | 5 | Retry attempts for failed requests |
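For instance, a search run that combines several of these parameters might use an input like this (values illustrative):

```json
{
  "searchQuery": "open source",
  "searchSubreddit": "programming",
  "sort": "top",
  "timeFilter": "month",
  "maxPostsPerSource": 50,
  "includeComments": true,
  "maxCommentsPerPost": 25,
  "commentDepth": 2
}
```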
Output examples
Post:
```json
{
  "type": "post",
  "id": "1qw5kwf",
  "title": "3 Teen Sisters Jump to Their Deaths from 9th Floor Apartment After Parents Remove Access to Phone",
  "author": "Sandstorm400",
  "subreddit": "technology",
  "score": 18009,
  "upvoteRatio": 0.92,
  "numComments": 1363,
  "createdAt": "2026-02-05T00:04:58.000Z",
  "url": "https://www.reddit.com/r/technology/comments/1qw5kwf/3_teen_sisters_jump_to_their_deaths_from_9th/",
  "permalink": "/r/technology/comments/1qw5kwf/3_teen_sisters_jump_to_their_deaths_from_9th/",
  "selfText": "",
  "link": "https://people.com/3-sisters-jumping-deaths-online-gaming-addiction-11899069",
  "domain": "people.com",
  "isVideo": false,
  "isSelf": false,
  "isNSFW": false,
  "isSpoiler": false,
  "isStickied": false,
  "thumbnail": "https://external-preview.redd.it/...",
  "linkFlairText": "Society",
  "totalAwards": 0,
  "subredditSubscribers": 17101887,
  "imageUrls": [],
  "scrapedAt": "2026-02-05T12:33:50.000Z"
}
```
Comment:
```json
{
  "type": "comment",
  "id": "m3abc12",
  "postId": "1qw5kwf",
  "postTitle": "3 Teen Sisters Jump to Their Deaths...",
  "author": "commenter123",
  "body": "This is heartbreaking. Phone addiction in teens is a serious issue.",
  "score": 542,
  "createdAt": "2026-02-05T01:15:00.000Z",
  "permalink": "/r/technology/comments/1qw5kwf/.../m3abc12",
  "depth": 0,
  "isSubmitter": false,
  "parentId": "t3_1qw5kwf",
  "replies": 12,
  "scrapedAt": "2026-02-05T12:33:52.000Z"
}
```
Tips for best results
- Start small — test with 5-10 posts before running large scrapes
- Use sort + time filter — `sort: "top"` with `timeFilter: "month"` gets the most popular content
- Comments cost extra — only enable `includeComments` when you need them
- Multiple subreddits — add multiple URLs to scrape several subreddits in one run
- Search within a subreddit — use `searchSubreddit` to limit search to a specific community
- Direct post URLs — paste a specific post URL to get that post plus its comments
- Rate limits — Reddit allows ~1,000 requests/hour; large scrapes may take a few minutes
Integrations
Connect Reddit Scraper to other apps and services using Apify integrations:
- Google Sheets — automatically export Reddit posts and comments to a spreadsheet for tracking trends or building content calendars
- Slack / Discord — get notifications when scraping finishes, or set up alerts for posts matching specific keywords
- Zapier / Make — trigger workflows based on new Reddit data, e.g., save high-engagement posts to a CRM or send weekly reports
- Webhooks — send results to your own API endpoint for custom processing pipelines
- Scheduled runs — run the scraper daily or weekly to monitor subreddits for new discussions
- Data warehouses — pipe data to BigQuery, Snowflake, or PostgreSQL for large-scale analysis
- AI/LLM pipelines — feed Reddit discussions into sentiment analysis, topic modeling, or lead qualification workflows
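As a rough sketch of a downstream step such as a warehouse load (field names come from the output schema above; the sample items and grouping logic are illustrative), comments can be joined back to their parent posts before export:

```python
from collections import defaultdict

# Group scraped comments under their parent posts via the postId field.
items = [
    {"type": "post", "id": "1qw5kwf", "title": "Example post"},
    {"type": "comment", "id": "m3abc12", "postId": "1qw5kwf", "body": "First!"},
]

comments_by_post = defaultdict(list)
for item in items:
    if item["type"] == "comment":
        comments_by_post[item["postId"]].append(item)

# One record per post, with its comments nested inside.
threads = [
    {**post, "comments": comments_by_post[post["id"]]}
    for post in items if post["type"] == "post"
]
print(len(threads[0]["comments"]))  # 1
```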
Using the Apify API
Node.js:
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/reddit-scraper').call({
    urls: ['https://www.reddit.com/r/technology/'],
    maxPostsPerSource: 100,
    sort: 'hot',
    includeComments: false,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Python:
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/reddit-scraper').call(run_input={
    'urls': ['https://www.reddit.com/r/technology/'],
    'maxPostsPerSource': 100,
    'sort': 'hot',
    'includeComments': False,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items)
```
cURL:
```bash
curl "https://api.apify.com/v2/acts/automation-lab~reddit-scraper/runs" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{"urls":["https://www.reddit.com/r/technology/"],"maxPostsPerSource":100,"sort":"hot"}'
```
Use with AI agents via MCP
Reddit Scraper is available as a tool for AI assistants that support the Model Context Protocol (MCP). This lets you use natural language to scrape data — just ask your AI assistant and it will configure and run the scraper for you.
Setup for Claude Code
```shell
claude mcp add --transport http apify "https://mcp.apify.com"
```
Setup for Claude Desktop, Cursor, or VS Code
Add this to your MCP config file:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}
```
Your AI assistant will use OAuth to authenticate with your Apify account on first use.
Example prompts
Once connected, try asking your AI assistant:
- "Get the top 100 posts from r/technology this month"
- "Scrape comments from this Reddit thread"
- "Search Reddit for discussions about 'AI coding'"
Learn more in the Apify MCP documentation.
FAQ
Can I scrape any subreddit?
Yes, as long as the subreddit is public. Private subreddits will return a 403 error and be skipped.
Does it scrape NSFW content?
Yes, NSFW posts are included by default. You can filter them out using the isNSFW field in the output.
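A minimal filtering sketch (sample items are made up; the isNSFW field comes from the output schema above):

```python
# Keep only non-NSFW posts from the exported items.
items = [
    {"title": "Safe post", "isNSFW": False},
    {"title": "Adult post", "isNSFW": True},
]

safe = [item for item in items if not item.get("isNSFW")]
print([item["title"] for item in safe])  # ['Safe post']
```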
How many posts can I scrape?
There is no hard limit. Set maxPostsPerSource: 0 for unlimited. Reddit's pagination allows up to ~1,000 posts per listing. For more, use search with different time filters.
Can I scrape comments from multiple posts at once?
Yes. Enable includeComments and the scraper will fetch comments for every post it finds. Use maxCommentsPerPost to control how many comments per post.
What happens if Reddit rate-limits me?
The scraper automatically detects rate limits via response headers and waits before retrying. You don't need to configure anything.
Can I export to CSV or Excel?
Yes. Apify datasets support JSON, CSV, Excel, XML, and HTML export formats. Use the dataset export buttons or API.
The scraper returns fewer posts than I expected — what's going on?
Reddit's pagination API has a limit of approximately 1,000 posts per listing. If you need more, use search with different time filters (e.g., timeFilter: "month" then timeFilter: "year") to access older content. Also note that some subreddits simply have fewer posts than your limit.
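One way to sketch that workaround (parameter names come from the input table; merging and deduplicating results by post id is left to the caller):

```python
# Build one search input per time window to reach past the ~1,000-post listing cap.
TIME_FILTERS = ["week", "month", "year", "all"]

def build_inputs(query):
    """Return one run input per time window; run each and dedupe by post id."""
    return [
        {"searchQuery": query, "sort": "top", "timeFilter": tf, "maxPostsPerSource": 1000}
        for tf in TIME_FILTERS
    ]

print(len(build_inputs("AI coding")))  # 4
```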
I'm getting 403 errors for a subreddit — how do I fix this?
This means the subreddit is private, quarantined, or banned. The scraper can only access public subreddits. Check if you can view the subreddit in an incognito browser window — if not, the scraper won't be able to access it either.
Other social media scrapers
- Twitter/X Scraper — extract tweets and user profiles
- Instagram Scraper — scrape Instagram posts, profiles, and hashtags
- TikTok Scraper — extract TikTok videos, profiles, and comments
- YouTube Scraper — scrape YouTube videos, channels, and comments
- Threads Scraper — extract Threads posts and profiles
- Bluesky Scraper — scrape Bluesky posts and profiles
- Telegram Scraper — extract Telegram channel messages