Reddit Scraper
Scrape Reddit posts, comments, search results, and user profiles. Extract structured data from any subreddit with pagination, nested comments, and configurable depth. Export to JSON, CSV, or Excel.
What does Reddit Scraper do?
Reddit Scraper extracts structured data from Reddit — posts, comments, search results, and user profiles. Just paste any Reddit URL or enter a search query and get clean JSON, CSV, or Excel output. No Reddit account or API key needed.
It supports subreddit listings (hot, new, top, rising), individual posts with nested comments, user submission history, and full-text search across all of Reddit or within a specific subreddit.
Why use Reddit Scraper?
- 4x cheaper than the leading Reddit scraper on Apify ($1/1K posts vs $4/1K)
- Posts + comments in one actor — no need to run separate scrapers
- All input types — subreddits, posts, users, search queries, or just paste any Reddit URL
- Pure HTTP — no browser, low memory, fast execution
- Clean output — structured fields with consistent naming, not raw API dumps
- Pagination built in — scrape hundreds or thousands of posts automatically
- Pay only for results — pay-per-event pricing, no monthly subscription
What data can you extract?
Post fields:
| Field | Description |
|---|---|
| title | Post title |
| author | Reddit username |
| subreddit | Subreddit name |
| score | Net upvotes |
| upvoteRatio | Upvote percentage (0-1) |
| numComments | Comment count |
| createdAt | ISO 8601 timestamp |
| url | Full Reddit URL |
| selfText | Post body text |
| link | External link (for link posts) |
| domain | Link domain |
| isVideo, isSelf, isNSFW, isSpoiler | Content flags |
| linkFlairText | Post flair |
| totalAwards | Award count |
| subredditSubscribers | Subreddit size |
| imageUrls | Extracted image URLs |
| thumbnail | Thumbnail URL |
Comment fields:
| Field | Description |
|---|---|
| author | Commenter username |
| body | Comment text |
| score | Net upvotes |
| createdAt | ISO 8601 timestamp |
| depth | Nesting level (0 = top-level) |
| isSubmitter | Whether commenter is the post author |
| parentId | Parent comment/post ID |
| replies | Number of direct replies |
| postId | Parent post ID |
| postTitle | Parent post title |
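Because every comment record carries postId, parentId, and depth, threads can be reassembled from a flat result set after the run. Below is a minimal Python sketch, assuming you have already downloaded the dataset items (for example with the Python API client shown later on this page); the t1_/t3_ prefix handling follows Reddit's fullname convention visible in the sample comment output.

```python
from collections import defaultdict

def group_comment_threads(items):
    """Regroup scraped comments into per-post threads.

    Uses the fields from the table above (postId, parentId, depth) and
    Reddit's fullname prefixes (t3_ = post, t1_ = comment), as seen in the
    sample comment output further down this page.
    """
    by_post = defaultdict(lambda: {"top_level": [], "replies": defaultdict(list)})
    for item in items:
        if item.get("type") != "comment":
            continue
        thread = by_post[item["postId"]]
        parent = item.get("parentId", "")
        if parent.startswith("t3_"):
            # depth 0: a direct reply to the post itself
            thread["top_level"].append(item)
        else:
            # a reply to another comment; key replies by the parent comment's id
            thread["replies"][parent.removeprefix("t1_")].append(item)
    return by_post
```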
How much does it cost to scrape Reddit?
This Actor uses pay-per-event pricing — you pay only for what you scrape. No monthly subscription. All platform costs (compute, proxy, storage) are included.
| Event | Cost |
|---|---|
| Actor start | $0.003 per run |
| Per post | $0.001 |
| Per comment | $0.0005 |
That works out to $1.00 per 1,000 posts or $0.50 per 1,000 comments, plus the $0.003 start fee per run.
Real-world cost examples:
| Input | Results | Duration | Cost |
|---|---|---|---|
| 1 subreddit, 100 posts | 100 posts | ~15s | ~$0.10 |
| 5 subreddits, 50 posts each | 250 posts | ~30s | ~$0.25 |
| 1 post + 200 comments | 201 items | ~5s | ~$0.10 |
| Search "AI", 100 results | 100 posts | ~15s | ~$0.10 |
| 1 subreddit, 5 posts + 3 comments each | 20 items | ~12s | ~$0.02 |
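To budget a run before starting it, the per-event prices above can be combined into a quick estimate. A small Python sketch with the rates copied from the pricing table (actual billing is per event recorded by the platform, so treat this as an approximation):

```python
ACTOR_START_USD = 0.003    # charged once per run
PER_POST_USD = 0.001       # charged per scraped post
PER_COMMENT_USD = 0.0005   # charged per scraped comment

def estimate_cost(posts: int, comments: int = 0) -> float:
    """Rough pre-run cost estimate based on the pricing table above."""
    return ACTOR_START_USD + posts * PER_POST_USD + comments * PER_COMMENT_USD

print(f"${estimate_cost(250):.3f}")       # 5 subreddits x 50 posts  -> $0.253
print(f"${estimate_cost(1, 200):.3f}")    # 1 post + 200 comments    -> $0.104
```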
How to scrape Reddit posts
- Go to the Reddit Scraper input page
- Add Reddit URLs to the Reddit URLs field — any of these formats work:
  - https://www.reddit.com/r/technology/
  - https://www.reddit.com/r/AskReddit/comments/abc123/post-title/
  - https://www.reddit.com/user/spez/
  - r/technology or just technology
- Or enter a Search Query to search across Reddit
- Set Max Posts per Source to control how many posts to scrape
- Enable Include Comments if you also want comment data
- Click Start and wait for results
Example input:
{"urls": ["https://www.reddit.com/r/technology/"],"maxPostsPerSource": 100,"sort": "hot","includeComments": false}
Input parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| urls | string[] | — | Reddit URLs to scrape (subreddits, posts, users, search URLs) |
| searchQuery | string | — | Search Reddit for this query |
| searchSubreddit | string | — | Limit search to a specific subreddit |
| sort | enum | hot | Sort order: hot, new, top, rising, relevance |
| timeFilter | enum | week | Time filter for top/relevance: hour, day, week, month, year, all |
| maxPostsPerSource | integer | 100 | Max posts per subreddit/search/user. 0 = unlimited |
| includeComments | boolean | false | Also scrape comments for each post |
| maxCommentsPerPost | integer | 100 | Max comments per post |
| commentDepth | integer | 3 | Max reply nesting depth |
| maxRequestRetries | integer | 5 | Retry attempts for failed requests |
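To show how the search-related parameters fit together, here is an illustrative input built as a Python dict that could be passed as run_input to the API clients shown below (the same keys can be entered in the Actor's input form); the subreddit, query, and limits are arbitrary examples, not recommendations:

```python
# Search r/MachineLearning for "RAG", keep the most relevant posts from the
# past month, and pull up to 50 comments per post, nested at most 2 levels deep.
run_input = {
    "searchQuery": "RAG",
    "searchSubreddit": "MachineLearning",
    "sort": "relevance",
    "timeFilter": "month",
    "maxPostsPerSource": 200,
    "includeComments": True,
    "maxCommentsPerPost": 50,
    "commentDepth": 2,
}
```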
Output examples
Post:
{"type": "post","id": "1qw5kwf","title": "3 Teen Sisters Jump to Their Deaths from 9th Floor Apartment After Parents Remove Access to Phone","author": "Sandstorm400","subreddit": "technology","score": 18009,"upvoteRatio": 0.92,"numComments": 1363,"createdAt": "2026-02-05T00:04:58.000Z","url": "https://www.reddit.com/r/technology/comments/1qw5kwf/3_teen_sisters_jump_to_their_deaths_from_9th/","permalink": "/r/technology/comments/1qw5kwf/3_teen_sisters_jump_to_their_deaths_from_9th/","selfText": "","link": "https://people.com/3-sisters-jumping-deaths-online-gaming-addiction-11899069","domain": "people.com","isVideo": false,"isSelf": false,"isNSFW": false,"isSpoiler": false,"isStickied": false,"thumbnail": "https://external-preview.redd.it/...","linkFlairText": "Society","totalAwards": 0,"subredditSubscribers": 17101887,"imageUrls": [],"scrapedAt": "2026-02-05T12:33:50.000Z"}
Comment:
{"type": "comment","id": "m3abc12","postId": "1qw5kwf","postTitle": "3 Teen Sisters Jump to Their Deaths...","author": "commenter123","body": "This is heartbreaking. Phone addiction in teens is a serious issue.","score": 542,"createdAt": "2026-02-05T01:15:00.000Z","permalink": "/r/technology/comments/1qw5kwf/.../m3abc12","depth": 0,"isSubmitter": false,"parentId": "t3_1qw5kwf","replies": 12,"scrapedAt": "2026-02-05T12:33:52.000Z"}
Tips for best results
- Start small — test with 5-10 posts before running large scrapes
- Use sort + time filter — sort: "top" with timeFilter: "month" gets the most popular content
- Comments cost extra — only enable includeComments when you need them
- Multiple subreddits — add multiple URLs to scrape several subreddits in one run
- Search within subreddit — use searchSubreddit to limit search to a specific community
- Direct post URLs — paste a specific post URL to get that post + its comments
- Rate limits — Reddit allows ~1,000 requests/hour; large scrapes may take a few minutes
Integrations
Connect Reddit Scraper to other apps and services using Apify integrations:
- Google Sheets — automatically export Reddit data to a spreadsheet
- Slack / Discord — get notified when scraping finishes
- Zapier / Make — trigger workflows based on new Reddit data
- Webhooks — send results to your own API endpoint
- Schedule — run the scraper automatically on a daily or weekly basis
See Apify integrations for the full list.
Using the Apify API
Node.js:
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('aYMxR9AqRjxmgzcwB').call({
    urls: ['https://www.reddit.com/r/technology/'],
    maxPostsPerSource: 100,
    sort: 'hot',
    includeComments: false,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Python:
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('aYMxR9AqRjxmgzcwB').call(run_input={
    'urls': ['https://www.reddit.com/r/technology/'],
    'maxPostsPerSource': 100,
    'sort': 'hot',
    'includeComments': False,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items)
```
FAQ
Can I scrape any subreddit?
Yes, as long as the subreddit is public. Private subreddits will return a 403 error and be skipped.
Does it scrape NSFW content?
Yes, NSFW posts are included by default. You can filter them out using the isNSFW field in the output.
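For example, a short Python filter over downloaded items, assuming items holds the dataset rows as in the API examples above (the field name comes from the post fields table):

```python
# Keep only safe-for-work posts by dropping anything flagged isNSFW.
sfw_posts = [item for item in items if not item.get("isNSFW", False)]
```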
How many posts can I scrape?
There is no hard limit. Set maxPostsPerSource: 0 for unlimited. Reddit's pagination allows up to ~1,000 posts per listing. For more, use search with different time filters.
Can I scrape comments from multiple posts at once?
Yes. Enable includeComments and the scraper will fetch comments for every post it finds. Use maxCommentsPerPost to control how many comments per post.
What happens if Reddit rate-limits me?
The scraper automatically detects rate limits via response headers and waits before retrying. You don't need to configure anything.
Can I export to CSV or Excel?
Yes. Apify datasets support JSON, CSV, Excel, XML, and HTML export formats. Use the dataset export buttons or API.
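If you want the export programmatically rather than through the Console buttons, Apify's dataset items endpoint accepts a format parameter. A hedged Python sketch using that standard API route (replace the placeholder dataset ID and token with your own values):

```python
import requests

DATASET_ID = "YOUR_DATASET_ID"   # e.g. run["defaultDatasetId"] from the examples above
API_TOKEN = "YOUR_API_TOKEN"

# Download the run's dataset as CSV; other supported formats include
# xlsx, xml, html, and json.
response = requests.get(
    f"https://api.apify.com/v2/datasets/{DATASET_ID}/items",
    params={"format": "csv", "token": API_TOKEN},
    timeout=60,
)
response.raise_for_status()

with open("reddit_data.csv", "wb") as f:
    f.write(response.content)
```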
Other scrapers
- YouTube Scraper — scrape YouTube videos, channels, and comments
- Twitter Scraper — extract tweets and user profiles
- Instagram Scraper — scrape Instagram posts, profiles, and hashtags