
Reddit Scraper - MCP, AI, PPR
Reddit Scraper lets you extract posts, comments, users, and communities without API rate limits. With a flexible pay-per-result model, it’s perfect for one-time projects, targeted data pulls, research, brand tracking, and competitor analysis—only pay for what you use. Works with MCP and AI tools.
Reddit Scraper
Pay-As-You-Go Reddit Scraper for Apify
Reddit Scraper is a usage-based actor that captures Reddit posts, comments, subreddits, and user profiles on demand. Launch a run, pay a tiny start fee, and then only for each item saved—no API keys, no logins, and no monthly commitment. Built on the battle-tested Apify SDK, it pairs headless browsers with proxy rotation for fast, reliable crawling at any scale.
Key Features
- Usage-based billing – spend pennies instead of subscriptions.
- Unified crawler – harvest posts, comments, users, and community details in one run.
- Zero API tokens – gather public Reddit data without rate-limit headaches.
- Powerful filters – keyword, time range, sort order, and type selectors.
- Click-and-run interface – launch from the Apify UI; code optional.
- Scalable & fast – headless Chrome concurrency optimized for speed.
- Instant exports – JSON, CSV, Excel, XML, or HTML ready for dashboards.
- REST API access – pull results into any pipeline or automation.
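As a sketch of the REST API access, items stored by a finished run can be downloaded from its dataset endpoint. This is an illustration, not the full API: the dataset ID and token below are placeholders, and the helper name is my own.

```python
from urllib.parse import urlencode

APIFY_BASE = "https://api.apify.com/v2"

def dataset_items_url(dataset_id: str, token: str, fmt: str = "json") -> str:
    """Build the URL for downloading a run's stored items in a given export format."""
    query = urlencode({"token": token, "format": fmt})
    return f"{APIFY_BASE}/datasets/{dataset_id}/items?{query}"

# Fetch the items with any HTTP client, e.g.:
#   import urllib.request
#   data = urllib.request.urlopen(dataset_items_url("MY_DATASET_ID", "MY_TOKEN")).read()
url = dataset_items_url("MY_DATASET_ID", "MY_TOKEN", "csv")
```

The `format` query parameter selects the export format (`json`, `csv`, `xlsx`, `xml`, or `html`), matching the export options listed above.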
Support and Feature Requests
We strive to make Reddit Scraper the most comprehensive tool for your Reddit data extraction needs. However, if you find that something is missing or not working as expected:
- Report an issue: You can report any issue directly in the Run console. This helps us track and address problems efficiently.
- Email support: For more detailed inquiries or feature requests, email harshmaur@gmail.com.
You will receive a prompt response to any issue or request. Your feedback is invaluable in helping us implement new features quickly and keep Reddit Scraper the most up-to-date and efficient Reddit scraping tool available.
Use Cases
- Monitor Brand Mentions: Track discussions about your brand or product across Reddit.
- Conduct Sentiment Analysis: Use comment data for sentiment analysis to understand public opinion.
- Stay Ahead of Trends: Identify and analyze trending topics within various Reddit communities.
- Perform Competitor Analysis: Monitor competitor activity and discussions to gain strategic insights.
Pay-Per-Result Pricing
With Reddit Scraper, you pay only for what you run and store—no monthly subscription and no platform usage fees.
- Actor start: $0.02 per run
- Result stored: $0.002 each
Example: A run that stores 1,000 items costs $2.02 (one actor start + 1,000 × $0.002).
Forget fixed-price subscriptions—pay strictly for what you need.
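The arithmetic above can be sketched as a small helper. The rates are hardcoded from the figures on this page; check the actor's pricing tab for current values.

```python
def run_cost(items_stored: int, start_fee: float = 0.02, per_item: float = 0.002) -> float:
    """Estimated cost in USD for one run: a start fee plus a fee per stored item."""
    # Work in tenths of a cent to avoid floating-point drift.
    total_tenth_cents = round(start_fee * 1000) + items_stored * round(per_item * 1000)
    return total_tenth_cents / 1000

one_big_run = run_cost(1000)       # one run storing 1,000 items
two_small_runs = 2 * run_cost(500) # two runs storing 500 items each
```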
| Feature | Reddit Scraper (pay-per-result) | Reddit Scraper Pro (subscription) |
|---|---|---|
| Billing model | $0.02/run + $0.002/item | $20 per month + usage, unlimited items |
| Ideal for | Occasional or exploratory jobs, tight budgets | Continuous, large-scale scraping |
| Cost control | Pay exactly for usage | Fixed monthly fee |
| Same technology | ✅ | ✅ |
Why pay-per-result?
Pay-per-result is ideal when you scrape Reddit occasionally or need a quick snapshot. There's no idle expense: you're billed once when the actor starts and then $0.002 for every item stored. For example, two runs that save 500 items each cost just $0.02 × 2 + 1,000 × $0.002 = $2.04.
Need unlimited results with a predictable monthly fee? Check out Reddit Scraper Pro—same engine, flat subscription.
Works Nicely with n8n
Automate your Reddit data pipelines by integrating Reddit Scraper with n8n, a powerful workflow automation tool. This allows you to connect the scraped data with hundreds of other applications and services seamlessly.
Method 1: Synchronous Run (Recommended for quick scrapes)
This method is best for quick scrapes. It runs the scraper and returns the results in a single step.
- Get your Apify API token: You can find your API token on the Apify platform.
- Set up your n8n workflow:
  - Add an HTTP Request node.
  - Method: `POST`
  - URL: `https://api.apify.com/v2/acts/harshmaur~reddit-scraper/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN` (replace `YOUR_APIFY_TOKEN` with your token)
  - Body Content Type: `JSON`
  - Body: Add your scraper input here. For example:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/developers/" }],
  "searchSort": "new",
  "searchTime": "all",
  "maxPostsCount": 10,
  "maxCommentsCount": 10,
  "maxCommentsPerPost": 10,
  "maxCommunitiesCount": 2,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
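Outside n8n, the same synchronous endpoint can be called from any HTTP client. A minimal Python sketch follows; the token is a placeholder and the input fields mirror the example body above.

```python
import json
import urllib.request

ACTOR = "harshmaur~reddit-scraper"

def build_sync_request(token: str, run_input: dict) -> urllib.request.Request:
    """Prepare a POST to run-sync-get-dataset-items; the response body is the dataset."""
    url = (f"https://api.apify.com/v2/acts/{ACTOR}"
           f"/run-sync-get-dataset-items?token={token}")
    body = json.dumps(run_input).encode("utf-8")
    return urllib.request.Request(url, data=body,
                                  headers={"Content-Type": "application/json"})

req = build_sync_request("YOUR_APIFY_TOKEN", {
    "startUrls": [{"url": "https://www.reddit.com/r/developers/"}],
    "maxPostsCount": 10,
})
# items = json.load(urllib.request.urlopen(req))  # uncomment with a real token
```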
Method 2: Asynchronous Run (For Long Scrapes)
If you are scraping a large amount of data (e.g., an entire community), the run may take longer than the default 300-second timeout. In this case, use the asynchronous method which involves starting the run and then fetching the results separately.
For a detailed guide on handling asynchronous runs in n8n, watch this video:
Important Note on Timeouts
Scraping Reddit can be time-consuming. The synchronous API has a 300-second (5-minute) timeout. If your scraping task takes longer, the request will fail.
To handle this:
- Increase Timeout in n8n: In your HTTP Request node's settings, increase the timeout to a value that suits your needs (e.g., 600 seconds for a 10-minute timeout).
- Use Polling for Async Runs: When using the asynchronous method, use a Wait node in n8n to poll for the run's completion status before fetching the results. This is the most reliable way to handle very long-running jobs.
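The polling pattern above can be sketched as follows. The `get_status` callable stands in for a GET to the Apify run-status endpoint, and the terminal states are assumed from Apify's documented run-status values; in n8n, the Wait node plays the role of `time.sleep` and an IF node checks the returned status.

```python
import time
from typing import Callable

# States in which an Apify run has finished and results can be fetched.
TERMINAL = {"SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"}

def wait_for_run(get_status: Callable[[], str],
                 poll_seconds: float = 5.0,
                 max_polls: int = 120) -> str:
    """Poll the run status until it reaches a terminal state, then return it."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("run did not finish within the polling window")
```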
How to scrape Reddit?
Reddit Scraper doesn't require any coding skills to start using it.
- Sign up for a free Apify account.
- Visit the Reddit Scraper page.
- Enter the URLs or keywords for the subreddits, users, or posts you want to scrape.
- Click "Start" and let Reddit Scraper handle the rest.
- Download your results in your preferred format: JSON, CSV, Excel, XML, or HTML.
Input parameters
Reddit Scraper offers versatile input options to suit your needs:
- Direct URLs: Scrape specific data from any Reddit URL, be it a post, user, or subreddit.
- Keyword Search: Extract data based on keywords for posts, users, or communities with advanced search options like sorting and date.
- NSFW Filter: Toggle to include or exclude NSFW content from your results.
- Limits: Set the maximum number of items, posts, comments, communities, or users to scrape.
Scrape by URLs
- Subreddit: https://www.reddit.com/r/technology/
- User Profile: https://www.reddit.com/user/someusername
- Popular: https://www.reddit.com/r/popular/
- Search URLs: https://www.reddit.com/search/?q=example&type=sr
Scrape by search term
To scrape Reddit using search terms, follow these steps:
- In the "Search Term" field, enter your desired keywords or phrases.
- Configure the search options:
- "Get posts": Enable to search for posts (default: true)
- "Get comments": Enable to search for comments (default: false)
- "Get communities": Enable to search for communities (default: false)
- Set the "Sort search" option to determine how results are ordered:
- Options: Relevance, Hot, Top, New, Comments (default: New)
- For post searches, specify the "Retrieve From" time range:
- Options: All time, Last hour, Last day, Last week, Last month, Last year (default: All time)
- Adjust the "Include NSFW content" setting if needed (default: false)
- Set limits for the number of results:
- "Maximum number of posts to be saved" (default: 10)
- "Limit of comments to be saved" (default: 10)
- "Limit of comments per post" (default: 10)
- "Limit of Communities to be saved" (default: 2)
Example search configuration:
- Search Term: ["cryptocurrency", "blockchain"]
- Get posts: true
- Get comments: true
- Get communities: false
- Sort search: Hot
- Retrieve From: Last month
- Include NSFW content: false
- Maximum number of posts: 50
- Limit of comments: 100
- Limit of comments per post: 20
This setup will search for cryptocurrency and blockchain-related content, focusing on hot posts and comments from the last month, excluding NSFW content, and limiting the results to 50 posts with up to 100 total comments (max 20 per post).
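Based on the field names used in the input examples below, that configuration would correspond roughly to this actor input (exact field names are assumptions; check the Input Schema tab):

```json
{
  "searchTerms": ["cryptocurrency", "blockchain"],
  "searchPosts": true,
  "searchComments": true,
  "searchCommunities": false,
  "searchSort": "hot",
  "searchTime": "month",
  "includeNSFW": false,
  "maxPostsCount": 50,
  "maxCommentsCount": 100,
  "maxCommentsPerPost": 20
}
```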
To see the full list of parameters, their default values, and how to set your own, head over to the Input Schema tab.
Input Examples
Here are some input examples for different use cases, based on the input schema. Default values are included where applicable:
- Scraping posts from a subreddit:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/technology/" }],
  "crawlCommentsPerPost": false,
  "maxPostsCount": 10,
  "maxCommentsPerPost": 10,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```

- Searching for posts on a specific topic:

```json
{
  "searchTerms": ["artificial intelligence"],
  "searchPosts": true,
  "searchComments": false,
  "searchCommunities": false,
  "searchSort": "hot",
  "searchTime": "week",
  "maxPostsCount": 50,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```

- Scraping comments from a specific post:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/AskReddit/comments/example_post_id/example_post_title/" }],
  "crawlCommentsPerPost": true,
  "maxCommentsPerPost": 100,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```

- Extracting community information:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/AskScience/" }],
  "maxPostsCount": 0,
  "maxCommentsCount": 0,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```

- Scraping user posts and comments:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/user/example_username" }],
  "maxPostsCount": 20,
  "maxCommentsCount": 50,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```

- Searching for comments across Reddit:

```json
{
  "searchTerms": ["climate change"],
  "searchPosts": false,
  "searchComments": true,
  "searchCommunities": false,
  "searchSort": "top",
  "searchTime": "month",
  "maxCommentsCount": 100,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```

- Scraping popular posts from multiple subreddits:

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/news/" },
    { "url": "https://www.reddit.com/r/worldnews/" },
    { "url": "https://www.reddit.com/r/politics/" }
  ],
  "maxPostsCount": 10,
  "crawlCommentsPerPost": true,
  "maxCommentsPerPost": 5,
  "includeNSFW": false,
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }
}
```
These examples demonstrate various configurations for different use cases of the Reddit Scraper, adhering to the provided input schema and including default values where applicable.
Limiting results
If you need to limit the scope of your run, you can set the maximum number of posts to scrape per community or user, and a limit on the number of comments per post. You can also cap the total number of comments and communities saved using the following parameters:

```json
{
  "maxPostsCount": 10,
  "maxCommentsPerPost": 5,
  "maxCommunitiesCount": 2,
  "maxCommentsCount": 100,
  "maxItems": 1000
}
```
You can also set `maxItems` to prevent a very long run of the Actor. This parameter stops the scraper once it reaches the number of results you've indicated, which is useful for testing.
FAQ
Is Reddit scraping legal?
While scraping publicly available data from Reddit is generally allowed, it's important to comply with Reddit's terms of service and respect the site's usage policies. Use the scraper responsibly, avoid excessive requests, and ensure that the scraped data is used in compliance with applicable laws and regulations. You can read more about compliance with ToS in our blog post.
Do I need to use cookies for accessing logged-in content when scraping Reddit?
No, it is not required. Reddit keeps its data publicly accessible and does not require users to log in.
Do you need proxies for scraping Reddit?
Yes. Please use Apify's residential proxies for Reddit scraping.