Reddit Scraper | All-In-One | $12 / mo
Pricing: $11.99/month + usage
All-in-one Reddit Scraper. Scrape posts and full comment threads from any search, subreddit, user, or direct post URL. This enterprise-grade scraper is the fastest on the market and delivers clean, detailed JSON.
Rating: 5.0 (1)
Developer: Fatih Tahta
Actor stats: 4 bookmarked · 17 total users · 2 monthly active users · 1.2-day issues response time · last modified 18 days ago
Reddit Scraper Pro
Slug: fatihtahta/reddit-scraper
Price: $1.50 per 1,000 saved items (posts or comments)
The all-in-one Reddit data solution. Go beyond simple search: scrape posts, full comment threads with a configurable limit, subreddits, and user pages with a single tool. Whether you provide search queries or a list of direct URLs, this Actor delivers clean, structured JSON, ready for any type of analysis.
What Can This Reddit Scraper Do?
- Scrape Anything on Reddit: Provides two powerful modes:
  - Search Mode: Scrape search results for any query with advanced sorting and time filters.
  - URL Mode: Directly scrape one or more URLs, including subreddits, user pages, or individual posts.
- Deep Comment Scraping (Optional): A simple switch (`scrapeComments`) allows you to extract not just the post, but also up to a specified number of comments from the discussion tree.
- Include NSFW Content: A new option (`includeNsfw`) allows you to scrape content from posts tagged for adults (18+).
- Built-in Resiliency: Automatically retries failed requests with intelligent backoff, gracefully handling network errors and timeouts.
- Clean, Structured JSON: Outputs two distinct item types (`post` and `comment`) with clear schemas, perfect for market research, social listening, brand monitoring, or academic analysis.
- Fast and Efficient: Built with TypeScript and the latest Crawlee framework for high-performance, concurrent scraping.
What Input Does the Reddit Scraper Require?
- `queries` (array of strings, optional): A list of search terms to look up on Reddit. This input is ignored if `urls` is provided.
- `urls` (array of strings, optional): A list of specific Reddit URLs to scrape. This has priority over `queries`.
- `scrapeComments` (boolean, default: `false`): If `true`, the scraper will extract comments from posts.
- `maxComments` (integer, default: `100`): The maximum number of comments to save for each post. Only applies when `scrapeComments` is `true`.
- `maxPosts` (integer, default: `100`): A hard limit on the number of posts to save for each individual search term or URL. Does not include comments.
- `includeNsfw` (boolean, default: `false`): If `true`, includes NSFW (over 18) results.
- `sort` (string, default: `relevance`): Sort order for search results (`relevance`, `hot`, `top`, `new`, `comments`).
- `timeframe` (string, default: `all`): Time range for search results. Only applies when sorting by `top`, `relevance`, or `comments`.
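Before submitting a run, it can help to assemble the input as a plain dictionary and sanity-check it against the fields above. The field names and defaults come from this README; the `build_input` helper itself is purely illustrative and not part of the Actor.

```python
# Build a run input for the scraper and sanity-check it against the
# documented schema. Field names/defaults are from the README; this
# helper is an illustrative sketch, not part of the Actor.
ALLOWED_SORTS = {"relevance", "hot", "top", "new", "comments"}

def build_input(queries=None, urls=None, scrape_comments=False,
                max_comments=100, max_posts=100,
                include_nsfw=False, sort="relevance", timeframe="all"):
    if not queries and not urls:
        raise ValueError("Provide at least one of 'queries' or 'urls'.")
    if sort not in ALLOWED_SORTS:
        raise ValueError(f"Unknown sort order: {sort!r}")
    return {
        "queries": queries or [],
        "urls": urls or [],          # takes priority over queries
        "scrapeComments": scrape_comments,
        "maxComments": max_comments,
        "maxPosts": max_posts,
        "includeNsfw": include_nsfw,
        "sort": sort,
        "timeframe": timeframe,
    }

run_input = build_input(urls=["https://www.reddit.com/r/socialmedia/"],
                        scrape_comments=True, sort="hot", timeframe="year")
```

The resulting dictionary can be passed as the run input when starting the Actor from the Apify Console or an API client.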
Input and Output Examples
Example Input
This example scrapes the r/socialmedia subreddit URL with comment scraping enabled, sorted by hot over the past year. Because `urls` is provided, the `queries` list is ignored, and the default limits of 100 posts per URL and 100 comments per post apply.
{
  "includeNsfw": false,
  "queries": ["Cheesecake", "Swimming Pool"],
  "scrapeComments": true,
  "sort": "hot",
  "timeframe": "year",
  "urls": ["https://www.reddit.com/r/socialmedia/"]
}
Output Example
The dataset will contain two types of items, distinguished by the `kind` field. You can download the data in various formats such as JSON, HTML, CSV, or Excel.
For the post record (`kind: "post"`):
{
  "kind": "post",
  "query": "Cheesecake",
  "id": "1oiwt3p",
  "title": "My first cheesecake :)",
  "body": "Turned out a bit short but thats ok cause it tasted amazing. ",
  "author": "No_Opportunity_1502",
  "score": 27,
  "upvote_ratio": 0.97,
  "num_comments": 1,
  "subreddit": "Baking",
  "created_utc": "2025-10-29T05:59:38.000Z",
  "url": "https://www.reddit.com/r/Baking/comments/1oiwt3p/my_first_cheesecake/",
  "flair": "No-Recipe Provided",
  "over_18": false,
  "is_self": false,
  "spoiler": false,
  "locked": false,
  "is_video": false,
  "domain": "old.reddit.com",
  "thumbnail": "https://b.thumbs.redditmedia.com/oIOAf9jpp5jUSRjEljGBBvN4EOtH6dJo7sujoeG3Wug.jpg",
  "url_overridden_by_dest": "https://www.reddit.com/gallery/1oiwt3p",
  "media": null,
  "gallery_data": {
    "items": [
      {"media_id": "iniej0usqzxf1", "id": 782212827},
      {"media_id": "qi29mztsqzxf1", "id": 782212828},
      {"media_id": "fehlpdvsqzxf1", "id": 782212829}
    ]
  }
}
For the comment record (`kind: "comment"`):
{
  "kind": "comment",
  "query": "https://www.reddit.com/r/technology/...",
  "id": "k5z1x2y",
  "postId": "t3_1d95j4g",
  "parentId": "t3_1d95j4g",
  "body": "Great analysis, but I think you're underestimating the impact of quantum computing on these timelines.",
  "author": "future_thinker",
  "score": 142,
  "created_utc": "2025-08-05T19:15:22.000Z",
  "url": "https://www.reddit.com/r/technology/comments/1d95j4g/the_state_of_ai_in_2025_a_comprehensive_report/k5z1x2y/"
}
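Since post and comment records share one dataset, a common first step downstream is to split the items by their `kind` field. A minimal sketch, using abbreviated versions of the sample records above as stand-in data:

```python
# Split a mixed dataset of scraper output into posts and comments
# using the `kind` discriminator field described in the README.
# The `items` list below is abbreviated illustrative sample data.
from collections import defaultdict

items = [
    {"kind": "post", "id": "1oiwt3p", "score": 27, "num_comments": 1},
    {"kind": "comment", "id": "k5z1x2y", "postId": "t3_1d95j4g", "score": 142},
]

by_kind = defaultdict(list)
for item in items:
    by_kind[item["kind"]].append(item)

posts, comments = by_kind["post"], by_kind["comment"]
```

The same grouping works unchanged on a full dataset export downloaded as JSON.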
How Much Will It Cost to Scrape Reddit?
The Actor is priced at $1.50 per 1,000 saved items (posts or comments).
All infrastructure and residential proxy costs are bundled in. You only pay for successful results. This transparent pricing means you can easily estimate the cost of a run: scraping 10,000 posts and 25,000 comments would cost approximately (35,000 / 1,000) * $1.50 = $52.50.
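The arithmetic above can be wrapped in a one-line estimator; the rate of $1.50 per 1,000 saved items comes from the pricing section, and posts and comments are billed identically.

```python
# Estimate the cost of a run at the published rate of
# $1.50 per 1,000 saved items (posts and comments billed the same).
PRICE_PER_1000_ITEMS = 1.50

def estimate_cost(posts: int, comments: int) -> float:
    """Return the approximate cost in USD for a run saving the given counts."""
    return (posts + comments) / 1000 * PRICE_PER_1000_ITEMS

print(estimate_cost(10_000, 25_000))  # the README's example: 52.5
```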
Tips for an Efficient Scrape
Save Costs: To keep your costs down, only enable `scrapeComments` when you absolutely need the discussion data. Scraping only posts is much faster and cheaper.
Targeted Scraping: Use the URL Mode for scraping specific subreddits or posts to avoid unnecessary searches and get exactly the data you need.
Is It Legal to Scrape Reddit?
Our scrapers are ethical and do not extract any private user data. They only extract data that is publicly available on Reddit. We believe that our scrapers, when used for ethical purposes, are safe and legal. However, you should be aware that your results could contain personal data. You should not scrape personal data unless you have a legitimate reason to do so.
Support
Questions or custom needs? Open an issue on the Issues tab in Apify Console, and it will be addressed promptly.
Happy scraping! Fatih