Reddit Scraper - Neatrat
High-speed Reddit scraping. No API limits. No proxy needed. Pay only for results.
Get fresh Reddit posts, comments, subreddits, user pages, popular feeds, leaderboards, and search results as clean structured JSON, without touching the Reddit API, without managing proxies, and without paying for compute time you didn't use.
Built for developers, marketers, researchers, data teams, and AI agents who just want Reddit data to show up, correct and on time.
Why Neatrat's Reddit Scraper?
- High-speed. Runs finish in seconds, not minutes. Tuned to be the fastest Reddit scraper on Apify.
- No API limits. Skip Reddit's rate limits, quotas, and app-registration paperwork.
- No proxy needed. Residential proxies are included. Nothing to buy, configure, or rotate.
- Pay only for results. Flat $3 per 1,000 stored items. No per-hour compute surprises. Blocked responses are never billed.
- Clean input. Paste a Reddit link or a keyword. Done. No twenty toggles to learn.
- Auto URL detection. Posts, comments, subreddits, users, searches, r/popular, and leaderboards are all handled from one list.
- AI-agent friendly. The same engine is exposed as an MCP (Model Context Protocol) server, so agents in Claude Desktop, Cursor, and VS Code can call Reddit scraping as a native tool.
- Lean runtime. 512 MB is all it needs, which keeps Apify compute charges minimal on every plan.
What you can scrape
| You give us | You get |
|---|---|
| A post URL | Full post with title, body, metadata, and expandable comment threads |
| A comment permalink | The comment with its ancestor context and replies |
| A subreddit URL | Post listings, with pagination and sort control |
| A user URL | Profile, submitted posts, or comment history |
| A search URL | Search results across Reddit or scoped to one subreddit |
| r/popular or subreddits/leaderboard | Trending posts or trending communities |
| Keywords | Keyword search across posts, communities, and users |
Everything comes back as structured JSON, one item at a time, straight to the Apify dataset. Stream it into Make, Zapier, n8n, Google Sheets, a webhook, or your own pipeline.
Pricing
$3 per 1,000 stored items. That's the whole price.
- Residential proxies included.
- Live Reddit fetches included.
- Dataset delivery included.
- No hidden compute upcharge.
- Blocked responses are never billed - they're skipped and counted in the run summary.
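Since billing is a flat per-result rate, estimating a run's cost is simple arithmetic. A minimal sketch (the `estimate_cost` helper is illustrative, not part of the actor):

```python
def estimate_cost(stored_items: int, price_per_1000: float = 3.00) -> float:
    """Estimate run cost in USD at the flat per-result rate.

    Blocked responses are skipped and never billed, so only count
    items that actually land in the dataset.
    """
    return round(stored_items / 1000 * price_per_1000, 2)

# 10,000 stored items -> $30.00; 250 items -> $0.75
```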
Free trial for Apify users
Not on a paid plan yet? Kick the tires for free:
- 5 lifetime runs
- 500 lifetime stored results
When you hit the limit the actor exits cleanly and points you to a paid plan. No credit-card surprises, no partial charges.
How to use it
You can combine direct URLs and keyword searches in the same run.
Option 1 - Drop in Reddit URLs
Just paste the URLs. The actor figures out the rest.

    {
        "startUrls": [
            { "url": "https://www.reddit.com/r/programming/" },
            { "url": "https://www.reddit.com/r/programming/comments/173viwj/" },
            { "url": "https://www.reddit.com/user/spez" },
            { "url": "https://www.reddit.com/r/popular/" }
        ]
    }
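To illustrate the kind of routing the auto URL detection performs, here is a rough classifier for the URL shapes the actor accepts. This is an illustrative sketch, not the actor's actual detection code:

```python
import re

def classify_reddit_url(url: str) -> str:
    """Roughly classify a Reddit URL into the route types the actor
    supports. Illustrative only -- not the actor's real logic."""
    path = re.sub(r"^https?://(www\.)?reddit\.com", "", url).rstrip("/")
    if re.search(r"/comments/[^/]+/[^/]+/[^/]+$", path):
        return "comment-permalink"   # deep permalink to one comment
    if "/comments/" in path:
        return "post"                # a single post
    if path.startswith("/user/") or path.startswith("/u/"):
        return "user"                # profile / submitted / comments
    if path == "/r/popular":
        return "popular"             # trending posts feed
    if path.startswith("/subreddits/leaderboard"):
        return "leaderboard"         # trending communities
    if "/search" in path:
        return "search"              # Reddit search results
    if path.startswith("/r/"):
        return "subreddit"           # a subreddit listing
    return "unknown"
```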
Option 2 - Search by keyword

    {
        "searchTerms": ["typescript", "bun runtime"],
        "searchTypes": ["posts", "communities"],
        "withinSubreddit": "programming",
        "searchSort": "new",
        "timeFilter": "week"
    }
`searchTypes` accepts any combination of `"posts"`, `"communities"`, and `"users"`.
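A typo in searchTypes or searchSort wastes a run, so it can be worth validating the input before submitting it. A minimal sketch of composing a keyword-search input (the helper is hypothetical; the field names and allowed values come from this actor's input reference):

```python
VALID_SEARCH_TYPES = {"posts", "communities", "users"}
VALID_SORTS = {"relevance", "new", "comments", "top"}

def build_search_input(terms, types=("posts",), within=None,
                       sort="new", time_filter="week"):
    """Compose a keyword-search input and catch typos before a run
    is wasted on them. Field names match the actor's input schema."""
    bad = set(types) - VALID_SEARCH_TYPES
    if bad:
        raise ValueError(f"unknown searchTypes: {sorted(bad)}")
    if sort not in VALID_SORTS:
        raise ValueError(f"unknown searchSort: {sort}")
    payload = {
        "searchTerms": list(terms),
        "searchTypes": list(types),
        "searchSort": sort,
        "timeFilter": time_filter,
    }
    if within:
        payload["withinSubreddit"] = within  # omit to search all of Reddit
    return payload
```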
Option 3 - Full-depth post crawl
Turn any post listing into a deep scrape that follows every post into its full comment thread:

    {
        "startUrls": [{ "url": "https://www.reddit.com/r/generativeAI/" }],
        "crawlComments": true,
        "maxPosts": 10,
        "maxCommentsPerPost": 50
    }
Input reference
| Field | Type | Default | What it does |
|---|---|---|---|
| `startUrls` | `{ url }[]` | - | Reddit URLs to scrape: posts, comments, subreddits, users, searches, popular, leaderboard. |
| `searchTerms` | `string[]` | - | Keywords to search for. |
| `searchTypes` | `("posts" \| "communities" \| "users")[]` | `["posts"]` | Which surfaces each keyword hits. |
| `withinSubreddit` | `string` | `null` | Restrict keyword post search to one subreddit (e.g. `programming`). |
| `searchSort` | `"relevance" \| "new" \| "comments" \| "top"` | `"new"` | Sort order for keyword post search. |
| `timeFilter` | `"all" \| "hour" \| "day" \| "week" \| "month" \| "year"` | `"all"` | Time window for searches and top/controversial sorts. |
| `postSort` | `"hot" \| "new" \| "top" \| "rising" \| "controversial"` | `"hot"` | Sort for subreddit listings when the URL doesn't specify one. |
| `crawlComments` | `boolean` | `false` | Treat post listings as discovery and fetch full comments for every post. |
| `pages` | `integer` | `1` | Listing pages to follow. For posts, the number of extra comment-expansion rounds to run. |
| `includeNsfw` | `boolean` | `true` | When `false`, NSFW items are filtered out before billing. |
| `maxItems` | `integer` | `100` | Total dataset cap for the whole run. |
| `maxPosts` | `integer` | `25` | Cap per post-style listing. |
| `maxComments` | `integer` | `100` | Global cap on nested comments stored across all full-post fetches. |
| `maxCommentsPerPost` | `integer` | `20` | Per-post cap on nested comments. |
| `maxCommunities` | `integer` | `10` | Cap for community search and leaderboard. |
| `maxUsers` | `integer` | `25` | Cap for user search. |
| `requestTimeoutSecs` | `integer` | `45` | Per-request timeout in seconds. |
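The defaults in the table above determine the effective input when fields are omitted. A minimal sketch of computing the effective run configuration locally (the `with_defaults` helper is hypothetical; the defaults mirror the table):

```python
# Documented defaults from the input reference table.
DEFAULTS = {
    "searchTypes": ["posts"],
    "searchSort": "new",
    "timeFilter": "all",
    "postSort": "hot",
    "crawlComments": False,
    "pages": 1,
    "includeNsfw": True,
    "maxItems": 100,
    "maxPosts": 25,
    "maxComments": 100,
    "maxCommentsPerPost": 20,
    "maxCommunities": 10,
    "maxUsers": 25,
    "requestTimeoutSecs": 45,
}

def with_defaults(user_input: dict) -> dict:
    """Return the effective run input: documented defaults overlaid
    with whatever the caller supplied."""
    return {**DEFAULTS, **user_input}
```

Useful when you want to log or diff the full configuration a run will actually use, not just the fields you set.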
Example inputs
Single subreddit feed

    {
        "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }],
        "maxPosts": 25,
        "maxItems": 25
    }
One post with deep comment expansion

    {
        "startUrls": [{ "url": "https://www.reddit.com/r/programming/comments/173viwj/" }],
        "pages": 3,
        "maxCommentsPerPost": 200,
        "maxItems": 1
    }
Keyword search scoped to one subreddit

    {
        "searchTerms": ["typescript"],
        "searchTypes": ["posts"],
        "withinSubreddit": "programming",
        "searchSort": "new",
        "timeFilter": "week",
        "maxPosts": 50,
        "maxItems": 50
    }
Discovery plus full-post crawl

    {
        "startUrls": [{ "url": "https://www.reddit.com/r/generativeAI/" }],
        "crawlComments": true,
        "maxPosts": 10,
        "maxCommentsPerPost": 50,
        "maxItems": 10
    }
Mixed run

    {
        "startUrls": [
            { "url": "https://www.reddit.com/r/popular/" },
            { "url": "https://www.reddit.com/user/spez" }
        ],
        "searchTerms": ["apify", "neatrat"],
        "searchTypes": ["posts", "communities"],
        "maxPosts": 15,
        "maxCommunities": 5,
        "maxItems": 120
    }
Output shape
Every dataset item carries a dataType and a sourceType so downstream pipelines can filter cleanly even when one run mixes post results, community previews, and user search hits.
Typical dataType values:
- `post`: full post with comments
- `comment-permalink`: a comment with ancestor context
- `communityDetails`: subreddit about-box
- `userProfile`: user about-box
- `postPreview`: one item from a post listing
- `commentPreview`: one item from a user comment listing
- `communityPreview`: one item from community search or leaderboard
- `userPreview`: one item from user search
Listing routes are flattened (one dataset item per preview). Single-resource routes store one item. When crawlComments is on, raw previews are dropped so you only pay for the full-comment post objects.
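Because a mixed run can interleave these item types, a downstream consumer typically buckets items by their `dataType` tag first. A minimal sketch (the helper and sample items are illustrative; the `dataType` field and its values come from the list above):

```python
from collections import defaultdict

def split_by_data_type(items):
    """Bucket mixed dataset items by their dataType tag so a pipeline
    can handle posts, previews, and profiles separately."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item.get("dataType", "unknown")].append(item)
    return dict(buckets)

# Hypothetical sample items shaped like the actor's dataset output:
sample = [
    {"dataType": "post", "title": "Show HN clone for Reddit"},
    {"dataType": "postPreview", "title": "Weekly thread"},
    {"dataType": "communityPreview", "name": "r/programming"},
    {"dataType": "post", "title": "Bun vs Node benchmarks"},
]
```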
Good use cases
- Marketing: track brand mentions, watch competitors, monitor product subreddits, find influencers.
- Research: pull public discussions about a topic, sample comment sentiment, build datasets.
- Analytics: snapshot subreddit activity over time, feed BI dashboards.
- AI / LLM teams: build RAG corpora from niche subreddits, keep LLM context fresh, ground agents with live Reddit signal.
- Community & growth: spot trending threads in your niche, catch support questions fast.
For AI agents and MCP users
This scraper also ships as an MCP (Model Context Protocol) server, so AI agents in Claude Desktop, Cursor, VS Code, and other MCP-capable clients can call Reddit scraping as a native tool.
If you're building an agent and want it wired in as an MCP tool, reach out through the contact channel on the Apify store page and we'll share the MCP endpoint.
For everyone else, this Apify actor is the simplest way to turn Reddit into clean structured data without thinking about APIs, proxies, or rate limits.
Support
Questions, feature requests, or custom use cases? Reach out through the Apify store page and we'll get back fast.