Reddit Scraper Pro — Posts, Comments, Subreddits, No API Key

Under maintenance
Reddit scraper via public JSON — posts + comments, no login. 20 fields per post (title, author, subreddit, score, ratio, date, URL, text, flair, awards, NSFW, domain, etc.). CSV/JSON. Polite 2-5s delay. For research, brand tracking, AI/RAG. spinov001@gmail.com · t.me/scraping_ai

Pricing

Pay per usage

Rating: 0.0 (0 reviews)

Developer: Alex (Maintained by Community)

Actor stats: 0 bookmarks · 6 total users · 2 monthly active users · last modified 4 hours ago

Reddit Scraper Pro — API-Based, Never Breaks on Redesigns

The most reliable Reddit scraper on Apify. Uses Reddit's native JSON API instead of HTML parsing, so it doesn't break when Reddit updates its UI.

Why This Scraper?

Most Reddit scrapers rely on HTML/CSS selectors that break every time Reddit changes its design. This scraper uses Reddit's public JSON endpoint (/r/subreddit.json) — the same data format Reddit's own apps use. This means:

  • Never breaks on redesigns — JSON API is separate from the UI
  • Complete data — 20+ fields per post, full comment trees
  • Structured output — clean JSON, no HTML parsing artifacts
  • No login required — public data, no credentials needed
  • Built-in rate limiting — respects Reddit's API limits, won't get you banned
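
The JSON-endpoint approach described above can be sketched in a few lines. The listing shape (`"kind": "Listing"`, posts wrapped as `{"kind": "t3", "data": {...}}`) is Reddit's public listing format; the helper name and sample payload are illustrative, not part of the actor's code.

```python
def extract_posts(listing):
    """Pull post dicts out of a Reddit JSON listing payload.

    Reddit's public listing format wraps each post as
    {"kind": "t3", "data": {...}}; the cursor for the next page
    lives at listing["data"]["after"].
    """
    children = listing.get("data", {}).get("children", [])
    return [child["data"] for child in children if child.get("kind") == "t3"]

# Example payload in the shape returned by /r/<subreddit>.json
sample = {
    "kind": "Listing",
    "data": {
        "after": "t3_xyz",
        "children": [
            {"kind": "t3", "data": {"id": "1b2c3d4", "title": "Hello", "score": 847}},
        ],
    },
}

posts = extract_posts(sample)
```

Because the payload is structured JSON rather than rendered HTML, a UI redesign leaves this parsing untouched.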

Features

  • 20+ data fields per post — title, author, score, upvote ratio, comment count, flair, awards, URL, self text, link URL, domain, NSFW flag, stickied status, and more
  • Full comment threads — nested comments with author, score, depth level, and creation date
  • Multiple subreddits — scrape r/programming, r/datascience, r/Entrepreneur in one run
  • Cross-Reddit search — find posts by keyword across all of Reddit
  • Flexible sorting — hot, new, top, rising with time filters (hour/day/week/month/year)
  • Automatic pagination — follows Reddit's cursor-based pagination for up to 500 posts per source
  • Proxy support — uses Apify Proxy (residential) for reliable access

Output Data (20+ fields)

{
  "id": "1b2c3d4",
  "title": "What tools do you use for market research?",
  "author": "startup_founder",
  "subreddit": "Entrepreneur",
  "score": 847,
  "upvoteRatio": 0.94,
  "numComments": 234,
  "createdUtc": "2026-03-17T15:30:00.000Z",
  "url": "https://reddit.com/r/Entrepreneur/comments/...",
  "selfText": "I've been looking for affordable tools...",
  "linkUrl": "https://example.com/article",
  "flair": "Discussion",
  "awards": 3,
  "isNSFW": false,
  "isStickied": false,
  "domain": "self.Entrepreneur",
  "thumbnail": "https://...",
  "comments": [
    {
      "id": "abc123",
      "author": "data_analyst",
      "body": "I use a combination of...",
      "score": 156,
      "createdUtc": "2026-03-17T16:00:00.000Z",
      "depth": 0
    }
  ]
}
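
Records in this shape flatten cleanly to CSV. A minimal sketch (the field list follows the example record above, not a guaranteed schema; nested `comments` are skipped):

```python
import csv
import io

def posts_to_csv(posts, fields):
    """Serialize a list of post dicts to CSV, keeping only `fields`.

    extrasaction="ignore" drops keys not in `fields` (e.g. the
    nested "comments" list, which doesn't fit a flat row).
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for post in posts:
        writer.writerow(post)
    return buf.getvalue()

fields = ["id", "title", "author", "subreddit", "score", "numComments"]
example = [{
    "id": "1b2c3d4", "title": "What tools do you use for market research?",
    "author": "startup_founder", "subreddit": "Entrepreneur",
    "score": 847, "numComments": 234, "comments": [],
}]
csv_text = posts_to_csv(example, fields)
```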

Use Cases

  • Market research — discover what people say about your product, brand, or industry
  • Sentiment analysis — collect posts and comments for NLP models
  • AI training data — build datasets from Reddit discussions for LLM fine-tuning
  • Trend monitoring — track emerging topics and viral content in real-time
  • Competitive intelligence — monitor competitor mentions and complaints
  • Content research — find top questions and topics your audience cares about
  • Lead generation — identify users asking for your type of product/service
  • Academic research — gather social media data for papers and studies

Input Parameters

Parameter           Type     Default  Description
------------------  -------  -------  --------------------------------------------------
subreddits          Array    []       Subreddit names (e.g., ["technology", "startups"])
searchQueries       Array    []       Search terms across all of Reddit
maxPostsPerSource   Number   50       Max posts per subreddit/query (1-500)
includeComments     Boolean  true     Extract comment threads
maxCommentsPerPost  Number   20       Max comments per post
sortBy              String   "hot"    Sort: hot, new, top, rising
timeFilter          String   "week"   Time filter: hour, day, week, month, year
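
One way to read the table above: a run input is the defaults merged with your overrides, with maxPostsPerSource clamped to 1-500. A sketch, assuming the defaults listed in the table (the actor's actual input validation may differ):

```python
# Defaults as documented in the input-parameters table.
DEFAULTS = {
    "subreddits": [],
    "searchQueries": [],
    "maxPostsPerSource": 50,
    "includeComments": True,
    "maxCommentsPerPost": 20,
    "sortBy": "hot",
    "timeFilter": "week",
}

def build_input(**overrides):
    """Merge user overrides onto the documented defaults."""
    merged = {**DEFAULTS, **overrides}
    # The table documents a 1-500 range for maxPostsPerSource.
    merged["maxPostsPerSource"] = max(1, min(500, merged["maxPostsPerSource"]))
    return merged

run_input = build_input(subreddits=["startups", "SaaS"], sortBy="top")
```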

Technical Details

  • Method: Reddit JSON API (/r/subreddit.json, /search.json)
  • Proxy: Apify residential proxy for reliable access
  • Rate limiting: Built-in delays between requests (2-3 seconds)
  • Pagination: Cursor-based (Reddit's after parameter)
  • Error handling: Graceful handling of 403/429 errors with retry logic
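
The retry-on-403/429 behavior described above can be sketched like this. The HTTP callable is injected (e.g. `requests.get`) so the logic is testable without network access; the function name and backoff constants are illustrative, not the actor's actual code.

```python
import random
import time

def fetch_with_retry(fetch, url, max_retries=3, sleep=time.sleep):
    """Retry on 403/429 with a polite randomized delay.

    `fetch` is any callable returning an object with a
    `status_code` attribute (e.g. requests.get).
    """
    for attempt in range(max_retries + 1):
        response = fetch(url)
        if response.status_code not in (403, 429):
            return response
        if attempt < max_retries:
            # Polite 2-3 s base delay, doubled on each retry.
            sleep((2 + random.random()) * (2 ** attempt))
    return response
```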

Cost Estimation

  • ~$0.50 per 100 posts without comments
  • ~$1.00 per 100 posts with full comment threads
  • Free tier available with Apify free plan

Important: Proxy Requirements

Reddit blocks most datacenter IP addresses. For reliable operation on Apify Cloud:

  • Apify Proxy (Residential) — recommended, works reliably
  • Apify Proxy (Datacenter) — may get 403 errors on some subreddits
  • No proxy (local) — works fine when running locally

If you're on the Apify Free plan (datacenter-only proxy), some subreddits may return empty results. Upgrading to a paid plan with residential proxy access solves this.

Step-by-Step Tutorial

1. Open Reddit Scraper on Apify

Go to Reddit Scraper Pro and click "Try for free."

2. Enter Subreddits

Add subreddit names without the r/ prefix:

{
  "subreddits": ["startups", "SaaS", "Entrepreneur"],
  "maxPostsPerSource": 50,
  "sortBy": "top",
  "timeFilter": "month"
}

3. Or Search Across All of Reddit

{
  "searchQueries": ["best CRM tools", "project management software"],
  "maxPostsPerSource": 25
}

4. Run and Download

Results are available as JSON, CSV, or Excel. Each post includes 20+ fields.

Integration Examples

Python

import requests

response = requests.get(
    "https://api.apify.com/v2/acts/knotless_cadence~reddit-scraper-pro/runs/last/dataset/items",
    params={"token": "YOUR_TOKEN"},
)
posts = response.json()
for post in posts:
    print(f"[{post['score']}] {post['title']}")

JavaScript

const response = await fetch(
  `https://api.apify.com/v2/acts/knotless_cadence~reddit-scraper-pro/runs/last/dataset/items?token=YOUR_TOKEN`
);
const posts = await response.json();
posts.forEach(p => console.log(`[${p.score}] ${p.title}`));

n8n Workflow

  1. HTTP Request node → POST https://api.apify.com/v2/acts/knotless_cadence~reddit-scraper-pro/runs
  2. Wait node → 30 seconds
  3. HTTP Request node → GET …/runs/last/dataset/items
  4. Send to Slack/Sheets/CRM
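
The same three steps map onto a plain script. A sketch with the HTTP callables injected for testability (the fixed 30 s wait stands in for the Wait node; YOUR_TOKEN is a placeholder):

```python
ACTOR = "knotless_cadence~reddit-scraper-pro"
BASE = f"https://api.apify.com/v2/acts/{ACTOR}"

def run_and_fetch(post, get, wait, token="YOUR_TOKEN"):
    """Start a run, wait for it, then read the default dataset.

    `post`/`get` are HTTP callables (e.g. requests.post /
    requests.get); `wait` plays the role of the n8n Wait node.
    """
    post(f"{BASE}/runs", params={"token": token})   # step 1: start run
    wait(30)                                        # step 2: wait 30 s
    response = get(f"{BASE}/runs/last/dataset/items",
                   params={"token": token})         # step 3: fetch items
    return response.json()                          # step 4: hand off
```

For production use, polling the run status until it reaches SUCCEEDED is more robust than a fixed wait.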

Pricing

Volume                   Estimated Cost
-----------------------  --------------
100 posts (no comments)  ~$0.50
100 posts + comments     ~$1.00
500 posts + comments     ~$3.00

Free tier available on Apify's free plan (with datacenter proxy limitations).

FAQ

Q: Why JSON API instead of HTML scraping? A: HTML scrapers break every time Reddit updates their design. The JSON API returns structured data in a format that hasn't changed in years. It's the same API Reddit's mobile app uses.

Q: Can I scrape private subreddits? A: No — only publicly accessible subreddits. This scraper uses public endpoints.

Q: Does it need my Reddit credentials? A: No. All data is fetched from public JSON endpoints.

Q: How many posts can I get per run? A: Up to 500 posts per subreddit with pagination. Multiple subreddits can be scraped in one run.
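
The cursor-based pagination behind that 500-post limit can be sketched as a loop over Reddit's `after` cursor. The page fetcher is injected so the flow is testable; the function name is illustrative.

```python
def paginate(fetch_page, max_posts=500):
    """Follow Reddit's cursor-based pagination up to `max_posts`.

    `fetch_page(after)` must return a listing payload in Reddit's
    public format; iteration stops when `max_posts` is reached or
    the `after` cursor comes back empty.
    """
    after, collected = None, []
    while len(collected) < max_posts:
        listing = fetch_page(after)
        children = listing["data"]["children"]
        if not children:
            break
        collected.extend(child["data"] for child in children)
        after = listing["data"]["after"]
        if not after:
            break
    return collected[:max_posts]
```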


Need a custom Reddit variant?

Common asks I've delivered for paying clients:

  • Sentiment dashboard — daily sentiment scoring on a list of subreddits, fed into Looker/Metabase
  • Keyword alerts — webhook fires the moment a brand/term appears in target subreddits
  • Competitor tracking — pull all mentions of competitor names, summarize weekly
  • Comment-thread expansion — recursive comment graph for any post, exportable as edge list
  • Cross-source merge — Reddit + HackerNews + Bluesky into one normalized feed for one query

Typical turnaround 2–4 days, scoped before any payment. Email spinov001@gmail.com.

More from me


Related tools by knotless_cadence on Apify: