Reddit Scraper

Scrape Reddit posts, comments, search results, and user profiles. Extract structured data from any subreddit with pagination, nested comments, and configurable depth. Export to JSON, CSV, or Excel.

Pricing: Pay per event
Rating: 4.3 (2 reviews)
Developer: Stas Persiianenko (Maintained by Community)

Actor stats

  • Bookmarked: 1
  • Total users: 203
  • Monthly active users: 103
  • Last modified: 2 days ago

What does Reddit Scraper do?

Reddit Scraper extracts structured data from Reddit — posts, comments, search results, and user profiles. Just paste any Reddit URL or enter a search query and get clean JSON, CSV, or Excel output. No Reddit account or API key needed.

It supports subreddit listings (hot, new, top, rising), individual posts with nested comments, user submission history, and full-text search across all of Reddit or within a specific subreddit.

Why use Reddit Scraper?

  • 4x cheaper than the leading Reddit scraper on Apify ($1/1K posts vs $4/1K)
  • Posts + comments in one actor — no need to run separate scrapers
  • All input types — subreddits, posts, users, search queries, or just paste any Reddit URL
  • Pure HTTP — no browser, low memory, fast execution
  • Clean output — structured fields with consistent naming, not raw API dumps
  • Pagination built in — scrape hundreds or thousands of posts automatically
  • Pay only for results — pay-per-event pricing, no monthly subscription

What data can you extract?

Post fields:

Field                                 Description
title                                 Post title
author                                Reddit username
subreddit                             Subreddit name
score                                 Net upvotes
upvoteRatio                           Upvote percentage (0-1)
numComments                           Comment count
createdAt                             ISO 8601 timestamp
url                                   Full Reddit URL
selfText                              Post body text
link                                  External link (for link posts)
domain                                Link domain
isVideo, isSelf, isNSFW, isSpoiler    Content flags
linkFlairText                         Post flair
totalAwards                           Award count
subredditSubscribers                  Subreddit size
imageUrls                             Extracted image URLs
thumbnail                             Thumbnail URL

Comment fields:

Field          Description
author         Commenter username
body           Comment text
score          Net upvotes
createdAt      ISO 8601 timestamp
depth          Nesting level (0 = top-level)
isSubmitter    Whether commenter is the post author
parentId       Parent comment/post ID
replies        Number of direct replies
postId         Parent post ID
postTitle      Parent post title
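
Because comments come back as flat records, a thread tree has to be reassembled client-side from id and parentId. Here is a minimal sketch in Python; it assumes Reddit's usual ID prefixes (t1_ for a parent comment, t3_ for the post itself), and the sample records are hypothetical:

```python
def build_thread(comments):
    """Group flat comment records into a nested tree using parentId.

    parentId "t3_<postId>" marks a top-level comment; "t1_<commentId>"
    marks a reply to another comment in the same batch.
    """
    by_id = {c["id"]: {**c, "children": []} for c in comments}
    roots = []
    for node in by_id.values():
        parent_id = node["parentId"]
        if parent_id.startswith("t1_") and parent_id[3:] in by_id:
            by_id[parent_id[3:]]["children"].append(node)
        else:  # t3_ prefix: attached directly to the post
            roots.append(node)
    return roots

# Hypothetical records shaped like the comment output below
comments = [
    {"id": "aaa", "parentId": "t3_1qw5kwf", "body": "top-level", "depth": 0},
    {"id": "bbb", "parentId": "t1_aaa", "body": "a reply", "depth": 1},
]
thread = build_thread(comments)
print(thread[0]["body"], "->", thread[0]["children"][0]["body"])  # → top-level -> a reply
```

Replies deeper than commentDepth are simply absent from the output, so the tree only ever nests as far as that setting allows.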

How much does it cost to scrape Reddit?

This Actor uses pay-per-event pricing — you pay only for what you scrape. No monthly subscription. All platform costs (compute, proxy, storage) are included.

Event          Cost
Actor start    $0.003 per run
Per post       $0.001
Per comment    $0.0005

That's $1.00 per 1,000 posts or $0.50 per 1,000 comments.
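
These per-event rates make costs easy to estimate before you run anything. A small sketch (the function name is ours; the rates are taken from the table above):

```python
ACTOR_START = 0.003   # per run
PER_POST = 0.001
PER_COMMENT = 0.0005

def estimate_cost(posts, comments=0, runs=1):
    """Estimate USD cost from the pay-per-event rates above."""
    return runs * ACTOR_START + posts * PER_POST + comments * PER_COMMENT

# 1,000 posts in a single run:
print(f"${estimate_cost(1000):.2f}")  # → $1.00 (the $0.003 start fee rounds away)
```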

Real-world cost examples:

Input                                     Results      Duration    Cost
1 subreddit, 100 posts                    100 posts    ~15s        ~$0.10
5 subreddits, 50 posts each               250 posts    ~30s        ~$0.25
1 post + 200 comments                     201 items    ~5s         ~$0.10
Search "AI", 100 results                  100 posts    ~15s        ~$0.10
1 subreddit, 5 posts + 3 comments each    20 items     ~12s        ~$0.02

How to scrape Reddit posts

  1. Go to the Reddit Scraper input page
  2. Add Reddit URLs to the Reddit URLs field — any of these formats work:
    • https://www.reddit.com/r/technology/
    • https://www.reddit.com/r/AskReddit/comments/abc123/post-title/
    • https://www.reddit.com/user/spez/
    • r/technology or just technology
  3. Or enter a Search Query to search across Reddit
  4. Set Max Posts per Source to control how many posts to scrape
  5. Enable Include Comments if you also want comment data
  6. Click Start and wait for results

Example input:

{
  "urls": ["https://www.reddit.com/r/technology/"],
  "maxPostsPerSource": 100,
  "sort": "hot",
  "includeComments": false
}

Input parameters

Parameter             Type        Default    Description
urls                  string[]    (none)     Reddit URLs to scrape (subreddits, posts, users, search URLs)
searchQuery           string      (none)     Search Reddit for this query
searchSubreddit       string      (none)     Limit search to a specific subreddit
sort                  enum        hot        Sort order: hot, new, top, rising, relevance
timeFilter            enum        week       Time filter for top/relevance: hour, day, week, month, year, all
maxPostsPerSource     integer     100        Max posts per subreddit/search/user. 0 = unlimited
includeComments       boolean     false      Also scrape comments for each post
maxCommentsPerPost    integer     100        Max comments per post
commentDepth          integer     3          Max reply nesting depth
maxRequestRetries     integer     5          Retry attempts for failed requests
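
Combining the search parameters, an input that collects the past month's top posts matching a keyword in one community, with a capped number of comments per post, might look like this (all values are illustrative):

```json
{
  "searchQuery": "AI coding",
  "searchSubreddit": "programming",
  "sort": "top",
  "timeFilter": "month",
  "maxPostsPerSource": 50,
  "includeComments": true,
  "maxCommentsPerPost": 25
}
```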

Output examples

Post:

{
  "type": "post",
  "id": "1qw5kwf",
  "title": "3 Teen Sisters Jump to Their Deaths from 9th Floor Apartment After Parents Remove Access to Phone",
  "author": "Sandstorm400",
  "subreddit": "technology",
  "score": 18009,
  "upvoteRatio": 0.92,
  "numComments": 1363,
  "createdAt": "2026-02-05T00:04:58.000Z",
  "url": "https://www.reddit.com/r/technology/comments/1qw5kwf/3_teen_sisters_jump_to_their_deaths_from_9th/",
  "permalink": "/r/technology/comments/1qw5kwf/3_teen_sisters_jump_to_their_deaths_from_9th/",
  "selfText": "",
  "link": "https://people.com/3-sisters-jumping-deaths-online-gaming-addiction-11899069",
  "domain": "people.com",
  "isVideo": false,
  "isSelf": false,
  "isNSFW": false,
  "isSpoiler": false,
  "isStickied": false,
  "thumbnail": "https://external-preview.redd.it/...",
  "linkFlairText": "Society",
  "totalAwards": 0,
  "subredditSubscribers": 17101887,
  "imageUrls": [],
  "scrapedAt": "2026-02-05T12:33:50.000Z"
}

Comment:

{
  "type": "comment",
  "id": "m3abc12",
  "postId": "1qw5kwf",
  "postTitle": "3 Teen Sisters Jump to Their Deaths...",
  "author": "commenter123",
  "body": "This is heartbreaking. Phone addiction in teens is a serious issue.",
  "score": 542,
  "createdAt": "2026-02-05T01:15:00.000Z",
  "permalink": "/r/technology/comments/1qw5kwf/.../m3abc12",
  "depth": 0,
  "isSubmitter": false,
  "parentId": "t3_1qw5kwf",
  "replies": 12,
  "scrapedAt": "2026-02-05T12:33:52.000Z"
}

Tips for best results

  • Start small — test with 5-10 posts before running large scrapes
  • Use sort + time filter — sort: "top" with timeFilter: "month" gets the most popular content
  • Comments cost extra — only enable includeComments when you need them
  • Multiple subreddits — add multiple URLs to scrape several subreddits in one run
  • Search within subreddit — use searchSubreddit to limit search to a specific community
  • Direct post URLs — paste a specific post URL to get that post + its comments
  • Rate limits — Reddit allows ~1,000 requests/hour; large scrapes may take a few minutes
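
Several of these tips pay off again at post-processing time. Here is a short sketch of filtering downloaded items using the type, score, and isNSFW fields from the output schema; the sample records are hypothetical:

```python
def top_clean_posts(items, min_score=100):
    """Keep safe-for-work posts above a score threshold, highest first."""
    posts = [i for i in items if i.get("type") == "post"]
    keep = [p for p in posts if p["score"] >= min_score and not p["isNSFW"]]
    return sorted(keep, key=lambda p: p["score"], reverse=True)

items = [
    {"type": "post", "title": "A", "score": 18009, "isNSFW": False},
    {"type": "post", "title": "B", "score": 42, "isNSFW": False},
    {"type": "post", "title": "C", "score": 5000, "isNSFW": True},
    {"type": "comment", "body": "not a post"},
]
print([p["title"] for p in top_clean_posts(items)])  # → ['A']
```

Filtering on type first matters when includeComments is enabled, since posts and comments land in the same dataset.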

Integrations

Connect Reddit Scraper to other apps and services using Apify integrations:

  • Google Sheets — automatically export Reddit posts and comments to a spreadsheet for tracking trends or building content calendars
  • Slack / Discord — get notifications when scraping finishes, or set up alerts for posts matching specific keywords
  • Zapier / Make — trigger workflows based on new Reddit data, e.g., save high-engagement posts to a CRM or send weekly reports
  • Webhooks — send results to your own API endpoint for custom processing pipelines
  • Scheduled runs — run the scraper daily or weekly to monitor subreddits for new discussions
  • Data warehouses — pipe data to BigQuery, Snowflake, or PostgreSQL for large-scale analysis
  • AI/LLM pipelines — feed Reddit discussions into sentiment analysis, topic modeling, or lead qualification workflows

Using the Apify API

Node.js:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/reddit-scraper').call({
    urls: ['https://www.reddit.com/r/technology/'],
    maxPostsPerSource: 100,
    sort: 'hot',
    includeComments: false,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);

Python:

from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/reddit-scraper').call(run_input={
    'urls': ['https://www.reddit.com/r/technology/'],
    'maxPostsPerSource': 100,
    'sort': 'hot',
    'includeComments': False,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items)

cURL:

curl "https://api.apify.com/v2/acts/automation-lab~reddit-scraper/runs" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{"urls":["https://www.reddit.com/r/technology/"],"maxPostsPerSource":100,"sort":"hot"}'

Use with AI agents via MCP

Reddit Scraper is available as a tool for AI assistants that support the Model Context Protocol (MCP). This lets you use natural language to scrape data — just ask your AI assistant and it will configure and run the scraper for you.

Setup for Claude Code

claude mcp add --transport http apify "https://mcp.apify.com"

Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}

Your AI assistant will use OAuth to authenticate with your Apify account on first use.

Example prompts

Once connected, try asking your AI assistant:

  • "Get the top 100 posts from r/technology this month"
  • "Scrape comments from this Reddit thread"
  • "Search Reddit for discussions about 'AI coding'"

Learn more in the Apify MCP documentation.

FAQ

Can I scrape any subreddit? Yes, as long as the subreddit is public. Private subreddits will return a 403 error and be skipped.

Does it scrape NSFW content? Yes, NSFW posts are included by default. You can filter them out using the isNSFW field in the output.

How many posts can I scrape? There is no hard limit. Set maxPostsPerSource: 0 for unlimited. Reddit's pagination allows up to ~1,000 posts per listing. For more, use search with different time filters.
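
When stitching together results from several time-filtered runs as suggested above, the windows will overlap, so expect duplicates; deduplicating on the id field is enough. A minimal sketch with hypothetical records:

```python
def merge_runs(*runs):
    """Merge post lists from multiple runs, keeping the first copy of each id."""
    seen = set()
    merged = []
    for run in runs:
        for post in run:
            if post["id"] not in seen:
                seen.add(post["id"])
                merged.append(post)
    return merged

month = [{"id": "1qw5kwf"}, {"id": "abc123"}]
year = [{"id": "abc123"}, {"id": "zzz999"}]
print(len(merge_runs(month, year)))  # → 3
```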

Can I scrape comments from multiple posts at once? Yes. Enable includeComments and the scraper will fetch comments for every post it finds. Use maxCommentsPerPost to control how many comments per post.

What happens if Reddit rate-limits me? The scraper automatically detects rate limits via response headers and waits before retrying. You don't need to configure anything.

Can I export to CSV or Excel? Yes. Apify datasets support JSON, CSV, Excel, XML, and HTML export formats. Use the dataset export buttons or API.
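
If you would rather convert locally than use the export buttons, the JSON items flatten to CSV directly; a sketch using Python's standard csv module (the field list and sample record are illustrative):

```python
import csv
import io

def items_to_csv(items, fields):
    """Write the selected fields of dataset items to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

items = [{"title": "Example post", "score": 42, "subreddit": "technology", "imageUrls": []}]
print(items_to_csv(items, ["title", "score", "subreddit"]))
```

extrasaction="ignore" drops fields you did not ask for, which is handy given how many columns a post record carries.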

The scraper returns fewer posts than I expected — what's going on? Reddit's pagination API has a limit of approximately 1,000 posts per listing. If you need more, use search with different time filters (e.g., timeFilter: "month" then timeFilter: "year") to access older content. Also note that some subreddits simply have fewer posts than your limit.

I'm getting 403 errors for a subreddit — how do I fix this? This means the subreddit is private, quarantined, or banned. The scraper can only access public subreddits. Check if you can view the subreddit in an incognito browser window — if not, the scraper won't be able to access it either.

Other social media scrapers