Reddit Scraper Pro

Reddit Scraper Pro is a powerful, unlimited-scraping plan ($20/month + usage) for extracting data from Reddit. Scrape posts, users, comments, and communities with advanced search capabilities. Perfect for brand monitoring, trend tracking, and competitor research. Supports Make and n8n integrations.

Pricing: $20.00/month + usage
Rating: 3.8 (6 reviews)
Developer: Harsh Maur (Maintained by Community)

Actor stats:

  • 67 bookmarked
  • 1.3K total users
  • 105 monthly active users
  • 9.8 hours issues response time
  • Last modified 5 days ago


Reddit Scraper — Extract Reddit Data Without API Limits

Scrape Reddit posts, comments, users & communities — no API keys required

Try on Apify · Input Schema · API Docs

What it does • Why use it • How to use • Output • Pricing • FAQ


What does Reddit Scraper do?

Reddit Scraper extracts posts, comments, user profiles, and community data from Reddit without needing API keys or authentication. Simply provide URLs or search terms, and get structured data in JSON, CSV, or Excel format.

✅ No API keys needed

Bypass Reddit's 600 requests/10min API limit

⚡ Fast Mode (default)

Up to 70% faster extraction for large datasets

🎯 Community-specific search

Search within specific subreddits using withinCommunity

📊 Multiple export formats

JSON, CSV, Excel, XML, HTML

🔄 Easy integrations

n8n, Zapier, Make, REST API

💰 Pay only for results

No monthly fees — $0.002 per item


Why use Reddit Scraper?

Reddit's official API limits you to 600 requests per 10 minutes and requires OAuth setup. This scraper bypasses those limitations entirely, letting you extract millions of posts and comments without authentication.

Perfect for:

  • 📊 Market researchers analyzing consumer opinions
  • 📢 Brand managers monitoring mentions and sentiment
  • 🤖 Data scientists building ML training datasets
  • 📝 Content creators discovering trending topics
  • 📈 Business analysts tracking competitors

What data can you extract from Reddit?

| Data Type   | Fields Extracted |
| ----------- | ---------------- |
| Posts       | Title, content, URL, author, upvotes, score, comments count, timestamp, subreddit, awards, flair, media links |
| Comments    | Text, author, upvotes, score, timestamp, permalink, parent relationships, depth level, awards |
| Users       | Username, karma scores, account age, post history, comment history |
| Communities | Name, description, subscriber count, active users, rules, creation date |


How to scrape Reddit without the API

No coding required. Follow these steps to start extracting Reddit data:

  1. Create a free Apify account (or log in)
  2. Go to Reddit Scraper
  3. Enter Reddit URLs or search terms
  4. Click Start
  5. Download results as JSON, CSV, or Excel

💡 Tip: For the complete list of input parameters, see the Input Schema tab.
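The steps above can also be driven from code. A minimal sketch using the apify-client Python package (assumptions: `pip install apify-client` and an APIFY_TOKEN environment variable; the actor ID matches the API URL used in the n8n section below):

```python
# Sketch: run the scraper from Python with the apify-client package.
# Assumes `pip install apify-client` and an APIFY_TOKEN environment variable.
import os


def build_run_input(subreddit_url: str, max_posts: int = 10) -> dict:
    """Assemble a minimal input payload for a subreddit scrape."""
    return {
        "startUrls": [{"url": subreddit_url}],
        "maxPostsCount": max_posts,
        "proxy": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
    }


def run_scraper(run_input: dict) -> list:
    """Start a run, wait for it to finish, and return the dataset items."""
    from apify_client import ApifyClient  # needs network access and a token

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor("harshmaur~reddit-scraper").call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Calling `run_scraper(build_run_input("https://www.reddit.com/r/technology/"))` blocks until the run finishes and returns the items; downloading from the Apify Console works the same way without any code.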


Configuration

Input Parameters

Reddit Scraper offers versatile input options to suit your needs:

  • Direct URLs: Scrape specific data from any Reddit URL, be it a post, user, or subreddit.
  • Keyword Search: Extract data based on keywords for posts, users, or communities with advanced search options like sorting and date.
  • Limits: Set the maximum number of items, posts, comments, communities, or users to scrape.
  • Community-Specific Search: Search within a specific subreddit using the withinCommunity parameter.
  • Fast Mode: Optimized scraping enabled by default for faster data extraction.

Advanced Features

Fast Mode - Optimized Performance

Fast Mode is enabled by default for significantly faster scraping when extracting large amounts of data. This feature optimizes the scraping process by using direct API endpoints and skipping unnecessary navigation steps.

Performance Benefits:

  • ✅ Up to 70% faster than regular mode
  • ✅ Ideal for scraping large subreddits (10,000+ posts)
  • ✅ Perfect for extracting data from multiple communities
  • ✅ Optimized for time-sensitive data collection

Important Note on Accuracy:

⚠️ Fast Mode may not be very accurate when searching for comments within subreddits. If you need precise comment search results, disable Fast Mode by setting "fastMode": false.

When to disable Fast Mode:

  • ❌ Searching for specific comments in subreddits
  • ❌ When comment search accuracy is critical
  • ❌ Deep comment thread analysis with search queries

Example - Fast Mode enabled (default):

{
  "startUrls": [{ "url": "https://www.reddit.com/r/technology/" }],
  "fastMode": true,
  "maxPostsCount": 1000,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Example - Fast Mode disabled for accurate comment searches:

{
  "searchTerms": ["specific topic"],
  "withinCommunity": "r/technology",
  "searchComments": true,
  "fastMode": false,
  "maxCommentsCount": 500,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Community-Specific Search - Target Specific Subreddits

Use the withinCommunity parameter to search for keywords within a specific subreddit. This is perfect for focused market research, niche analysis, or monitoring specific communities.

Format: r/subredditname (e.g., r/gaming, r/technology)

Use cases:

  • 🎯 Monitor brand mentions in specific communities
  • 📊 Analyze sentiment within niche subreddits
  • 🔍 Research topics in targeted communities
  • 💡 Discover trends in specific industries

Example - Search for "AI" within r/technology:

{
  "searchTerms": ["artificial intelligence", "machine learning"],
  "withinCommunity": "r/technology",
  "searchPosts": true,
  "searchSort": "hot",
  "searchTime": "week",
  "maxPostsCount": 100,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Example - Multiple searches in the same community:

{
  "searchTerms": ["iPhone 15", "Samsung Galaxy", "Google Pixel"],
  "withinCommunity": "r/Android",
  "searchPosts": true,
  "searchComments": true,
  "searchSort": "top",
  "searchTime": "month",
  "maxPostsCount": 50,
  "maxCommentsCount": 200,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

💡 Pro Tip: Combine withinCommunity with fastMode for lightning-fast, targeted data extraction!
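A hypothetical helper (my own, not part of the actor) that normalizes a community name into the r/subredditname form withinCommunity expects, so callers can pass "gaming", "/r/gaming", or "r/gaming" interchangeably:

```python
def community_search_input(terms, community, fast_mode=True, max_posts=100):
    """Build a community-targeted search input for the scraper.

    Normalizes the community name to the "r/<name>" form that the
    withinCommunity parameter expects (assumption: the actor requires
    exactly that form, as stated in this README).
    """
    name = community.strip()
    if name.startswith("/"):       # accept "/r/gaming"
        name = name[1:]
    if not name.startswith("r/"):  # accept bare "gaming"
        name = "r/" + name
    return {
        "searchTerms": list(terms),
        "withinCommunity": name,
        "searchPosts": True,
        "fastMode": fast_mode,
        "maxPostsCount": max_posts,
        "proxy": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
    }
```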


Scrape by URLs

You can scrape data from various Reddit URL types:

| URL Type     | Example |
| ------------ | ------- |
| Subreddit    | https://www.reddit.com/r/technology/ |
| User Profile | https://www.reddit.com/user/someusername |
| Popular      | https://www.reddit.com/r/popular/ |
| Search URLs  | https://www.reddit.com/search/?q=example&type=sr |

Scrape by Search Term

Step-by-step configuration guide:

  1. Enter Search Terms: In the "Search Term" field, enter your desired keywords or phrases
  2. Configure Search Options:
    • Get posts: Enable to search for posts (default: true)
    • Get comments: Enable to search for comments (default: false)
    • Get communities: Enable to search for communities (default: false)
  3. Set Sort Order: Determine how results are ordered
    • Options: Relevance, Hot, Top, New, Comments (default: New)
  4. Specify Time Range: For post searches, set the "Retrieve From" time range
    • Options: All time, Last hour, Last day, Last week, Last month, Last year (default: All time)
  5. NSFW Content: Adjust the "Include NSFW content" setting (default: false)
  6. Set Result Limits:
    • Maximum number of posts (default: 10)
    • Limit of comments (default: 10)
    • Limit of comments per post (default: 10)
    • Limit of communities (default: 2)

Example Configuration

{
  "searchTerms": ["cryptocurrency", "blockchain"],
  "searchPosts": true,
  "searchComments": true,
  "searchCommunities": false,
  "searchSort": "hot",
  "searchTime": "month",
  "includeNSFW": false,
  "maxPostsCount": 50,
  "maxCommentsCount": 100,
  "maxCommentsPerPost": 20
}

This configuration will search for cryptocurrency and blockchain-related content, focusing on hot posts and comments from the last month, excluding NSFW content, and limiting results to 50 posts with up to 100 total comments (max 20 per post).

💡 Pro Tip: For the complete list of parameters and their default values, visit the Input Schema tab.
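For budgeting a run, a back-of-envelope upper bound on stored items can be derived from the limit fields alone. This sketch is a simplification of my own, not the actor's exact accounting: it assumes the comment total is capped both globally and per post, and that communities add directly.

```python
def max_result_items(cfg: dict) -> int:
    """Rough upper bound on items a config can store.

    Assumption (not the actor's documented behavior): total comments are
    bounded by both maxCommentsCount and maxPostsCount * maxCommentsPerPost.
    """
    posts = cfg.get("maxPostsCount", 0)
    comments = cfg.get("maxCommentsCount", 0)
    per_post = cfg.get("maxCommentsPerPost", 0)
    if posts and per_post:
        comments = min(comments, posts * per_post)
    return posts + comments + cfg.get("maxCommunitiesCount", 0)
```

For the example configuration above (50 posts, 100 comments capped at 20 per post, no communities) this yields at most 150 items.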


Input Examples

Here are practical input examples for different use cases:

1. Scraping posts from a subreddit

{
  "startUrls": [{ "url": "https://www.reddit.com/r/technology/" }],
  "crawlCommentsPerPost": false,
  "maxPostsCount": 10,
  "maxCommentsPerPost": 10,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

2. Searching for posts on a specific topic

{
  "searchTerms": ["artificial intelligence"],
  "searchPosts": true,
  "searchComments": false,
  "searchCommunities": false,
  "searchSort": "hot",
  "searchTime": "week",
  "maxPostsCount": 50,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

3. Scraping comments from a specific post

{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/AskReddit/comments/example_post_id/example_post_title/"
    }
  ],
  "crawlCommentsPerPost": true,
  "maxCommentsPerPost": 100,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

4. Extracting community information

{
  "startUrls": [{ "url": "https://www.reddit.com/r/AskScience/" }],
  "maxPostsCount": 0,
  "maxCommentsCount": 0,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

5. Scraping user posts and comments

{
  "startUrls": [{ "url": "https://www.reddit.com/user/example_username" }],
  "maxPostsCount": 20,
  "maxCommentsCount": 50,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

6. Searching for comments across Reddit

{
  "searchTerms": ["climate change"],
  "searchPosts": false,
  "searchComments": true,
  "searchCommunities": false,
  "searchSort": "top",
  "searchTime": "month",
  "maxCommentsCount": 100,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

7. Scraping multiple subreddits at once

{
  "startUrls": [
    { "url": "https://www.reddit.com/r/news/" },
    { "url": "https://www.reddit.com/r/worldnews/" },
    { "url": "https://www.reddit.com/r/politics/" }
  ],
  "maxPostsCount": 10,
  "crawlCommentsPerPost": true,
  "maxCommentsPerPost": 5,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

8. Fast Mode - Large-scale subreddit scraping

{
  "startUrls": [
    { "url": "https://www.reddit.com/r/technology/" },
    { "url": "https://www.reddit.com/r/programming/" }
  ],
  "fastMode": true,
  "maxPostsCount": 5000,
  "crawlCommentsPerPost": false,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Quickly extract thousands of posts for trend analysis or dataset creation.

9. Community-specific search with withinCommunity

{
  "searchTerms": ["product launch", "new feature", "update"],
  "withinCommunity": "r/SaaS",
  "searchPosts": true,
  "searchComments": true,
  "searchSort": "new",
  "searchTime": "week",
  "maxPostsCount": 100,
  "maxCommentsCount": 500,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Monitor product launch discussions within a specific industry subreddit.

10. Brand monitoring across search terms

{
  "searchTerms": ["YourBrand", "YourProduct", "@YourCompany"],
  "searchPosts": true,
  "searchComments": true,
  "searchSort": "new",
  "searchTime": "day",
  "maxPostsCount": 200,
  "maxCommentsCount": 1000,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Real-time brand mention monitoring across all of Reddit.

11. Competitor analysis with Fast Mode

{
  "searchTerms": ["Competitor1", "Competitor2", "Competitor3"],
  "withinCommunity": "r/Entrepreneur",
  "searchPosts": true,
  "searchComments": true,
  "searchSort": "top",
  "searchTime": "month",
  "fastMode": true,
  "maxPostsCount": 500,
  "maxCommentsCount": 2000,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Analyze competitor mentions and sentiment in business communities.

12. Multi-community trend analysis

{
  "startUrls": [
    { "url": "https://www.reddit.com/r/technology/top/?t=week" },
    { "url": "https://www.reddit.com/r/gadgets/top/?t=week" },
    { "url": "https://www.reddit.com/r/Futurology/top/?t=week" }
  ],
  "fastMode": true,
  "maxPostsCount": 100,
  "crawlCommentsPerPost": true,
  "maxCommentsPerPost": 50,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Identify trending topics across related technology subreddits.

13. Deep comment analysis for sentiment

{
  "startUrls": [{ "url": "https://www.reddit.com/r/CryptoCurrency/" }],
  "maxPostsCount": 50,
  "crawlCommentsPerPost": true,
  "maxCommentsPerPost": 500,
  "maxCommentsCount": 10000,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Extract large comment datasets for NLP and sentiment analysis models.

14. Search URL scraping with filters

{
  "startUrls": [
    {
      "url": "https://www.reddit.com/search/?q=machine%20learning&type=link&sort=top&t=month"
    }
  ],
  "fastMode": true,
  "maxPostsCount": 1000,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Use case: Scrape pre-filtered search results with specific sorting and time parameters.

Limiting Results

Control the scope of your scraping by setting limits on various parameters:

{
  "maxPostsCount": 10,
  "maxCommentsPerPost": 5,
  "maxCommunitiesCount": 2,
  "maxCommentsCount": 100,
  "maxItems": 1000
}

💡 Testing Tip: Use maxItems to prevent very long runs. This parameter stops your scraper when it reaches the specified number of results — perfect for testing configurations!
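One convenient pattern for that tip: derive a cheap trial configuration from your production one instead of editing it by hand. A small sketch (helper name is my own):

```python
import copy


def trial_config(cfg: dict, max_items: int = 20) -> dict:
    """Return a deep copy of a production config capped with a small
    maxItems, for a quick, inexpensive test run. The original config
    (including nested objects such as "proxy") is left untouched."""
    trial = copy.deepcopy(cfg)
    trial["maxItems"] = max_items
    return trial
```

Run the trial config first, inspect the dataset, then launch the original config once the output looks right.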


Output Example

You can download the dataset extracted by Reddit Scraper in various formats: JSON, CSV, Excel, XML, or HTML.

Here's an example of the JSON output for a Reddit post:

{
  "dataType": "post",
  "id": "t3_1abc123",
  "parsedId": "1abc123",
  "title": "What's the best programming language to learn in 2025?",
  "body": "I'm looking to switch careers into tech and wondering which language...",
  "bodyHtml": "<p>I'm looking to switch careers into tech...</p>",
  "authorId": "t2_xyz789",
  "parsedAuthorId": "xyz789",
  "authorName": "curious_developer",
  "communityName": "r/learnprogramming",
  "communityId": "t5_2qh55",
  "parsedCommunityId": "2qh55",
  "parsedCommunityName": "learnprogramming",
  "subredditName": "learnprogramming",
  "subredditId": "t5_2qh55",
  "parsedSubredditId": "2qh55",
  "postType": "text",
  "flair": "Career",
  "upVotes": 1542,
  "commentsCount": 387,
  "postUrl": "https://www.reddit.com/r/learnprogramming/comments/1abc123/",
  "url": "https://www.reddit.com/r/learnprogramming/comments/1abc123/",
  "contentUrl": null,
  "images": [],
  "nsfw": false,
  "createdAt": "2025-01-15T14:32:00.000Z",
  "crawledAt": "2025-01-31T03:41:00.000Z",
  "searchTerm": "programming languages"
}
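A couple of hypothetical helpers (names are my own, not part of the actor) for working with items in this shape: splitting Reddit "fullname" IDs like t3_1abc123 into the prefix/short-ID pair reflected by the id and parsedId fields, and computing a simple discussion-intensity metric:

```python
def parse_fullname(fullname: str) -> tuple:
    """Split a Reddit fullname such as "t3_1abc123" into (kind, short_id),
    matching the id / parsedId pair in the output example."""
    kind, _, short_id = fullname.partition("_")
    return kind, short_id


def engagement_ratio(item: dict) -> float:
    """Comments per upvote — a rough, illustrative discussion-intensity
    metric; returns 0.0 when there are no upvotes."""
    up = item.get("upVotes") or 0
    return item.get("commentsCount", 0) / up if up else 0.0
```

For the example post above, 387 comments on 1,542 upvotes gives a ratio of about 0.25.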

Pricing

Pay-Per-Result Model

With Reddit Scraper, you pay only for what you run and store — no monthly subscription and no platform usage fees.

  • Actor start: $0.02 per run
  • Result stored: $0.002 each

Example Cost Calculation

A run that stores 1,000 items costs:

  • 1 actor start: $0.02
  • 1,000 items × $0.002: $2.00
  • Total: $2.02
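The calculation above generalizes to a one-line formula, sketched here with the two rates stated in this section:

```python
# Pay-per-result rates as stated in this README.
ACTOR_START_USD = 0.02
PER_ITEM_USD = 0.002


def run_cost(items_stored: int, runs: int = 1) -> float:
    """Total cost in USD: one start fee per run plus a per-item fee."""
    return round(runs * ACTOR_START_USD + items_stored * PER_ITEM_USD, 2)
```

So a single run storing 1,000 items costs $2.02, and two runs storing 500 items each (1,000 items total) cost $2.04.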

Pricing Comparison

| Feature         | Reddit Scraper (pay-per-result) | Reddit Scraper Pro (subscription) |
| --------------- | ------------------------------- | --------------------------------- |
| Billing model   | $0.02/run + $0.002/item         | $20/month + usage, unlimited items |
| Ideal for       | Occasional or exploratory jobs, tight budgets | Continuous, large-scale scraping |
| Cost control    | Pay exactly for usage           | Fixed monthly fee |
| Same technology | ✅                              | ✅ |

Why Pay-Per-Result?

Pay-per-result is ideal when you:

  • Scrape Reddit occasionally
  • Need quick snapshots of data
  • Want to avoid idle expenses

Example: Two runs that save 500 items each cost just $0.02 × 2 + 1,000 × $0.002 = $2.04

💼 Need unlimited results with a predictable monthly fee? Check out Reddit Scraper Pro — same engine, flat subscription.


Integration with n8n

Automate your Reddit data pipelines by integrating Reddit Scraper with n8n, a powerful workflow automation tool. Connect scraped data with hundreds of other applications and services seamlessly.

Method 1: Synchronous Run (For quick scrapes)

Best for: Quick scrapes that complete within 5 minutes

Setup Steps

  1. Get your Apify API Token

  2. Configure n8n HTTP Request Node

    • Method: POST
    • URL: https://api.apify.com/v2/acts/harshmaur~reddit-scraper/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN
    • Body Content Type: JSON
    • Body:
{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/developers/"
    }
  ],
  "searchSort": "new",
  "searchTime": "all",
  "maxPostsCount": 10,
  "maxCommentsCount": 10,
  "maxCommentsPerPost": 10,
  "maxCommunitiesCount": 2,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Method 2: Asynchronous Run (For long scrapes)

Best for: Large-scale scraping (entire communities, extensive data collection)

For scraping large amounts of data that may exceed the 300-second timeout, use the asynchronous method. This involves starting the run and fetching results separately.

📺 Video Tutorial: How to connect to any API (that uses polling)

⚠️ Important Note on Timeouts

The synchronous API has a 300-second (5-minute) timeout. If your scraping task takes longer, the request will fail.

Solutions:

  • Increase Timeout in n8n: In your HTTP Request node settings, increase the timeout (e.g., 600 seconds for 10 minutes)
  • Use Polling for Async Runs: Use a Wait node in n8n to poll for run completion status before fetching results — the most reliable method for long-running jobs
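The same polling pattern works outside n8n. A stdlib-only Python sketch against Apify's public run-status endpoint (run_id and token are assumptions you supply; the terminal status values are Apify's documented run states):

```python
# Poll an Apify actor run until it reaches a terminal state.
import json
import time
import urllib.request

# Terminal run statuses per the Apify platform.
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"}


def is_terminal(status: str) -> bool:
    """True once a run can no longer change state."""
    return status in TERMINAL_STATUSES


def wait_for_run(run_id: str, token: str, poll_seconds: int = 10) -> str:
    """Block until the run finishes; returns the final status string."""
    url = f"https://api.apify.com/v2/actor-runs/{run_id}?token={token}"
    while True:
        with urllib.request.urlopen(url) as resp:
            status = json.load(resp)["data"]["status"]
        if is_terminal(status):
            return status
        time.sleep(poll_seconds)
```

Once `wait_for_run` returns "SUCCEEDED", fetch the dataset items in a separate request; this avoids the 300-second synchronous timeout entirely.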

Support

We strive to make Reddit Scraper the most comprehensive tool for your Reddit data extraction needs.

Get Help

πŸ“ Report an Issue

  • Report issues directly in the Run console
  • Helps us track and address problems efficiently

πŸ“§ Email Support

Our Commitment

βœ… Prompt responses to all issues and requests
βœ… Quick problem-solving and feature implementation
βœ… Continuous improvement based on your feedback
βœ… Rapid feature deployment to keep the tool up-to-date

Your feedback is invaluable in helping us continually improve Reddit Scraper to meet your needs.


FAQ

Is it legal to scrape Reddit?

While scraping publicly available data from Reddit is generally allowed, it's important to comply with Reddit's terms of service and respect the site's usage policies.

Best practices:

  • Use the scraper responsibly
  • Avoid excessive requests
  • Ensure scraped data is used in compliance with applicable laws and regulations
  • Respect robots.txt and rate limits
  • Only scrape publicly available content

📖 Read more about compliance with ToS in our blog post.

Do I need Reddit API keys or authentication?

No! One of the biggest advantages of Reddit Scraper is that you don't need any Reddit API keys, OAuth tokens, or authentication. The scraper accesses publicly available Reddit data directly, bypassing API rate limits entirely.

This means:

  • ✅ No Reddit account required
  • ✅ No API application process
  • ✅ No rate limit restrictions (the official API allows only 600 requests per 10 minutes)
  • ✅ Unlimited data extraction

Do I need cookies to scrape Reddit?

No, cookies are not required. Reddit keeps public posts, comments, and communities accessible without requiring users to log in.

Do you need proxies for scraping Reddit?

Yes. Proxies are required for Reddit scraping to ensure reliable and uninterrupted data extraction. We recommend using Apify's residential proxies for best results.

Why proxies are necessary:

  • Prevent IP blocking from Reddit
  • Distribute requests across multiple IPs
  • Maintain scraping reliability
  • Enable large-scale data extraction

Apify's residential proxy groups are automatically configured in the examples provided.

What's the difference between Fast Mode and regular mode?

Fast Mode is an optimized scraping method that uses direct API endpoints and skips unnecessary navigation steps, resulting in significantly faster data extraction.

Performance comparison:

  • Regular mode: ~100-200 posts per minute
  • Fast Mode: ~500-1,000 posts per minute

When to use Fast Mode:

  • Scraping large subreddits (1,000+ posts)
  • Time-sensitive data collection
  • High-volume operations
  • Multiple community scraping

How does the withinCommunity parameter work?

The withinCommunity parameter allows you to search for keywords within a specific subreddit, enabling targeted data extraction.

Format: r/subredditname (e.g., r/technology, r/gaming)

Example use cases:

  • Monitor brand mentions in specific communities
  • Analyze sentiment within niche subreddits
  • Research topics in targeted industries
  • Track competitor discussions in relevant communities

This is perfect for focused market research and community-specific analysis.

What data can I extract from Reddit?

Reddit Scraper can extract comprehensive data including:

Post data:

  • Title, content, and URL
  • Author username and profile link
  • Upvotes, downvotes, and score
  • Number of comments
  • Post timestamp and subreddit
  • Awards and flair
  • Images, videos, and media links

Comment data:

  • Comment text and author
  • Upvotes and score
  • Timestamp and permalink
  • Parent comment relationships
  • Awards and depth level

User data:

  • Username and profile information
  • Post and comment history
  • Karma scores
  • Account age

Community data:

  • Subreddit name and description
  • Subscriber count
  • Active users
  • Community rules and information

How much does it cost to scrape Reddit?

Reddit Scraper uses a pay-per-result pricing model:

  • Actor start: $0.02 per run
  • Result stored: $0.002 per item

Example costs:

  • 1,000 items: $2.02
  • 10,000 items: $20.02
  • 100,000 items: $200.02

No monthly subscription fees or platform charges. You only pay for what you use!

For unlimited scraping with predictable costs, check out Reddit Scraper Pro with flat monthly pricing.

Can I export Reddit data to CSV or Excel?

Yes! Reddit Scraper supports multiple export formats:

  • ✅ JSON - For API integration and data processing
  • ✅ CSV - For Excel and spreadsheet analysis
  • ✅ Excel (XLSX) - Direct Excel format
  • ✅ XML - For structured data exchange
  • ✅ HTML - For web viewing

You can download your data in any format directly from the Apify platform after your scraping run completes.
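Dataset items can also be fetched straight from the Apify API using a `format` query parameter on the dataset-items endpoint. A small sketch (dataset_id and token are yours; the allowed set mirrors the formats listed in this FAQ, though the API accepts a few others as well):

```python
def export_url(dataset_id: str, fmt: str = "csv", token: str = None) -> str:
    """Build a download URL for a dataset in one of the formats this
    README lists. Uses Apify's GET /v2/datasets/{id}/items endpoint."""
    allowed = {"json", "csv", "xlsx", "xml", "html"}
    if fmt not in allowed:
        raise ValueError(f"unsupported format: {fmt}")
    url = f"https://api.apify.com/v2/datasets/{dataset_id}/items?format={fmt}"
    if token:
        url += f"&token={token}"
    return url
```

Opening the resulting URL in a browser (or with curl) downloads the dataset in the chosen format.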

How do I integrate Reddit Scraper with other tools?

Reddit Scraper offers multiple integration options:

1. n8n Integration - Automate workflows with 300+ app connections

2. Apify API - RESTful API for custom integrations

3. Webhooks - Real-time notifications when scraping completes

4. Zapier - Connect with 5,000+ apps (via Apify integration)

5. Make (Integromat) - Visual automation workflows

See the Integration with n8n section for detailed setup instructions.

What are the rate limits or scraping limits?

Reddit Scraper has no built-in rate limits. You can scrape as much data as you need, limited only by:

  • Your Apify account plan limits
  • The maxPostsCount, maxCommentsCount, and maxItems parameters you set
  • Available proxy resources

Unlike the Reddit API (limited to 600 requests per 10 minutes), Reddit Scraper can extract millions of posts and comments without restrictions.

How long does it take to scrape Reddit data?

Scraping time depends on several factors:

Regular mode:

  • 100 posts: ~1-2 minutes
  • 1,000 posts: ~10-15 minutes
  • 10,000 posts: ~1-2 hours

Fast Mode (recommended for large scrapes):

  • 100 posts: ~30 seconds
  • 1,000 posts: ~3-5 minutes
  • 10,000 posts: ~30-45 minutes

Enable fastMode: true for up to 70% faster scraping!

Can I schedule automatic Reddit scraping?

Yes! Apify supports scheduled runs for automated data collection:

  • Set up daily, weekly, or custom schedules
  • Monitor brand mentions automatically
  • Track trending topics in real-time
  • Build time-series datasets

Configure schedules directly in the Apify Console under the "Schedule" tab of your actor.


Other Actors

Check out these related Apify Actors:

| Actor              | Description |
| ------------------ | ----------- |
| Reddit Scraper Pro | Unlimited scraping with flat monthly pricing |

Resources


Try Reddit Scraper