Reddit Universal Scraper (No API/Cookies)

Pricing: from $1.00 / 1,000 results

Rating: 5.0 (1 review)

Developer: Dead (Maintained by Community)

Actor stats: 0 bookmarks · 3 total users · 2 monthly active users · last modified 8 days ago

🤖 Reddit Universal Scraper

A powerful and reliable Reddit scraper that extracts posts, comments, user profiles, and subreddit data **without requiring Reddit API keys**. Features intelligent sorting, time filtering, and built-in proxy support.

✨ Features

  • 🔥 No API Keys Required - Scrapes public Reddit data without credentials or cookies
  • 📊 Complete Data Extraction - Posts, comments, user profiles, and subreddit metadata
  • 🎯 Multiple Sort Options - Hot, New, Top, Rising, Controversial
  • ⏰ Time Filters - Hour, Day, Week, Month, Year, All Time
  • 🔄 Pagination Support - Scrape up to ~1,000 posts per run
  • 💬 Nested Comments - Extract full comment threads with reply depth
  • 👤 User Analysis - Profile data including karma and account age
  • 🔐 Proxy Support - Built-in Apify residential proxy integration
  • 📦 Organized Output - Clean data structure with type identification
  • 🚀 Rate Limiting - Smart delays to avoid Reddit blocking

🚀 Quick Start

Example 1: Scrape Hot Posts

```json
{
  "scrapeType": "subreddit",
  "subredditName": "python",
  "sortType": "hot",
  "maxPosts": 100
}
```

Example 2: Top Posts of the Week

```json
{
  "scrapeType": "subreddit",
  "subredditName": "AskReddit",
  "sortType": "top",
  "timeFilter": "week",
  "maxPosts": 50
}
```

Example 3: Scrape User Posts

```json
{
  "scrapeType": "user",
  "username": "spez",
  "sortType": "top",
  "timeFilter": "all",
  "maxPosts": 100,
  "getUserProfile": true
}
```

📋 Input Parameters

Scrape Type

| Value | Description |
|-----------|-------------|
| subreddit | Scrape posts from a subreddit |
| user | Scrape posts from a user's profile |
| both | Scrape both subreddit and user |

Required Fields

  • subredditName - Subreddit name without 'r/' (e.g., "python", "technology")
  • username - Username without 'u/' (e.g., "spez", "gallowboob")

Note: If scrapeType is "subreddit", only subredditName is required. If "user", only username is required. If "both", both are required.
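These rules lend themselves to a small pre-flight check before starting a run. A minimal sketch, assuming only the field names shown above; the helper itself is hypothetical, not part of the actor:

```python
# Required input fields per scrapeType, mirroring the rules in this README.
REQUIRED = {
    "subreddit": ["subredditName"],
    "user": ["username"],
    "both": ["subredditName", "username"],
}

def missing_fields(run_input: dict) -> list:
    """Return the required fields that are absent for the chosen scrapeType."""
    scrape_type = run_input.get("scrapeType", "subreddit")
    return [f for f in REQUIRED.get(scrape_type, []) if not run_input.get(f)]
```

Calling `missing_fields({"scrapeType": "both", "subredditName": "python"})` would flag the missing `username` before you spend a run on it.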

Sort Options

| Sort Type | Description | Best For |
|---------------|------------------------------|---------------------------|
| hot | Currently trending posts | Real-time popular content |
| new | Latest posts chronologically | Breaking news, monitoring |
| top | Most upvoted posts | Quality content, research |
| rising | Posts gaining traction | Early trend detection |
| controversial | Most debated posts | Controversial topics |

Time Filters (Top/Controversial only)

| Filter | Range |
|--------|-----------|
| hour | Past hour |
| day | Today |
| week | This week |
| month | This month |
| year | This year |
| all | All time |

⚠️ Important: Time filters only work with top and controversial sorting.
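One way to avoid a silently ignored timeFilter is to strip it for other sorts before submitting the input. An illustrative helper (our own, not part of the actor):

```python
# Drop timeFilter when the chosen sort does not support it,
# per the rule above: only "top" and "controversial" honor it.
def normalize_input(run_input: dict) -> dict:
    cleaned = dict(run_input)
    if cleaned.get("sortType") not in ("top", "controversial"):
        cleaned.pop("timeFilter", None)
    return cleaned
```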

Limits

| Parameter | Min | Max | Default | Description |
|---------------------|-----|------|---------|----------------------------|
| maxPosts | 1 | 1000 | 100 | Posts to scrape |
| maxPostsForComments | 1 | 100 | 10 | Posts to get comments from |
| commentsPerPost | 10 | 500 | 50 | Comments per post |
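The bounds above can also be enforced client-side before a run. A sketch with the min/max/default values copied from the table; the helper itself is ours, not the actor's:

```python
# (min, max, default) per parameter, as listed in the Limits table.
LIMITS = {
    "maxPosts": (1, 1000, 100),
    "maxPostsForComments": (1, 100, 10),
    "commentsPerPost": (10, 500, 50),
}

def clamp_limits(run_input: dict) -> dict:
    """Fill in defaults and clamp each limit parameter into its valid range."""
    out = dict(run_input)
    for param, (lo, hi, default) in LIMITS.items():
        value = out.get(param, default)
        out[param] = max(lo, min(hi, value))
    return out
```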

Optional Features

| Feature | Default | Description |
|------------------|---------|---------------------------|
| scrapeComments | false | Extract comments (slower) |
| getUserProfile | true | Get user profile data |
| getSubredditInfo | true | Get subreddit metadata |
| scrapeUserPosts | true | Scrape user's posts |

Proxy Configuration

  • Enabled by default - Uses Apify residential proxies
  • Helps avoid Reddit rate limiting
  • Free credits included with Apify

📊 Output Data Structure

All data is saved with a data_type field for easy filtering:

Post Data (data_type: "post" or "user_post")

```json
{
  "data_type": "post",
  "id": "1jy1wdc",
  "title": "I made an n8n Cheat Sheet!",
  "author": "Superb_Net_7426",
  "subreddit": "n8n",
  "score": 2032,
  "upvote_ratio": 0.99,
  "num_comments": 81,
  "created_utc": "2025-04-13T07:06:07",
  "post_type": "image",
  "url": "https://i.redd.it/w4ult2xaxjue1.png",
  "full_link": "https://reddit.com/r/n8n/comments/1jy1wdc/...",
  "selftext": "",
  "domain": "i.redd.it",
  "link_flair_text": null,
  "thumbnail": "https://...",
  "media_url": "https://i.redd.it/w4ult2xaxjue1.png",
  "preview_images": ["https://..."],
  "is_video": false,
  "is_self": false,
  "over_18": false,
  "spoiler": false,
  "stickied": false,
  "locked": false,
  "gilded": 0,
  "total_awards_received": 0,
  "sort_type": "top",
  "time_filter": "all",
  "scraped_at": "2025-12-29T10:30:00"
}
```

Comment Data (data_type: "comment")

```json
{
  "data_type": "comment",
  "comment_id": "abc123",
  "post_id": "1jy1wdc",
  "post_title": "I made an n8n Cheat Sheet!",
  "subreddit": "n8n",
  "author": "user123",
  "body": "This is amazing! Thanks for sharing.",
  "score": 45,
  "depth": 0,
  "parent_id": "",
  "created_utc": "2025-04-13T08:00:00",
  "post_permalink": "/r/n8n/comments/1jy1wdc/...",
  "post_full_link": "https://reddit.com/r/n8n/comments/1jy1wdc/...",
  "gilded": 0,
  "is_submitter": false,
  "scraped_at": "2025-12-29T10:30:00"
}
```
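The depth field is enough to rebuild nested threads from these flat comment records. A minimal sketch, assuming comments arrive in thread order (parents before their replies); the helper name is ours, not the actor's:

```python
# Nest flat comment dicts into a tree using their "depth" field.
# Assumes thread order: each comment at depth d follows its parent at depth d-1.
def nest_comments(comments: list) -> list:
    roots, stack = [], []  # stack[i] = most recent comment seen at depth i
    for c in comments:
        node = {**c, "replies": []}
        depth = c.get("depth", 0)
        if depth == 0:
            roots.append(node)
        else:
            stack[depth - 1]["replies"].append(node)
        del stack[depth:]      # discard deeper branches that have ended
        stack.append(node)     # this node is now the open branch at its depth
    return roots
```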

User Profile (data_type: "user_profile")

```json
{
  "data_type": "user_profile",
  "username": "spez",
  "link_karma": 123456,
  "comment_karma": 789012,
  "total_karma": 912468,
  "created_utc": "2005-06-06T00:00:00",
  "account_age_days": 7146,
  "is_gold": true,
  "is_mod": true,
  "is_employee": true,
  "verified": true,
  "icon_img": "https://...",
  "scraped_at": "2025-12-29T10:30:00"
}
```
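A field like account_age_days can be reproduced from created_utc if you need it at a different reference time. A small illustration (the actor's own computation may differ):

```python
from datetime import datetime

def account_age_days(created_utc: str, now: datetime) -> int:
    """Days elapsed between an ISO-8601 created_utc string and `now`."""
    created = datetime.fromisoformat(created_utc)
    return (now - created).days
```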

Subreddit Info (data_type: "subreddit_info")

```json
{
  "data_type": "subreddit_info",
  "name": "n8n",
  "title": "n8n - Workflow Automation",
  "description": "n8n is a free and source-available workflow automation tool",
  "long_description": "Full description with rules...",
  "subscribers": 50000,
  "active_users": 500,
  "created_utc": "2019-06-01T00:00:00",
  "over_18": false,
  "url": "https://reddit.com/r/n8n",
  "icon_img": "https://...",
  "banner_img": "https://...",
  "scraped_at": "2025-12-29T10:30:00"
}
```
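Because every record carries a data_type field, one run's dataset can be split into the four groups above in a single pass. A minimal sketch (our own helper, not actor code):

```python
from collections import defaultdict

def split_by_type(items: list) -> dict:
    """Group downloaded dataset items by their data_type field."""
    groups = defaultdict(list)
    for item in items:
        groups[item.get("data_type", "unknown")].append(item)
    return dict(groups)
```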

🎯 Common Use Cases

1. News Monitoring

```json
{
  "scrapeType": "subreddit",
  "subredditName": "worldnews",
  "sortType": "new",
  "maxPosts": 200,
  "scrapeComments": false
}
```

Use: Track breaking news in real-time

2. Viral Content Discovery

```json
{
  "scrapeType": "subreddit",
  "subredditName": "memes",
  "sortType": "top",
  "timeFilter": "day",
  "maxPosts": 50
}
```

Use: Find trending memes and viral content

3. Market Research

```json
{
  "scrapeType": "subreddit",
  "subredditName": "startups",
  "sortType": "top",
  "timeFilter": "month",
  "maxPosts": 100,
  "scrapeComments": true,
  "maxPostsForComments": 20
}
```

Use: Research startup trends and discussions

4. Influencer Analysis

```json
{
  "scrapeType": "user",
  "username": "influencer_name",
  "sortType": "top",
  "timeFilter": "year",
  "maxPosts": 200,
  "getUserProfile": true
}
```

Use: Analyze influencer activity and engagement

5. Community Analysis

```json
{
  "scrapeType": "subreddit",
  "subredditName": "your_community",
  "sortType": "controversial",
  "timeFilter": "week",
  "maxPosts": 50,
  "scrapeComments": true,
  "maxPostsForComments": 50
}
```

Use: Monitor community health and controversy

6. Content Research

```json
{
  "scrapeType": "subreddit",
  "subredditName": "writing",
  "sortType": "top",
  "timeFilter": "week",
  "maxPosts": 30,
  "scrapeComments": true
}
```

Use: Find content ideas and trending topics


⚙️ Performance & Limits

Speed Estimates

| Configuration | Approximate Time |
|------------------------------|------------------|
| 50 posts, no comments | 1-2 minutes |
| 100 posts, no comments | 2-3 minutes |
| 100 posts, 10 with comments | 5-7 minutes |
| 500 posts, no comments | 8-12 minutes |
| 100 posts, all with comments | 20-30 minutes |

Reddit Limits

  • Maximum ~1000 posts available per subreddit/sort combination
  • Stickied posts may appear in results
  • Deleted/removed posts are excluded
  • [deleted] indicates deleted user accounts

Rate Limiting

  • Built-in delays (2-4 seconds between requests)
  • Automatic retry on rate limit (429 errors)
  • Proxy recommended for heavy scraping
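The retry pattern described above can be sketched as follows. The 2-4 second delay range mirrors the figure given here, but the helper itself is illustrative, not the actor's code; the sleep function is injected so the logic can be exercised without real delays:

```python
import random

def fetch_with_retry(fetch, url, retries=3, sleep=None):
    """Call fetch(url) -> (status, body); on HTTP 429, wait and retry."""
    sleep = sleep or (lambda seconds: None)
    for attempt in range(retries + 1):
        status, body = fetch(url)
        if status != 429:
            return status, body
        # 2-4 s base delay (per the note above) plus a linear backoff step.
        sleep(2 + 2 * random.random() + 2 * attempt)
    return status, body  # still rate-limited after all retries
```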

💡 Pro Tips

1. Fastest Scraping

```json
{
  "maxPosts": 100,
  "scrapeComments": false,
  "getSubredditInfo": false
}
```

2. Most Complete Data

```json
{
  "maxPosts": 100,
  "scrapeComments": true,
  "maxPostsForComments": 20,
  "commentsPerPost": 100
}
```

3. Real-time Monitoring

```json
{
  "sortType": "new",
  "maxPosts": 50,
  "scrapeComments": false
}
```

4. Quality Content Research

```json
{
  "sortType": "top",
  "timeFilter": "month",
  "maxPosts": 100
}
```

5. Early Trend Detection

```json
{
  "sortType": "rising",
  "maxPosts": 50
}
```

🔒 Privacy & Ethics

  • ✅ Scrapes only public Reddit data
  • ✅ Respects Reddit's rate limits
  • ✅ No authentication required
  • ✅ Follows ethical scraping practices
  • ⚠️ Always respect user privacy
  • ⚠️ Use data responsibly
  • ⚠️ Follow Reddit's Content Policy

Need help? Here's how to get support:

  1. Review example inputs in this README
  2. Contact via Apify Console - Use the feedback button in your run
  3. Check logs - Actor logs show detailed progress and errors