Reddit All-In-One Scraper

Pricing: $15.00/month + usage

✨Scrape any part of Reddit — posts, comments, users, subreddits, media, videos & search results ✅ 15 scraper types including full nested comment trees, gallery images, video streams, trophies & community rules. Clean structured JSON output. No account or API key needed. 🚀 Reddit Scraper 🌍


Rating: 0.0 (0)

Developer: Scrape Architect (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 2 days ago


Reddit All-In-One Scraper — Extract Any Data from Reddit at Scale

The most complete Reddit scraper available on Apify. Scrape users, subreddits, posts, comments, media, videos, and search results — all from a single actor, with no coding required.

Whether you need a lightweight Reddit scraper for quick data pulls or a high-throughput pipeline for research and analytics, this actor covers every use case. Four engines run in parallel on every request, giving you the speed and reliability needed at any scale.

Quick start: Select a Scraper Type → fill in the matching input field → click Run. Use all mode to run this Reddit scraper across every data type simultaneously.


Why This Reddit Scraper?

Most Reddit scrapers are single-purpose tools — they scrape posts or comments, but not both. This actor is different. It is a true all-in-one Reddit scraper that covers 15 distinct data types across users, communities, posts, media, and search, all in one run.

Key advantages over other Reddit scrapers:

Feature | This Actor | Typical Reddit Scrapers
Scraper types | 15 types + all mode | 1–3 types
Parallel execution | 4 engines fire simultaneously | Single engine
Proxy support | Residential US (built-in) | Optional or not included
Media & video extraction | Full: gallery, DASH, HLS, audio | Limited or absent
Comment tree | Full nested tree, 3 sort orders | Flat list only
Filters | Score, sort, time, result limit | Rarely included
Output buckets | Posts, comments, users, subreddits, media, trophies, rules | 1–2 buckets
all mode | Runs every matching type at once | Not available

What This Reddit Scraper Returns

Every run of this Reddit scraper produces structured JSON records, each tagged with _item_type and _scraper_type for easy filtering. Depending on the type you choose, you get:

  • Posts — title, body text, score, upvote ratio, flair, author, subreddit, awards, gallery items, preview images, video info, crosspost data, creation timestamp
  • Comments — full text, score, depth, parent ID, author, flair, awards, controversiality, nested replies (child comment IDs)
  • Users — bio, karma breakdown (link / comment / total / awardee / awarder), snoovatar image, account age, gold status, moderation status
  • Subreddits — name, description, subscriber count, active users, rules summary, community type, NSFW flag, icon/banner images, flair list, wiki status
  • Media items — direct download URL, audio URL (separate stream), thumbnail, dimensions, duration, all quality formats (DASH / HLS / fallback)
  • Trophies — name, icon URL, grant date
  • Subreddit rules — full rule text, short name, violation reason, creation date

15 Scraper Types

👤 User Types — Input: Username

user_profile Scrapes the complete public profile of a Reddit user. Returns bio, total karma, link karma, comment karma, awardee and awarder karma, account creation date, gold status, moderator status, snoovatar image, and full profile subreddit metadata.

user_posts Fetches all posts submitted by the user. Pulls from hot, new, top-all, and controversial feeds simultaneously for maximum coverage. Each post includes full metadata: title, score, upvote ratio, flair, subreddit, body text, awards, gallery items, and video information.

user_comments Retrieves the user's most recent comments sorted by both new and top. Each comment includes the full body text, score, post context, subreddit, depth, awards, and whether the commenter is the original post author.

user_trophies Returns all Reddit trophies earned by the user, including the trophy name, description, icon URL, and the date it was granted.


📋 Subreddit Types — Input: Subreddit Name

subreddit_posts The core feed scraper. Fetches posts from the subreddit across hot, new, top (all time / year / month), rising, and controversial feeds in parallel. Each post is fully parsed with score, flair, awards, gallery items, video info, and author data.

subreddit_about Returns the full community metadata: title, public description, long description with HTML, submit text, subscriber count, active user count, creation date, community type, icon image, banner image, NSFW flag, wiki status, flair list, and spoiler settings.

subreddit_rules Returns all community rules with the full rule description, short name, violation reason, kind (link, comment, or all), and creation timestamp.


📝 Post Types — Input: Post URL or Video URL

post_details Scrapes the complete data of a single Reddit post. Returns title, selftext body, HTML body, score, upvote ratio, flair, author flair, domain, post hint, total awards, award list with icons and coin prices, gallery items with captions, preview images at all resolutions, video info, and crosspost data.

post_comments Fetches the full nested comment tree for a post using top, best, and new sort orders. Every comment includes body text, score, depth, parent ID, author, flair, awards, edit status, controversiality, and child comment IDs for tree reconstruction.
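Because every comment record carries its own ID, parent ID, and depth, the flat dataset can be folded back into a nested tree. A minimal sketch (the `comment_id` and `parent_id` key names are illustrative; check the actual field names in your dataset output):

```python
def build_comment_tree(comments):
    """Rebuild a nested comment tree from flat comment records.

    Assumes each record has `comment_id` and `parent_id` keys
    (names are illustrative, not guaranteed by the actor).
    """
    by_id = {c["comment_id"]: {**c, "children": []} for c in comments}
    roots = []
    for node in by_id.values():
        parent = by_id.get(node.get("parent_id"))
        if parent:
            parent["children"].append(node)
        else:
            roots.append(node)  # top-level comment (its parent is the post)
    return roots
```

Call it on the list of comment records from one post to get a list of top-level comments, each with its replies nested under `children`.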

post_media Extracts all media associated with a post: gallery images (with captions, dimensions, and outbound links), preview images at every available resolution, and embedded video metadata (DASH URL, HLS URL, fallback download URL, audio URL, dimensions, duration, has-audio flag).

video_metadata Returns all available stream formats for a Reddit-hosted video: the best fallback MP4 URL, the DASH adaptive manifest, the HLS adaptive manifest, and the separate audio-only stream URL. Also includes dimensions, duration, and whether audio is available.


📥 Downloader Types

video_downloader — Input: Post URL or Video URL Returns a structured download package: the best playable video URL, the audio-only stream URL (separate track for v.redd.it videos), and a complete list of all available quality formats with labels, URLs, and format notes. Ready to feed directly into a download pipeline.
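Since v.redd.it serves video and audio as separate streams, a download pipeline typically muxes the two with ffmpeg's stream copy. A hedged sketch of building that command from a download record (the `video_url`/`audio_url` key names are assumptions; map them to the actual fields in your output):

```python
def ffmpeg_merge_command(record, out_path="output.mp4"):
    """Build an ffmpeg command that muxes the separate video and audio
    streams of a v.redd.it download record into a single file.

    The `video_url` / `audio_url` key names are illustrative.
    """
    video, audio = record["video_url"], record.get("audio_url")
    if not audio:
        # No separate audio track: just remux the video stream.
        return ["ffmpeg", "-i", video, "-c", "copy", out_path]
    return ["ffmpeg", "-i", video, "-i", audio, "-c", "copy", out_path]
```

Pass the returned list to `subprocess.run` once the streams have been fetched (or let ffmpeg fetch the URLs directly).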

post_media_downloader — Input: Post URL or Video URL Expands every piece of media in a post into individual download records. A gallery post with 12 images becomes 12 separate output records, each with a direct download URL, thumbnail, dimensions, and the full post context (subreddit, score, author, creation date). Ideal for bulk media collection.

posts_with_media — Input: Subreddit Name or Username Crawls a subreddit or user feed and keeps only posts that contain media. Each qualifying post is returned as a full post record, and every media item within that post is also returned as a separate download record. Supports all Sort Order and Time Filter options.


🔍 Search Type — Input: Search Keyword

search_results Runs a keyword search across all of Reddit. Returns matching posts (title, score, comment count, URL, author, flair, subreddit, body text) and matching subreddits (name, description, subscriber count). Sort by relevance, new, or top. Apply a Time Filter for time-bounded searches.


🔀 ALL Mode

all The most powerful mode of this Reddit scraper. Fill in any combination of Username, Subreddit, Post URLs, and Search Keyword. The actor automatically builds and runs every applicable scraper type for each field you have filled in — in parallel, as separate jobs. A single run with a username, a subreddit, and a keyword will produce user posts, user comments, user profile, subreddit posts, subreddit about, search results, and more, all in one dataset.


Input Fields

Field | Type | Required | Description
scraper_type | select | yes | The type of data to scrape (see 15 types above)
username | text | conditional | Reddit username — required for user types and posts_with_media
subreddit | text | conditional | Subreddit name — required for subreddit types and posts_with_media
post_urls | URL list | conditional | One or more Reddit post/video URLs — required for post and downloader types
search_keyword | text | conditional | Search phrase — required for search_results
max_results | number | no | Cap the number of items per data bucket (0 = no limit)
sort_by | select | no | Sort order: hot, new, top, controversial, rising, relevance
time_filter | select | no | Time window for top/controversial sorts: hour, day, week, month, year, all
min_score | number | no | Exclude posts/comments below this upvote score (0 = no filter)

Accepted Input Formats

Username: spez or u/spez
Subreddit: python or r/python
Post URL: https://www.reddit.com/r/learnpython/comments/1rnzcbl/
Video URL: https://v.redd.it/abc123def456
Keyword: python tutorial
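If you feed inputs into the actor programmatically, both prefixed and bare forms are accepted, so a pre-run normalization step is optional. A small sketch of stripping the optional prefixes yourself:

```python
def normalize_username(name):
    """Accept both 'spez' and 'u/spez' forms of a username."""
    return name.strip().removeprefix("u/")

def normalize_subreddit(name):
    """Accept both 'python' and 'r/python' forms of a subreddit name."""
    return name.strip().removeprefix("r/")
```

`str.removeprefix` requires Python 3.9+; on older versions, slice off the prefix manually.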

Output Format

Every record pushed to the Apify dataset includes:

{
  "_item_type": "post",
  "_scraper_type": "subreddit_posts",
  "_reddit_input": "r/python",
  "_scraped_at": "2026-03-10T11:03:32Z",
  "_engines": ["E1", "E2", "E3", "E4"],
  "post_id": "1rnzcbl",
  "title": "What is the best way to learn Python in 2026?",
  "score": 2847,
  "author": "user123",
  "subreddit": "learnpython",
  "scraped_by": ["E1", "E3"]
}

The _item_type field can be: post, comment, user, subreddit, media, trophy, subreddit_rule, generic.
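Because every record is tagged, splitting a mixed dataset (for example from an all-mode run) into per-type buckets is a one-pass operation. A minimal sketch:

```python
from collections import defaultdict

def bucket_by_item_type(records):
    """Group dataset records into per-type buckets using the
    `_item_type` tag that every record carries."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec.get("_item_type", "generic")].append(rec)
    return dict(buckets)
```

Applied to a downloaded dataset, this yields keys like `"post"`, `"comment"`, and `"media"`, each mapped to its list of records.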


Filters

Max Results

Caps the number of items returned per data bucket after deduplication. Useful when you need exactly N posts without processing the full feed. Set to 0 to return everything collected.

Tip: Reddit returns up to 100 items per feed request. Leave Sort Order empty to have this Reddit scraper fetch all available sort feeds in parallel and merge the results — giving you more unique items than any single feed.
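The merge-then-dedupe-then-cap behavior described above can be sketched as follows (an illustration of the idea, not the actor's internal code; the `post_id` key name matches the output example below):

```python
def merge_feeds(feeds, max_results=0):
    """Merge several sort feeds, drop duplicate posts by ID, and
    optionally cap the result (0 = no cap), mirroring how Max Results
    is applied after deduplication."""
    seen, merged = set(), []
    for feed in feeds:
        for post in feed:
            pid = post["post_id"]
            if pid not in seen:
                seen.add(pid)
                merged.append(post)
    return merged[:max_results] if max_results else merged
```

Since each feed returns up to 100 items but the feeds overlap only partially, the merged pool is larger than any single feed.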

Sort Order

Controls which feed is fetched. Leave empty (default) to fetch all applicable sort orders simultaneously for maximum coverage. Set a specific value to fetch only that feed.

  • hot — currently trending content
  • new — most recently submitted
  • top — highest scored (pair with Time Filter for a specific window)
  • controversial — most debated (pair with Time Filter)
  • rising — gaining traction right now (subreddit feeds only)
  • relevance — best keyword match (search only)

Time Filter

Restricts top and controversial sort results to a specific time window: hour, day, week, month, year, or all.

Minimum Score

Filters out posts and comments below a given upvote score. Useful for collecting only high-engagement content.


Use Cases

This Reddit scraper is used for:

  • Market research — monitor brand mentions, product feedback, and competitor discussions across relevant subreddits
  • Sentiment analysis — collect posts and comments at scale for NLP and opinion mining pipelines
  • Academic research — gather structured Reddit datasets for social science and computational research
  • Content discovery — find top-performing posts, trending topics, and rising content in any community
  • Media collection — download images, videos, and galleries from subreddits or user profiles in bulk
  • Lead generation — find active users in niche subreddits discussing specific topics
  • Competitor monitoring — track mentions and discussions of competing products or services
  • Trend tracking — use time-filtered top/controversial feeds to surface what the Reddit community cares about in a given period

Frequently Asked Questions

Do I need a Reddit account or API key? No. This Reddit scraper uses publicly available Reddit data endpoints. No account, API key, or OAuth credentials are required.

Can I scrape multiple subreddits or users in one run? Use all mode and fill in all the input fields you need. For batch processing across many inputs, run the actor multiple times via the Apify API or schedule.
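A batch run via the Apify API can be sketched with the official `apify-client` Python package (the actor ID below is a placeholder; substitute the real ID from the store listing, and the input field names follow the Input Fields table above):

```python
def build_run_inputs(subreddits, max_results=100):
    """One run input per subreddit, for batch scheduling via the API."""
    return [
        {"scraper_type": "subreddit_posts",
         "subreddit": sub,
         "max_results": max_results}
        for sub in subreddits
    ]

def run_batch(token, subreddits, actor_id="<your-actor-id>"):
    """Run the actor once per subreddit and stream every record back."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    for run_input in build_run_inputs(subreddits):
        run = client.actor(actor_id).call(run_input=run_input)
        # Each finished run exposes its dataset of scraped records.
        yield from client.dataset(run["defaultDatasetId"]).iterate_items()
```

For recurring collection, the same inputs can instead be attached to an Apify schedule.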

How do I get more than 100 posts? Leave the Sort Order field empty. The Reddit scraper will fetch hot, new, top, and controversial feeds in parallel and merge the results, giving you a much larger pool of unique posts than any single feed provides.

Can I download Reddit videos? Yes. Use video_downloader for a single post's best video and audio URL, or post_media_downloader to expand all media in a post into individual download records.

Why are some posts missing? Reddit's public feeds may not include all posts (spam-filtered, removed, very new, or low-scored posts may be excluded). Using multiple sort feeds via the empty Sort Order setting gives the highest coverage.

What does the all mode do exactly? It looks at every input field you have filled in and generates a full job list — every applicable scraper type for each input. For example: if you fill in a username and a subreddit, it runs user_profile, user_posts, user_comments, user_trophies, posts_with_media (for the user), subreddit_posts, subreddit_about, subreddit_rules, and posts_with_media (for the subreddit) — all in a single run, outputting everything to one dataset.
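The expansion logic described in that answer can be sketched as a simple field-to-types mapping (an illustration only, not the actor's actual code; the post-URL type list is inferred from the scraper-type catalog above):

```python
# Which scraper types apply to which input field.
TYPES_BY_FIELD = {
    "username": ["user_profile", "user_posts", "user_comments",
                 "user_trophies", "posts_with_media"],
    "subreddit": ["subreddit_posts", "subreddit_about",
                  "subreddit_rules", "posts_with_media"],
    "post_urls": ["post_details", "post_comments", "post_media",
                  "video_metadata", "video_downloader",
                  "post_media_downloader"],
    "search_keyword": ["search_results"],
}

def expand_all_mode(run_input):
    """List the (scraper_type, field, value) jobs that `all` mode
    would generate: one job per applicable type for every filled field."""
    jobs = []
    for field, types in TYPES_BY_FIELD.items():
        value = run_input.get(field)
        if value:
            jobs.extend((t, field, value) for t in types)
    return jobs
```

Filling in more fields multiplies the job list, which is why a single all-mode run can populate every output bucket at once.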


Example Runs

Scrape a user's complete profile and post history:

  • Scraper Type: user_posts
  • Username: spez
  • Sort Order: (empty — all feeds)

Get all top posts from a subreddit this year:

  • Scraper Type: subreddit_posts
  • Subreddit: python
  • Sort Order: top
  • Time Filter: year

Download all media from a subreddit:

  • Scraper Type: posts_with_media
  • Subreddit: EarthPorn
  • Max Results: 200

Search Reddit for a keyword and filter by quality:

  • Scraper Type: search_results
  • Search Keyword: machine learning tutorial
  • Sort Order: relevance
  • Min Score: 50

Full data extraction in one run:

  • Scraper Type: all
  • Username: spez
  • Subreddit: python
  • Search Keyword: python tips

This Reddit scraper is built for reliability, completeness, and scale. If you find a Reddit data type that is not covered, please raise a feature request through Apify's support channels.