# 🔥 Reddit Scraper + Ads Library - Posts \[NO COOKIES] ✅ (`unseenuser/reddit-scraper`) Actor

The most complete Reddit scraper on Apify. Extract posts, comments with full reply chains, subreddit metadata, user profiles, AND ads from the Reddit Ads Library - all without API keys, cookies, or login. Built for market researchers, content marketers, and competitive intel teams.

- **URL**: https://apify.com/unseenuser/reddit-scraper.md
- **Developed by:** [Unseen User](https://apify.com/unseenuser) (community)
- **Categories:** Developer tools, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded, 2 bookmarks
- **User rating**: 5.00 out of 5 stars

## Pricing

$4.00 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).
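
For example, here is a minimal Python sketch of running this Actor with the subreddit-posts input shown in the Quick Start section. The `build_input` and `run_actor` helpers are illustrative, not part of any published client library; only the `apify-client` calls (`ApifyClient`, `actor(...).call`, `dataset(...).iterate_items`) are real API.

```python
import os

def build_input(subreddits, sort_by="top", time_filter="week", limit=100):
    """Build the JSON input for this Actor's subreddit_posts mode.

    Field names match the Quick Start examples in this README;
    this helper itself is illustrative, not part of any client library.
    """
    return {
        "mode": "subreddit_posts",
        "subreddits": subreddits,
        "sortBy": sort_by,
        "timeFilter": time_filter,
        "maxPostsPerSubreddit": limit,
    }

def run_actor(run_input):
    """Run the Actor and collect dataset items (needs `pip install apify-client`)."""
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor("unseenuser/reddit-scraper").call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

if __name__ == "__main__" and os.environ.get("APIFY_TOKEN"):
    items = run_actor(build_input(["r/SaaS", "r/Entrepreneur"]))
    print(f"Fetched {len(items)} posts")
```

The run blocks until the Actor finishes, then streams the default dataset, so for long scrapes you may prefer the asynchronous `start(...)` variant of the client.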


# README

## 🔥 Reddit Scraper + Reddit Ads Library - The Only Dual-Mode Reddit Tool on Apify

> **Reddit content + Reddit Ads Library in one Actor.** Every other Reddit scraper on Apify gives you posts and comments. This one also gives you the **Reddit Ads Library** - the one feature no competitor offers.

**Search across all of Reddit · Pull subreddit feeds and full comment trees · Spy on competitor ads.** No Reddit account, no API key from Reddit, no Pushshift workarounds.

---

### 🟢 Two modes, one Actor - what makes this different

| | **Mode 1: Reddit content** 📰💬🔎 | **Mode 2: Reddit Ads Library** 💰🎯 |
|---|---|---|
| **What it does** | Scrape any subreddit, every comment thread, search across all of Reddit | Search the official Reddit Ads Library - see real competitor ads, creative, targeting |
| **Why it's special** | Works without a Reddit Data API license, no rate-limit pain, no Pushshift detours | **Almost unique on Apify** - virtually no other Actor surfaces Reddit ad creative |
| **Best for** | Customer voice research, sentiment analysis, brand monitoring, viral content tracking | Competitor ad-spy on Reddit (still niche, far less competitive than Meta/Google ad-spy markets) |
| **7 modes total** | `subreddit_posts`, `subreddit_search`, `subreddit_info`, `post_comments`, `reddit_search` | `ads_search`, `ad_details` |

If you only care about content, you get the cleanest non-API Reddit scraper on the platform. If you also care about ads, you get the only Reddit ad-spy tool on Apify - bundled.

---

### ⚡ Why this Actor (vs. Reddit's own API and other scrapers)

- **No Reddit account, no Reddit API key.** Reddit's official Data API has gotten progressively more restrictive since 2023 - tight rate limits, costly for commercial use, and explicit AI-training prohibitions in their terms. This Actor goes around that pain entirely.
- **Reddit Ads Library access** - the killer feature that doesn't exist anywhere else on Apify. Filter by industry, budget, format, placement, and objective.
- **7 modes in one Actor** - subreddit feeds, in-subreddit search, site-wide search, full comment trees, subreddit metadata, ad library search, single-ad detail.
- **Comment trees are nested**, not flat - the Actor reassembles `parentCommentId -> replies` into a true tree client-side, with depth pruning.
- **Pay-per-result** - you only pay for what you actually pull, no monthly subscription.
- **Sentiment-ready output** - comments include scores, awards, OP flag, removed/deleted flags so removal-rate and toxicity analyses work out of the box.

---

### 🎯 Use cases - by mode

#### 📰💬🔎 Content mode

1. **Customer voice research** - mine niche subreddits for unfiltered customer language. Reddit is the highest-signal text-social platform for "how real people actually talk about a problem."
2. **Sentiment analysis** - posts + comments with upvote scores feed straight into sentiment pipelines. Removed/deleted flags let you measure toxicity and moderator activity.
3. **Brand monitoring** - search across all of Reddit for every mention of your brand, weekly. Cheaper and higher-signal than most listening tools.
4. **Viral content tracking** - sort by `top` over `day`/`week`/`month` to see what's catching fire in any subreddit.
5. **Subreddit moderator analytics** - track your community + competitors' communities (rules, mod lists, post velocity).
6. **Academic research / journalism** - Reddit is one of the canonical text-social corpora.

#### 💰🎯 Ads Library mode

1. **Competitor ad-spy on Reddit** - see exactly which creative your competitors are running, what budget tier (LOW/MEDIUM/HIGH), what objective (conversions, clicks, video views), and what placements (feed vs. comments).
2. **Industry benchmarking** - filter by industry (FINANCIAL_SERVICES, HEALTH_AND_BEAUTY, TECH_B2C, etc.) to see who's spending where.
3. **Creative ideation** - the headline + body + CTA from every ad becomes a swipe file.
4. **Format analysis** - which industries lean into VIDEO vs. IMAGE vs. CAROUSEL vs. FREE_FORM.
5. **First-mover advantage** - Reddit ad-spy is far less saturated than Meta or Google ad-spy. The data is there; almost nobody is selling tooling around it yet.

---

### 🚀 Quick Start

#### Subreddit posts (content mode)

```json
{
  "mode": "subreddit_posts",
  "subreddits": ["r/SaaS", "r/Entrepreneur"],
  "sortBy": "top",
  "timeFilter": "week",
  "maxPostsPerSubreddit": 100
}
```

#### Brand monitoring across all of Reddit (content mode)

```json
{
  "mode": "reddit_search",
  "searchQueries": ["your brand name"],
  "sortBy": "new",
  "timeFilter": "week",
  "maxPostsPerSubreddit": 200
}
```

#### Full comment tree for a post (content mode)

```json
{
  "mode": "post_comments",
  "postUrls": [
    "https://www.reddit.com/r/AskReddit/comments/1ldr6b9/..."
  ],
  "sortBy": "top",
  "maxCommentsPerPost": 200,
  "maxCommentDepth": 5
}
```

#### Reddit Ads Library spy (ads mode) - the killer feature

```json
{
  "mode": "ads_search",
  "adSearchQueries": ["insurance"],
  "adIndustries": ["FINANCIAL_SERVICES"],
  "adBudgets": ["HIGH"],
  "adFormats": ["VIDEO"],
  "maxAdsPerSearch": 30
}
```

#### Single ad detail (ads mode)

```json
{
  "mode": "ad_details",
  "adIds": ["79e005f1e09ec72245e904d87d2a0869"]
}
```

#### Subreddit metadata (content mode)

```json
{
  "mode": "subreddit_info",
  "subreddits": ["AskReddit", "MachineLearning"]
}
```

#### Search inside specific subreddits (content mode)

```json
{
  "mode": "subreddit_search",
  "subreddits": ["MachineLearning"],
  "searchQueries": ["AI agents"],
  "sortBy": "top",
  "timeFilter": "month"
}
```

***

### 🧭 How the input form is laid out

The input form is grouped into emoji-labeled sections. **You only fill the section that matches your mode** - everything else stays collapsed.

1. **STEP 1: What do you want to scrape?** - pick the mode. The dropdown labels point you to the section to fill.
2. **📰 Section** - subreddit posts / subreddit info / subreddit search (`subreddits`, sort, time window)
3. **🔎 Section** - reddit search / subreddit search (`searchQueries`, scope, optional restrict-to-subreddit)
4. **💬 Section** - post comments (`postUrls`)
5. **💰 Section** - ads library search (`adSearchQueries` + optional industry/budget/format/placement/objective filters)
6. **🎯 Section** - ad details (`adIds`)
7. **⚙️ Limits** - caps on posts, comments, depth, ads (apply to every mode)

Every field title leads with the emoji of the mode it belongs to, and every description starts with `REQUIRED FOR:` or `ONLY USED FOR:`. Most fields ship with sensible prefills, so for the common cases you can hit **Start** without typing anything.

***

### ⚙️ All 7 modes

| Mode | Section to fill | Purpose |
|---|---|---|
| `subreddit_posts` | 📰 | Posts from a subreddit, sorted/timeframed |
| `reddit_search` | 🔎 | Search across all of Reddit (brand monitoring) |
| `post_comments` | 💬 | Full nested comment tree for a post URL |
| `subreddit_search` | 📰 + 🔎 | Search inside specific subreddits |
| `subreddit_info` | 📰 | Subreddit metadata - subscribers, rules, description |
| `ads_search` | 💰 | Reddit Ads Library - filter by industry, budget, format, placement, objective |
| `ad_details` | 🎯 | Full detail for a single ad ID |

***

### 🔧 Technical Details

- **API Provider:** Scrape Creators (`https://scrapecreators.com`)
- **Auth:** `x-api-key` header, configured via `SCRAPECREATORS_API_KEY` env var
- **Endpoints used (7):**
  - `GET /v1/reddit/subreddit/details`
  - `GET /v1/reddit/subreddit`
  - `GET /v1/reddit/subreddit/search`
  - `GET /v1/reddit/post/comments`
  - `GET /v1/reddit/search`
  - `GET /v1/reddit/ads/search`
  - `GET /v1/reddit/ad`
- **Pagination:** `after` cursor, propagated automatically until `maxPostsPerSubreddit` is reached
- **Comment tree:** assembled client-side with `maxCommentDepth` pruning + total-node cap
- **Subreddit names:** accept `r/foo`, `/r/foo`, or `foo` - prefix is stripped silently
- **Reddit Ads Library:** Reddit caps results at ~30 per query; `maxAdsPerSearch` is a hard cap on top of that
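
The cursor-following loop can be sketched like this. `fetch_page` is a hypothetical stand-in for the underlying Scrape Creators call, not its real signature; the `after`-token flow itself matches the behavior described above.

```python
def paginate(fetch_page, max_items):
    """Follow an `after` cursor until max_items is reached or the cursor exhausts.

    `fetch_page` stands in for the underlying API call: it takes a cursor
    (None for the first page) and returns (items, next_cursor).
    """
    collected, cursor = [], None
    while len(collected) < max_items:
        items, cursor = fetch_page(cursor)
        collected.extend(items)
        if not cursor or not items:
            break  # cursor exhausted, or empty page
    return collected[:max_items]

# Demo with a fake three-page feed:
pages = {None: (["p1", "p2"], "c1"), "c1": (["p3", "p4"], "c2"), "c2": (["p5"], None)}
posts = paginate(lambda cursor: pages[cursor], max_items=4)
# posts == ["p1", "p2", "p3", "p4"]
```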

***

### 📤 Output

Every row written to the dataset carries two top-level discriminator fields so you can filter cleanly:

| Field | Meaning |
|---|---|
| `_recordType` | `post` · `post_with_comments` · `subreddit_info` · `ad` |
| `_sourceMode` | The input mode that produced this row (e.g. `subreddit_posts`, `reddit_search`, `ads_search`) |

Null fields and empty arrays are stripped before push, so rows are compact and only carry data that's actually present.
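
Because every row carries both discriminators, splitting a mixed dataset downstream is a one-liner. A quick sketch with illustrative sample rows (field names are from the schemas in this README; the values are made up):

```python
# Sample dataset rows, shaped like this Actor's output (values illustrative):
rows = [
    {"_recordType": "post", "_sourceMode": "subreddit_posts", "title": "Launch feedback?"},
    {"_recordType": "ad", "_sourceMode": "ads_search", "advertiserName": "u_thepennyhoarder"},
    {"_recordType": "post", "_sourceMode": "reddit_search", "title": "Brand mention"},
]

# Split by record type, then narrow by the mode that produced the row:
posts = [r for r in rows if r["_recordType"] == "post"]
ads = [r for r in rows if r["_recordType"] == "ad"]
search_hits = [r for r in posts if r["_sourceMode"] == "reddit_search"]
```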

#### 🗂 Dataset views (preset tables in the Apify UI)

The Actor ships with five views you can switch between in the dataset preview:

| View | What it shows |
|---|---|
| **🗂 Overview** | All rows, key columns, sortable. Great first look. |
| **📰 Posts** | Subreddit/search posts only - title, author, upvotes, media, flair |
| **💬 Threads** | One row per post URL; expand to see the full nested comment tree |
| **💰 Ads Library** | Ad ID, advertiser, headline, body, industry, budget, placements, thumbnail |
| **ℹ️ Subreddits** | Display name, description, subscriber count, rules, mods |

Views are defined in [`.actor/dataset_schema.json`](./.actor/dataset_schema.json).

#### 🗝 Key-value store collections

In addition to the dataset, the Actor writes a few high-value records as standalone JSON files in the run's key-value store, organized by prefix:

| Prefix | What's there |
|---|---|
| `RUN_SUMMARY` | One JSON object per run - mode, inputs, item count, timestamps |
| `subreddit-<name>` | One file per subreddit returned by `subreddit_info` |
| `thread-<postId>` | One file per post returned by `post_comments` (full nested tree) |
| `ad-<adId>` | One file per ad returned by `ads_search` or `ad_details` |

Defined in [`.actor/key_value_store_schema.json`](./.actor/key_value_store_schema.json).
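
Once you list the run's key-value store keys (for example via the Apify client), selecting one collection is just a prefix filter. A sketch with hypothetical sample keys following the prefixes above:

```python
def keys_with_prefix(keys, prefix):
    """Select one key-value store collection by its key prefix."""
    return [k for k in keys if k.startswith(prefix)]

# Hypothetical keys as a run might produce them:
keys = [
    "RUN_SUMMARY",
    "subreddit-AskReddit",
    "thread-1ldr6b9",
    "ad-79e005f1e09ec72245e904d87d2a0869",
]
threads = keys_with_prefix(keys, "thread-")
# threads == ["thread-1ldr6b9"]
```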

#### 🌐 OpenAPI (run the Actor over HTTP)

[`.actor/openapi.json`](./.actor/openapi.json) is a full OpenAPI 3 description of how to invoke this Actor over Apify's API - including a synchronous `run-sync-get-dataset-items` endpoint and request/response schemas for every record type. Drop it into Postman/Insomnia/Bruno and you have a typed client in seconds.

***

### 🧩 Output row schemas - side by side

#### Content mode (post)

```json
{
  "_recordType": "post",
  "_sourceMode": "subreddit_posts",
  "postId": "1ldr6b9",
  "url": "https://www.reddit.com/r/AskReddit/comments/1ldr6b9/...",
  "permalink": "https://www.reddit.com/r/AskReddit/comments/1ldr6b9/...",
  "title": "What are your thoughts on...",
  "selftext": "...",
  "authorUsername": "Ecstatic-Medium-6320",
  "authorId": "t2_aelahp9al",
  "subreddit": "AskReddit",
  "subredditId": "t5_2qh1i",
  "postedAt": "2025-06-17T15:48:36.000Z",
  "scoreUpvotes": 12606,
  "upvoteRatio": 0.93,
  "commentCount": 1921,
  "isStickied": false,
  "isLocked": false,
  "isNsfw": false,
  "isSpoiler": false,
  "mediaType": "text",
  "mediaUrls": ["..."],
  "domain": "self.AskReddit",
  "scrapedAt": "..."
}
```

#### Content mode (comment, recursive)

```json
{
  "commentId": "...",
  "postId": "...",
  "parentCommentId": null,
  "depth": 0,
  "authorUsername": "...",
  "text": "...",
  "postedAt": "...",
  "scoreUpvotes": 42,
  "isStickied": false,
  "isOP": false,
  "isDeleted": false,
  "isRemoved": false,
  "replies": [ /* same shape, recursive */ ]
}
```

#### Ads Library mode (ad)

```json
{
  "_recordType": "ad",
  "_sourceMode": "ads_search",
  "adId": "79e005f1e09ec72245e904d87d2a0869",
  "advertiserName": "u_thepennyhoarder",
  "headline": "What is a rich person's money tip you wish you knew sooner?",
  "body": "...full ad copy...",
  "industry": "OTHER",
  "budgetCategory": "HIGH",
  "objective": "CONVERSIONS",
  "placements": ["FEED", "COMMENTS_PAGE"],
  "mediaType": "TEXT",
  "ctaText": null,
  "ctaUrl": "self.thepennyhoarder",
  "thumbnailUrl": "https://b.thumbs.redditmedia.com/...",
  "postUrl": "https://www.reddit.com/r/u_thepennyhoarder/comments/.../",
  "scrapedAt": "..."
}
```

***

### ❓ FAQ

**Q: Reddit's official API got expensive and restrictive. Does this still work?**
A: Yes. This Actor doesn't touch Reddit's official Data API. Public Reddit data is accessed via Scrape Creators' Reddit endpoints, so you don't need a Reddit account, a Reddit API key, an OAuth app, or any of the rate-limited / pay-per-call pain that came with Reddit's 2023-2024 API changes. Pushshift's death also doesn't affect this Actor.

**Q: Why is Reddit's official API a worse path than this Actor?**
A: Three reasons:

1. **Cost.** Reddit's Data API charges per call once you exceed free-tier limits, and commercial use can run into thousands of dollars per month.
2. **Restrictions.** Reddit's Data API Terms explicitly restrict AI training and many commercial uses; this Actor lets you do the data extraction step, with downstream-use compliance left to you.
3. **Friction.** Setting up a Reddit OAuth app, managing tokens, handling rate limits, and rotating credentials is a project. This Actor is one env var.

**Q: What's special about the Reddit Ads Library mode?**
A: Reddit publishes an official ads transparency feed (similar to Meta's Ad Library), but it's poorly tooled and almost no Apify Actor surfaces it. This Actor is one of the few that does, with full filter support: industry, budget tier, format, placement, objective.

**Q: How does the Reddit Ads Library compare to Meta or Google ad-spy?**
A: Reddit ad-spy is far less saturated. Meta and Google ad-libraries have dozens of paid third-party tools at $50-$200/month. Reddit has barely any. If you sell into Reddit-active audiences (gaming, tech, finance, niche communities), this is a market the rest of the ad-spy ecosystem hasn't caught up to yet.

**Q: How does comment depth work?**
A: Comments come back as a nested tree. `maxCommentDepth=5` keeps replies up to 5 levels deep; deeper threads are pruned. `maxCommentsPerPost` is a hard cap on total nodes (depth-first counting).
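
The pruning described here can be sketched as follows. This is a minimal illustration assuming the nested `replies` shape from the output schemas above, not the Actor's actual code:

```python
def prune(comments, max_depth, max_nodes):
    """Depth-first prune: drop replies deeper than max_depth, cap total nodes."""
    count = 0

    def walk(nodes, depth):
        nonlocal count
        out = []
        for node in nodes:
            if depth > max_depth or count >= max_nodes:
                break  # prune this level (and everything under it)
            count += 1
            out.append(dict(node, replies=walk(node.get("replies", []), depth + 1)))
        return out

    return walk(comments, 0)

# A three-level thread: top-level "a" -> reply "b" -> reply "c"
tree = [{"commentId": "a", "replies": [
    {"commentId": "b", "replies": [{"commentId": "c", "replies": []}]}]}]
pruned = prune(tree, max_depth=1, max_nodes=10)
# "a" and "b" survive; "c" (depth 2) is pruned
```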

**Q: Are deleted/removed comments returned?**
A: Yes - structure is preserved with `isDeleted` / `isRemoved` flags so you can analyze removal rates without breaking thread shape.

**Q: Can I train AI on Reddit data?**
A: Reddit's Data API Terms restrict AI training. Even via third-party APIs, training commercial AI on Reddit content typically requires separate licensing from Reddit. Reddit has actively litigated this - see *Reddit v. Perplexity AI*. This Actor is not a substitute for a Reddit data license. See [Reddit's user agreement](https://www.redditinc.com/policies/user-agreement) and the addendum below.

**Q: Is this affiliated with Reddit?**
A: No.

**Q: How do you handle Reddit usernames?**
A: Returned as-is. Usernames are pseudonymous but should be treated as personal data under privacy law in your downstream processing.

**Q: How does pagination work?**
A: Each listing endpoint returns an `after` token. The Actor follows it until `maxPostsPerSubreddit` is satisfied or the cursor exhausts.

**Q: What about NSFW / quarantined subreddits?**
A: Returned as normal. Use `isNsfw` and `isQuarantined` flags if you need to filter downstream.

**Q: I want a brand-new ad-spy stack across every platform. Where do I start?**
A: See **Related scrapers** below - this Reddit Actor is one of four UnseenUser ad-library actors. Combined they cover Meta, LinkedIn, Google, and Reddit ads.

***

### 🔁 Related scrapers - build a full cross-platform ad intelligence suite

This Actor is one of UnseenUser's ad-library / social-listening series. Combine them for the most complete ad-spy stack on Apify:

- **[Meta Ad Library Scraper](https://apify.com/unseenuser/meta-ads)** - Facebook, Instagram, Threads, WhatsApp ads
- **[LinkedIn Ad Library Scraper](https://apify.com/unseenuser/LinkedIn-ads)** - B2B ad spy
- **[Google Ads Transparency Scraper](https://apify.com/unseenuser/Google-ads)** - Search, YouTube, Display, Shopping

[See all 16 scrapers by unseenuser →](https://apify.com/unseenuser)

**Bundle pitches:**

- **Cross-platform ad-spy:** Reddit + Meta + LinkedIn + Google = every paid channel a competitor is buying
- **Cross-platform brand monitoring:** Reddit + X/Twitter + YouTube comments = full text-social listening
- **Sentiment stack:** Reddit posts/comments are the highest-signal raw material for consumer sentiment work

***

## 📜 Master Terms of Service V4.0

**Version:** 4.0
**Effective Date:** May 5, 2026

***

### 0. ACCEPTANCE BY USE - IMPORTANT

**Read this section first.**

These Terms of Service ("Terms") form a binding legal agreement between you ("User," "you," "your") and **UnseenUser**, the Publisher of this Apify actor ("UnseenUser," "the Publisher," "we," "us," "our").

#### 0.1 How You Accept These Terms

You accept these Terms by any of the following actions, each of which constitutes a clear, affirmative act of acceptance:

- **(a) Running the Actor** - Initiating any execution of the Actor on the Apify platform
- **(b) Using any output returned by the Actor** for any purpose
- **(c) Continuing to access the Actor's listing or documentation** after these Terms are visible

#### 0.2 Continuing Acceptance

Each time you run the Actor or use its outputs, you reaffirm your acceptance of the then-current Terms. If you do not agree to these Terms or any subsequent update, you must stop using the Actor immediately.

#### 0.3 No Anonymous Acceptance

You cannot disclaim acceptance by:

- Failing to read these Terms before running the Actor
- Running the Actor through automated systems
- Sharing your Apify account with others who may not have read these Terms

By the act of running the Actor on Apify, you bind yourself, your organization (if applicable), and any individuals or systems acting on your behalf or under your authority.

#### 0.4 If You Do Not Accept

If you do not agree to these Terms, you must not run the Actor. No use is authorized without acceptance.

***

### PREAMBLE - UNDERSTANDING THE ARCHITECTURE

Before using the Actor, please understand the technical architecture of the service:

#### The Data Flow

```
You (User) → Apify Platform → Actor (software) → Third-Party API → Source Platform
                                                       ↓
You (User) ← Apify Platform ← Actor (software) ← Third-Party API
```

#### What Each Party Does

- **You (the User):** Run the Actor on the Apify platform with input parameters you choose
- **Apify:** Operates the cloud infrastructure that hosts and executes Actors. Apify is a Czech-incorporated company (Apify Technologies s.r.o.) governed by its own Terms of Service.
- **The Publisher (us):** Publishes software code (the Actor) on Apify's platform. The Actor is a thin wrapper that translates your input into requests to a third-party API and returns the API's responses to you. The Publisher does not operate scraping infrastructure. The Publisher does not store or retain data returned by the Actor. The Publisher does not see, log, or process the personal data of any individuals returned in the Actor's outputs beyond what is incidental to passing the data through.
- **Third-Party API Provider:** HarvestAPI (https://harvest-api.com) or Scrape Creators (https://scrapecreators.com). These are independent third-party companies that operate scraping infrastructure and return data from source platforms.
- **Source Platform:** LinkedIn, TikTok, YouTube, Reddit, Linktree, etc. These are the platforms whose publicly visible data is accessed by the Third-Party API Providers.

#### Why This Matters

Your relationship with the Publisher is that of a software user to a software vendor. The Publisher has the responsibilities of a software vendor (functional code, accurate documentation) and the limits of one (the Publisher is not responsible for how you use the data you obtain).

***

These Terms operate alongside but do not replace:

- Apify's Terms of Service and Acceptable Use Policy (governing your relationship with Apify)
- HarvestAPI Terms of Service and Scrape Creators Terms of Service (governing the underlying data infrastructure)
- Source Platform terms (LinkedIn, TikTok, etc.) governing the public data accessed
- Applicable law in your jurisdiction and the jurisdictions of data subjects

These Terms incorporate the actor-specific addendum published in each Actor's individual listing ("Addendum"). In the event of a conflict, the more restrictive provision applies.

***

### 1. NATURE OF THE SERVICE

#### 1.1 What the Actor Is

The Actor is a software program published on the Apify platform. Each Actor:

- (a) Accepts structured input from you on the Apify platform
- (b) Translates that input into HTTP requests to a third-party API operated by HarvestAPI or Scrape Creators
- (c) Receives HTTP responses from that third-party API
- (d) Returns the response data to you in a structured format on the Apify platform

The Actor's source code is hosted on Apify's infrastructure. The Actor runs in Apify's cloud, not on the Publisher's servers. The Publisher operates no servers running the Actor.

#### 1.2 What the Actor Is Not

The Actor is not:

- (a) A scraping tool - the Publisher does not operate scraping infrastructure, proxies, headless browsers, or fake accounts
- (b) A direct connection to any source platform - connections to source platforms are made by HarvestAPI / Scrape Creators
- (c) A data storage or data retention service - the Publisher does not maintain a database of any data the Actor returns
- (d) A licensed access channel to LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta, Linktree, or any other source platform
- (e) Affiliated with, endorsed by, sponsored by, or authorized by any source platform

#### 1.3 The Publisher's Limited Role

The Publisher's role is limited to:

- (a) Designing and writing the Actor's source code
- (b) Publishing the Actor on the Apify Store
- (c) Maintaining the Actor (updating it when API providers change schemas)
- (d) Providing documentation and customer support via Apify's contact mechanism

The Publisher is a software vendor, similar to a developer who publishes an app on the Apple App Store or Google Play Store. The Publisher is not a data provider, data broker, data processor, or data controller for purposes of GDPR, CCPA, Israel's Privacy Protection Law, or equivalent.

#### 1.4 The Third-Party API Providers' Role

HarvestAPI (https://harvest-api.com) and Scrape Creators (https://scrapecreators.com) are independent third-party companies. They:

- (a) Operate the actual data scraping infrastructure
- (b) Maintain relationships with source platforms (or accept the operational risk of accessing public data without such relationships)
- (c) Are themselves Apify publishers (HarvestAPI publishes 9+ actors directly; Scrape Creators publishes 10+)
- (d) Provide their own Terms of Service governing their operations
- (e) Are responsible for compliance obligations relating to the data collection itself

The Publisher is a customer of these providers. The Publisher is not their agent, partner, or representative.

***

### 2. WHO MAY USE THE ACTOR

#### 2.1 Eligibility

You may use the Actor only if:

- (a) You are at least 18 years old or the age of majority in your jurisdiction
- (b) You have legal capacity to enter into binding contracts
- (c) You are not located in or resident of a country subject to comprehensive sanctions by the United States, European Union, United Kingdom, or Israel
- (d) You are not on any prohibited persons list

#### 2.2 User Representations

By using any Actor, you represent and warrant that:

- (a) **Truthful identity:** Information you provide about your identity and intended use is accurate
- (b) **Lawful intent:** Your intended use complies with applicable law in your jurisdiction
- (c) **Source platform compliance:** You will independently comply with the Terms of Service of any source platform whose data you obtain through the Actor
- (d) **Data subject rights:** Where Actor outputs include personal data, you will respect data subject rights under applicable law
- (e) **No prohibited use:** You will not use the Actor for any of the purposes prohibited in Section 4

These representations are continuous - they must remain true throughout your use.

***

### 3. PERMITTED USES

The Actor may be used for any lawful purpose, including:

- Market research and competitive analysis
- Academic research
- Journalism and investigative reporting
- Internal business intelligence
- Brand monitoring
- Recruitment research where consistent with applicable employment law
- Building products that further process publicly available information lawfully

Specific permitted uses for each Actor are described in that Actor's individual listing and Addendum.

***

### 4. PROHIBITED USES

You may not use the Actor for any of the following:

#### 4.1 Illegal Activity

Activity illegal under the law of your jurisdiction, the User's jurisdiction, or the jurisdiction of any data subjects.

#### 4.2 Harassment, Stalking, and Personal Targeting

- Compiling profiles for harassment, stalking, or doxxing
- Tracking individuals' movements or activities without their knowledge
- Building profiles of journalists, activists, dissidents, or vulnerable populations for retaliatory purposes

#### 4.3 Discrimination

- Using outputs for discriminatory employment, lending, housing, or insurance decisions based on protected characteristics
- Building lists for discriminatory purposes

#### 4.4 Spam and Unsolicited Commercial Communication

- Sending unsolicited marketing in violation of CAN-SPAM, CASL, GDPR, PECR, Israeli Anti-Spam Law (סעיף 30א לחוק התקשורת), or equivalent laws
- Building "lead lists" from scraped contacts without proper consent infrastructure
- Reselling contact data for spam purposes

#### 4.5 Fraud and Deception

- Identity theft or impersonation
- Generation of fake reviews, testimonials, or coordinated inauthentic behavior
- Election interference or political disinformation
- Securities fraud

#### 4.6 Source Platform Abuse

- Using outputs to circumvent technical protection measures of source platforms
- Creating fake accounts on source platforms based on Actor outputs
- Vote manipulation, engagement manipulation, or platform algorithm gaming
- Building services that competitively substitute for source platforms

#### 4.7 Reselling the Actor's Service

- Reselling raw Actor outputs as your own data product or scraping-as-a-service
- Sharing your Apify credentials to provide third parties indirect access
- Building competing API services using Actor outputs

#### 4.8 AI Training Without Authorization

- Using Actor outputs as training data for commercial AI/ML models without separate licensing authority from the source platform

#### 4.9 Sensitive Targeting

- Specifically targeting or profiling based on health conditions, sexual orientation, religious beliefs, political opinions, or other sensitive characteristics
- Targeting children under 16 (or local age of consent for data processing)

#### 4.10 Privacy Law Violations

- Processing personal data of EU/UK/California/Israeli residents without complying with applicable privacy law
- Failing to honor data subject access, deletion, or objection requests
- Processing data for purposes incompatible with its publication context

***

### 5. SOURCE PLATFORM TERMS - YOUR RESPONSIBILITY

#### 5.1 Acknowledgment

The Actor accesses publicly visible data on third-party platforms ("Source Platforms") through the Third-Party API Providers (HarvestAPI / Scrape Creators). Source Platforms include LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta/Facebook, Linktree, Komi, Pillar, Linkbio, Linkme, and Amazon.

#### 5.2 Your Sole Responsibility

You acknowledge:

- (a) You are solely responsible for ensuring your downstream use of data obtained through the Actor complies with the Source Platform's Terms of Service
- (b) The Publisher makes no representation that any specific use is permitted under any Source Platform's terms
- (c) The Third-Party API Providers, not the Publisher, bear responsibility for the lawfulness of the data collection itself
- (d) You should review Source Platform terms before commercial use:
  - LinkedIn: https://www.linkedin.com/legal/user-agreement
  - TikTok: https://www.tiktok.com/legal/page/global/terms-of-service/en
  - YouTube: https://www.youtube.com/static?template=terms
  - X: https://twitter.com/en/tos
  - Reddit: https://www.redditinc.com/policies/user-agreement
  - Meta: https://www.facebook.com/legal/terms
  - Linktree: https://linktr.ee/s/terms/

#### 5.3 Cease-and-Desist Compliance

If you receive a cease-and-desist letter or other legal demand from a Source Platform regarding your use of Actor outputs, you must:

- (a) Cease the contested use immediately
- (b) Notify UnseenUser within 48 hours via UnseenUser's Apify profile contact form (https://apify.com/UnseenUser)
- (c) Cooperate with the Publisher as needed to mitigate
- (d) Not assert against the Publisher any claim arising from your inability to use the Actor for that Source Platform

***

### 6. DATA PROTECTION - REFLECTING ACTUAL ARCHITECTURE

#### 6.1 Roles Under Privacy Law

For purposes of GDPR, UK GDPR, CCPA, Israel's Privacy Protection Law (PPL) including Amendment 13, and equivalents:

- **You (the User) are the Data Controller** of any personal data you obtain through the Actor and subsequently process for your own purposes
- **HarvestAPI and Scrape Creators** are the entities that collect data from source platforms - they bear the responsibilities of data processors or controllers (depending on context) for the collection itself
- **The Publisher acts solely as a software vendor**, not as a data controller or processor, because the Publisher does not store, retain, or substantively process personal data - the Actor merely passes API responses through

#### 6.2 No Data Retention by the Publisher

The Publisher confirms:

- (a) The Publisher does not maintain a database of personal data obtained through the Actor
- (b) The Actor passes data from the Third-Party API directly to you on the Apify platform - data does not flow through the Publisher's infrastructure
- (c) Apify's standard execution and operational logging may include limited information about Actor runs (input parameters, run duration, data volume) - this is governed by Apify's own privacy practices
- (d) The Publisher does not access, view, or analyze your Actor outputs except as needed for technical support if you specifically share them with the Publisher

#### 6.3 Your Obligations as Data Controller

Where your use of the Actor involves processing personal data, you are responsible for:

- (a) Establishing a lawful basis for your processing (consent, legitimate interest with documented balancing test, contract, etc.)
- (b) Providing transparent notice to data subjects as required by applicable law
- (c) Honoring data subject access, rectification, erasure, restriction, and portability requests
- (d) Implementing appropriate security measures
- (e) Conducting Data Protection Impact Assessments where required
- (f) Appointing a Data Protection Officer if your operations require one
- (g) Registering databases with applicable supervisory authorities
- (h) Honoring opt-out requests for direct marketing
- (i) Implementing appropriate safeguards for cross-border data transfers

#### 6.4 Israel's Amendment 13 - User Compliance

If your use of the Actor involves Israeli residents' personal data, you must comply with the Privacy Protection Law as amended (Amendment 13, effective August 14, 2025). These obligations are yours as the data controller, not the Publisher's as the software vendor.

#### 6.5 Sensitive Data Targeting Restrictions

You will not use the Actor to specifically target, profile, or build datasets focused on:

- Health or medical conditions
- Religious beliefs
- Political opinions
- Sexual orientation or gender identity
- Genetic or biometric data
- Criminal history
- Children under 16

***

### 7. INTELLECTUAL PROPERTY

#### 7.1 Actor Code

The Actor's source code, schemas, documentation, and branding are owned by the Publisher. You receive a limited, non-exclusive, non-transferable, revocable license to use the Actor for permitted purposes during your active subscription/run with Apify.

#### 7.2 Output Data

The Publisher claims no ownership over the public data the Actor returns. Source Platforms may have copyright, database rights, or other rights in their data; data subjects may have copyright in user-generated content. Your use of output data must respect these rights independently.

#### 7.3 Restrictions

You may not reverse engineer, decompile, or reuse the Actor's code in a competing Actor.

#### 7.4 Feedback

Feedback you provide may be used by the Publisher to improve products without compensation to you.

***

### 8. PRICING AND PAYMENT

#### 8.1 Apify Platform Billing

Pricing is administered through Apify's pricing models. Apify processes all payments. Apify's payment terms govern refunds and disputes.

#### 8.2 Pricing Changes

The Publisher may change Actor pricing with at least 14 days' notice via the Actor's Apify listing.

#### 8.3 No Refunds for Misuse

If your access is suspended or terminated for breach of these Terms, you forfeit any unused balance and are not entitled to refunds.

***

### 9. SERVICE AVAILABILITY AND CHANGES

#### 9.1 No Uptime Guarantee

The Actor depends on:

- (a) The Apify platform
- (b) Underlying API providers (HarvestAPI, Scrape Creators)
- (c) Source Platforms' continued public accessibility

Any of these may change behavior, restrict access, or become unavailable without notice. The Publisher makes no uptime guarantees.

#### 9.2 Service Discontinuation

The Publisher may discontinue any Actor at any time. Reasonable notice will be provided when feasible.

***

### 10. DISCLAIMERS

#### 10.1 "AS IS" Service

THE ACTOR IS PROVIDED "AS IS" AND "AS AVAILABLE" WITHOUT WARRANTIES OF ANY KIND, INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR ACCURACY OF DATA.

#### 10.2 No Representation of Lawfulness

The Publisher makes no representation that your specific use of the Actor or the data it returns is lawful in your jurisdiction or under any Source Platform's terms. The burden of determining lawfulness for your use case is yours.

#### 10.3 No Endorsement of Source Content

Content returned by the Actor was created by third parties. The Publisher does not endorse, verify, or take responsibility for it.

***

### 11. LIMITATION OF LIABILITY

#### 11.1 Aggregate Liability Cap

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL THE AGGREGATE LIABILITY OF THE PUBLISHER FOR ALL CLAIMS RELATING TO THE ACTOR EXCEED THE GREATER OF:

- (a) ONE HUNDRED U.S. DOLLARS (US $100), OR
- (b) THE AMOUNTS YOU PAID THROUGH APIFY FOR USE OF THE ACTOR IN THE THREE (3) MONTHS IMMEDIATELY PRECEDING THE EVENT

#### 11.2 Excluded Damages

THE PUBLISHER IS NOT LIABLE FOR INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES, OR FOR LOSS OF PROFITS, REVENUE, OR DATA, EVEN IF ADVISED OF THE POSSIBILITY.

#### 11.3 Time Limit

Any claim must be brought within one (1) year of the event.

***

### 12. INDEMNIFICATION

#### 12.1 Your Indemnification of the Publisher

You agree to defend, indemnify, and hold harmless the Publisher from any:

- Claims arising from your use of the Actor
- Claims arising from your violation of these Terms
- Claims arising from your violation of any law (including privacy law)
- Claims arising from your violation of any Source Platform's Terms of Service
- Claims arising from your processing of personal data obtained through the Actor
- Reasonable attorneys' fees and costs of defending such claims

#### 12.2 Defense

The Publisher may assume defense at your expense. You will cooperate with the Publisher's defense.

#### 12.3 Scope

The indemnification covers reasonable, foreseeable third-party claims arising from your use. It does not extend to:

- Claims arising from the Publisher's gross negligence or willful misconduct
- Claims regarding the Actor's source code itself (those are the Publisher's responsibility)
- Claims regarding the Third-Party API Provider's data collection (those are their responsibility)

***

### 13. SUSPENSION AND TERMINATION

#### 13.1 Termination by the Publisher

The Publisher may terminate your access for material breach, illegal use, breach of warranty, or upon credible legal demand.

#### 13.2 Effects of Termination

Your license ends, you must cease use, and applicable provisions survive.

#### 13.3 Termination by You

You may stop using the Actor at any time on Apify.

***

### 14. DISPUTE RESOLUTION

#### 14.1 Informal Resolution First

Send a detailed written description of the dispute via UnseenUser's Apify profile contact form (https://apify.com/UnseenUser) and allow 60 days for an attempted informal resolution before bringing any formal claim.

#### 14.2 Governing Law

These Terms are governed by the substantive laws of the State of Israel, without regard to conflict of law principles.

#### 14.3 Exclusive Jurisdiction

Any dispute shall be brought exclusively in the competent civil courts of Tel Aviv-Jaffa, Israel.

#### 14.4 No Class Actions

You agree to bring claims only in your individual capacity.

#### 14.5 Attorneys' Fees

The prevailing party recovers reasonable attorneys' fees.

***

### 15. MISCELLANEOUS

#### 15.1 Entire Agreement

These Terms (with Addendum and incorporated documents) are the entire agreement.

#### 15.2 Severability

Unenforceable provisions are reformed to the minimum extent or severed.

#### 15.3 Assignment

You may not assign without the Publisher's consent. The Publisher may assign to affiliates, successors, or acquirers.

#### 15.4 Force Majeure

Neither party is liable for failure due to events beyond reasonable control, including changes by Source Platforms or Third-Party API Providers, or actions by Apify.

#### 15.5 Third-Party Beneficiaries

Apify, HarvestAPI, and Scrape Creators are intended third-party beneficiaries of Sections 4 (Prohibited Uses), 5 (Source Platform Compliance), and 12 (Indemnification).

#### 15.6 Survival

Sections 0 (Acceptance), 4, 5, 6, 7, 10, 11, 12, 14, and 15 survive termination.

#### 15.7 Language

English controls. Translations are for convenience only.

#### 15.8 Publisher Identification for Legal Process

The Publisher operates on the Apify platform under the username **UnseenUser** (apify.com/UnseenUser). The Publisher is a registered legal entity. Upon receipt of valid legal process (subpoena, court order, or equivalent) directed through Apify's official channels, the Publisher's full legal identity may be disclosed as required by law. This Section ensures that you have a valid path to legal recourse if needed.

***

### 16. ACKNOWLEDGMENT

By using any Actor, you acknowledge that:

- (a) You have read these Terms
- (b) You understand the architecture: you are using software (the Actor) on Apify's platform that calls third-party APIs
- (c) You accept responsibility for your use, including for compliance with Source Platform terms
- (d) Your indemnification obligations cover third-party claims arising from your use
- (e) Disputes are resolved in Israeli courts
- (f) The Publisher's identity, while not publicly disclosed in this listing, can be obtained through valid legal process via Apify

For questions, use UnseenUser's Apify profile contact form (https://apify.com/UnseenUser) before running the Actor.

***

## 🛡️ Actor-Specific ToS Addendum - Reddit Intelligence Suite

This addendum supplements the Master Terms of Service V4.0. By running this Actor, you accept both the Master ToS and this addendum.

#### A. Architectural Disclosure

This Actor is a software wrapper. It accepts your input parameters, calls Scrape Creators' Reddit endpoints (subreddit, posts, comments, ad library, search), and returns the response data to you on the Apify platform. The Publisher does not store, log, or substantively process the data returned.

#### B. Nature of Data Returned

Subreddit data, posts (with usernames - pseudonymous), comments (with username attribution), search results, and Reddit Ads Library content. Reddit usernames are typically pseudonymous but can be linked to real identities through other data points. **Treat usernames as personal data under privacy law in your downstream processing.**

#### C. Permitted Use Cases

Brand mention monitoring and reputation management, sentiment analysis and consumer research, competitive ad intelligence (via Ads Library), academic research, journalism, anti-disinformation tools, subreddit moderator analytics.

#### D. Specifically Prohibited Uses

In addition to Master ToS Section 4 prohibitions, you may NOT:

- **De-anonymize users** - attempt to link Reddit usernames to real identities outside legitimate journalism with ethical review
- **Build harassment tools** for brigading or coordinated attacks
- **Train commercial AI/LLMs** on Reddit data without complying with Reddit's data licensing terms
- **Republish full posts/comments** in commercial products that compete with Reddit
- **Manipulate subreddit dynamics** - vote manipulation, fake account networks, astroturfing
- **Generate engagement on competitor ads** - click-fraud or ad-spam
- **Track individual users' activity** across subreddits without lawful purpose

#### E. Reddit Platform ToS Considerations

Reddit's Data API Terms have changed significantly in 2023-2024 to restrict AI training and commercial scraping. Reddit may consider commercial use - particularly AI training - to violate their Data API Terms. Reddit has actively pursued litigation against companies training AI on Reddit data without licensing (e.g., *Reddit v. Perplexity AI*). If you intend to train AI models on outputs, license data directly from Reddit instead.

#### F. Pseudonymity and Personal Data

Reddit usernames are pseudonymous but become personal data when combined with other identifiers, used in contexts where the user's real identity is known, or aggregated across enough posts to identify the individual.

#### G. Reddit Ads Library - Special Provisions

Reddit Ads Library data is provided by Reddit for transparency purposes. It may be used for competitive intelligence and research. It may NOT be used to organize click-fraud against competitors or to identify and contact ad buyers for unsolicited B2B outreach without proper compliance.

#### H. Sensitive Subreddits

Reddit hosts subreddits dealing with sensitive topics (mental health, addiction, sexuality, politics, religion). Do not specifically target users from sensitive subreddits for marketing or use participation as basis for discrimination.

# Actor input Schema

## `mode` (type: `string`):

Pick one. After you pick, scroll down and only fill the section labelled with the same emoji as your choice. Everything else is ignored.

## `subreddits` (type: `array`):

REQUIRED FOR: 📰 Subreddit posts, 🔍 Subreddit search, ℹ️ Subreddit info. Paste subreddit names - 'r/SaaS', '/r/SaaS', and 'SaaS' all work.
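Because all three spellings are accepted, deduplicating your input list client-side requires normalizing them to a single form first. A minimal sketch (the helper name is illustrative, not part of the Actor):

```python
def normalize_subreddit(name: str) -> str:
    """Reduce a subreddit reference to its bare name.

    'r/SaaS', '/r/SaaS', and 'SaaS' all map to 'SaaS', mirroring the
    three forms this field accepts.
    """
    return name.strip().lstrip("/").removeprefix("r/").strip("/")

# Deduplicate while preserving order before building the Actor input
names = ["r/SaaS", "/r/SaaS", "SaaS", "r/Entrepreneur"]
unique = list(dict.fromkeys(normalize_subreddit(n) for n in names))
```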

## `sortBy` (type: `string`):

How to sort. For 📰 Subreddit posts use hot / new / top / rising. For 🔎 / 🔍 Search use relevance / top / new / comments.

## `timeFilter` (type: `string`):

Reddit only honours this when sort is top / controversial / relevance. Ignored otherwise.

## `searchQueries` (type: `array`):

REQUIRED FOR: 🔎 Reddit search and 🔍 Subreddit search. Each line runs as a separate search.

## `searchScope` (type: `string`):

ONLY USED FOR 🔎 Reddit search. Pick 'All of Reddit' for brand monitoring across the whole site.

## `searchSubreddit` (type: `string`):

Used only when 🔎 Search scope = 'Restrict to one subreddit'. Leave blank otherwise.

## `postUrls` (type: `array`):

REQUIRED FOR: 💬 Post comments. Paste full Reddit post URLs - one per line.

## `adSearchQueries` (type: `array`):

REQUIRED FOR: 💰 Ads Library search. Each query is one Ads Library lookup. Reddit caps results at ~30 per query.

## `adIndustries` (type: `array`):

Optional. Allowed values: RETAIL\_AND\_ECOMMERCE, TECH\_B2B, TECH\_B2C, EDUCATION, ENTERTAINMENT, GAMING, FINANCIAL\_SERVICES, HEALTH\_AND\_BEAUTY, CONSUMER\_PACKAGED\_GOODS, EMPLOYMENT, AUTO, TRAVEL, REAL\_ESTATE, GAMBLING\_AND\_FANTASY\_SPORTS, POLITICS\_AND\_GOVERNMENT, OTHER.

## `adBudgets` (type: `array`):

Optional. Allowed values: LOW, MEDIUM, HIGH.

## `adFormats` (type: `array`):

Optional. Allowed values: IMAGE, VIDEO, CAROUSEL, FREE\_FORM.

## `adPlacements` (type: `array`):

Optional. Allowed values: FEED, COMMENTS\_PAGE.

## `adObjectives` (type: `array`):

Optional. Allowed values: IMPRESSIONS, CLICKS, CONVERSIONS, VIDEO\_VIEWABLE\_IMPRESSIONS, APP\_INSTALLS.

## `adIds` (type: `array`):

REQUIRED FOR: 🎯 Ad details. Paste the ad IDs returned by 💰 Ads Library search - one per line.

## `maxPostsPerSubreddit` (type: `integer`):

Cap on posts pulled per subreddit (or per search query). 100 is a good default. Lower = cheaper.

## `maxCommentsPerPost` (type: `integer`):

Cap on total comment nodes returned per post. Counts nested replies.

## `maxCommentDepth` (type: `integer`):

How deep to follow reply threads. 0 = top-level only. 5 is a good default.
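The two comment caps interact: `maxCommentDepth` prunes how far reply chains are followed, while `maxCommentsPerPost` counts every node that survives pruning, nested replies included. A sketch of that accounting over a generic nested-comment structure (the dict shape here is assumed for illustration, not the Actor's actual output format):

```python
def collect_comments(comments, max_depth=5, max_total=200, _depth=0):
    """Depth-first walk of a nested comment tree.

    Stops descending past max_depth (0 = top-level only) and stops
    collecting once max_total nodes are gathered; nested replies count
    against the total just like top-level comments.
    """
    collected = []
    for comment in comments:
        if len(collected) >= max_total:
            break
        collected.append(comment["body"])
        if _depth < max_depth:
            remaining = max_total - len(collected)
            collected.extend(
                collect_comments(comment.get("replies", []),
                                 max_depth, remaining, _depth + 1)
            )
    return collected

thread = [
    {"body": "top-level", "replies": [
        {"body": "reply", "replies": [
            {"body": "reply-to-reply", "replies": []},
        ]},
    ]},
]
```

With `max_depth=0` only `"top-level"` is returned; with `max_depth=1` the reply is included but its child is not.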

## `maxAdsPerSearch` (type: `integer`):

Reddit's Ads Library natively returns up to ~30 per query. Going higher will not help.

## Actor input object example

```json
{
  "mode": "subreddit_posts",
  "subreddits": [
    "r/SaaS",
    "r/Entrepreneur"
  ],
  "sortBy": "hot",
  "timeFilter": "week",
  "searchQueries": [
    "your brand name"
  ],
  "searchScope": "all_reddit",
  "searchSubreddit": "",
  "postUrls": [
    "https://www.reddit.com/r/AskReddit/comments/1ldr6b9/"
  ],
  "adSearchQueries": [
    "insurance"
  ],
  "adIndustries": [],
  "adBudgets": [],
  "adFormats": [],
  "adPlacements": [],
  "adObjectives": [],
  "adIds": [],
  "maxPostsPerSubreddit": 100,
  "maxCommentsPerPost": 200,
  "maxCommentDepth": 5,
  "maxAdsPerSearch": 30
}
```
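Since each mode reads only its own section of the input, it can be useful to check that the field required by your chosen mode is non-empty before calling the API. The mapping below is taken from the "REQUIRED FOR" notes above; the validation helper itself is illustrative, not part of the Actor:

```python
# Required input field per mode, per the "REQUIRED FOR" notes above.
REQUIRED_FIELD = {
    "subreddit_posts": "subreddits",
    "subreddit_search": "searchQueries",  # also needs "subreddits"
    "subreddit_info": "subreddits",
    "reddit_search": "searchQueries",
    "post_comments": "postUrls",
    "ads_search": "adSearchQueries",
    "ad_details": "adIds",
}

def validate_input(actor_input: dict) -> None:
    """Raise ValueError if the chosen mode's required field is missing or empty."""
    mode = actor_input.get("mode")
    if mode not in REQUIRED_FIELD:
        raise ValueError(f"Unknown mode: {mode!r}")
    field = REQUIRED_FIELD[mode]
    if not actor_input.get(field):
        raise ValueError(f"Mode {mode!r} requires a non-empty {field!r}")
    if mode == "subreddit_search" and not actor_input.get("subreddits"):
        raise ValueError("subreddit_search also requires 'subreddits'")
```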

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "r/SaaS",
        "r/Entrepreneur"
    ],
    "searchQueries": [
        "your brand name"
    ],
    "postUrls": [
        "https://www.reddit.com/r/AskReddit/comments/1ldr6b9/"
    ],
    "adSearchQueries": [
        "insurance"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("unseenuser/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "subreddits": [
        "r/SaaS",
        "r/Entrepreneur",
    ],
    "searchQueries": ["your brand name"],
    "postUrls": ["https://www.reddit.com/r/AskReddit/comments/1ldr6b9/"],
    "adSearchQueries": ["insurance"],
}

# Run the Actor and wait for it to finish
run = client.actor("unseenuser/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "r/SaaS",
    "r/Entrepreneur"
  ],
  "searchQueries": [
    "your brand name"
  ],
  "postUrls": [
    "https://www.reddit.com/r/AskReddit/comments/1ldr6b9/"
  ],
  "adSearchQueries": [
    "insurance"
  ]
}' |
apify call unseenuser/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=unseenuser/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "🔥 Reddit Scraper + Ads Library - Posts [NO COOKIES] ✅",
        "description": "The most complete Reddit scraper on Apify. Extract posts, comments with full reply chains, subreddit metadata, user profiles, AND ads from the Reddit Ads Library - all without API keys, cookies, or login. Built for market researchers, content marketers, and competitive intel teams.",
        "version": "0.0",
        "x-build-id": "xfHx5GTusFVhRuWYi"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/unseenuser~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-unseenuser-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/unseenuser~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-unseenuser-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/unseenuser~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-unseenuser-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "STEP 1 - What do you want to scrape?",
                        "enum": [
                            "subreddit_posts",
                            "reddit_search",
                            "post_comments",
                            "subreddit_search",
                            "subreddit_info",
                            "ads_search",
                            "ad_details"
                        ],
                        "type": "string",
                        "description": "Pick one. After you pick, scroll down and only fill the section labelled with the same emoji as your choice. Everything else is ignored.",
                        "default": "subreddit_posts"
                    },
                    "subreddits": {
                        "title": "📰 Subreddits",
                        "type": "array",
                        "description": "REQUIRED FOR: 📰 Subreddit posts, 🔍 Subreddit search, ℹ️ Subreddit info. Paste subreddit names - 'r/SaaS', '/r/SaaS', and 'SaaS' all work.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "sortBy": {
                        "title": "📰 Sort order (subreddit posts / search)",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial",
                            "best",
                            "relevance",
                            "comments"
                        ],
                        "type": "string",
                        "description": "How to sort. For 📰 Subreddit posts use hot / new / top / rising. For 🔎 / 🔍 Search use relevance / top / new / comments.",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "📰 Time window (only matters for sort = top, controversial, or any search)",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Reddit only honours this when sort is top / controversial / relevance. Ignored otherwise.",
                        "default": "week"
                    },
                    "searchQueries": {
                        "title": "🔎 Search queries",
                        "type": "array",
                        "description": "REQUIRED FOR: 🔎 Reddit search and 🔍 Subreddit search. Each line runs as a separate search.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchScope": {
                        "title": "🔎 Search scope (Reddit search only)",
                        "enum": [
                            "all_reddit",
                            "in_subreddit"
                        ],
                        "type": "string",
                        "description": "ONLY USED FOR 🔎 Reddit search. Pick 'All of Reddit' for brand monitoring across the whole site.",
                        "default": "all_reddit"
                    },
                    "searchSubreddit": {
                        "title": "🔎 Restrict to subreddit (Reddit search only)",
                        "type": "string",
                        "description": "Used only when 🔎 Search scope = 'Restrict to one subreddit'. Leave blank otherwise.",
                        "default": ""
                    },
                    "postUrls": {
                        "title": "💬 Post URLs",
                        "type": "array",
                        "description": "REQUIRED FOR: 💬 Post comments. Paste full Reddit post URLs - one per line.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adSearchQueries": {
                        "title": "💰 Ad search queries",
                        "type": "array",
                        "description": "REQUIRED FOR: 💰 Ads Library search. Each query is one Ads Library lookup. Reddit caps results at ~30 per query.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adIndustries": {
                        "title": "💰 Industry filter (optional)",
                        "type": "array",
                        "description": "Optional. Allowed values: RETAIL_AND_ECOMMERCE, TECH_B2B, TECH_B2C, EDUCATION, ENTERTAINMENT, GAMING, FINANCIAL_SERVICES, HEALTH_AND_BEAUTY, CONSUMER_PACKAGED_GOODS, EMPLOYMENT, AUTO, TRAVEL, REAL_ESTATE, GAMBLING_AND_FANTASY_SPORTS, POLITICS_AND_GOVERNMENT, OTHER.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adBudgets": {
                        "title": "💰 Budget filter (optional)",
                        "type": "array",
                        "description": "Optional. Allowed values: LOW, MEDIUM, HIGH.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adFormats": {
                        "title": "💰 Format filter (optional)",
                        "type": "array",
                        "description": "Optional. Allowed values: IMAGE, VIDEO, CAROUSEL, FREE_FORM.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adPlacements": {
                        "title": "💰 Placement filter (optional)",
                        "type": "array",
                        "description": "Optional. Allowed values: FEED, COMMENTS_PAGE.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adObjectives": {
                        "title": "💰 Objective filter (optional)",
                        "type": "array",
                        "description": "Optional. Allowed values: IMPRESSIONS, CLICKS, CONVERSIONS, VIDEO_VIEWABLE_IMPRESSIONS, APP_INSTALLS.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "adIds": {
                        "title": "🎯 Ad IDs",
                        "type": "array",
                        "description": "REQUIRED FOR: 🎯 Ad details. Paste the ad IDs returned by 💰 Ads Library search - one per line.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxPostsPerSubreddit": {
                        "title": "⚙️ Max posts per subreddit / per search query",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Cap on posts pulled per subreddit (or per search query). 100 is a good default. Lower = cheaper.",
                        "default": 100
                    },
                    "maxCommentsPerPost": {
                        "title": "⚙️ Max comments per post (💬 Post comments only)",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Cap on total comment nodes returned per post. Counts nested replies.",
                        "default": 200
                    },
                    "maxCommentDepth": {
                        "title": "⚙️ Max comment depth (💬 Post comments only)",
                        "minimum": 0,
                        "type": "integer",
                        "description": "How deep to follow reply threads. 0 = top-level only. 5 is a good default.",
                        "default": 5
                    },
                    "maxAdsPerSearch": {
                        "title": "⚙️ Max ads per query (💰 Ads Library only)",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Reddit's Ads Library natively returns up to ~30 per query. Going higher will not help.",
                        "default": 30
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
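The ad-library filters and limits defined in the input schema above can be assembled into a run input and sanity-checked locally before calling the Actor. The sketch below is an illustration only: the allowed values and numeric bounds are copied from the schema descriptions, and real enforcement happens server-side when the run starts. The `validate_input` helper is hypothetical, not part of the Actor or the Apify client.

```python
# Sketch: build a run input from the schema fields above and validate it
# locally. Allowed values mirror the schema descriptions; the Apify
# platform performs the authoritative validation when the Actor runs.

ALLOWED = {
    "adBudgets": {"LOW", "MEDIUM", "HIGH"},
    "adFormats": {"IMAGE", "VIDEO", "CAROUSEL", "FREE_FORM"},
    "adPlacements": {"FEED", "COMMENTS_PAGE"},
    "adObjectives": {"IMPRESSIONS", "CLICKS", "CONVERSIONS",
                     "VIDEO_VIEWABLE_IMPRESSIONS", "APP_INSTALLS"},
}

def validate_input(run_input: dict) -> list:
    """Return a list of problems; an empty list means the input looks valid."""
    problems = []
    for field, allowed in ALLOWED.items():
        bad = set(run_input.get(field, [])) - allowed
        if bad:
            problems.append(f"{field}: unexpected values {sorted(bad)}")
    posts = run_input.get("maxPostsPerSubreddit", 100)
    if not 1 <= posts <= 1000:
        problems.append("maxPostsPerSubreddit must be between 1 and 1000")
    if run_input.get("maxCommentDepth", 5) < 0:
        problems.append("maxCommentDepth must be >= 0")
    return problems

run_input = {
    "adBudgets": ["HIGH"],
    "adFormats": ["VIDEO"],
    "adPlacements": ["FEED"],
    "maxAdsPerSearch": 30,  # Reddit's Ads Library returns up to ~30 per query
    "maxPostsPerSubreddit": 100,
}
assert validate_input(run_input) == []
```

With a valid input in hand, the Actor would then be started via the official Python client, e.g. `ApifyClient(token).actor("unseenuser/reddit-scraper").call(run_input=run_input)`.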
