# Reddit \[Only $0.50💰] Posts | Users | Scraper (`memo23/reddit-scraper`) Actor

\[Only $0.50💰] Scrape Reddit posts (and optionally full comment threads with replies) from public JSON — site-wide & subreddit search, feed browse, post/comment date filters, NSFW toggle, strict phrase/token match, proxy rotation. Legacy: deduplicated authors from subreddit listing URLs

- **URL**: https://apify.com/memo23/reddit-scraper.md
- **Developed by:** [Muhamed Didovic](https://apify.com/memo23) (community)
- **Categories:** Social media, Agents, Lead generation
- **Stats:** 14 total users, 13 monthly users, 100.0% runs succeeded
- **User rating**: 5.00 out of 5 stars

## Pricing

from $0.50 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper

**Scrape Reddit posts (and optionally full comment threads with replies) from public JSON — site-wide & subreddit search, feed browse, post/comment date filters, NSFW toggle, strict phrase/token match, proxy rotation. Legacy: deduplicated authors from subreddit listing URLs (key-value store).**

#### How it works

![How Reddit Scraper works](https://raw.githubusercontent.com/muhamed-didovic/muhamed-didovic.github.io/main/assets/how-it-works-reddit.png)

### Why Use This Scraper?

- **Multiple starting points** — site-wide keywords, one subreddit (search or feed browse), or legacy **subreddit URLs** for author collection.
- **Post-shaped rows** — search modes write one dataset item per post, with Reddit’s own nested `preview` / `media` when present.
- **Optional comments** — separate dataset rows per comment, with configurable caps and `morechildren` depth.
- **Practical filters** — NSFW toggle, post/comment date windows, strict phrase and token filters, “maximize coverage” budgets, residential-friendly proxy support.
- **Export-friendly** — default Apify Dataset; download as JSON, CSV, Excel (CSV flattens nested keys with dots).

### Overview

This Actor is built for **research, monitoring, content analysis pipelines, and internal tools** that need structured Reddit data without maintaining your own scraping stack.

- **Search all of Reddit** and **Search one subreddit** resolve into **post rows** in the dataset. Enable comments to add **comment rows** in the same dataset (`dataType: "comment"`, ids like `t1_…`).
- **User scraper (legacy)** targets **deduplicated usernames** stored under the key-value key **`API_DETAILS`**. The dataset is secondary in that mode.

The Actor does **not** compute sentiment or topic categories; related input flags are ignored.

### Supported Inputs

| You provide | When | Notes |
|-------------|------|--------|
| **`mode`** = `searchGlobal` | Site-wide search | Set **`searchQueries`** (string list). **`searchSort`**, **`searchTimeframe`**, **`includeOver18`** apply. |
| **`mode`** = `searchSubreddit` | One subreddit | **`subredditSearchUrl`** (`https://…/r/name`, `r/name`, or bare name). Empty **`subredditSearchQueries`** → **feed** listing; non-empty → **in-subreddit search**. |
| **`mode`** = `subredditUsers` | Legacy authors | **`startUrls`** — subreddit home URLs (e.g. `https://www.reddit.com/r/webscraping`). |
| **`maxItems`** | All modes | Search: max **posts** (split across keyword lines). Legacy: max **unique authors**. |
| **`proxy`** | All modes | Use **residential** or quality proxies if Reddit serves block pages or throttles. |

**Search options in the Console schema** also include: `searchIncludeComments`, `searchMaxCommentsPerPost`, `searchListingLimitPerPage`, `searchPostDateFrom` / `searchPostDateTo`, `searchCommentDateFrom` / `searchCommentDateTo`, `searchForceNewSortWhenDateFiltered`, `searchStrictPhrase`, `searchStrictTokenFilter`, `searchMaximizeCoverage`, `maxConcurrency` / `minConcurrency` / `maxRequestRetries` (legacy crawler), etc.

**Advanced JSON-only:** `searchHttpTransport` (`internalParallel` | `internal`), `searchParallelQueryConcurrency`. **Legacy aliases** still read in code: `queries`, `maxPosts`, `includeNsfw`, `scrapeComments`, `maxComments`, `dateFrom` / `dateTo`, `forceSortNewForTimeFilteredRuns`, `strictSearch`, `strictTokenFilter`, `maximize_coverage`.

**Not supported:** logged-in sessions or guaranteed access to private communities. Compliance with your jurisdiction’s rules for Reddit data remains your responsibility.

### Use Cases

| Audience | Typical use |
|----------|-------------|
| **Researchers** | Post and comment samples for NLP or social science. |
| **Marketing & brand** | Mention tracking, campaign feedback, subreddit tone. |
| **Agencies** | Client-ready Reddit exports on a schedule. |
| **Product & data teams** | Dashboards fed from Dataset webhooks. |
| **Developers** | Baseline Reddit HTTP + parsing without owning infra. |

### How It Works

1. Choose **`mode`** and fill only the inputs that apply (Console sections match this).
2. The Actor requests **`search.json`**, subreddit **listing** or **search** URLs on **old.reddit**, or (legacy) listing JSON on **www.reddit.com**.
3. Each accepted post is mapped to a **flat row** (legacy Reddit field names plus optional merged aliases such as `dataType`, `scrapedAt`, `thread_url`).
4. If **`searchIncludeComments`** is on, the Actor loads thread JSON, walks replies, and may call **`/api/morechildren`** until per-post limits and round budgets are reached.
5. You export the **Dataset** (and for legacy mode, read **`API_DETAILS`** from the **Key-value store**).
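Because search runs with comments enabled write post and comment rows into the same dataset (step 5), downstream code typically partitions the export on `dataType`. A minimal sketch over in-memory rows (`split_rows` is a hypothetical helper, not part of the Actor; the field names follow the output samples in this README):

```python
def split_rows(items):
    """Separate post rows from comment rows in one dataset export."""
    posts, comments = [], []
    for item in items:
        # Comment rows carry dataType == "comment"; everything else is a post row.
        (comments if item.get("dataType") == "comment" else posts).append(item)
    return posts, comments

# Illustrative rows shaped like the samples in this README.
rows = [
    {"kind": "post", "id": "futnih", "title": "Different kinds of cheesecake"},
    {"id": "t1_xyz789", "dataType": "comment", "body": "Comment text"},
]
posts, comments = split_rows(rows)
print(len(posts), len(comments))  # 1 1
```

The same partition works on items fetched via the Apify clients shown in the [API](#api) section.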

### Input Configuration

**Search all of Reddit** (minimal):

```json
{
    "mode": "searchGlobal",
    "searchHttpTransport": "internalParallel",
    "searchQueries": ["Cheesecake"],
    "searchSort": "relevance",
    "searchTimeframe": "all",
    "includeOver18": false,
    "maxItems": 100,
    "searchIncludeComments": false,
    "proxy": { "useApifyProxy": true }
}
````

**Search one subreddit:**

```json
{
    "mode": "searchSubreddit",
    "searchHttpTransport": "internalParallel",
    "subredditSearchUrl": "https://www.reddit.com/r/technology",
    "subredditSearchQueries": ["api"],
    "subredditSearchSort": "relevance",
    "subredditSearchTimeframe": "all",
    "maxItems": 50,
    "proxy": { "useApifyProxy": true }
}
```

**User scraper (legacy)** — author list:

```json
{
    "mode": "subredditUsers",
    "startUrls": [{ "url": "https://www.reddit.com/r/webscraping" }],
    "maxItems": 500,
    "maxConcurrency": 5,
    "proxy": { "useApifyProxy": true }
}
```

### Output Overview

- **Dataset** — one JSON object per item. Search runs are mostly **`kind: "post"`** rows. Comments, when enabled, are additional items with **`dataType: "comment"`**.
- **Legacy user mode** — primary output is **`API_DETAILS`** (unique usernames) in the **default key-value store**, not the same post schema.
- **Downloads** — Apify offers JSON, CSV, Excel, etc. Nested objects (e.g. `preview.images`) flatten in CSV.
- **Honest variability** — Reddit omits or nulls fields by post type (text vs link vs gallery, removed authors, ads). Keys are stable when the mapper adds them; values are not.
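The dot-flattening that CSV downloads apply to nested objects can be pictured with a short sketch. This illustrates the general technique only, assuming dot-separated paths and numeric list indices; it is not Apify's actual export routine:

```python
def flatten(value, prefix=""):
    """Recursively flatten nested dicts/lists into dot-separated keys."""
    if isinstance(value, dict):
        children = value.items()
    elif isinstance(value, list):
        children = ((str(i), v) for i, v in enumerate(value))
    else:
        return {prefix: value}
    out = {}
    for key, child in children:
        path = f"{prefix}.{key}" if prefix else key
        out.update(flatten(child, path))
    return out

# A trimmed post row shaped like the sample below.
row = {"id": "futnih", "preview": {"images": [{"source": {"width": 1200}}], "enabled": True}}
print(flatten(row))
# {'id': 'futnih', 'preview.images.0.source.width': 1200, 'preview.enabled': True}
```

Each flattened key becomes one CSV column, which is why `preview.images` expands into many columns for image-heavy posts.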

### Output Samples

Shortened **post** object (real shape; `preview…resolutions` trimmed for readability). The first record in repo **`data.json`** shows the full `resolutions` ladder.

```json
{
    "kind": "post",
    "query": "Cheesecake",
    "id": "futnih",
    "title": "Different kinds of cheesecake",
    "body": "",
    "author": "cttrv",
    "score": 43222,
    "upvote_ratio": 0.95,
    "num_comments": 1204,
    "subreddit": "coolguides",
    "created_utc": "2020-04-04T13:23:45.000Z",
    "url": "https://i.redd.it/tczgv8mgysq41.jpg",
    "permalink": "/r/coolguides/comments/futnih/different_kinds_of_cheesecake/",
    "canonical_url": "https://www.reddit.com/r/coolguides/comments/futnih/different_kinds_of_cheesecake/",
    "old_reddit_url": "https://old.reddit.com/r/coolguides/comments/futnih/different_kinds_of_cheesecake/",
    "flair": null,
    "post_hint": "image",
    "over_18": false,
    "is_self": false,
    "spoiler": false,
    "locked": false,
    "is_video": false,
    "domain": "i.redd.it",
    "thumbnail": "https://b.thumbs.redditmedia.com/BjwCwDT6OG40X5VFEYWsYSUJ_lrLZvUZizm4q_WG8hk.jpg",
    "subreddit_id": "t5_310rm",
    "subreddit_name_prefixed": "r/coolguides",
    "subreddit_subscribers": 6028638,
    "preview": {
        "images": [
            {
                "source": {
                    "url": "https://preview.redd.it/tczgv8mgysq41.jpg?auto=webp&s=7b17fbc8e9050ee4b242f7ece7de63ae7b0ee43b",
                    "width": 1200,
                    "height": 1200
                },
                "resolutions": [
                    {
                        "url": "https://preview.redd.it/tczgv8mgysq41.jpg?width=108&crop=smart&auto=webp&s=f1107cc1f996a0013ad8b321c6453442d5f576ea",
                        "width": 108,
                        "height": 108
                    },
                    {
                        "url": "https://preview.redd.it/tczgv8mgysq41.jpg?width=216&crop=smart&auto=webp&s=4f45cf02c049215a7462669b4e25268aba62ff49",
                        "width": 216,
                        "height": 216
                    }
                ],
                "variants": {},
                "id": "FJE3K_tyrkJobmnqvZME7_YGilzr4GXSmCvMUqExuhU"
            }
        ],
        "enabled": true
    },
    "media_metadata": null,
    "media": null
}
```

**Comment** row (when `searchIncludeComments` is true) — illustrative:

```json
{
    "id": "t1_xyz789",
    "parsedId": "xyz789",
    "dataType": "comment",
    "query": "Cheesecake",
    "url": "https://www.reddit.com/r/Baking/comments/abc123/title/def456/",
    "postId": "t3_abc123",
    "parentId": "t3_abc123",
    "username": "commenter",
    "body": "Comment text",
    "createdAt": "2025-01-15T14:00:00.000Z",
    "scrapedAt": "2026-04-16T10:00:01.000Z",
    "upVotes": 3,
    "numberOfreplies": 0
}
```

#### Full-field sample

For every key on the **first** object in **`data.json`** (including all `preview.images[0].resolutions[]` entries), inspect that file in the repository or re-export from a fresh run. Newer runs may add merged compatibility fields (`dataType`, `scrapedAt`, `reddit_fullname`, `thread_url`, `imageUrls`, …) not present in older exports.

### Key Output Fields

| Group | Examples | Meaning |
|-------|----------|---------|
| **Identity** | `kind`, `id`, `query` | Post vs comment, short id, search line. |
| **Content** | `title`, `body`, `author`, `created_utc` | Headline, selftext, author, ISO time. |
| **Engagement** | `score`, `upvote_ratio`, `num_comments` | Reddit score, ratio, comment count. |
| **Community** | `subreddit`, `subreddit_name_prefixed`, `subreddit_id`, `subreddit_subscribers` | Bare name, `r/…`, `t5_…`, subscriber snapshot. |
| **URLs** | `url`, `permalink`, `canonical_url`, `old_reddit_url` | Outbound link, path, www thread, old.reddit thread. |
| **Flags** | `over_18`, `is_self`, `spoiler`, `locked`, `is_video`, `post_hint` | NSFW, text post, spoiler, lock, video, render hint. |
| **Media** | `thumbnail`, `preview`, `media_metadata`, `media` | Thumb; nested previews; gallery dict; embed. |
| **Job context** | `search_scope`, `subreddit_search`, `subreddit_fetch_mode`, sorts | When the mapper adds subreddit/global metadata. |
| **Compatibility** | `dataType`, `scrapedAt`, `reddit_fullname`, `parsedId`, `username`, `upVotes`, `thread_url`, `imageUrls` | Extra aliases on the same object for downstream tools. |

**Comments:** `postId`, `parentId`, `communityName`, `category`, `html`, `authorFlair`, `userId`, etc.

### FAQ

**Which URLs work?**\
`searchGlobal` uses **keywords**, not URLs. `searchSubreddit` needs a **subreddit** URL or `r/name`. `subredditUsers` needs **subreddit home** URLs in `startUrls`.

**Posts or users?**\
Search modes → **posts** (+ optional **comments**). Legacy mode → **usernames** in **`API_DETAILS`**.

**Are comments always in the dataset?**\
No. Turn on **`searchIncludeComments`** (or legacy `scrapeComments`). **`maxItems`** caps **posts** only; use **`searchMaxCommentsPerPost`** / `maxComments` for comment volume.
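For example, an input combining these knobs might look like the following (values are illustrative; field names come from the input schema in this document):

```json
{
  "mode": "searchGlobal",
  "searchQueries": ["Cheesecake"],
  "maxItems": 100,
  "searchIncludeComments": true,
  "searchMaxCommentsPerPost": 50,
  "proxy": { "useApifyProxy": true }
}
```

Here `maxItems` caps the run at 100 posts, while each post may contribute up to 50 comment rows on top of that.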

**Why are some fields null?**\
Data is whatever Reddit returns for that post type and state.

**Private or logged-in content?**\
Not supported — public JSON only.

**Sentiment or categories?**\
Not implemented.

### Support

- **Issues** — use the Issues tab for this Actor in the [Apify Console](https://console.apify.com/).
- **Website** — <https://muhamed-didovic.github.io/>
- **Email** — <muhamed.didovic@gmail.com>
- **Store** — <https://apify.com/memo23>

### Additional Services

- Customization or full dataset delivery: <muhamed.didovic@gmail.com>
- Other scraping needs or actor changes: <muhamed.didovic@gmail.com>
- API-style usage: <muhamed.didovic@gmail.com>

### Explore More Scrapers

Browse the author’s Apify store: <https://apify.com/memo23>

# Actor input Schema

## `mode` (type: `string`):

<strong>Search all of Reddit</strong> — site-wide keyword search. <strong>Search one subreddit</strong> — search inside a sub or browse its feed (leave subreddit keywords empty). <strong>User scraper</strong> — legacy Cheerio flow to collect authors from listing URLs.

## `searchQueries` (type: `array`):

Used when <code>mode</code> is <strong>Search all of Reddit</strong>. Each entry runs a separate global search on old.reddit.

## `searchSort` (type: `string`):

Global search <code>sort</code>. Omitted when sort is <code>new</code>.

## `searchTimeframe` (type: `string`):

Global search <code>t</code>. Ignored when sort is <code>new</code>.

## `subredditSearchUrl` (type: `string`):

Used when <code>mode</code> is <strong>Search one subreddit</strong>. Example: <code>https://www.reddit.com/r/technology</code> or <code>r/technology</code>. No subreddit keywords → browse that sub’s feed (hot/new/top). With keywords → search inside the sub.

## `subredditSearchQueries` (type: `array`):

In-subreddit search. <strong>Leave empty</strong> to use the subreddit feed instead of search.

## `subredditSearchSort` (type: `string`):

With keywords: in-subreddit search sort. Without keywords: <code>relevance</code>/<code>hot</code> → hot feed; <code>new</code>/<code>top</code> → that listing; <code>comments</code> → wildcard search.

## `subredditSearchTimeframe` (type: `string`):

<code>t=</code> for subreddit search and for <code>top</code> on the feed. Ignored for <code>new</code> and hot listing.

## `startUrls` (type: `array`):

Used when <code>mode</code> is <strong>User scraper</strong>. One or more subreddit URLs (e.g. <code>https://www.reddit.com/r/webscraping</code>).

## `includeOver18` (type: `boolean`):

Maps to <code>include_over_18</code> on Reddit JSON for <strong>Search all of Reddit</strong> and <strong>Search one subreddit</strong>.

## `searchIncludeComments` (type: `boolean`):

Fetch each result post’s thread and save comment rows. Extra requests. <code>maxItems</code> still caps <strong>posts</strong> only.

## `searchMaxCommentsPerPost` (type: `integer`):

When comment threads are on. Also sets the first <code>thread.json?limit=</code> (competitor-style). Uses <code>/api/morechildren</code> to fill truncated trees until this cap.

## `searchPostDateFrom` (type: `string`):

<code>YYYY-MM-DD</code>. Posts older than this day are skipped (not saved). With <strong>Force new sort when filtering by date</strong>, search URLs use <code>sort=new</code> for better coverage.

## `searchPostDateTo` (type: `string`):

Optional inclusive end date <code>YYYY-MM-DD</code>.

## `searchCommentDateFrom` (type: `string`):

Optional <code>YYYY-MM-DD</code>. Comment rows outside the range are not saved.

## `searchCommentDateTo` (type: `string`):

Optional inclusive end <code>YYYY-MM-DD</code>.

## `searchForceNewSortWhenDateFiltered` (type: `boolean`):

If post from/until is set, use Reddit <code>sort=new</code> on search URLs (competitor: <code>forceSortNewForTimeFilteredRuns</code>).

## `searchStrictPhrase` (type: `boolean`):

Wrap each keyword line in double quotes in the Reddit <code>q</code> parameter (competitor: <code>strictSearch</code>).

## `searchStrictTokenFilter` (type: `boolean`):

After Reddit returns results, keep only posts whose title+body contain every whitespace-separated token from the keyword line (competitor: <code>strictTokenFilter</code>). Subreddit feed mode has no keyword line — filter does not apply.

## `searchMaximizeCoverage` (type: `boolean`):

Higher caps on search pagination pages and <code>morechildren</code> expansions (competitor: <code>maximize_coverage</code>).

## `maxItems` (type: `integer`):

Search modes: max <strong>posts</strong> total (split across keyword lines). User scraper: max unique authors.

## `maxConcurrency` (type: `integer`):

For <strong>User scraper</strong> (Cheerio). Ignored for search modes.

## `minConcurrency` (type: `integer`):

Minimum pages processed in parallel.

## `maxRequestRetries` (type: `integer`):

Retries per failed request before giving up.

## `proxy` (type: `object`):

Specifies proxy servers that will be used by the scraper in order to hide its origin.<br><br>For details, see <a href='https://apify.com/apify/web-scraper#proxy-configuration' target='_blank' rel='noopener'>Proxy configuration</a> in README.

## Actor input object example

```json
{
  "mode": "searchGlobal",
  "searchQueries": [
    "Cheesecake"
  ],
  "searchSort": "relevance",
  "searchTimeframe": "all",
  "subredditSearchUrl": "https://www.reddit.com/r/technology",
  "subredditSearchSort": "relevance",
  "subredditSearchTimeframe": "all",
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/webscraping"
    }
  ],
  "includeOver18": false,
  "searchIncludeComments": false,
  "searchMaxCommentsPerPost": 200,
  "searchForceNewSortWhenDateFiltered": false,
  "searchStrictPhrase": false,
  "searchStrictTokenFilter": false,
  "searchMaximizeCoverage": false,
  "maxItems": 100,
  "maxConcurrency": 10,
  "minConcurrency": 1,
  "maxRequestRetries": 3,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ],
    "apifyProxyCountry": "US"
  }
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "searchQueries": [
        "Cheesecake"
    ],
    "subredditSearchUrl": "https://www.reddit.com/r/technology",
    "startUrls": [
        {
            "url": "https://www.reddit.com/r/webscraping"
        }
    ],
    "proxy": {
        "useApifyProxy": true,
        "apifyProxyGroups": [
            "RESIDENTIAL"
        ],
        "apifyProxyCountry": "US"
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("memo23/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "searchQueries": ["Cheesecake"],
    "subredditSearchUrl": "https://www.reddit.com/r/technology",
    "startUrls": [{ "url": "https://www.reddit.com/r/webscraping" }],
    "proxy": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
        "apifyProxyCountry": "US",
    },
}

# Run the Actor and wait for it to finish
run = client.actor("memo23/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "searchQueries": [
    "Cheesecake"
  ],
  "subredditSearchUrl": "https://www.reddit.com/r/technology",
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/webscraping"
    }
  ],
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ],
    "apifyProxyCountry": "US"
  }
}' |
apify call memo23/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=memo23/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit [Only $0.50💰] Posts | Users | Scraper",
        "description": "[Only $0.50💰] Scrape Reddit posts (and optionally full comment threads with replies) from public JSON — site-wide & subreddit search, feed browse, post/comment date filters, NSFW toggle, strict phrase/token match, proxy rotation. Legacy: deduplicated authors from subreddit listing URLs",
        "version": "0.0",
        "x-build-id": "jAAlO0CtSp8k6ffOK"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/memo23~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-memo23-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/memo23~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-memo23-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/memo23~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-memo23-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "Run mode",
                        "enum": [
                            "searchGlobal",
                            "searchSubreddit",
                            "subredditUsers"
                        ],
                        "type": "string",
                        "description": "<strong>Search all of Reddit</strong> — site-wide keyword search. <strong>Search one subreddit</strong> — search inside a sub or browse its feed (leave subreddit keywords empty). <strong>User scraper</strong> — legacy Cheerio flow to collect authors from listing URLs.",
                        "default": "searchGlobal"
                    },
                    "searchQueries": {
                        "title": "Keywords",
                        "type": "array",
                        "description": "Used when <code>mode</code> is <strong>Search all of Reddit</strong>. Each entry runs a separate global search on old.reddit.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchSort": {
                        "title": "Sort",
                        "enum": [
                            "relevance",
                            "hot",
                            "top",
                            "new",
                            "comments"
                        ],
                        "type": "string",
                        "description": "Global search <code>sort</code>. Omitted when sort is <code>new</code>.",
                        "default": "relevance"
                    },
                    "searchTimeframe": {
                        "title": "Time range",
                        "enum": [
                            "all",
                            "year",
                            "month",
                            "week",
                            "day",
                            "hour"
                        ],
                        "type": "string",
                        "description": "Global search <code>t</code>. Ignored when sort is <code>new</code>.",
                        "default": "all"
                    },
                    "subredditSearchUrl": {
                        "title": "Subreddit URL or r/name",
                        "type": "string",
                        "description": "Used when <code>mode</code> is <strong>Search one subreddit</strong>. Example: <code>https://www.reddit.com/r/technology</code> or <code>r/technology</code>. No subreddit keywords → browse that sub’s feed (hot/new/top). With keywords → search inside the sub."
                    },
                    "subredditSearchQueries": {
                        "title": "Keywords inside subreddit",
                        "type": "array",
                        "description": "In-subreddit search. <strong>Leave empty</strong> to use the subreddit feed instead of search.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "subredditSearchSort": {
                        "title": "Sort",
                        "enum": [
                            "relevance",
                            "hot",
                            "top",
                            "new",
                            "comments"
                        ],
                        "type": "string",
                        "description": "With keywords: in-subreddit search sort. Without keywords: <code>relevance</code>/<code>hot</code> → hot feed; <code>new</code>/<code>top</code> → that listing; <code>comments</code> → wildcard search.",
                        "default": "relevance"
                    },
                    "subredditSearchTimeframe": {
                        "title": "Time range",
                        "enum": [
                            "all",
                            "year",
                            "month",
                            "week",
                            "day",
                            "hour"
                        ],
                        "type": "string",
                        "description": "Sets <code>t=</code> for in-subreddit search and for the <code>top</code> feed. Ignored for the <code>new</code> and hot listings.",
                        "default": "all"
                    },
                    "startUrls": {
                        "title": "Subreddit URLs",
                        "type": "array",
                        "description": "Used when <code>mode</code> is <strong>User scraper</strong>. One or more subreddit URLs (e.g. <code>https://www.reddit.com/r/webscraping</code>).",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "includeOver18": {
                        "title": "Include NSFW",
                        "type": "boolean",
                        "description": "Maps to <code>include_over_18</code> on Reddit JSON for <strong>Search all of Reddit</strong> and <strong>Search one subreddit</strong>.",
                        "default": false
                    },
                    "searchIncludeComments": {
                        "title": "Include comment threads",
                        "type": "boolean",
                        "description": "Fetch each result post’s thread and save comment rows. This requires extra requests per post. <code>maxItems</code> still caps <strong>posts</strong> only.",
                        "default": false
                    },
                    "searchMaxCommentsPerPost": {
                        "title": "Max comments per post",
                        "minimum": 0,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Applies when comment threads are enabled. Also sets the initial <code>thread.json?limit=</code> (competitor-style). Uses <code>/api/morechildren</code> to fill truncated trees up to this cap.",
                        "default": 200
                    },
                    "searchPostDateFrom": {
                        "title": "Keep posts from (UTC date)",
                        "type": "string",
                        "description": "<code>YYYY-MM-DD</code>. Posts older than this day are skipped (not saved). With <strong>Force new sort when filtering by date</strong>, search URLs use <code>sort=new</code> for better coverage."
                    },
                    "searchPostDateTo": {
                        "title": "Keep posts until (UTC date)",
                        "type": "string",
                        "description": "Optional inclusive end date <code>YYYY-MM-DD</code>."
                    },
                    "searchCommentDateFrom": {
                        "title": "Keep comments from (UTC date)",
                        "type": "string",
                        "description": "Optional <code>YYYY-MM-DD</code>. Comment rows outside the range are not saved."
                    },
                    "searchCommentDateTo": {
                        "title": "Keep comments until (UTC date)",
                        "type": "string",
                        "description": "Optional inclusive end <code>YYYY-MM-DD</code>."
                    },
                    "searchForceNewSortWhenDateFiltered": {
                        "title": "Force new sort when post dates filtered",
                        "type": "boolean",
                        "description": "If post from/until is set, use Reddit <code>sort=new</code> on search URLs (competitor: <code>forceSortNewForTimeFilteredRuns</code>).",
                        "default": false
                    },
                    "searchStrictPhrase": {
                        "title": "Strict phrase search (quoted q)",
                        "type": "boolean",
                        "description": "Wrap each keyword line in double quotes in the Reddit <code>q</code> parameter (competitor: <code>strictSearch</code>).",
                        "default": false
                    },
                    "searchStrictTokenFilter": {
                        "title": "Strict token filter (local)",
                        "type": "boolean",
                        "description": "After Reddit returns results, keep only posts whose title+body contain every whitespace-separated token from the keyword line (competitor: <code>strictTokenFilter</code>). Subreddit feed mode has no keyword line — filter does not apply.",
                        "default": false
                    },
                    "searchMaximizeCoverage": {
                        "title": "Maximize coverage",
                        "type": "boolean",
                        "description": "Higher caps on search pagination pages and <code>morechildren</code> expansions (competitor: <code>maximize_coverage</code>).",
                        "default": false
                    },
                    "maxItems": {
                        "title": "Max items",
                        "type": "integer",
                        "description": "Search modes: max <strong>posts</strong> total (split across keyword lines). User scraper: max unique authors.",
                        "default": 100
                    },
                    "maxConcurrency": {
                        "title": "Max concurrency",
                        "type": "integer",
                        "description": "For <strong>User scraper</strong> (Cheerio). Ignored for search modes.",
                        "default": 10
                    },
                    "minConcurrency": {
                        "title": "Min concurrency",
                        "type": "integer",
                        "description": "Minimum number of pages processed in parallel.",
                        "default": 1
                    },
                    "maxRequestRetries": {
                        "title": "Max request retries",
                        "type": "integer",
                        "description": "Retries per failed request before giving up.",
                        "default": 3
                    },
                    "proxy": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Specifies proxy servers that will be used by the scraper in order to hide its origin.<br><br>For details, see <a href='https://apify.com/apify/web-scraper#proxy-configuration' target='_blank' rel='noopener'>Proxy configuration</a> in README.",
                        "default": {
                            "useApifyProxy": true,
                            "apifyProxyGroups": [
                                "RESIDENTIAL"
                            ],
                            "apifyProxyCountry": "US"
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
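
To make the schema above concrete, here is an illustrative input for the **Search one subreddit** mode, assembled only from the fields documented above. The values are examples, not defaults, and the mode-selector field (defined earlier in the schema, outside this excerpt) must also be set to the subreddit-search mode; the `proxy` object mirrors the schema's default.

```json
{
    "subredditSearchUrl": "r/technology",
    "subredditSearchQueries": ["web scraping"],
    "subredditSearchSort": "top",
    "subredditSearchTimeframe": "month",
    "includeOver18": false,
    "searchIncludeComments": true,
    "searchMaxCommentsPerPost": 100,
    "searchPostDateFrom": "2024-01-01",
    "searchPostDateTo": "2024-06-30",
    "maxItems": 100,
    "proxy": {
        "useApifyProxy": true,
        "apifyProxyGroups": ["RESIDENTIAL"],
        "apifyProxyCountry": "US"
    }
}
```

Leaving `subredditSearchQueries` empty would switch this run from in-subreddit search to browsing the subreddit's feed, as described in the field's documentation above.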
