# Reddit Scraper - Posts, Comments, Communities & Users (`whoareyouanas/reddit-scraper`) Actor

Scrape Reddit posts, comments, subreddits, and user profiles by URL or keyword search. No login required. Full comment trees, NSFW + date filters, pay only for what you scrape ($0.005 per result).

- **URL**: https://apify.com/whoareyouanas/reddit-scraper.md
- **Developed by:** [Anas Nadeem](https://apify.com/whoareyouanas) (community)
- **Categories:** Social media, Automation, Lead generation
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $5.00 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

You can integrate Actors into projects in almost any stack; aim for integrations that are safe, well-documented, and production-ready.
The recommended approaches are as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper — Posts, Comments, Communities & Users

Scrape Reddit at scale — posts, comments, communities (subreddits), and user profiles. Works by **direct URL** or **keyword search**, supports nested comment trees, NSFW + date filters, and global item caps. No Reddit account or API key needed.

### What does Reddit Scraper do?

This Actor pulls structured data from Reddit's public JSON API. Drop in any Reddit URL — a subreddit, post, user profile, or search results — and it returns clean rows ready for analytics, monitoring, or LLM ingestion. You can also run a keyword search across posts, comments, communities, and users.

It runs on a lightweight HTTP path (no browser), so it's fast and cheap. Comment trees are walked depth-first and `more` stubs are expanded against `/api/morechildren` automatically.

### Key Features

- **Multiple input modes** — Start URLs, keyword search, or leaderboard fallback (popular subreddits)
- **Mixed inputs in one run** — Combine subreddit URLs, post URLs, and user profiles freely
- **Full comment trees** — Walks nested replies and expands collapsed branches via `/api/morechildren`
- **4 result categories** — Posts (`t3`), comments (`t1`), communities (`t5`), and users (`t2`)
- **Granular limits** — Per-category caps (`maxPostCount`, `maxComments`, `maxCommunitiesCount`, `maxUserCount`) plus a global `maxItems` ceiling
- **Date and NSFW filters** — `postDateLimit`, `commentDateLimit`, `includeNSFW`
- **Skip toggles** — `skipComments`, `skipUserPosts`, `skipCommunity` for narrower runs
- **Apify residential proxy** — Recommended for production; defaults are pre-wired

### Input Modes

The Actor picks one of three modes based on what you provide:

1. **Start URLs** (preferred) — When `startUrls` is non-empty, every other input mode is ignored.
2. **Search** — When `startUrls` is empty but `searches` has at least one query.
3. **Leaderboard** — When neither is set, the Actor falls back to scraping `r/popular`'s top communities.
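For instance, a Search-mode run (empty `startUrls`, keyword queries restricted to one subreddit) could be configured like this; the query strings and limits are illustrative:

```json
{
  "startUrls": [],
  "searches": ["rust async", "tokio runtime"],
  "searchCommunityName": "rust",
  "searchPosts": true,
  "sort": "top",
  "time": "month",
  "maxItems": 50
}
```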

#### Supported URL Shapes

| URL pattern | What gets scraped |
|---|---|
| `reddit.com/r/<sub>/` | Subreddit posts (sort/time honored), optional community-about, optional comments per post |
| `reddit.com/r/<sub>/comments/<id>/` | Single post + its comment tree |
| `reddit.com/user/<name>/` | User profile + their submitted posts + their comment history |
| `reddit.com/search?q=...` | Keyword search (post / comment / sr / user, depending on flags) |
| `reddit.com/r/<sub>/search?q=...` | Search restricted to one subreddit |

`old.reddit.com` and `www.reddit.com` are both accepted; URLs are normalized internally.
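Because URL types can be mixed freely in one run, a `startUrls` array combining a subreddit, a single post, and a user profile is valid (the post id and username here are placeholders):

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/programming/" },
    { "url": "https://www.reddit.com/r/programming/comments/abc123/" },
    { "url": "https://www.reddit.com/user/example_user/" }
  ]
}
```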

### Output Data

Every dataset row carries a `dataType` discriminator so you can split them downstream.
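A minimal sketch of splitting downloaded rows by that discriminator (the sample rows are made up; field names follow the tables in this section):

```python
from collections import defaultdict

def split_by_data_type(rows):
    """Group dataset rows by their dataType discriminator."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row.get("dataType", "unknown")].append(row)
    return dict(grouped)

rows = [
    {"dataType": "post", "id": "t3_abc"},
    {"dataType": "comment", "id": "t1_def"},
    {"dataType": "post", "id": "t3_ghi"},
]
grouped = split_by_data_type(rows)
print(sorted(grouped))       # ['comment', 'post']
print(len(grouped["post"]))  # 2
```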

#### Post (`dataType: "post"`)

| Field | Type | Description |
|---|---|---|
| `id` | string | Reddit fullname (`t3_xxx`) |
| `parsedId` | string | Base-36 id without prefix |
| `url` | string | Permalink to the post (or external URL for link posts) |
| `username` | string | Author |
| `title` | string | Post title |
| `communityName` | string | `r/<subreddit>` |
| `parsedCommunityName` | string | Subreddit name without `r/` prefix |
| `body` | string | Self-text (or external URL for link posts) |
| `html` | string | Rendered HTML for self-text |
| `numberOfComments` | number | `num_comments` from Reddit |
| `upVotes` | number | Score |
| `authorFlair` | string \| null | Author flair text |
| `isVideo` | boolean | True for video posts |
| `isAd` | boolean | True for promoted/ad posts |
| `over18` | boolean | NSFW flag |
| `createdAt` | string | ISO 8601 |
| `scrapedAt` | string | ISO 8601 |

#### Comment (`dataType: "comment"`)

| Field | Type | Description |
|---|---|---|
| `id` | string | `t1_xxx` |
| `parsedId` | string | Base-36 id |
| `url` | string | Permalink to the comment |
| `parentId` | string | Parent fullname (`t3_*` for top-level, `t1_*` for replies) |
| `username` | string | Author |
| `authorFlair` | string \| null | Flair text |
| `category` | string | Subreddit name |
| `communityName` | string | `r/<subreddit>` |
| `body` | string | Comment text (markdown) |
| `html` | string | Rendered HTML |
| `upVotes` | number | Score |
| `numberOfReplies` | number | Recursive count of `t1` replies underneath |
| `createdAt` | string | ISO 8601 |
| `scrapedAt` | string | ISO 8601 |

#### Community (`dataType: "community"`)

| Field | Type | Description |
|---|---|---|
| `id` | string | `t5_xxx` |
| `name` | string | Display name (no `r/` prefix) |
| `title` | string | Long-form community title |
| `headerImage` | string | Banner / header image URL |
| `description` | string | Public description |
| `over18` | boolean | NSFW community flag |
| `numberOfMembers` | number | Subscribers |
| `url` | string | Absolute permalink |
| `createdAt` | string | ISO 8601 |
| `scrapedAt` | string | ISO 8601 |

#### User (`dataType: "user"`)

| Field | Type | Description |
|---|---|---|
| `id` | string | `t2_xxx` |
| `url` | string | Profile permalink |
| `username` | string | Reddit handle |
| `userIcon` | string | Avatar URL |
| `postKarma` | number | Link karma |
| `commentKarma` | number | Comment karma |
| `description` | string | Profile description |
| `over18` | boolean | NSFW profile flag |
| `createdAt` | string | ISO 8601 |
| `scrapedAt` | string | ISO 8601 |

### Sample Output

```json
{
  "dataType": "post",
  "id": "t3_1t16uqd",
  "parsedId": "1t16uqd",
  "url": "https://www.reddit.com/r/AskReddit/comments/1t16uqd/...",
  "username": "IIlustriousTea",
  "title": "US birth rates just hit another record low...",
  "communityName": "r/AskReddit",
  "parsedCommunityName": "AskReddit",
  "body": "",
  "html": "",
  "numberOfComments": 8892,
  "upVotes": 7657,
  "authorFlair": null,
  "isVideo": false,
  "isAd": false,
  "over18": false,
  "createdAt": "2026-05-01T21:40:45.000Z",
  "scrapedAt": "2026-05-02T05:53:19.442Z"
}
```

### Input Parameters

#### Direct URLs

| Parameter | Type | Default | Description |
|---|---|---|---|
| `startUrls` | array | `[]` | Reddit URLs to scrape. Mix any of: subreddit, post, user, or search URLs. |
| `ignoreStartUrls` | boolean | `false` | Force-bypass the URLs field (helpful for tools like Zapier). |

#### Search

| Parameter | Type | Default | Description |
|---|---|---|---|
| `searches` | string\[] | `[]` | Keywords to search. Used only when `startUrls` is empty. |
| `searchCommunityName` | string | `""` | Restrict every search to one subreddit. |
| `searchPosts` | boolean | `true` | Include posts in search results. |
| `searchComments` | boolean | `false` | Include comments (best-effort — Reddit's comment search returns parent posts). |
| `searchCommunities` | boolean | `false` | Include matching communities. |
| `searchUsers` | boolean | `false` | Include matching user profiles. |
| `sort` | enum | `new` | `relevance` / `hot` / `top` / `new` / `rising` / `comments`. |
| `time` | enum | `""` | `all` / `hour` / `day` / `week` / `month` / `year`. Most useful with `sort=top`. |

#### Filters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `includeNSFW` | boolean | `true` | Include adult-rated posts and subreddits. |
| `skipComments` | boolean | `false` | Don't scrape comments when going through posts. |
| `skipUserPosts` | boolean | `false` | Don't scrape a user's submitted posts when going through their profile. |
| `skipCommunity` | boolean | `false` | Don't push community metadata when going through a subreddit. |
| `postDateLimit` | ISO date | — | Only keep posts created after this date. |
| `commentDateLimit` | ISO date | — | Only keep comments created after this date. |
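A filter combination that keeps only recent, SFW posts and skips comment scraping entirely might look like this (the date is illustrative):

```json
{
  "includeNSFW": false,
  "skipComments": true,
  "postDateLimit": "2024-01-01"
}
```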

#### Limits

| Parameter | Type | Default | Description |
|---|---|---|---|
| `maxItems` | integer | `10` | Hard global cap on dataset rows across **all** categories. |
| `maxPostCount` | integer | `10` | Per-listing cap on posts. |
| `maxComments` | integer | `10` | Per-post cap on comments (or global cap on comment-search/user-comments). |
| `maxCommunitiesCount` | integer | `2` | Cap on communities returned from search or leaderboard. |
| `maxUserCount` | integer | `2` | Cap on user profiles returned from search. |

#### Advanced

| Parameter | Type | Default | Description |
|---|---|---|---|
| `proxy` | object | `Apify Residential` | Apify proxy or your own proxy URLs. Residential is strongly recommended. |
| `debugMode` | boolean | `false` | Verbose Crawlee logging. |

### How It Works

The Actor sends plain HTTP requests to `reddit.com/*.json` using a descriptive non-browser User-Agent. Reddit's anonymous JSON endpoints reject Chrome-like UAs that lack browser cookies, so Crawlee's automatic browser-fingerprint header injection is explicitly disabled. This keeps unauthenticated rate limits at their generous default (~100 requests/min) instead of falling back to the strict ~10/min anti-bot tier.

Comment trees are walked depth-first up to `maxComments`. Collapsed `more` stubs are expanded by POSTing to `/api/morechildren.json` in batches of 100 children — no extra request per comment.
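The batching above can be sketched with a hypothetical helper that splits a list of collapsed child ids into groups of at most 100 before each `/api/morechildren.json` call (the helper name is illustrative, not the Actor's actual code):

```python
def batch_children(child_ids, batch_size=100):
    """Yield successive batches of comment ids, at most batch_size each."""
    for i in range(0, len(child_ids), batch_size):
        yield child_ids[i:i + batch_size]

# 250 collapsed ids -> 3 POST requests (100 + 100 + 50 children)
ids = [f"c{i}" for i in range(250)]
print([len(b) for b in batch_children(ids)])  # [100, 100, 50]
```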

The crawler aborts as soon as `maxItems` is hit, so over-runs are not a concern even with deep trees.

### Pricing

This Actor uses **pay-per-event** pricing:

| Event | Price |
|---|---|
| Actor start | $0.00005 |
| Result extracted (per dataset row) | $0.005 |

You only pay for what you scrape. Apify platform compute and proxy usage are billed separately based on your plan.
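Under this model, a back-of-the-envelope estimate of a run's event cost is simply the start fee plus the per-result fee:

```python
ACTOR_START_USD = 0.00005
PER_RESULT_USD = 0.005

def estimate_run_cost(result_count: int) -> float:
    """Estimated event cost in USD for one run (excludes platform/proxy usage)."""
    return ACTOR_START_USD + PER_RESULT_USD * result_count

print(round(estimate_run_cost(1000), 5))  # 5.00005
```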

### Limitations

- **Comment search** returns parent posts only (Reddit's API behavior); the Actor enqueues those posts so their comment trees are still scraped. Treat it as best-effort.
- **Removed/deleted posts** return a 404 envelope; they're logged and skipped without retry.
- **Login-walled content** (private subreddits, NSFW-locked content for unauth) is not accessible via the JSON API and is silently skipped.

# Actor input Schema

## `startUrls` (type: `array`):

Reddit URLs to scrape — posts, subreddits, users, or search-result pages. When provided, all Search options below are ignored.

## `ignoreStartUrls` (type: `boolean`):

Set to true to bypass the Start URLs field (used as a fix for tools like Zapier).

## `searches` (type: `array`):

Keywords used to search Reddit. Used only when Start URLs is empty.

## `searchCommunityName` (type: `string`):

If provided, search is performed only inside this community (e.g. 'programming'). Leave empty to search all of Reddit.

## `searchPosts` (type: `boolean`):

Include matching posts in search results.

## `searchComments` (type: `boolean`):

Include matching comments in search results (best-effort; Reddit's comment search is limited).

## `searchCommunities` (type: `boolean`):

Include matching communities (subreddits) in search results.

## `searchUsers` (type: `boolean`):

Include matching user profiles in search results.

## `sort` (type: `string`):

Sort order for posts and search results.

## `time` (type: `string`):

Limit results to a recent time window (most useful with sort = top).

## `includeNSFW` (type: `boolean`):

Include adult-rated posts and subreddits in results.

## `skipComments` (type: `boolean`):

Skip scraping comments when going through posts.

## `skipUserPosts` (type: `boolean`):

Skip scraping a user's submitted posts when going through their profile.

## `skipCommunity` (type: `boolean`):

Skip scraping community (subreddit) metadata, but still scrape its posts.

## `postDateLimit` (type: `string`):

Only retrieve posts published after this date (ISO 8601, e.g. 2024-01-01).

## `commentDateLimit` (type: `string`):

Only retrieve comments published after this date (ISO 8601).

## `maxItems` (type: `integer`):

Hard cap on the total number of items saved to the dataset across all categories.

## `maxPostCount` (type: `integer`):

Maximum number of posts scraped from each subreddit / search / user submissions page.

## `maxComments` (type: `integer`):

Maximum number of comments scraped per post.

## `maxCommunitiesCount` (type: `integer`):

Maximum number of communities returned from search or leaderboard.

## `maxUserCount` (type: `integer`):

Maximum number of user profiles returned from search.

## `scrollTimeout` (type: `integer`):

Timeout (seconds) for page scrolling in the browser fallback path. Ignored on the JSON API path.

## `proxy` (type: `object`):

Use Apify proxy or your own proxy servers. Residential is strongly recommended.

## `debugMode` (type: `boolean`):

Enable verbose request and extraction logs.

## Actor input object example

```json
{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/programming/"
    }
  ],
  "ignoreStartUrls": false,
  "searchPosts": true,
  "searchComments": false,
  "searchCommunities": false,
  "searchUsers": false,
  "sort": "new",
  "time": "",
  "includeNSFW": true,
  "skipComments": false,
  "skipUserPosts": false,
  "skipCommunity": false,
  "maxItems": 10,
  "maxPostCount": 10,
  "maxComments": 10,
  "maxCommunitiesCount": 2,
  "maxUserCount": 2,
  "scrollTimeout": 40,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  },
  "debugMode": false
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        {
            "url": "https://www.reddit.com/r/programming/"
        }
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("whoareyouanas/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }] }

# Run the Actor and wait for it to finish
run = client.actor("whoareyouanas/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/programming/"
    }
  ]
}' |
apify call whoareyouanas/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=whoareyouanas/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper - Posts, Comments, Communities & Users",
        "description": "Scrape Reddit posts, comments, subreddits, and user profiles by URL or keyword search. No login required. Full comment trees, NSFW + date filters, pay only for what you scrape ($0.005 per result).",
        "version": "1.0",
        "x-build-id": "RkvpvfyBxs9tUiVv1"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/whoareyouanas~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-whoareyouanas-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/whoareyouanas~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-whoareyouanas-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/whoareyouanas~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-whoareyouanas-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "proxy"
                ],
                "properties": {
                    "startUrls": {
                        "title": "Start URLs",
                        "type": "array",
                        "description": "Reddit URLs to scrape — posts, subreddits, users, or search-result pages. When provided, all Search options below are ignored.",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "ignoreStartUrls": {
                        "title": "Ignore Start URLs",
                        "type": "boolean",
                        "description": "Set to true to bypass the Start URLs field (used as a fix for tools like Zapier).",
                        "default": false
                    },
                    "searches": {
                        "title": "Search terms",
                        "type": "array",
                        "description": "Keywords used to search Reddit. Used only when Start URLs is empty.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchCommunityName": {
                        "title": "Restrict search to community",
                        "type": "string",
                        "description": "If provided, search is performed only inside this community (e.g. 'programming'). Leave empty to search all of Reddit."
                    },
                    "searchPosts": {
                        "title": "Search posts",
                        "type": "boolean",
                        "description": "Include matching posts in search results.",
                        "default": true
                    },
                    "searchComments": {
                        "title": "Search comments",
                        "type": "boolean",
                        "description": "Include matching comments in search results (best-effort; Reddit's comment search is limited).",
                        "default": false
                    },
                    "searchCommunities": {
                        "title": "Search communities",
                        "type": "boolean",
                        "description": "Include matching communities (subreddits) in search results.",
                        "default": false
                    },
                    "searchUsers": {
                        "title": "Search users",
                        "type": "boolean",
                        "description": "Include matching user profiles in search results.",
                        "default": false
                    },
                    "sort": {
                        "title": "Sort",
                        "enum": [
                            "",
                            "relevance",
                            "hot",
                            "top",
                            "new",
                            "rising",
                            "comments"
                        ],
                        "type": "string",
                        "description": "Sort order for posts and search results.",
                        "default": "new"
                    },
                    "time": {
                        "title": "Time filter",
                        "enum": [
                            "",
                            "all",
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year"
                        ],
                        "type": "string",
                        "description": "Limit results to a recent time window (most useful with sort = top).",
                        "default": ""
                    },
                    "includeNSFW": {
                        "title": "Include NSFW content",
                        "type": "boolean",
                        "description": "Include adult-rated posts and subreddits in results.",
                        "default": true
                    },
                    "skipComments": {
                        "title": "Skip comments",
                        "type": "boolean",
                        "description": "Skip scraping comments when going through posts.",
                        "default": false
                    },
                    "skipUserPosts": {
                        "title": "Skip user posts",
                        "type": "boolean",
                        "description": "Skip scraping a user's submitted posts when going through their profile.",
                        "default": false
                    },
                    "skipCommunity": {
                        "title": "Skip community info",
                        "type": "boolean",
                        "description": "Skip scraping community (subreddit) metadata, but still scrape its posts.",
                        "default": false
                    },
                    "postDateLimit": {
                        "title": "Posts after date (ISO)",
                        "type": "string",
                        "description": "Only retrieve posts published after this date (ISO 8601, e.g. 2024-01-01)."
                    },
                    "commentDateLimit": {
                        "title": "Comments after date (ISO)",
                        "type": "string",
                        "description": "Only retrieve comments published after this date (ISO 8601)."
                    },
                    "maxItems": {
                        "title": "Max items (total)",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Hard cap on the total number of items saved to the dataset across all categories.",
                        "default": 10
                    },
                    "maxPostCount": {
                        "title": "Max posts per page",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Maximum number of posts scraped from each subreddit / search / user submissions page.",
                        "default": 10
                    },
                    "maxComments": {
                        "title": "Max comments per post",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Maximum number of comments scraped per post.",
                        "default": 10
                    },
                    "maxCommunitiesCount": {
                        "title": "Max communities",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Maximum number of communities returned from search or leaderboard.",
                        "default": 2
                    },
                    "maxUserCount": {
                        "title": "Max users",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Maximum number of user profiles returned from search.",
                        "default": 2
                    },
                    "scrollTimeout": {
                        "title": "Browser scroll timeout (s)",
                        "minimum": 5,
                        "type": "integer",
                        "description": "Timeout (seconds) for page scrolling in the browser fallback path. Ignored on the JSON API path.",
                        "default": 40
                    },
                    "proxy": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Use Apify proxy or your own proxy servers. Residential is strongly recommended.",
                        "default": {
                            "useApifyProxy": true,
                            "apifyProxyGroups": [
                                "RESIDENTIAL"
                            ]
                        }
                    },
                    "debugMode": {
                        "title": "Debug mode",
                        "type": "boolean",
                        "description": "Enable verbose request and extraction logs.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
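The input properties above can be exercised with the official Apify Python client from the head of this document. Below is a minimal, hedged sketch: it merges user overrides onto the schema defaults shown above, then starts the Actor and reads the resulting dataset. The `APIFY_TOKEN` environment variable and the `build_input`/`run_actor` helper names are illustrative assumptions, not part of the Actor itself; the run object returned by `.call()` follows the `runsResponseSchema` above (`status`, `defaultDatasetId`, and so on).

```python
import os

# Defaults mirroring the input schema above.
SCHEMA_DEFAULTS = {
    "maxComments": 10,
    "maxCommunitiesCount": 2,
    "maxUserCount": 2,
    "scrollTimeout": 40,
    "proxy": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
    "debugMode": False,
}


def build_input(**overrides):
    """Merge user overrides onto the schema defaults (hypothetical helper)."""
    run_input = dict(SCHEMA_DEFAULTS)
    run_input.update(overrides)
    return run_input


def run_actor(run_input):
    """Start the Actor, wait for it to finish, and return the dataset items.

    Requires an APIFY_TOKEN environment variable and `pip install apify-client`.
    """
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor("whoareyouanas/reddit-scraper").call(run_input=run_input)
    # The run dict matches runsResponseSchema above; the scraped results land
    # in the run's default dataset.
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())


if __name__ == "__main__" and os.getenv("APIFY_TOKEN"):
    items = run_actor(build_input(maxComments=25, debugMode=True))
    print(f"Scraped {len(items)} results")
```

Keeping the defaults in one dictionary makes it easy to see at a glance which limits a run will use, and the network call is guarded so the script is safe to import without a token.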
