# Reddit Posts Scraper — (with Comments & Replies) (`khadinakbar/reddit-posts-scraper`) Actor

Extract Reddit posts, comments & subreddit data with no login required. Returns title, score, author, flair, body text, and dates. JSON API-powered for 99%+ reliability.

- **URL**: https://apify.com/khadinakbar/reddit-posts-scraper.md
- **Developed by:** [Khadin Akbar](https://apify.com/khadinakbar) (community)
- **Categories:** Social media, MCP servers, SEO tools
- **Stats:** 1 total user, 0 monthly users, 0.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $2.00 / 1,000 posts scraped

This Actor is paid per event and usage. You are charged both the fixed price for specific events and for Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## 🔍 Reddit Scraper — Posts & Comments | No Login

### What does Reddit Scraper do?

Reddit Scraper extracts posts, comments, and metadata from any subreddit, search query, or direct Reddit URL — with no login, no API key, and no cookies required. It uses Reddit's public JSON API for **99%+ reliability** and returns clean, structured data ready for analysis, AI pipelines, or lead generation workflows.

### Why use this Reddit Scraper?

- **No login or Reddit API key needed** — works out of the box on any public subreddit or search
- **99%+ success rate** — powered by Reddit's own JSON API, not fragile HTML scraping
- **50% cheaper than the leading competitor** — $0.002 per result vs $0.004 elsewhere
- **Full MCP/AI compatibility** — every output field has semantic names and metadata so Claude and other AI agents understand exactly what data they're getting
- **Advanced filtering** — filter by minimum score, flair, author, date, comment count, and NSFW status

### What data can Reddit Scraper extract?

| Field | Description | Example |
|-------|-------------|---------|
| `post_id` | Reddit post identifier | `t3_abc123` |
| `title` | Full post title | `"Best Python resources in 2025?"` |
| `author` | Reddit username | `"curious_dev"` |
| `subreddit` | Community name | `"learnprogramming"` |
| `url` | Full post URL | `"https://reddit.com/r/..."` |
| `body_text` | Post text content | `"I've been coding for 2 years..."` |
| `score` | Net upvotes | `482` |
| `upvote_ratio` | % upvoted | `0.97` |
| `num_comments` | Total comments | `84` |
| `flair` | Post flair label | `"Question"` |
| `external_url` | Link post URL | `"https://github.com/..."` |
| `thumbnail_url` | Thumbnail image | `"https://b.thumbs..."` |
| `is_nsfw` | NSFW flag | `false` |
| `is_video` | Video post flag | `false` |
| `created_at` | Post creation time (UTC) | `"2025-11-15T14:32:00Z"` |
| `scraped_at` | When scraped | `"2026-04-09T10:00:00Z"` |
| `data_type` | Record type | `"post"` or `"comment"` |

---

### How to scrape Reddit

#### Step 1 — Choose your input source

**Option A: By Subreddits**
Enter subreddit names (with or without `r/` prefix). The scraper fetches posts sorted by your chosen method.

```json
{
  "subreddits": ["programming", "learnpython", "MachineLearning"],
  "sort": "top",
  "time": "week",
  "maxResults": 100
}
```

**Option B: By Search Query**
Search across all of Reddit for any keyword or phrase.

```json
{
  "searchQueries": ["AI news 2025", "best side hustle"],
  "sort": "relevance",
  "time": "month",
  "maxResults": 200
}
```

**Option C: By Direct URL**
Pass any Reddit URL and the scraper extracts data from it directly.

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/datascience/top/" },
    { "url": "https://www.reddit.com/r/programming/search/?q=typescript" }
  ],
  "maxResults": 50
}
```

#### Step 2 — Optional: Enable comment scraping

Set `includeComments: true` to also pull the top comments for each post.

```json
{
  "subreddits": ["AskReddit"],
  "maxResults": 20,
  "includeComments": true,
  "maxCommentsPerPost": 50
}
```

Comment records are saved alongside posts in the same dataset, with `data_type: "comment"`.
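Because posts and comments land in the same dataset, downstream code typically separates them on the `data_type` field. A minimal sketch, assuming the items have already been fetched as a list of dicts (the `split_records` helper is illustrative, not part of the Actor):

```python
# Illustrative helper: split a fetched dataset into posts and comments
# using the `data_type` field each record carries.
def split_records(items):
    """Group dataset items by their `data_type` value."""
    posts = [it for it in items if it.get("data_type") == "post"]
    comments = [it for it in items if it.get("data_type") == "comment"]
    return posts, comments

# Minimal hypothetical records, with only the fields needed here:
items = [
    {"post_id": "t3_abc123", "data_type": "post"},
    {"post_id": "t3_abc123", "data_type": "comment"},
]
posts, comments = split_records(items)
print(len(posts), len(comments))  # 1 1
```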

***

### Input Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `subreddits` | string\[] | — | Subreddit names to scrape |
| `searchQueries` | string\[] | — | Keywords to search on Reddit |
| `startUrls` | URL\[] | — | Direct Reddit URLs to scrape |
| `sort` | string | `hot` | Sort by: hot, new, top, rising, controversial, relevance |
| `time` | string | `all` | Time filter: hour, day, week, month, year, all |
| `maxResults` | number | `50` | Maximum total posts to save |
| `includeComments` | boolean | `false` | Scrape top comments for each post |
| `maxCommentsPerPost` | number | `20` | Max comments per post |
| `includeNSFW` | boolean | `false` | Include NSFW posts |
| `minScore` | number | — | Minimum upvote score filter |
| `maxScore` | number | — | Maximum upvote score filter |
| `minComments` | number | — | Minimum comment count filter |
| `flairFilter` | string | — | Only include posts with this exact flair |
| `authorFilter` | string | — | Only include posts from this username |
| `postDateLimit` | string | — | Exclude posts older than this date (YYYY-MM-DD) |
| `proxyConfiguration` | object | Residential | Proxy settings |
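The filter parameters compose freely. For example, an input like the following (values are illustrative) would keep only high-engagement `Question` posts from the past month:

```json
{
  "subreddits": ["learnprogramming"],
  "sort": "top",
  "time": "month",
  "maxResults": 100,
  "minScore": 100,
  "minComments": 10,
  "flairFilter": "Question",
  "postDateLimit": "2025-01-01"
}
```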

***

### Output Example

```json
{
  "post_id": "t3_1abc23",
  "title": "What's the best way to learn Python in 2025?",
  "author": "curious_dev",
  "subreddit": "learnprogramming",
  "url": "https://www.reddit.com/r/learnprogramming/comments/abc123/",
  "permalink": "/r/learnprogramming/comments/abc123/whats_the_best_way/",
  "body_text": "I've been coding JavaScript for 2 years and want to branch out...",
  "score": 482,
  "upvote_ratio": 0.97,
  "num_comments": 84,
  "flair": "Question",
  "domain": "self.learnprogramming",
  "external_url": null,
  "thumbnail_url": null,
  "is_nsfw": false,
  "is_video": false,
  "is_self": true,
  "created_at": "2025-11-15T14:32:00.000Z",
  "scraped_at": "2026-04-09T10:00:00.000Z",
  "source_url": "https://www.reddit.com/r/learnprogramming/comments/abc123/",
  "data_type": "post"
}
```

***

### Use Cases

**Market Research & Consumer Insights**
Scrape product-related subreddits to understand what real customers say about your product or competitors. Reddit users are unusually candid — ideal for genuine sentiment analysis.

**AI & NLP Training Data**
Build large, diverse text datasets for fine-tuning LLMs or sentiment classifiers. Reddit's wide range of topics, writing styles, and community sizes makes it one of the best public text sources.

**Brand Monitoring**
Set up scheduled runs on keyword searches for your brand name, product, or competitors. Catch PR issues early or spot positive sentiment to amplify.

**Content Strategy & Trend Discovery**
Track which posts get the most upvotes in your niche each week. Use the `sort: top` + `time: week` combo to find what resonates with your target audience before creating content.

**Lead Generation & Community Analysis**
Find engaged community members in your niche. Use `minScore` to filter for only high-signal discussions.

***

### How to run via API

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_API_TOKEN' });

const run = await client.actor('khadinakbar/reddit-posts-scraper').call({
  subreddits: ['programming', 'learnpython'],
  sort: 'top',
  time: 'week',
  maxResults: 500,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```

Export scraped data, run the scraper via API, schedule and monitor runs, or integrate with other tools. Data is available in JSON, CSV, Excel, XML, and HTML formats.
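The Apify REST API serves dataset items with a `format` query parameter, so each of the formats above maps to a simple URL. A sketch building those export URLs (`DATASET_ID` is a placeholder; in practice use the run's `defaultDatasetId`):

```python
# Sketch: build Apify dataset export URLs for the formats listed above.
BASE = "https://api.apify.com/v2/datasets"

def export_url(dataset_id: str, fmt: str) -> str:
    """Return the dataset-items endpoint for a given export format."""
    # Format names follow the Apify API; Excel exports use "xlsx".
    assert fmt in {"json", "csv", "xlsx", "xml", "html"}
    return f"{BASE}/{dataset_id}/items?format={fmt}"

print(export_url("DATASET_ID", "csv"))
# https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv
```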

***

### Pricing

This Actor uses **pay-per-event pricing** — you only pay for what you actually scrape.

| Plan | Price per Post |
|------|----------------|
| Free | $0.002 |
| Bronze | $0.0018 |
| Silver | $0.0016 |
| Gold+ | $0.0015 |

**Cost examples:**

- 100 posts → ~$0.20
- 1,000 posts → ~$2.00
- 10,000 posts → ~$15–20

The Free Apify plan includes $5 in monthly credits — enough to scrape **2,000+ posts for free** every month.
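The per-event prices multiply straightforwardly. A rough estimator (plan keys are illustrative, and Apify platform usage is billed on top of the event price):

```python
# Rough cost estimator for the per-post event prices in the table above.
PRICE_PER_POST = {"free": 0.002, "bronze": 0.0018, "silver": 0.0016, "gold": 0.0015}

def estimated_cost(posts: int, plan: str = "free") -> float:
    """Posts scraped times the per-post event price, rounded to cents.

    Platform usage charges are not included.
    """
    return round(posts * PRICE_PER_POST[plan], 2)

print(estimated_cost(1000))            # 2.0
print(estimated_cost(10_000, "gold"))  # 15.0
```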

***

### FAQ

**Does this require a Reddit account or API key?**
No. This scraper uses Reddit's public JSON API endpoints which are accessible without authentication. No cookies, no login, no Reddit API key needed.

**Why is the success rate higher than other Reddit scrapers?**
Most Reddit scrapers use Playwright (a headless browser) to render pages. This Actor queries Reddit's own JSON API directly using lightweight HTTP requests, which is more reliable, faster, and harder to block than headless browser traffic.

**Can I scrape private subreddits?**
No. This scraper only accesses publicly available Reddit data. Private or quarantined subreddits require authentication and are not supported.

**Why do some posts show `[deleted]` as the author?**
Reddit accounts that were deleted after posting will show `[deleted]`. This is Reddit's own value — the scraper preserves it accurately.

**How do I scrape more than 1,000 posts from a subreddit?**
Reddit limits browsing to ~1,000 posts per listing view. To collect more, use `searchQueries` with `sort: new` and `postDateLimit` to fetch posts in time-based windows. This breaks the 1,000-post cap.
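The windowing approach above can be sketched as follows. Each window would be a separate run: the window start maps to `postDateLimit` (the Actor input has no upper-bound date, so the window end would have to be enforced by filtering results client-side, which is an assumption here). The `date_windows` helper is hypothetical:

```python
# Sketch of the time-window workaround: walk forward in fixed-size
# windows, passing each window start as `postDateLimit` with sort: new.
from datetime import date, timedelta

def date_windows(start: date, end: date, days: int = 7):
    """Yield (window_start, window_end) ISO-date pairs covering [start, end)."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield cur.isoformat(), nxt.isoformat()
        cur = nxt

for lo, hi in date_windows(date(2025, 1, 1), date(2025, 1, 22)):
    print(lo, hi)
# 2025-01-01 2025-01-08
# 2025-01-08 2025-01-15
# 2025-01-15 2025-01-22
```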

**Do proxies cost extra?**
Apify Residential proxies are included in your Apify subscription. The scraper uses them automatically when configured.

***

### Legal Disclaimer

*This Actor is designed for lawful data collection from publicly available Reddit content. Users are solely responsible for ensuring compliance with Reddit's Terms of Service, applicable laws, data protection regulations (GDPR, CCPA), and any other legal requirements in their jurisdiction. Do not use this tool to collect data in violation of Reddit's Terms of Service or for any unlawful purpose. The Actor developer assumes no liability for misuse.*

# Actor input Schema

## `subreddits` (type: `array`):

Use this field when the user provides subreddit names (with or without r/ prefix). Each subreddit fetches posts using the Sort and Time Filter settings. Leave empty if using Search Queries or Start URLs instead. Examples: 'programming', 'r/learnpython', 'MachineLearning'.

## `searchQueries` (type: `array`):

Use this field when the user describes a topic or keyword to search across all of Reddit. Each query runs a full Reddit search. Do NOT use this when the user provides a subreddit name — use Subreddits instead. Examples: 'best laptop 2025', 'AI news', 'side hustle ideas'.

## `startUrls` (type: `array`):

Use this field when the user provides a specific Reddit URL. Supports any Reddit URL: subreddit pages, post pages, search pages, or user profile pages. Examples: https://www.reddit.com/r/datascience/, https://www.reddit.com/r/programming/top/.

## `redditClientId` (type: `string`):

Optional Reddit app client\_id for OAuth. Needed when running on cloud servers (Apify cloud IPs are blocked by Reddit). Get one free in 2 min: go to reddit.com/prefs/apps → create app → choose "installed app" → copy the ID shown under the app name. No user login or secret required.

## `sort` (type: `string`):

How to sort posts. 'hot' = currently popular. 'new' = most recent. 'top' = highest score (use with Time Filter). 'rising' = gaining momentum. 'controversial' = most debated. 'relevance' = best match for search queries only.

## `time` (type: `string`):

Time range for 'top', 'controversial', and search results. Ignored for 'hot', 'new', and 'rising'. Options: hour, day, week, month, year, all.

## `maxResults` (type: `integer`):

Maximum number of posts to save in total across all sources. Each page returns up to 100 posts. Set to 0 for unlimited (up to Reddit's per-source caps). Default: 50.

## `includeComments` (type: `boolean`):

If enabled, scrapes the top comments for each post scraped. Significantly increases run time and data output. Each comment is saved as a separate record with data\_type: 'comment'.

## `maxCommentsPerPost` (type: `integer`):

Maximum number of comments to scrape per post (only when Include Comments is enabled). Comments are flattened from nested threads. Default: 20.

## `includeNSFW` (type: `boolean`):

Include posts marked as NSFW (Not Safe For Work / 18+). Disabled by default.

## `minScore` (type: `integer`):

Only include posts with at least this many upvotes. Leave empty for no minimum. Useful for filtering to only high-quality posts.

## `maxScore` (type: `integer`):

Only include posts with at most this many upvotes. Leave empty for no maximum.

## `minComments` (type: `integer`):

Only include posts with at least this many comments. Leave empty for no minimum.

## `flairFilter` (type: `string`):

Only include posts with this exact flair text. Case-sensitive. Example: 'Question', 'Discussion', 'Tutorial'. Leave empty to include all flairs.

## `authorFilter` (type: `string`):

Only include posts from this specific Reddit username (without u/). Leave empty to include all authors.

## `postDateLimit` (type: `string`):

Only include posts created after this date. Format: YYYY-MM-DD or ISO 8601 (e.g., 2025-01-01). Pagination stops automatically when posts older than this date are encountered.

## `proxyConfiguration` (type: `object`):

Proxy settings for Reddit scraping. Reddit blocks some datacenter IPs. Using Apify Residential proxies improves reliability significantly. Highly recommended for large-scale runs.

## Actor input object example

```json
{
  "subreddits": [
    "programming",
    "learnpython"
  ],
  "searchQueries": [],
  "startUrls": [],
  "sort": "hot",
  "time": "all",
  "maxResults": 50,
  "includeComments": false,
  "maxCommentsPerPost": 20,
  "includeNSFW": false,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}
```

# Actor output Schema

## `dataset` (type: `string`):

Dataset containing all scraped Reddit posts and comments. Each item includes title, author, subreddit, score, comment count, flair, body text, URLs, timestamps, and metadata flags.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "programming",
        "learnpython"
    ],
    "searchQueries": [],
    "startUrls": [],
    "sort": "hot",
    "time": "all",
    "maxResults": 50,
    "includeComments": false,
    "maxCommentsPerPost": 20,
    "includeNSFW": false,
    "proxyConfiguration": {
        "useApifyProxy": true,
        "apifyProxyGroups": [
            "RESIDENTIAL"
        ]
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("khadinakbar/reddit-posts-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "subreddits": [
        "programming",
        "learnpython",
    ],
    "searchQueries": [],
    "startUrls": [],
    "sort": "hot",
    "time": "all",
    "maxResults": 50,
    "includeComments": False,
    "maxCommentsPerPost": 20,
    "includeNSFW": False,
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
    },
}

# Run the Actor and wait for it to finish
run = client.actor("khadinakbar/reddit-posts-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "programming",
    "learnpython"
  ],
  "searchQueries": [],
  "startUrls": [],
  "sort": "hot",
  "time": "all",
  "maxResults": 50,
  "includeComments": false,
  "maxCommentsPerPost": 20,
  "includeNSFW": false,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}' |
apify call khadinakbar/reddit-posts-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=khadinakbar/reddit-posts-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Posts Scraper — (with Comments & Replies)",
        "description": "Extract Reddit posts, comments & subreddit data with no login required. Returns title, score, author, flair, body text, and dates. JSON API-powered for 99%+ reliability.",
        "version": "1.0",
        "x-build-id": "UPRqd96ZMU4LkX99z"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/khadinakbar~reddit-posts-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-khadinakbar-reddit-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/khadinakbar~reddit-posts-scraper/runs": {
            "post": {
                "operationId": "runs-sync-khadinakbar-reddit-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/khadinakbar~reddit-posts-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-khadinakbar-reddit-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "subreddits": {
                        "title": "Subreddits",
                        "type": "array",
                        "description": "Use this field when the user provides subreddit names (with or without r/ prefix). Each subreddit fetches posts using the Sort and Time Filter settings. Leave empty if using Search Queries or Start URLs instead. Examples: 'programming', 'r/learnpython', 'MachineLearning'.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchQueries": {
                        "title": "Search Queries",
                        "type": "array",
                        "description": "Use this field when the user describes a topic or keyword to search across all of Reddit. Each query runs a full Reddit search. Do NOT use this when the user provides a subreddit name — use Subreddits instead. Examples: 'best laptop 2025', 'AI news', 'side hustle ideas'.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "startUrls": {
                        "title": "Start URLs",
                        "type": "array",
                        "description": "Use this field when the user provides a specific Reddit URL. Supports any Reddit URL: subreddit pages, post pages, search pages, or user profile pages. Examples: https://www.reddit.com/r/datascience/, https://www.reddit.com/r/programming/top/.",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "redditClientId": {
                        "title": "Reddit Client ID (for cloud use)",
                        "type": "string",
                        "description": "Optional Reddit app client_id for OAuth. Needed when running on cloud servers (Apify cloud IPs are blocked by Reddit). Get one free in 2 min: go to reddit.com/prefs/apps → create app → choose \"installed app\" → copy the ID shown under the app name. No user login or secret required."
                    },
                    "sort": {
                        "title": "Sort By",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial",
                            "relevance"
                        ],
                        "type": "string",
                        "description": "How to sort posts. 'hot' = currently popular. 'new' = most recent. 'top' = highest score (use with Time Filter). 'rising' = gaining momentum. 'controversial' = most debated. 'relevance' = best match for search queries only.",
                        "default": "hot"
                    },
                    "time": {
                        "title": "Time Filter",
                        "enum": [
                            "all",
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year"
                        ],
                        "type": "string",
                        "description": "Time range for 'top', 'controversial', and search results. Ignored for 'hot', 'new', and 'rising'. Options: hour, day, week, month, year, all.",
                        "default": "all"
                    },
                    "maxResults": {
                        "title": "Max Posts",
                        "minimum": 1,
                        "maximum": 1000000,
                        "type": "integer",
                        "description": "Maximum number of posts to save in total across all sources. Each page returns up to 100 posts. Set to 0 for unlimited (up to Reddit's per-source caps). Default: 50.",
                        "default": 50
                    },
                    "includeComments": {
                        "title": "Include Comments",
                        "type": "boolean",
                        "description": "If enabled, scrapes the top comments for each post scraped. Significantly increases run time and data output. Each comment is saved as a separate record with data_type: 'comment'.",
                        "default": false
                    },
                    "maxCommentsPerPost": {
                        "title": "Max Comments Per Post",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Maximum number of comments to scrape per post (only when Include Comments is enabled). Comments are flattened from nested threads. Default: 20.",
                        "default": 20
                    },
                    "includeNSFW": {
                        "title": "Include NSFW Content",
                        "type": "boolean",
                        "description": "Include posts marked as NSFW (Not Safe For Work / 18+). Disabled by default.",
                        "default": false
                    },
                    "minScore": {
                        "title": "Minimum Score (Upvotes)",
                        "type": "integer",
                        "description": "Only include posts with at least this many upvotes. Leave empty for no minimum. Useful for filtering to only high-quality posts."
                    },
                    "maxScore": {
                        "title": "Maximum Score (Upvotes)",
                        "type": "integer",
                        "description": "Only include posts with at most this many upvotes. Leave empty for no maximum."
                    },
                    "minComments": {
                        "title": "Minimum Comments",
                        "type": "integer",
                        "description": "Only include posts with at least this many comments. Leave empty for no minimum."
                    },
                    "flairFilter": {
                        "title": "Flair Filter",
                        "type": "string",
                        "description": "Only include posts with this exact flair text. Case-sensitive. Example: 'Question', 'Discussion', 'Tutorial'. Leave empty to include all flairs."
                    },
                    "authorFilter": {
                        "title": "Author Filter",
                        "type": "string",
                        "description": "Only include posts from this specific Reddit username (without u/). Leave empty to include all authors."
                    },
                    "postDateLimit": {
                        "title": "Post Date Limit (exclude posts older than)",
                        "type": "string",
                        "description": "Only include posts created after this date. Format: YYYY-MM-DD or ISO 8601 (e.g., 2025-01-01). Pagination stops automatically when posts older than this date are encountered."
                    },
                    "proxyConfiguration": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Proxy settings for Reddit scraping. Reddit blocks some datacenter IPs. Using Apify Residential proxies improves reliability significantly. Highly recommended for large-scale runs."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
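The filter fields documented in the input schema combine freely. Here is a minimal sketch of passing them to the Actor with the official Python client. The filter values are illustrative, not recommendations, and the Actor also needs source fields (e.g. which subreddit to scrape) that are defined earlier in its input schema and not shown in this excerpt:

```python
# Sketch only: assembles the filter fields from the input schema above.
# Merge these with the Actor's required source fields before running.

def build_filters() -> dict:
    """Illustrative filter values, not recommendations."""
    return {
        "minScore": 50,                 # at least 50 upvotes
        "minComments": 5,               # at least 5 comments
        "flairFilter": "Question",      # exact, case-sensitive flair match
        "postDateLimit": "2025-01-01",  # pagination stops past this date
        "proxyConfiguration": {         # residential proxies, per the note above
            "useApifyProxy": True,
            "apifyProxyGroups": ["RESIDENTIAL"],
        },
    }

def run_scraper(token: str, run_input: dict) -> list[dict]:
    """Calls the Actor and returns its dataset items.
    Requires `pip install apify-client`."""
    from apify_client import ApifyClient

    client = ApifyClient(token)
    run = client.actor("khadinakbar/reddit-posts-scraper").call(run_input=run_input)
    # Scraped posts land in the run's default dataset.
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Fields left out of the input (such as `maxScore` or `authorFilter`) simply apply no filter, matching the "Leave empty" behavior described in the schema.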

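Of the many fields in `runsResponseSchema`, an integration typically only reads back a handful: the run `status`, the `defaultDatasetId` holding the results, and `usageTotalUsd` for cost tracking. A small sketch of extracting them from a run-response dict shaped like the schema above (the sample values are illustrative):

```python
# Sketch: pick out the fields most integrations need from a run response
# shaped like the runsResponseSchema above. Field names match the schema.

def summarize_run(response: dict) -> dict:
    run = response.get("data", {})
    return {
        "id": run.get("id"),
        "status": run.get("status"),              # e.g. "READY", "SUCCEEDED"
        "datasetId": run.get("defaultDatasetId"), # where scraped posts land
        "costUsd": run.get("usageTotalUsd", 0.0), # total charge for the run
        "computeUnits": run.get("usage", {}).get("ACTOR_COMPUTE_UNITS", 0),
    }

# Illustrative response, trimmed to the fields summarize_run reads.
sample_response = {
    "data": {
        "id": "abc123",
        "status": "SUCCEEDED",
        "defaultDatasetId": "ds456",
        "usageTotalUsd": 0.00005,
        "usage": {"ACTOR_COMPUTE_UNITS": 0},
    }
}

summary = summarize_run(sample_response)
```

Using `.get()` with defaults keeps the helper robust against partial responses, since not every field in the schema is present on every run state.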