# Reddit Intelligence Scraper (Pay per Event) (`eiv/reddit-intelligence-scraper`) Actor

Scrape Reddit posts, full comment trees, user profiles, and search results. Features subreddit monitoring with webhook alerts, batch comparison across multiple subreddits, and AI-native markdown output ready for LLM pipelines and vector databases.

- **URL**: https://apify.com/eiv/reddit-intelligence-scraper.md
- **Developed by:** [Eimantas V](https://apify.com/eiv) (community)
- **Categories:** AI, Developer tools, Social media
- **Stats:** 3 total users, 2 monthly users, 95.5% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $3.00 / 1,000 results

This Actor is paid per event: you are not charged for Apify platform usage, only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Intelligence Scraper

Extract posts, full comment trees, user profiles, search results, and trending topics from Reddit — with **AI-native structured output** designed to drop directly into LLM pipelines, vector databases, and RAG systems without preprocessing.

### 🚀 What Can This Reddit Scraper Extract?

| Data Type | Fields Extracted |
|-----------|-----------------|
| **Posts** | Title, body (markdown + plain text), score, upvote ratio, awards, flair, author, timestamps, crosspost data |
| **Comments** | Full nested tree (all depths), per-comment score, author, edited flag, reply count |
| **Users** | Karma breakdown, account age, post/comment history, profile bio |
| **Search Results** | Full-text Reddit search with subreddit filtering, sorting, and time windows |
| **Subreddit Metadata** | Subscriber count, active users, description, creation date, icons |
| **Batch Comparison** | Side-by-side stats for 10+ subreddits in a single run |

### ✨ Key Features

- **🔄 Subreddit monitoring mode** — Poll any subreddit for new posts matching keyword filters and deliver alerts via webhook in real time
- **🌲 Full comment tree traversal** — Not just top-level comments. Fetches deeply nested replies via Reddit's `morechildren` API, up to configurable depth
- **🤖 AI-native output** — Every result includes a `_markdown_document` field: a clean, structured markdown document ready for LLM context windows or vector embedding
- **📊 Batch subreddit comparison** — Pull top posts from up to 20 subreddits in one run with aggregated stats — ideal for market research and competitive analysis
- **⚡ Reliable session rotation** — Rotates User-Agents, respects `X-Ratelimit-*` headers, and uses exponential backoff — the #1 failure mode for Reddit scrapers, solved
- **🔍 Advanced filtering** — Filter by flair, keyword, score threshold, date range, NSFW flag, and sort order (hot/new/top/rising)
- **📋 Schema-versioned output** — Every item carries `_schema: "reddit-intelligence/v1"` so your pipeline always knows what it's getting

---

### 📖 How to Use the Reddit Intelligence Scraper

#### Step 1 — Choose a mode

| Mode | What it does |
|------|-------------|
| `subreddit` | Scrape posts from one or more subreddits |
| `post` | Scrape a specific post URL with all comments |
| `user` | Scrape a user's profile, posts, and comment history |
| `search` | Full-text Reddit search with filters |
| `batch` | Compare top posts across multiple subreddits |
| `monitor` | Watch subreddits for new posts and deliver webhook alerts |

#### Step 2 — Configure your run

**Scrape the top posts from r/MachineLearning this week:**
```json
{
  "mode": "subreddit",
  "subreddits": ["MachineLearning"],
  "sortBy": "top",
  "timeFilter": "week",
  "maxPostsPerSubreddit": 50,
  "includeComments": true,
  "maxCommentsPerPost": 100,
  "outputFormat": "both"
}
```

**Compare 5 subreddits for market research:**

```json
{
  "mode": "batch",
  "subreddits": ["entrepreneur", "startups", "SaaS", "indiehackers", "smallbusiness"],
  "sortBy": "top",
  "timeFilter": "month",
  "maxPostsPerSubreddit": 10
}
```

**Monitor r/ArtificialIntelligence for mentions of "GPT" and alert via webhook:**

```json
{
  "mode": "monitor",
  "subreddits": ["ArtificialIntelligence"],
  "keywordFilter": ["GPT", "Claude", "Gemini", "LLM"],
  "monitoringInterval": 5,
  "webhookUrl": "https://your-server.com/webhooks/reddit"
}
```

**Scrape a specific post with full comment tree:**

```json
{
  "mode": "post",
  "postUrls": ["https://www.reddit.com/r/MachineLearning/comments/abc123/example_post/"],
  "maxCommentsPerPost": 500,
  "commentDepth": 10
}
```

#### Step 3 — Use the output

Every post result includes a ready-to-use markdown document in the `_markdown_document` field:

```
## Why GPT-4 is changing enterprise software

**r/MachineLearning** | Score: **4,231** (96% upvoted) | Comments: **312**
Author: u/ml_researcher | Posted: 2024-03-15T14:22:00Z

### Post Content

The shift from rule-based to generative AI...

### Top Comments

#### u/ai_engineer (Score: 847)
This is exactly what we're seeing in production...

> ##### u/skeptic99 (Score: 234)
> Worth noting the cost implications here...
```

Paste this directly into your LLM prompt or chunk it for RAG.

***

### 💰 How Much Does It Cost to Scrape Reddit?

Reddit Intelligence Scraper is priced per result (pay-per-event):

| Task | Approximate Cost |
|------|-----------------|
| 1,000 posts (metadata only) | ~$3.00 |
| 1,000 posts with 100 comments each | ~$3.00 |
| User profile (1 user, 25 posts) | ~$0.12 |
| Batch comparison (10 subs × 10 posts) | ~$0.30 |
| Monitor run (24h, low-traffic sub) | ~$1.50–6.00 |

**Pricing:** $3.00 per 1,000 results. Each post, comment thread, user profile, or search result page counts as one result.

**Tip:** Disable `includeComments` and set `outputFormat: "json"` for faster runs when you only need post metadata.

***

### 📤 Output Format

#### Post object (JSON)

```json
{
  "_schema": "reddit-intelligence/v1",
  "_scraped_at": "2024-03-15T14:30:00.000Z",
  "type": "post",
  "id": "abc123",
  "url": "https://www.reddit.com/r/MachineLearning/comments/abc123/...",
  "subreddit": "MachineLearning",
  "title": "Why GPT-4 is changing enterprise software",
  "body_markdown": "The shift from rule-based to generative AI...",
  "body_text": "The shift from rule-based to generative AI...",
  "score": 4231,
  "upvote_ratio": 0.96,
  "num_comments": 312,
  "total_awards_received": 7,
  "flair_text": "Discussion",
  "author": "ml_researcher",
  "created_utc": "2024-03-15T14:22:00.000Z",
  "comments": [...],
  "subreddit_meta": {
    "subscribers": 2800000,
    "active_user_count": 4200,
    ...
  },
  "_markdown_document": "## Why GPT-4 is changing..."
}
```
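If you consume the dataset in Python, a minimal filtering sketch might look like this. Field names (`_schema`, `type`, `score`) follow the post object above; `select_high_signal_posts` is an illustrative helper, not part of the Actor:

```python
# Illustrative sketch: keep only schema-v1 post objects whose score clears
# a threshold before handing them to a downstream pipeline.

def select_high_signal_posts(items: list[dict], min_score: int = 100) -> list[dict]:
    """Filter dataset items down to v1 posts at or above min_score."""
    return [
        item for item in items
        if item.get("_schema") == "reddit-intelligence/v1"
        and item.get("type") == "post"
        and item.get("score", 0) >= min_score
    ]
```

Because every item carries `_schema`, a check like this also protects your pipeline against future schema versions.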

#### Webhook payload (monitor mode)

```json
{
  "event": "keyword_match",
  "timestamp": "2024-03-15T14:35:00.000Z",
  "subreddit": "ArtificialIntelligence",
  "matched_keywords": ["GPT", "LLM"],
  "post": {
    "id": "xyz789",
    "title": "New GPT-4 benchmark results are wild",
    "url": "https://www.reddit.com/r/...",
    "score": 142,
    "body_preview": "Just ran the full MMLU suite..."
  }
}
```
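A webhook receiver only needs to parse this payload. Here is a minimal sketch in Python: the field names come from the example above, while `handle_reddit_alert` is an illustrative helper you would wire into your framework's POST handler, not part of the Actor:

```python
import json

def handle_reddit_alert(raw_body: bytes) -> str:
    """Turn a keyword_match webhook body into a one-line alert string."""
    payload = json.loads(raw_body)
    if payload.get("event") != "keyword_match":
        return ""  # ignore any other event types
    post = payload["post"]
    keywords = ", ".join(payload["matched_keywords"])
    return (
        f"[r/{payload['subreddit']}] {post['title']} "
        f"(score {post['score']}; matched: {keywords}) {post['url']}"
    )
```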

***

### 🤔 Frequently Asked Questions

#### Is scraping Reddit legal?

Reddit's public data is accessible without authentication. This Actor only scrapes publicly available content: the same data accessible in a browser without logging in. Always review Reddit's [Terms of Service](https://www.redditinc.com/policies/user-agreement) and ensure your use complies with applicable laws. This tool is intended for research, analytics, and AI training use cases.

#### Why is the Actor not returning all 1,000 posts I requested?

Reddit's listing API caps each listing at roughly 1,000 items and often returns fewer than requested; this is a Reddit limitation, not a bug in the Actor. For the best historical coverage, use `sortBy: "top"` with longer time windows (`year`, `all`), or set `sortBy: "new"` and increase `maxPostsPerSubreddit`.

#### What's the difference between `outputFormat: "json"`, `"markdown"`, and `"both"`?

- `json` — returns the full structured JSON object, ideal for data pipelines and databases
- `markdown` — returns only the `_markdown_document` field (the AI-ready version), minimal storage
- `both` — returns full JSON **and** the markdown document in every result

#### Can I use this to feed Reddit data into a vector database?

Yes — this is a primary use case. Use `outputFormat: "markdown"`, split on `### Top Comments` to get post and comment chunks, and embed each chunk separately.
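That split can be sketched in a few lines of Python. The `### Top Comments` marker comes from the output template shown earlier; `chunk_markdown_document` is an illustrative helper, not part of the Actor:

```python
def chunk_markdown_document(doc: str) -> list[str]:
    """Split a _markdown_document into a post chunk and a comments chunk."""
    marker = "### Top Comments"
    if marker not in doc:
        return [doc.strip()]  # post had no comments section
    post_part, comments_part = doc.split(marker, 1)
    return [post_part.strip(), (marker + comments_part).strip()]
```

Embed each returned chunk separately; the post chunk keeps the title and score metadata, so it remains self-describing after retrieval.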

#### Does it handle private or restricted subreddits?

No. This Actor only accesses public Reddit content. Private subreddits require OAuth authentication with approved account credentials.

#### How does the monitoring mode work exactly?

On first run, the Actor seeds its state with the current latest posts (no webhook fires). On subsequent polling cycles (default: every 5 minutes), any new post that matches your filters triggers a dataset push and optionally a webhook. State is persisted in the Apify Key-Value Store so it survives between runs.

#### Why use a proxy?

Without a proxy, repeated scraping from a single IP can trigger Reddit's rate limiting (HTTP 429). Apify's residential proxy pool rotates IPs automatically, making your scraper much more reliable at scale.
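For illustration, the backoff-and-retry pattern described above can be sketched generically in Python. `RateLimitError` and the `fetch` callable are placeholders standing in for whatever HTTP layer you use; this is not the Actor's internal code:

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder: raise this from fetch() on an HTTP 429 response."""

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Call fetch(), retrying on rate limits with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            # base_delay, 2x, 4x, ... plus up to base_delay of random jitter
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
    raise RuntimeError("still rate-limited after retries")
```

Combined with rotating proxies, this keeps transient 429s from failing a whole run.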

***

### 🔗 Related Actors

- **Web Scraper** — General-purpose web scraping
- **Twitter Scraper** — Social media monitoring on X/Twitter
- **YouTube Scraper** — Video and comment data from YouTube

***

### 📬 Support & Feedback

Found a bug or have a feature request? Open an issue or contact us through the Apify platform. We monitor this Actor actively and publish updates regularly.

# Actor input Schema

## `mode` (type: `string`):

Select what to scrape: subreddit posts, a specific post, user profiles, search results, batch comparison across subreddits, or live monitoring.

## `subreddits` (type: `array`):

List of subreddit names to scrape (without r/ prefix). Used for subreddit, batch, and monitor modes.

## `postUrls` (type: `array`):

Full Reddit post URLs to scrape (including all comments). Used in post mode.

## `usernames` (type: `array`):

Reddit usernames to scrape (without u/ prefix). Used in user mode.

## `searchQuery` (type: `string`):

Search query string. Used in search mode.

## `searchSubreddits` (type: `array`):

Restrict search to specific subreddits. Leave empty to search all of Reddit.

## `sortBy` (type: `string`):

How to sort posts or search results.

## `timeFilter` (type: `string`):

Time window for top/search sort. Only applies when Sort Order is 'top' or 'relevance'.

## `maxPostsPerSubreddit` (type: `integer`):

Maximum number of posts to fetch per subreddit. Higher values increase cost.

## `includeComments` (type: `boolean`):

Fetch and include full comment trees for each post. Disable for faster, cheaper runs when you only need post metadata.

## `maxCommentsPerPost` (type: `integer`):

Maximum number of comments to fetch per post. Includes replies at all depths.

## `commentDepth` (type: `integer`):

Maximum depth for nested comment replies (1 = top-level only, 10 = full tree).

## `flairFilter` (type: `string`):

Only include posts matching this flair text (case-insensitive, partial match).

## `keywordFilter` (type: `array`):

Only include posts/comments containing at least one of these keywords (case-insensitive).

## `minScore` (type: `integer`):

Skip posts with score below this threshold. Useful for filtering out low-quality content.

## `dateFrom` (type: `string`):

Only include posts created after this date/time (ISO 8601 format, e.g. 2024-01-01T00:00:00Z). Note: Reddit's API doesn't natively support date filtering — posts will be fetched and filtered client-side.

## `dateTo` (type: `string`):

Only include posts created before this date/time (ISO 8601 format).

## `includeNSFW` (type: `boolean`):

Include posts marked as NSFW (Not Safe For Work).

## `includeSubredditMeta` (type: `boolean`):

Attach subreddit metadata (subscriber count, description, etc.) to each result.

## `outputFormat` (type: `string`):

json: structured JSON only. markdown: AI-ready markdown document. both: include both in every result.

## `webhookUrl` (type: `string`):

POST endpoint to receive new matching posts in monitor mode. Required for monitor mode.

## `monitoringInterval` (type: `integer`):

How often to poll Reddit for new posts in monitor mode. Minimum 2 minutes.

## `proxyConfiguration` (type: `object`):

Apify proxy configuration. Strongly recommended to avoid rate limiting.

## `maxConcurrency` (type: `integer`):

Maximum number of parallel requests. Keep low (1-3) to respect Reddit rate limits.

## `debugMode` (type: `boolean`):

Enable verbose logging for troubleshooting.

## Actor input object example

```json
{
  "mode": "subreddit",
  "subreddits": [
    "wallstreetbets",
    "technology",
    "MachineLearning"
  ],
  "postUrls": [
    "https://www.reddit.com/r/technology/comments/abc123/example_post/"
  ],
  "usernames": [
    "spez",
    "gallowboob"
  ],
  "searchQuery": "GPT-4 use cases",
  "searchSubreddits": [
    "MachineLearning",
    "artificial"
  ],
  "sortBy": "hot",
  "timeFilter": "week",
  "maxPostsPerSubreddit": 25,
  "includeComments": true,
  "maxCommentsPerPost": 100,
  "commentDepth": 5,
  "flairFilter": "Discussion",
  "keywordFilter": [
    "AI",
    "GPT",
    "machine learning"
  ],
  "minScore": 0,
  "dateFrom": "2024-01-01T00:00:00Z",
  "dateTo": "2024-12-31T23:59:59Z",
  "includeNSFW": false,
  "includeSubredditMeta": true,
  "outputFormat": "both",
  "webhookUrl": "https://hooks.example.com/reddit-alerts",
  "monitoringInterval": 5,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  },
  "maxConcurrency": 2,
  "debugMode": false
}
```

# Actor output Schema

## `results` (type: `string`):

All scraped items in the default dataset. Each item carries a `type` field (`post`, `user`, `search`, `batch`, or `monitor_event`) and a `_markdown_document` field with AI-ready formatted output.

## `datasetOverview` (type: `string`):

View and filter results in the Apify dataset browser.

## `keyValueStore` (type: `string`):

Stores monitor mode state (seen post IDs per subreddit) and the Actor input. Used to persist state across monitor polling cycles.

## `liveStatus` (type: `string`):

Real-time progress endpoint. Available during the run at the Actor container URL.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "technology"
    ],
    "maxPostsPerSubreddit": 25,
    "maxCommentsPerPost": 100,
    "commentDepth": 5,
    "proxyConfiguration": {
        "useApifyProxy": true,
        "apifyProxyGroups": [
            "RESIDENTIAL"
        ]
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("eiv/reddit-intelligence-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "subreddits": ["technology"],
    "maxPostsPerSubreddit": 25,
    "maxCommentsPerPost": 100,
    "commentDepth": 5,
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
    },
}

# Run the Actor and wait for it to finish
run = client.actor("eiv/reddit-intelligence-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "technology"
  ],
  "maxPostsPerSubreddit": 25,
  "maxCommentsPerPost": 100,
  "commentDepth": 5,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}' |
apify call eiv/reddit-intelligence-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=eiv/reddit-intelligence-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Intelligence Scraper (Pay per Event)",
        "description": "Scrape Reddit posts, full comment trees, user profiles, and search results. Features subreddit monitoring with webhook alerts, batch comparison across multiple subreddits, and AI-native markdown output ready for LLM pipelines and vector databases.",
        "version": "0.0",
        "x-build-id": "GDFSR2pDyt4l3W0lT"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/eiv~reddit-intelligence-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-eiv-reddit-intelligence-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/eiv~reddit-intelligence-scraper/runs": {
            "post": {
                "operationId": "runs-sync-eiv-reddit-intelligence-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/eiv~reddit-intelligence-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-eiv-reddit-intelligence-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "Scraping Mode",
                        "enum": [
                            "subreddit",
                            "post",
                            "user",
                            "search",
                            "batch",
                            "monitor"
                        ],
                        "type": "string",
                        "description": "Select what to scrape: subreddit posts, a specific post, user profiles, search results, batch comparison across subreddits, or live monitoring.",
                        "default": "subreddit"
                    },
                    "subreddits": {
                        "title": "Subreddits",
                        "type": "array",
                        "description": "List of subreddit names to scrape (without r/ prefix). Used for subreddit, batch, and monitor modes.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "postUrls": {
                        "title": "Post URLs",
                        "type": "array",
                        "description": "Full Reddit post URLs to scrape (including all comments). Used in post mode.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "usernames": {
                        "title": "Usernames",
                        "type": "array",
                        "description": "Reddit usernames to scrape (without u/ prefix). Used in user mode.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchQuery": {
                        "title": "Search Query",
                        "type": "string",
                        "description": "Search query string. Used in search mode."
                    },
                    "searchSubreddits": {
                        "title": "Search within Subreddits",
                        "type": "array",
                        "description": "Restrict search to specific subreddits. Leave empty to search all of Reddit.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "sortBy": {
                        "title": "Sort Order",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "relevance",
                            "comments"
                        ],
                        "type": "string",
                        "description": "How to sort posts or search results.",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "Time Filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time window for top/search sort. Only applies when Sort Order is 'top' or 'relevance'.",
                        "default": "week"
                    },
                    "maxPostsPerSubreddit": {
                        "title": "Max Posts Per Subreddit",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Maximum number of posts to fetch per subreddit. Higher values increase cost.",
                        "default": 25
                    },
                    "includeComments": {
                        "title": "Include Comments",
                        "type": "boolean",
                        "description": "Fetch and include full comment trees for each post. Disable for faster, cheaper runs when you only need post metadata.",
                        "default": true
                    },
                    "maxCommentsPerPost": {
                        "title": "Max Comments Per Post",
                        "minimum": 0,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Maximum number of comments to fetch per post. Includes replies at all depths.",
                        "default": 100
                    },
                    "commentDepth": {
                        "title": "Max Comment Depth",
                        "minimum": 1,
                        "maximum": 10,
                        "type": "integer",
                        "description": "Maximum depth for nested comment replies (1 = top-level only, 10 = full tree).",
                        "default": 5
                    },
                    "flairFilter": {
                        "title": "Flair Filter",
                        "type": "string",
                        "description": "Only include posts matching this flair text (case-insensitive, partial match)."
                    },
                    "keywordFilter": {
                        "title": "Keyword Filter",
                        "type": "array",
                        "description": "Only include posts/comments containing at least one of these keywords (case-insensitive).",
                        "items": {
                            "type": "string"
                        }
                    },
                    "minScore": {
                        "title": "Minimum Score",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Skip posts with score below this threshold. Useful for filtering out low-quality content.",
                        "default": 0
                    },
                    "dateFrom": {
                        "title": "Date From (UTC)",
                        "type": "string",
                        "description": "Only include posts created after this date/time (ISO 8601 format, e.g. 2024-01-01T00:00:00Z). Note: Reddit's API doesn't natively support date filtering — posts will be fetched and filtered client-side."
                    },
                    "dateTo": {
                        "title": "Date To (UTC)",
                        "type": "string",
                        "description": "Only include posts created before this date/time (ISO 8601 format)."
                    },
                    "includeNSFW": {
                        "title": "Include NSFW Content",
                        "type": "boolean",
                        "description": "Include posts marked as NSFW (Not Safe For Work).",
                        "default": false
                    },
                    "includeSubredditMeta": {
                        "title": "Include Subreddit Metadata",
                        "type": "boolean",
                        "description": "Attach subreddit metadata (subscriber count, description, etc.) to each result.",
                        "default": true
                    },
                    "outputFormat": {
                        "title": "Output Format",
                        "enum": [
                            "json",
                            "markdown",
                            "both"
                        ],
                        "type": "string",
                        "description": "json: structured JSON only. markdown: AI-ready markdown document. both: include both in every result.",
                        "default": "both"
                    },
                    "webhookUrl": {
                        "title": "Webhook URL",
                        "type": "string",
                        "description": "POST endpoint to receive new matching posts in monitor mode. Required for monitor mode."
                    },
                    "monitoringInterval": {
                        "title": "Monitoring Interval (minutes)",
                        "minimum": 2,
                        "maximum": 60,
                        "type": "integer",
                        "description": "How often to poll Reddit for new posts in monitor mode. Minimum 2 minutes.",
                        "default": 5
                    },
                    "proxyConfiguration": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Apify proxy configuration. Strongly recommended to avoid rate limiting."
                    },
                    "maxConcurrency": {
                        "title": "Max Concurrency",
                        "minimum": 1,
                        "maximum": 5,
                        "type": "integer",
                        "description": "Maximum number of parallel requests. Keep low (1-3) to respect Reddit rate limits.",
                        "default": 2
                    },
                    "debugMode": {
                        "title": "Debug Mode",
                        "type": "boolean",
                        "description": "Enable verbose logging for troubleshooting.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
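Putting the two schemas together, a run boils down to: build an input object that matches the input schema, start the Actor, and read results from the dataset referenced by the run object (`runsResponseSchema`'s `data.defaultDatasetId`). Below is a minimal sketch using the official Apify Python client (`pip install apify-client`). It only sets fields visible in the schema above; the fields that select *what* to scrape (subreddits, queries, etc.) fall outside this fragment and are omitted, and the shape of individual dataset items is an assumption:

```python
import os

# Input assembled from the schema above. Fields selecting the scrape target
# are not shown in this schema fragment and are deliberately left out here.
run_input = {
    "dateFrom": "2024-01-01T00:00:00Z",  # ISO 8601; filtered client-side per the schema note
    "includeNSFW": False,
    "includeSubredditMeta": True,
    "outputFormat": "both",              # structured JSON plus AI-ready markdown
    "maxConcurrency": 2,                 # keep low (1-3) to respect Reddit rate limits
    "proxyConfiguration": {"useApifyProxy": True},
}


def run_scraper() -> None:
    # Deferred import so the input dict above can be reused without the client installed.
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    # .call() starts the run and blocks until it finishes; the returned dict
    # mirrors the "data" object in runsResponseSchema (status, usage, store IDs).
    run = client.actor("eiv/reddit-intelligence-scraper").call(run_input=run_input)
    print(run["status"], run["usageTotalUsd"])

    # Scraped posts land in the run's default dataset; the item fields shown
    # here ("title") are an assumption about the Actor's output.
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item.get("title"))


if __name__ == "__main__" and os.getenv("APIFY_TOKEN"):
    run_scraper()
```

Set the `APIFY_TOKEN` environment variable to your Apify API token before running; with `outputFormat` at its default of `both`, each dataset item should carry both the structured JSON and the markdown rendering.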

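For monitor mode, the schema requires a `webhookUrl` that receives new matching posts via POST at each `monitoringInterval` poll. A minimal receiver can be sketched with Python's standard library as below; note the exact payload shape is an assumption, since the schema only says matching posts are POSTed to the endpoint:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class MonitorWebhook(BaseHTTPRequestHandler):
    """Accepts the POSTs that monitor mode sends to `webhookUrl`."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        # Payload shape is assumed to be JSON (a post object or a list of them).
        payload = json.loads(self.rfile.read(length) or b"{}")
        count = len(payload) if isinstance(payload, list) else 1
        print(f"received {count} item(s)")
        self.send_response(200)  # acknowledge receipt
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request stderr logging


def serve(port: int = 8000) -> None:
    """Block forever, handling webhook deliveries on the given port."""
    HTTPServer(("0.0.0.0", port), MonitorWebhook).serve_forever()
```

In production you would point `webhookUrl` at a publicly reachable HTTPS endpoint (for example, behind a reverse proxy) rather than a bare `http.server`, and validate the payload before acting on it.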