# Reddit Scraper - Posts, Comments, Search, AI Answers (`openclawai/reddit-scraper`) Actor

The only Reddit scraper with AI Answers + TLS fingerprinting. Scrape posts, comments, search results & subreddits. No API key, no login. AI-ready JSON output. 6 actions in 1 Actor. Browser-grade anti-detection. Parallel comment fetching. $5/1k posts.

- **URL**: https://apify.com/openclawai/reddit-scraper.md
- **Developed by:** [Pika Choo](https://apify.com/openclawai) (community)
- **Categories:** Lead generation, Agents, SEO tools
- **Stats:** 21 total users, 4 monthly users, 86.9% runs succeeded, 5 bookmarks
- **User rating**: 4.88 out of 5 stars

## Pricing

From $2.00 / 1,000 post scrapes

This Actor is paid per event and usage. You are charged a fixed price for specific events, plus Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Note that "Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper — Posts, Comments, Search & AI Answers (No API Key)

**The most reliable Reddit API alternative on Apify.** Scrape any Reddit post, comment, subreddit, or search result without the official Reddit API. No developer account. No login. No rate limits. Built for AI training datasets, brand monitoring, market research, SEO, lead generation, OSINT, and academic research.

If you've been hit by Reddit API pricing, banned PRAW, or 429 rate-limit errors — this is the drop-in replacement. AI-ready JSON output. Bulk scraping at scale. Browser-grade anti-detection.

---

### Why use this Reddit scraper

| Pain point | Solution |
|---|---|
| Reddit killed the free API in 2023 | **No Reddit API needed** — direct scrape, no developer account, no auth |
| PRAW / Snoowrap broken or rate-limited | **PRAW alternative** — TLS fingerprinting mimics a real browser |
| Other scrapers get blocked | **8 rotating user-agents + auto-fallback** to old.reddit.com + PullPush |
| Comments missing or truncated | **Full nested comment trees** with parallel fetching (up to 20 threads) |
| Can't scale past a few hundred posts | **Bulk Reddit scraping** — millions of posts/day with residential proxies |
| No way to query Reddit's AI engine | **Reddit AI Answers** built in — exclusive on Apify |
| Subreddit discovery is manual | **Keyword-based subreddit finder** with mention counts + sample posts |

---

### What you can do with it

#### 6 actions in 1 Actor

1. **Scrape a subreddit** — posts, metadata, media URLs, optional full comment trees
2. **Search Reddit for keywords** — global search or scoped to a subreddit
3. **Search comments** — find every comment matching your keyword
4. **Find subreddits by topic** — discover where your audience hangs out
5. **Fetch any single post** by URL — for dataset enrichment or verification
6. **Reddit AI Answers** — query Reddit's built-in AI engine for synthesized answers

---

### Use cases

**Reddit Data for AI Training** — Bulk export Reddit discussions for fine-tuning LLMs, building classifiers, RAG pipelines, sentiment models. Structured JSON output plugs straight into your data pipeline.
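
As a minimal sketch, flattening the Actor's dataset items into JSONL for a fine-tuning or RAG pipeline might look like this (field names follow the sample output in this README; the record shape, helper name, and output path are assumptions, not part of the Actor):

```python
# Sketch: flatten scraped Reddit posts into a JSONL file for LLM
# fine-tuning or RAG ingestion. "title", "body", and
# "subreddit_name" match this Actor's sample output; the output
# record shape is an arbitrary choice for illustration.
import json

def to_jsonl(items: list[dict], path: str) -> int:
    """Write one {"title", "body", "subreddit"} record per line; return the count."""
    n = 0
    with open(path, "w", encoding="utf-8") as f:
        for it in items:
            rec = {
                "title": it.get("title", ""),
                "body": it.get("body", ""),
                "subreddit": it.get("subreddit_name", ""),
            }
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
            n += 1
    return n
```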

**Brand Monitoring on Reddit** — Track every mention of your brand, product, or competitor across all subreddits in real time. Daily/weekly sweeps with delta detection.
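
A simple delta-detection pass between sweeps might look like the sketch below. The local state file and the `post_id` field are assumptions (the field matches this Actor's sample output; persisting seen IDs is up to you):

```python
# Minimal delta-detection sketch: compare a new sweep against
# previously seen post IDs so only fresh mentions get processed.
# STATE_FILE is a hypothetical local file; "post_id" follows the
# Actor's sample output.
import json
from pathlib import Path

STATE_FILE = Path("seen_post_ids.json")  # hypothetical state file

def load_seen() -> set:
    """Load previously seen post IDs, or an empty set on first run."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def detect_new(items: list[dict], seen: set) -> list[dict]:
    """Return only items whose post_id has not been seen before."""
    return [it for it in items if it.get("post_id") not in seen]

def update_state(items: list[dict], seen: set) -> None:
    """Merge the new IDs into the seen set and persist it."""
    seen.update(it["post_id"] for it in items if "post_id" in it)
    STATE_FILE.write_text(json.dumps(sorted(seen)))
```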

**Market Research & Sentiment Analysis** — Pull thousands of posts from r/wallstreetbets, r/cryptocurrency, r/SaaS, r/Entrepreneur, r/buildapc, or any niche. AI-ready output for NLP processing.

**SEO & Content Marketing** — Reddit dominates Google SERPs. Find which threads rank for your target keywords and what people are actually saying. Mine title patterns and pain points for content ideas.

**Lead Generation** — Find users actively asking questions your product solves. Cross-reference with LinkedIn Profile Scraper to enrich Reddit usernames into B2B contacts.

**Reddit OSINT** — Investigative research on usernames, post histories, deleted-content recovery (PullPush fallback), subreddit moderation patterns.

**Academic / Social Science Research** — Structured data collection for computational social science, communication studies, NLP papers. Reddit OSINT-grade output with full thread context.

**Competitor Intelligence** — Track what users say about competitors on r/SaaS, product subreddits, and review threads. Identify pain points your product can solve.

**PRAW / Snoowrap / Pushshift Replacement** — If your Reddit pipeline broke after the 2023 API changes, this is your migration target. Same data, no auth, no rate limits.

---

### Sample output

#### Post with comments
```json
{
    "post_id": "1abc123",
    "permalink": "/r/technology/comments/1abc123/title/",
    "subreddit_name": "technology",
    "author_name": "user123",
    "title": "Example post title",
    "body": "Full post body text...",
    "media": ["https://i.redd.it/image.jpg"],
    "num_comments": 42,
    "num_upvotes": 1500,
    "post_timestamp": "2026-04-01T12:00:00Z",
    "comments": [
        {
            "author_name": "commenter1",
            "body": "Top-level comment",
            "media": [],
            "parent_id": "t3_1abc123"
        }
    ]
}
```
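
The `parent_id` on each comment follows Reddit's fullname convention: a `t3_` prefix points at the post itself, a `t1_` prefix at another comment. A small sketch for splitting a post's flat comment list into top-level comments and replies, assuming the field names shown in the sample above:

```python
# Sketch: split a flat comment list into top-level comments and
# replies keyed by parent comment ID, using Reddit's fullname
# prefixes ("t3_" = post, "t1_" = comment). Field names follow
# the sample output above.
from collections import defaultdict

def split_comments(comments: list[dict]):
    """Return (top_level, replies_by_parent_comment_id)."""
    top_level, replies = [], defaultdict(list)
    for c in comments:
        parent = c.get("parent_id", "")
        if parent.startswith("t3_"):    # parent is the post itself
            top_level.append(c)
        elif parent.startswith("t1_"):  # parent is another comment
            replies[parent[3:]].append(c)
    return top_level, dict(replies)
```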

#### Reddit AI Answer

```json
{
    "markdown": "### Full AI-generated answer in markdown...",
    "follow_ups": ["Related question 1", "Related question 2"],
    "source_posts": ["post_id_1", "post_id_2"],
    "source_subreddits": ["subreddit1", "subreddit2"]
}
```

---

### Action examples

```json
{
    "action": "scrape_subreddit",
    "subreddit": "technology",
    "sort": "top",
    "timeFilter": "week",
    "limit": 100,
    "includeComments": true
}
```

```json
{
    "action": "search_posts",
    "query": "best CRM software",
    "sort": "relevance",
    "limit": 100
}
```

```json
{
    "action": "search_subreddits",
    "query": "machine learning",
    "limit": 50
}
```

```json
{
    "action": "reddit_answers",
    "query": "What are the best tools for web scraping in 2026?"
}
```

---

### How it beats the Reddit API

1. **No API key, no OAuth** — direct scraping; nothing to register, nothing to renew.
2. **TLS fingerprinting** — every request looks like a real Chrome/Firefox/Safari browser, not a bot.
3. **8 rotating browser user-agents** across Windows / macOS / Linux.
4. **Smart fallback chain** — `www.reddit.com` → `old.reddit.com` → PullPush API as disaster recovery for deleted posts/comments.
5. **Exponential backoff with jitter** — never crashes on a 429.
6. **Parallel comment fetching** — 1–20 worker pool, configurable, with random jitter.
7. **Pay-per-result** — only billed for data actually returned. No subscription. No minimum.
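
The backoff in point 5 is commonly implemented as "full jitter": each retry waits a random amount between zero and an exponentially growing, capped ceiling. A sketch with illustrative base and cap values (not the Actor's actual internals):

```python
# Sketch of exponential backoff with full jitter: the retry delay
# is drawn uniformly from [0, min(cap, base * 2**attempt)].
# base=1s and cap=60s are illustrative defaults.
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay in seconds before retry number `attempt` (0-based)."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Jitter matters because it spreads retries out in time, so a burst of 429s doesn't turn into a synchronized stampede of identical retry waves.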

---

### Pricing

| What you scrape | Cost |
|---|---|
| Posts (without comments) | **$1 / 1,000 posts** |
| Posts (with full comment trees) | **$5 / 1,000 posts** |
| Search results | **$2 / 1,000 results** |
| Subreddits found | **$2 / 1,000 subreddits** |
| Single post fetch | **$5 / 1,000 fetches** |
| Reddit AI Answers | **$10 / 1,000 queries** |

You only pay for results delivered. No monthly subscription, no minimum, no API credits to burn.
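
A quick back-of-the-envelope estimator for the per-result prices in the table above (the helper name and structure are illustrative, not part of the Actor):

```python
# Cost estimator for the per-result prices listed above,
# expressed in USD per 1,000 results.
PRICE_PER_1K = {
    "posts": 1.00,                # posts without comments
    "posts_with_comments": 5.00,  # posts with full comment trees
    "search_results": 2.00,
    "subreddits": 2.00,
    "single_post_fetch": 5.00,
    "ai_answers": 10.00,
}

def estimate_cost(event: str, count: int) -> float:
    """Estimated charge in USD for `count` results of a given event type."""
    return round(PRICE_PER_1K[event] / 1000 * count, 2)
```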

---

### FAQ

**Do I need a Reddit account or developer key?**
No. This scraper does not use the Reddit API at all. No login, no OAuth, no developer registration.

**Is this a PRAW or Pushshift alternative?**
Yes. If your pipeline broke after Reddit's 2023 API price changes or after Pushshift was discontinued, this is the drop-in replacement. Same Reddit data, structured JSON output.

**Will Reddit block me?**
The scraper uses TLS fingerprinting, rotating user-agents, residential-proxy support, and an automatic fallback chain (www → old.reddit.com → PullPush). It's the most resilient Reddit scraping setup on Apify.

**Can I scrape millions of posts?**
Yes. With residential proxies and parallel workers, the scraper handles bulk Reddit scraping at scale. Start with `limit: 100` to test, then scale up.

**Can I scrape deleted posts and comments?**
Partially — the PullPush fallback layer recovers some deleted content via the Reddit archive. Full historical recovery depends on PullPush's archive coverage.

**Does it work on private subreddits?**
No. Private/restricted subreddits require Reddit auth, which this scraper does not use by design.

**What about Reddit's AI Answers feature?**
This is the **only** scraper on Apify with built-in Reddit AI Answers. Query Reddit's AI engine and get markdown-formatted answers with source posts and follow-up questions.

**How is this different from `trudax/reddit-scraper`?**
This Actor adds: TLS fingerprinting, Reddit AI Answers, parallel comment fetching, a smart fallback chain, subreddit discovery by keyword, and 6 actions in one Actor (vs. single-action competitors).

**Is web scraping Reddit legal?**
Public Reddit data is publicly accessible. The scraper does not bypass authentication or access private data. Always comply with Reddit's terms and your local data-protection laws when using the data.

---

### Tips for best results

- Use **residential proxies** for reliable scraping at scale — select "Residential" in the proxy configuration
- Start small (`limit: 10–50`) to test, then scale up
- Set `includeComments: false` for faster, cheaper runs when you only need post metadata
- Use `sort: "top"` with `timeFilter: "week"` for the most popular recent content
- For AI Answers, ask natural-language questions like you would on Google
- Combine with the LinkedIn Profile Scraper to enrich Reddit usernames into B2B contacts

---

⭐ **If this saved you from the Reddit API mess, please leave a review.** Reviews help the Actor reach more users who are stuck migrating off PRAW/Pushshift.

# Actor input Schema

## `action` (type: `string`):

Choose your action — then scroll down and fill only the fields marked for that action:

• **Scrape Subreddit** → ✅ Subreddit
• **Search Posts** → ✅ Query + ⚠️ Subreddit (optional)
• **Search Comments** → ✅ Query + ⚠️ Subreddit (optional)
• **Find Subreddits** → ✅ Query
• **Fetch Post** → ✅ Post URL
• **Reddit AI Answers** → ✅ Query

## `subreddit` (type: `string`):

**Subreddit name without the r/ prefix**

✅ Required for: Scrape Subreddit
⚠️ Optional for: Search Posts, Search Comments
❌ Ignore for: Find Subreddits, Fetch Post, Reddit AI Answers

Example: `technology` (not `r/technology`)

## `query` (type: `string`):

**Keyword to search or question to ask**

✅ Required for: Search Posts, Search Comments, Find Subreddits, Reddit AI Answers
❌ Ignore for: Scrape Subreddit, Fetch Post

Examples:
• `best web scraping tools 2026`
• `how to learn Python`
• `AI automation`

## `postUrl` (type: `string`):

**Full Reddit post URL**

✅ Required for: Fetch Single Post
❌ Ignore for: All other actions

Example: `https://www.reddit.com/r/technology/comments/abc123/my_post/`

## `sort` (type: `string`):

**How to sort results**

For Scrape Subreddit:
• 🔥 **hot** (trending now)
• ✨ **new** (newest first)
• 🏆 **top** (highest rated)
• 📈 **rising** (gaining traction)
• ⚡ **controversial** (most debated)

For Search:
• 🎯 **relevance** (best match)
• ✨ **new** (newest first)
• 🏆 **top** (highest rated)
• 💬 **comments** (most discussed)

## `timeFilter` (type: `string`):

**Time range for results**

⚠️ Only applies when Sort = **top** or **controversial**

Options: past hour, day, week, month, year, or all time

## `limit` (type: `integer`):

**Maximum number of posts/results to return**

⚠️ Higher = more data but slower and costs more
💰 Each post with comments costs more than posts only

Recommended:
• 🟢 Quick test: 10-25
• 🟡 Normal: 50-100
• 🔴 Large: 200-500

## `includeComments` (type: `boolean`):

**Fetch full comment trees for each post**

✅ Enabled: Get complete discussions with nested replies
❌ Disabled: Posts only (faster, cheaper)

⚠️ Only applies to: **Scrape Subreddit**
💰 Comments significantly increase cost and time
📊 A post with 500 comments = 500 API calls

## `threads` (type: `integer`):

**Number of parallel workers**

🟢 Low (1-5): Slower but safer, less proxy bandwidth
🟡 Medium (6-10): Balanced speed and reliability
🔴 High (11-20): Fastest but uses more proxy bandwidth

⚠️ Only applies to: **Scrape Subreddit**, **Find Subreddits**
💡 Higher = faster but may trigger rate limits

## `proxyConfiguration` (type: `object`):

**Reddit requires residential proxies for best results**

📌 **Default Setup (Recommended):**
• ✅ Use Apify Proxy
• 🏠 Residential proxies (best success rate)
• 💰 Costs: $8/GB

💡 **Alternative:**
• 🏢 BUYPROXIES94952 (free datacenter, 5 IPs)
• ⚠️ Higher chance of Reddit blocks
• 💵 Free but less reliable

## Actor input object example

```json
{
  "action": "scrape_subreddit",
  "subreddit": "technology",
  "query": "best web scraping tools 2026",
  "postUrl": "https://www.reddit.com/r/technology/comments/1sdjh66/example/",
  "sort": "hot",
  "timeFilter": "week",
  "limit": 50,
  "includeComments": false,
  "threads": 10,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}
```

# Actor output Schema

## `results` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "action": "scrape_subreddit",
    "subreddit": "technology",
    "sort": "hot",
    "timeFilter": "week",
    "limit": 50,
    "includeComments": false,
    "threads": 10,
    "proxyConfiguration": {
        "useApifyProxy": true,
        "apifyProxyGroups": [
            "RESIDENTIAL"
        ]
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("openclawai/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "action": "scrape_subreddit",
    "subreddit": "technology",
    "sort": "hot",
    "timeFilter": "week",
    "limit": 50,
    "includeComments": False,
    "threads": 10,
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
    },
}

# Run the Actor and wait for it to finish
run = client.actor("openclawai/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "action": "scrape_subreddit",
  "subreddit": "technology",
  "sort": "hot",
  "timeFilter": "week",
  "limit": 50,
  "includeComments": false,
  "threads": 10,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}' |
apify call openclawai/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=openclawai/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper - Posts, Comments, Search, AI Answers",
        "description": "The only Reddit scraper with AI Answers + TLS fingerprinting. Scrape posts, comments, search results & subreddits. No API key, no login. AI-ready JSON output. 6 actions in 1   Actor. Browser-grade anti-detection. Parallel comment fetching. $5/1k posts.",
        "version": "1.0",
        "x-build-id": "MGnhzXzu7x0Sd0BFV"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/openclawai~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-openclawai-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/openclawai~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-openclawai-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/openclawai~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-openclawai-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "action"
                ],
                "properties": {
                    "action": {
                        "title": "🎯 What do you want to scrape?",
                        "enum": [
                            "scrape_subreddit",
                            "search_posts",
                            "search_comments",
                            "search_subreddits",
                            "fetch_post",
                            "reddit_answers"
                        ],
                        "type": "string",
                        "description": "Choose your action — then scroll down and fill only the fields marked for that action:\n\n• **Scrape Subreddit** → ✅ Subreddit\n• **Search Posts** → ✅ Query + ⚠️ Subreddit (optional)\n• **Search Comments** → ✅ Query + ⚠️ Subreddit (optional)\n• **Find Subreddits** → ✅ Query\n• **Fetch Post** → ✅ Post URL\n• **Reddit AI Answers** → ✅ Query",
                        "default": "scrape_subreddit"
                    },
                    "subreddit": {
                        "title": "📝 Subreddit Name",
                        "type": "string",
                        "description": "**Subreddit name without the r/ prefix**\n\n✅ Required for: Scrape Subreddit\n⚠️ Optional for: Search Posts, Search Comments\n❌ Ignore for: Find Subreddits, Fetch Post, Reddit AI Answers\n\nExample: `technology` (not `r/technology`)"
                    },
                    "query": {
                        "title": "🔍 Search Query / Question",
                        "type": "string",
                        "description": "**Keyword to search or question to ask**\n\n✅ Required for: Search Posts, Search Comments, Find Subreddits, Reddit AI Answers\n❌ Ignore for: Scrape Subreddit, Fetch Post\n\nExamples:\n• `best web scraping tools 2026`\n• `how to learn Python`\n• `AI automation`"
                    },
                    "postUrl": {
                        "title": "🔗 Post URL",
                        "type": "string",
                        "description": "**Full Reddit post URL**\n\n✅ Required for: Fetch Single Post\n❌ Ignore for: All other actions\n\nExample: `https://www.reddit.com/r/technology/comments/abc123/my_post/`"
                    },
                    "sort": {
                        "title": "📊 Sort Order",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial",
                            "relevance",
                            "comments"
                        ],
                        "type": "string",
                        "description": "**How to sort results**\n\nFor Scrape Subreddit:\n• 🔥 **hot** (trending now)\n• ✨ **new** (newest first)\n• 🏆 **top** (highest rated)\n• 📈 **rising** (gaining traction)\n• ⚡ **controversial** (most debated)\n\nFor Search:\n• 🎯 **relevance** (best match)\n• ✨ **new** (newest first)\n• 🏆 **top** (highest rated)\n• 💬 **comments** (most discussed)",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "⏰ Time Filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "**Time range for results**\n\n⚠️ Only applies when Sort = **top** or **controversial**\n\nOptions: past hour, day, week, month, year, or all time",
                        "default": "week"
                    },
                    "limit": {
                        "title": "🔢 Max Results",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "**Maximum number of posts/results to return**\n\n⚠️ Higher = more data but slower and costs more\n💰 Each post with comments costs more than posts only\n\nRecommended:\n• 🟢 Quick test: 10-25\n• 🟡 Normal: 50-100\n• 🔴 Large: 200-500",
                        "default": 50
                    },
                    "includeComments": {
                        "title": "💬 Include Comments",
                        "type": "boolean",
                        "description": "**Fetch full comment trees for each post**\n\n✅ Enabled: Get complete discussions with nested replies\n❌ Disabled: Posts only (faster, cheaper)\n\n⚠️ Only applies to: **Scrape Subreddit**\n💰 Comments significantly increase cost and time\n📊 A post with 500 comments = 500 API calls",
                        "default": false
                    },
                    "threads": {
                        "title": "⚡ Concurrency (Threads)",
                        "minimum": 1,
                        "maximum": 20,
                        "type": "integer",
                        "description": "**Number of parallel workers**\n\n🟢 Low (1-5): Slower but safer, less proxy bandwidth\n🟡 Medium (6-10): Balanced speed and reliability\n🔴 High (11-20): Fastest but uses more proxy bandwidth\n\n⚠️ Only applies to: **Scrape Subreddit**, **Find Subreddits**\n💡 Higher = faster but may trigger rate limits",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "🌐 Proxy Configuration",
                        "type": "object",
                        "description": "**Reddit requires residential proxies for best results**\n\n📌 **Default Setup (Recommended):**\n• ✅ Use Apify Proxy\n• 🏠 Residential proxies (best success rate)\n• 💰 Costs: $8/GB\n\n💡 **Alternative:**\n• 🏢 BUYPROXIES94952 (free datacenter, 5 IPs)\n• ⚠️ Higher chance of Reddit blocks\n• 💵 Free but less reliable",
                        "default": {
                            "useApifyProxy": true,
                            "apifyProxyGroups": [
                                "RESIDENTIAL"
                            ]
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "number",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
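In practice, a run object shaped like this schema is what you get back after calling the Actor, for example via `ApifyClient(token).actor("openclawai/reddit-scraper").call(...)` with the Python client. The sketch below shows how you might pull the status, cost, and dataset ID out of such an object; the `run` dict here is a hand-built sample matching the schema's example values, not a live API response, and `summarize_run` is a hypothetical helper.

```python
# Minimal sketch: reading key fields from an Actor run object shaped
# like the schema above. `run` is a hand-built sample; in practice it
# would come from apify-client's .call() / .get() methods.
run = {
    "status": "READY",
    "stats": {"computeUnits": 0, "rebootCount": 0},
    "options": {"memoryMbytes": 1024, "timeoutSecs": 300},
    "usageTotalUsd": 0.00005,
    "usageUsd": {"KEY_VALUE_STORE_WRITES": 0.00005, "DATASET_WRITES": 0},
    "defaultDatasetId": "abc123",
}

def summarize_run(run: dict) -> str:
    """Return a one-line status/cost summary for an Actor run dict."""
    usage = run.get("usageUsd", {})
    # Pick the usage category that contributed the most USD, if any.
    top_cost = max(usage, key=usage.get) if usage else "n/a"
    return (
        f"status={run['status']} "
        f"memory={run['options']['memoryMbytes']}MB "
        f"cost=${run['usageTotalUsd']:.5f} (top: {top_cost}) "
        f"dataset={run['defaultDatasetId']}"
    )

print(summarize_run(run))
```

Once you have `defaultDatasetId`, the scraped posts themselves are fetched from that dataset (e.g. `client.dataset(run["defaultDatasetId"]).list_items()` in the Python client).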
