# Reddit Scraper V2 — Posts, Comments, Users & Subreddits (11) (`red_crawler/reddit-scrape-v2`) Actor

Scrape Reddit at scale: single posts, comment trees, user profiles, subreddit feeds, and detailed comment lookups (Get Comment by ID + Linked Comment Info). 11 endpoints, no Reddit account or proxy required. For bulk-by-ID lookups see the companion actor [Reddit Bulk Scrape V2](https://apify.com/triangular_triangle/reddit-bulk-scrape-v2).

- **URL**: https://apify.com/red_crawler/reddit-scrape-v2.md
- **Developed by:** [Red Crawler](https://apify.com/red_crawler) (community)
- **Categories:** Automation, SEO tools, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded, 1 bookmark
- **User rating**: No ratings yet

## Pricing

from $1.99 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper V2

![Endpoints](https://img.shields.io/badge/endpoints-11-blue) ![Auth](https://img.shields.io/badge/Reddit_account-not_required-brightgreen) ![Proxy](https://img.shields.io/badge/proxy-not_required-brightgreen) ![Pricing](https://img.shields.io/badge/pricing-pay_per_result-orange)

Scrape Reddit at scale — single posts, comment trees, user profiles, subreddit feeds, and detailed comment lookups. **11 self-contained endpoints in one actor.** **No Reddit account, OAuth, or proxy required.**

Pick the endpoint, fill the matching section, hit **Start**.

> **Looking for bulk-by-ID lookups?** They live in the companion actor [**Reddit Bulk Scrape V2**](https://apify.com/triangular_triangle/reddit-bulk-scrape-v2) — paste up to 500 IDs/usernames (5000 for comments) per run and get one full record per item.

---

### Endpoints at a glance

| # | Endpoint | Records returned | Best for |
|---|---|---|---|
| 1 | **Post Comments** | up to 1500 (or all) | sentiment, debate threads, archives, training data |
| 2 | **Post by ID** | 1 record | single-post deep dive |
| 3 | **Profile (Full)** | 1 record | full-profile dashboards, lead enrichment |
| 4 | **Profile (Details)** | 1 record | moderation tooling, contributor audits |
| 5 | **Profile Posts** | up to 1250 | author monitoring, content audits |
| 6 | **Profile Comments** | up to 1250 | brand-mention tracking, reputation monitoring |
| 7 | **User Info** | 1 record | filling field gaps from Profile (Full) |
| 8 | **Community Info** | 1 record | community discovery, audience sizing |
| 9 | **Community Feed** | up to 1250 | content scraping, trending posts |
| 10 | **Get Comment by ID** | 1 record | quoting, refreshing one comment |
| 11 | **Linked Comment Info** | 1 record | comment with parent post + author context |

Inputs accept all the common formats Reddit uses for each entity:

| Entity | Accepted |
|---|---|
| post | URL · `t3_` fullname · raw ID |
| comment | URL · `t1_` fullname · raw ID |
| user | username · `u/name` |
| subreddit | name · `r/name` · subreddit URL |
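The accepted formats above all reduce to the same raw base-36 ID. The helper below is an illustrative sketch (not part of the actor, which normalizes inputs for you), assuming the standard `/comments/<id>/` URL shape and the `t1_`/`t3_` fullname prefixes:

```python
import re
from urllib.parse import urlparse

def normalize_reddit_id(value: str) -> str:
    """Hypothetical helper: reduce any accepted post/comment input
    (URL, t1_/t3_ fullname, or raw ID) to the raw base-36 ID."""
    value = value.strip()
    # Full Reddit URL: pull the ID out of the path, e.g.
    # https://www.reddit.com/r/python/comments/1jq3e8u/title/ -> 1jq3e8u
    if value.startswith(("http://", "https://")):
        parts = [p for p in urlparse(value).path.split("/") if p]
        if "comments" in parts:
            return parts[parts.index("comments") + 1]
        return parts[-1]
    # Fullname prefix (t1_ = comment, t3_ = post) -> strip it
    return re.sub(r"^t[13]_", "", value)
```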

---

### What you can fetch

#### 1. Post Comments

Every comment on a single post, with control over how the tree is traversed.

**Inputs**

| Field | Type | Default | Notes |
|---|---|---|---|
| `post` | string | *(required)* | URL or post ID. |
| `sort` | enum | `best` | `best` / `confidence` / `top` / `new` / `controversial` / `old` / `qa`. |
| `mode` | enum | `custom` | `custom` (capped) / `top_level` / `all` (uncapped). |
| `limit` | int | `100` | 1 – 1500. Used by `custom` mode. |

**Returns per comment** — ID, fullname, parent comment / parent post IDs, author, body (markdown + HTML), score, depth, OP flag, all comment flags, subreddit, awards, created + edited timestamps, permalink.

**Use it when** — sentiment, debate threads, support-ticket mining, comment archives, training data.

---

#### 2. Post by ID

Full payload of a single post.

**Inputs**

| Field | Notes |
|---|---|
| `post` | URL or post ID. |

**Returns** — title, body, score, comment count, awards, flair, media (images / video / gallery), all post flags, subreddit, author, created timestamp.

**Use it when** — single-post deep dive, refreshing a stored post, importing one thread into your DB.

---

#### 3. Profile (Full)

Full Redditor identity with the richest set of profile fields.

**Inputs**

| Field | Notes |
|---|---|
| `username` | Raw or `u/name`. |

**Returns** — karma split into post / comment / award / awardee, account creation date, snoovatar, banner, social links, accepted-DMs flag, accepted-chats flag, accepted-followers flag, mod info, employee / verified flags, premium status, trophy-case totals.

**Use it when** — full-profile dashboards, lead enrichment, account-quality scoring, brand-monitor profile cards.

---

#### 4. Profile (Details)

Profile-as-subreddit settings (every Reddit profile is also a subreddit `u_username`).

**Inputs**

| Field | Notes |
|---|---|
| `username` | Raw or `u/name`. |

**Returns** — post permissions, flair settings, mod permissions, contributor / subscriber state, whitelist status, NSFW flag, the user's authorFlair on their own profile.

**Use it when** — moderation tooling, contributor / whitelist audits, profile-page gating logic.

---

#### 5. Profile Posts

The user's submitted posts.

**Inputs**

| Field | Type | Default | Notes |
|---|---|---|---|
| `username` | string | *(required)* | Raw or `u/name`. |
| `sort` | enum | `new` | `hot` / `new` / `top` / `controversial`. |
| `time` | enum | *(none)* | Used with `top` / `controversial`. |
| `limit` | int | `25` | 1 – 1250. |

**Returns per post** — same rich post record as Post by ID.

**Use it when** — author monitoring, content audits, building feeds of a creator's submissions.

---

#### 6. Profile Comments

The user's comment history.

**Inputs** — same controls as Profile Posts (sort / time / limit 1–1250).

**Returns per comment** — same record as Post Comments.

**Use it when** — brand-mention tracking, reputation monitoring, conversation mining on a single user.

---

#### 7. User Info

Alternate user-info read with a different field set than Profile (Full) — useful for filling gaps.

**Inputs**

| Field | Notes |
|---|---|
| `username` | Raw or `u/name`. |

**Returns** — complementary profile fields.

**Use it when** — Profile (Full) is missing fields you need; combining both gives the most complete record.
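Combining the two records can be as simple as a null-aware merge. A minimal sketch, assuming dict-shaped records with illustrative field names (not the actor's exact schema):

```python
def merge_profiles(full: dict, info: dict) -> dict:
    """Merge a Profile (Full) record with a User Info record,
    keeping every field and preferring non-null values from
    Profile (Full)."""
    merged = dict(info)
    for key, value in full.items():
        if value is not None:
            merged[key] = value
    return merged

# Example field names are made up for illustration.
profile_full = {"username": "spez", "post_karma": 180000, "banner": None}
user_info = {"username": "spez", "banner": "https://example.invalid/banner.png", "is_mod": True}
combined = merge_profiles(profile_full, user_info)
```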

---

#### 8. Community Info

Subreddit metadata.

**Inputs**

| Field | Notes |
|---|---|
| `subreddit` | Name / `r/name` / URL. |

**Returns** — subscriber count, public + full description, rules summary, theme (banner, icon, colors), allowed submission types, NSFW flag, type (public / private / restricted), created timestamp.

**Use it when** — community discovery, sizing audiences, sidebar / theme audits.

---

#### 9. Community Feed

A subreddit's post feed with all 6 sort modes.

**Inputs**

| Field | Type | Default | Notes |
|---|---|---|---|
| `subreddit` | string | *(required)* | Name / `r/name` / URL. |
| `sort` | enum | `hot` | `best` / `hot` / `new` / `top` / `rising` / `controversial`. |
| `time` | enum | *(none)* | Used with `top` / `controversial`. |
| `limit` | int | `25` | 1 – 1250. |

**Returns per post** — same rich post record as Post by ID.

**Use it when** — content scraping, trending-post tracking, building feeds of a niche community.

---

#### 10. Get Comment by ID

Full payload of a single comment.

**Inputs**

| Field | Notes |
|---|---|
| `comment` | URL, `t1_` fullname, or raw ID. |

**Returns** — body, score, author, flair, awards, parent post / comment IDs, created timestamp, permalink.

**Use it when** — quoting a comment, refreshing one stored row, importing a single comment into your DB.

---

#### 11. Linked Comment Info

Comment payload **plus** the parent post and the comment author profile in a single record — handy when you need full conversation context without firing three separate calls.

**Inputs**

| Field | Notes |
|---|---|
| `comment` | URL, `t1_` fullname, or raw ID. |

**Returns** — full comment record + parent post (title, subreddit, created, score, flair) + author profile snapshot (karma, account age, flags).

**Use it when** — building rich comment cards, audit trails, moderation tooling, or any flow that needs the comment alongside its post and author in one row.

---

### How to run

1. **Pick an endpoint** in the "What to fetch" dropdown.
2. **Open the matching section** and fill its fields. Each section is independent.
3. **Click Start.**

Default endpoint is **Community Feed** on `r/python` so the actor runs out of the box.

---

### Output

Results are pushed to the actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.

| Endpoint kind | Rows pushed |
|---|---|
| Single-record (Post by ID, Profile (Full), Community Info, Get Comment by ID, Linked Comment Info, etc.) | 1 record |
| Feed (Post Comments, Profile Posts / Comments, Community Feed) | up to your `limit` |

Every record carries an `endpoint` field. Most useful columns (id, title / name, score / karma, created date) are placed first.
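Because every record carries an `endpoint` field, a mixed dataset export can be split back into per-endpoint groups. A minimal sketch with made-up sample records:

```python
from collections import defaultdict

# Sample records in the shape the dataset export might take;
# the extra fields are illustrative, only `endpoint` is guaranteed.
items = [
    {"endpoint": "community_feed", "id": "1jq3e8u", "title": "Example post"},
    {"endpoint": "community_feed", "id": "1jq3e8v", "title": "Another post"},
    {"endpoint": "community_info", "name": "python", "subscribers": 1200000},
]

by_endpoint = defaultdict(list)
for item in items:
    by_endpoint[item["endpoint"]].append(item)
```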

---

### Status & error reference

**Run status** *(Apify-side, shown on the run page)*

| Apify UI cue | Status | Apify message | Meaning | What to do |
|---|---|---|---|---|
| green check | `SUCCEEDED` | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset. |
| red exclamation | `FAILED` | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged. |
| red clock | `TIMED-OUT` | "The Actor timed out…" | Run exceeded its timeout. | Re-run with a smaller `limit` or a less popular thread / feed. |
| red square outline | `ABORTED` | "The Actor process was aborted…" | You stopped the run manually. | No charge for unpushed results. |
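The "What to do" column above can be mechanized in a driver script. A hypothetical retry policy (the action names and the limit-halving strategy are assumptions, not actor behavior):

```python
def next_action(status: str, limit: int) -> tuple[str, int]:
    """Decide what to do after a run, following the status table:
    read the dataset on SUCCEEDED, halve the limit and retry on
    TIMED-OUT, check the log on FAILED."""
    if status == "SUCCEEDED":
        return ("read_dataset", limit)
    if status == "TIMED-OUT" and limit > 1:
        return ("retry", max(1, limit // 2))
    if status == "FAILED":
        return ("check_log", limit)
    return ("stop", limit)
```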

**Common in-run conditions** *(visible in run log)*

| Condition | Cause | Result |
|---|---|---|
| Empty result set | Username / post / subreddit doesn't exist or is banned. | Run `SUCCEEDED`, 0 records, no charge. |
| Removed post stub | Post was removed; partial metadata returned. | Run `SUCCEEDED`, includes `removed_by_category`. |
| Suspended account | Username is suspended. | Run `SUCCEEDED`, mostly-null record. |

---

### Common edge cases

- **Removed / banned subreddits** return zero records.
- **Suspended / deleted accounts** return minimal data; expect most fields to be null.
- **Long Post Comments threads** — `all` mode (uncapped) on huge threads can return tens of thousands of records.
- **ID format flexibility** — raw IDs, prefixed (`t1_`, `t3_`), and full Reddit URLs are all accepted.
- **Bulk-by-ID lookups** live in the companion actor [Reddit Bulk Scrape V2](https://apify.com/triangular_triangle/reddit-bulk-scrape-v2) — use it when you have a list of post / comment / subreddit / user IDs to hydrate in a single call.
- **NSFW content** — fully supported; the `over_18` flag tells you if a post is age-gated.
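A downstream filter for the removed/suspended cases might look like the sketch below; `removed_by_category` comes from the error reference above, while the "mostly null" heuristic and the sample fields are assumptions:

```python
def is_usable_post(record: dict) -> bool:
    """Filter out removed-post stubs (flagged by removed_by_category)
    and mostly-null records from suspended/deleted accounts."""
    if record.get("removed_by_category"):
        return False
    # Treat a record with hardly any populated fields as a stub.
    populated = sum(1 for v in record.values() if v is not None)
    return populated >= 3

posts = [
    {"id": "a1", "title": "Live post", "score": 42, "removed_by_category": None},
    {"id": "a2", "title": None, "score": None, "removed_by_category": "moderator"},
]
usable = [p for p in posts if is_usable_post(p)]
```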

---

### Why this actor is fast

- **Speed — 1–3 seconds per call, end-to-end.** Pure HTTP to Reddit's API. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per call.
- **Reliability — zero browser flakiness.** No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
- **Footprint:** see memory profile below.


**Runs in Apify's lowest 128 MB tier — typically peaks around 45 MB (~35% of the allocation).**

The actor is a thin async dispatcher: one HTTP call out, one `push_data` in. Most of the heavy lifting (Reddit auth, proxy rotation, retry, GraphQL persisted-query handling) is done off-actor on our backend, so the actor itself stays small.

| Run profile | Peak memory observed |
|---|---|
| Single post / comment / profile lookup | ~45 MB |
| Linked Comment Info (comment + post + author in one row) | ~46 MB |
| Subreddit feed (up to 1250 posts) | ~48 MB |

That gives ~64% headroom inside 128 MB. You can leave the **Memory** field at the default and never think about it. If you want extra margin (e.g. unusually large `all`-mode comment threads), bumping to **256 MB** is supported and costs more compute units per second on Apify's side — most users won't need it.

---

### Pricing

**Pay-per-result.** You're only charged for records actually pushed to the dataset.

| Outcome | Charged? |
|---|---|
| `SUCCEEDED` with results | Yes — per record pushed. |
| `SUCCEEDED` with zero records | No. |
| `FAILED` (validation / upstream) | No. |
| `ABORTED` | Only for records already pushed before you stopped. |

See the actor's **Pricing** tab for the current per-result rate.
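For budgeting, a back-of-the-envelope estimate under the advertised "from $1.99 / 1,000 results" rate; your actual per-result rate may differ, so treat this as a sketch, not a quote:

```python
def estimate_cost(records_pushed: int, rate_per_1000: float = 1.99) -> float:
    """Estimate the pay-per-result charge in USD for a run that
    pushed `records_pushed` records, at `rate_per_1000` dollars
    per 1,000 results."""
    return round(records_pushed * rate_per_1000 / 1000, 2)
```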

# Actor input Schema

## `endpoint` (type: `string`):

Choose which lookup to run.
## `post_comments_post` (type: `string`):

Post URL, t3_ID, or raw ID.
## `post_comments_sort` (type: `string`):

Comment sort order.
## `post_comments_mode` (type: `string`):

custom = limit N comments; top_level = only top-level (no subtree expansion); all = expand every hidden subtree.
## `post_comments_limit` (type: `integer`):

Max comments to return (1–1500).
## `post_by_id_post` (type: `string`):

Post URL, t3_ID, or raw ID.
## `profile_full_username` (type: `string`):

Reddit username (without u/).
## `profile_details_username` (type: `string`):

Reddit username (without u/).
## `profile_posts_username` (type: `string`):

Reddit username (without u/).
## `profile_posts_sort` (type: `string`):

Sort order for the user's posts.
## `profile_posts_time` (type: `string`):

Only used with TOP/CONTROVERSIAL.
## `profile_posts_limit` (type: `integer`):

Max posts (1–1250).
## `profile_comments_username` (type: `string`):

Reddit username (without u/).
## `profile_comments_sort` (type: `string`):

Sort order for the user's comments.
## `profile_comments_time` (type: `string`):

Only used with TOP/CONTROVERSIAL.
## `profile_comments_limit` (type: `integer`):

Max comments (1–1250).
## `profile_info_username` (type: `string`):

Reddit username (without u/).
## `community_info_name` (type: `string`):

Subreddit name (without r/).
## `community_feed_name` (type: `string`):

Subreddit name (without r/).
## `community_feed_sort` (type: `string`):

Subreddit feed sort order.
## `community_feed_time` (type: `string`):

Only used with TOP/CONTROVERSIAL.
## `community_feed_limit` (type: `integer`):

Max posts (1–1250).
## `get_comment_byid_id` (type: `string`):

Comment URL, t1_ID, or raw ID.
## `linked_comment_info_id` (type: `string`):

Comment URL, t1_ID, or raw ID.

## Actor input object example

```json
{
  "endpoint": "community_feed",
  "post_comments_post": "1jq3e8u",
  "post_comments_sort": "best",
  "post_comments_mode": "custom",
  "post_comments_limit": 100,
  "post_by_id_post": "1jq3e8u",
  "profile_full_username": "spez",
  "profile_details_username": "spez",
  "profile_posts_username": "spez",
  "profile_posts_sort": "new",
  "profile_posts_time": "all",
  "profile_posts_limit": 100,
  "profile_comments_username": "spez",
  "profile_comments_sort": "new",
  "profile_comments_time": "all",
  "profile_comments_limit": 100,
  "profile_info_username": "spez",
  "community_info_name": "python",
  "community_feed_name": "python",
  "community_feed_sort": "hot",
  "community_feed_time": "all",
  "community_feed_limit": 100,
  "get_comment_byid_id": "c60n1vq",
  "linked_comment_info_id": "c60n1vq"
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "endpoint": "community_feed",
    "post_comments_post": "1jq3e8u",
    "post_by_id_post": "1jq3e8u",
    "profile_full_username": "spez",
    "profile_details_username": "spez",
    "profile_posts_username": "spez",
    "profile_comments_username": "spez",
    "profile_info_username": "spez",
    "community_info_name": "python",
    "community_feed_name": "python",
    "get_comment_byid_id": "c60n1vq",
    "linked_comment_info_id": "c60n1vq"
};

// Run the Actor and wait for it to finish
const run = await client.actor("red_crawler/reddit-scrape-v2").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "endpoint": "community_feed",
    "post_comments_post": "1jq3e8u",
    "post_by_id_post": "1jq3e8u",
    "profile_full_username": "spez",
    "profile_details_username": "spez",
    "profile_posts_username": "spez",
    "profile_comments_username": "spez",
    "profile_info_username": "spez",
    "community_info_name": "python",
    "community_feed_name": "python",
    "get_comment_byid_id": "c60n1vq",
    "linked_comment_info_id": "c60n1vq",
}

# Run the Actor and wait for it to finish
run = client.actor("red_crawler/reddit-scrape-v2").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "endpoint": "community_feed",
  "post_comments_post": "1jq3e8u",
  "post_by_id_post": "1jq3e8u",
  "profile_full_username": "spez",
  "profile_details_username": "spez",
  "profile_posts_username": "spez",
  "profile_comments_username": "spez",
  "profile_info_username": "spez",
  "community_info_name": "python",
  "community_feed_name": "python",
  "get_comment_byid_id": "c60n1vq",
  "linked_comment_info_id": "c60n1vq"
}' |
apify call red_crawler/reddit-scrape-v2 --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=red_crawler/reddit-scrape-v2",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper V2 — Posts, Comments, Users & Subreddits (11)",
        "description": "Scrape Reddit at scale: single posts, comment trees, user profiles, subreddit feeds, and detailed comment lookups (Get Comment by ID + Linked Comment Info). 11 endpoints, no Reddit account or proxy required. For bulk-by-ID lookups see the c",
        "version": "1.1",
        "x-build-id": "8eA31iy834h7RQS9Q"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/red_crawler~reddit-scrape-v2/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-red_crawler-reddit-scrape-v2",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-scrape-v2/runs": {
            "post": {
                "operationId": "runs-sync-red_crawler-reddit-scrape-v2",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-scrape-v2/run-sync": {
            "post": {
                "operationId": "run-sync-red_crawler-reddit-scrape-v2",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "endpoint"
                ],
                "properties": {
                    "endpoint": {
                        "title": "What to fetch",
                        "enum": [
                            "post_comments",
                            "post_by_id",
                            "profile_full",
                            "profile_details",
                            "profile_posts",
                            "profile_comments",
                            "profile_info",
                            "community_info",
                            "community_feed",
                            "get_comment_byid",
                            "linked_comment_info"
                        ],
                        "type": "string",
                        "description": "Choose which lookup to run.",
                        "default": "community_feed"
                    },
                    "post_comments_post": {
                        "title": "Post",
                        "type": "string",
                        "description": "Post URL, t3_ID, or raw ID."
                    },
                    "post_comments_sort": {
                        "title": "Sort",
                        "enum": [
                            "best",
                            "confidence",
                            "top",
                            "new",
                            "controversial",
                            "old",
                            "qa"
                        ],
                        "type": "string",
                        "description": "Comment sort order.",
                        "default": "best"
                    },
                    "post_comments_mode": {
                        "title": "Comment mode",
                        "enum": [
                            "custom",
                            "top_level",
                            "all"
                        ],
                        "type": "string",
                        "description": "custom = limit N comments; top_level = only top-level (no subtree expansion); all = expand every hidden subtree.",
                        "default": "custom"
                    },
                    "post_comments_limit": {
                        "title": "Limit",
                        "minimum": 1,
                        "maximum": 1500,
                        "type": "integer",
                        "description": "Max comments to return (1–1500).",
                        "default": 100
                    },
                    "post_by_id_post": {
                        "title": "Post",
                        "type": "string",
                        "description": "Post URL, t3_ID, or raw ID."
                    },
                    "profile_full_username": {
                        "title": "Username",
                        "type": "string",
                        "description": "Reddit username (without u/)."
                    },
                    "profile_details_username": {
                        "title": "Username",
                        "type": "string",
                        "description": "Reddit username (without u/)."
                    },
                    "profile_posts_username": {
                        "title": "Username",
                        "type": "string",
                        "description": "Reddit username (without u/)."
                    },
                    "profile_posts_sort": {
                        "title": "Sort",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Sort order for the user's posts.",
                        "default": "new"
                    },
                    "profile_posts_time": {
                        "title": "Time filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Only used with TOP/CONTROVERSIAL.",
                        "default": "all"
                    },
                    "profile_posts_limit": {
                        "title": "Limit",
                        "minimum": 1,
                        "maximum": 1250,
                        "type": "integer",
                        "description": "Max posts (1–1250).",
                        "default": 100
                    },
                    "profile_comments_username": {
                        "title": "Username",
                        "type": "string",
                        "description": "Reddit username (without u/)."
                    },
                    "profile_comments_sort": {
                        "title": "Sort",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Sort order for the user's comments.",
                        "default": "new"
                    },
                    "profile_comments_time": {
                        "title": "Time filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Only used with TOP/CONTROVERSIAL.",
                        "default": "all"
                    },
                    "profile_comments_limit": {
                        "title": "Limit",
                        "minimum": 1,
                        "maximum": 1250,
                        "type": "integer",
                        "description": "Max comments (1–1250).",
                        "default": 100
                    },
                    "profile_info_username": {
                        "title": "Username",
                        "type": "string",
                        "description": "Reddit username (without u/)."
                    },
                    "community_info_name": {
                        "title": "Subreddit",
                        "type": "string",
                        "description": "Subreddit name (without r/)."
                    },
                    "community_feed_name": {
                        "title": "Subreddit",
                        "type": "string",
                        "description": "Subreddit name (without r/)."
                    },
                    "community_feed_sort": {
                        "title": "Sort",
                        "enum": [
                            "best",
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Subreddit feed sort order.",
                        "default": "hot"
                    },
                    "community_feed_time": {
                        "title": "Time filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Only applied when the sort is `top` or `controversial`.",
                        "default": "all"
                    },
                    "community_feed_limit": {
                        "title": "Limit",
                        "minimum": 1,
                        "maximum": 1250,
                        "type": "integer",
                        "description": "Max posts (1–1250).",
                        "default": 100
                    },
                    "get_comment_byid_id": {
                        "title": "Comment",
                        "type": "string",
                        "description": "Comment URL, t1_ID, or raw ID."
                    },
                    "linked_comment_info_id": {
                        "title": "Comment",
                        "type": "string",
                        "description": "Comment URL, t1_ID, or raw ID."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
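To show how the input schema above translates into a run, here is a minimal sketch that builds an input object for the subreddit-feed endpoint. The field names (`community_feed_name`, `community_feed_sort`, `community_feed_time`, `community_feed_limit`) and their allowed values come from the schema; the `clamp_limit` helper and `community_feed_input` function are our own illustrative additions, not part of the Actor.

```python
# Sketch: construct an input object for red_crawler/reddit-scrape-v2's
# subreddit-feed endpoint. Field names mirror the input schema above;
# the helpers below are illustrative, not part of the Actor.

FEED_SORTS = {"best", "hot", "new", "top", "rising", "controversial"}
TIME_FILTERS = {"hour", "day", "week", "month", "year", "all"}


def clamp_limit(n: int) -> int:
    """Keep the limit within the schema's 1-1250 bounds."""
    return max(1, min(1250, n))


def community_feed_input(name: str, sort: str = "hot",
                         time: str = "all", limit: int = 100) -> dict:
    """Build a community-feed input dict matching the schema's enums and bounds."""
    if sort not in FEED_SORTS:
        raise ValueError(f"unknown sort: {sort!r}")
    if time not in TIME_FILTERS:
        raise ValueError(f"unknown time filter: {time!r}")
    return {
        "community_feed_name": name.removeprefix("r/"),  # schema expects no r/ prefix
        "community_feed_sort": sort,
        "community_feed_time": time,   # only applied for top/controversial sorts
        "community_feed_limit": clamp_limit(limit),
    }


run_input = community_feed_input("r/python", sort="top", limit=5000)

# With the official Python client (requires an API token and a network call,
# so shown commented out). The run's `status` and `defaultDatasetId` fields
# correspond to the runs-response schema above.
# from apify_client import ApifyClient
# client = ApifyClient("<APIFY_TOKEN>")
# run = client.actor("red_crawler/reddit-scrape-v2").call(run_input=run_input)
# items = client.dataset(run["defaultDatasetId"]).list_items().items
```

Note that an out-of-range `limit` is clamped client-side here (5000 becomes 1250) so the request never violates the schema's `maximum`; you could equally let the platform reject it and surface the validation error instead.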
