# Reddit Thread Scraper (`sheshinmcfly/reddit-thread-scraper`) Actor

Extract posts and top comments from any Reddit thread or subreddit. Returns post title, author, score, URL, body text, and top-voted comments with full metadata. Ideal for sentiment analysis, research, AI training datasets, and community monitoring. No API key required.

- **URL**: https://apify.com/sheshinmcfly/reddit-thread-scraper.md
- **Developed by:** [Sheshinmcfly](https://apify.com/sheshinmcfly) (community)
- **Categories:** Social media, Lead generation, Automation
- **Stats:** 2 total users, 0 monthly users, 100.0% runs succeeded, 1 bookmark
- **User rating**: No ratings yet

## Pricing

from $2.00 / 1,000 results

This Actor is paid per event: you are not charged for Apify platform usage, only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform that cover all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Thread Scraper

Extract **posts and comments from any subreddit** via Reddit's official public JSON API. No authentication required. Filter by sort order, time range, and number of comments.

Perfect for AI training datasets, sentiment analysis, market research, and trend monitoring.
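The listing data comes from Reddit's public JSON endpoints, which need no authentication. As an illustration of the URL scheme this kind of scraper reads from (the `build_listing_url` helper is ours, not part of the Actor), a minimal sketch:

```python
def build_listing_url(subreddit: str, sort: str = "hot",
                      time_filter: str = "week", limit: int = 25) -> str:
    """Return the public JSON listing URL for a subreddit."""
    url = f"https://www.reddit.com/r/{subreddit}/{sort}.json?limit={limit}"
    if sort == "top":
        # Only "top" accepts a time range: hour, day, week, month, year, all
        url += f"&t={time_filter}"
    return url

# Fetching such a URL directly requires a descriptive User-Agent header,
# or Reddit is likely to throttle the request. For example:
#   import json, urllib.request
#   req = urllib.request.Request(build_listing_url("MachineLearning"),
#                                headers={"User-Agent": "my-research-script/0.1"})
#   data = json.load(urllib.request.urlopen(req))
```

Running the Actor handles throttling and proxying for you; the sketch just shows where the data originates.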

---

### What data does it extract?

#### Posts

| Field | Description | Example |
|---|---|---|
| `type` | Record type | `"post"` |
| `id` | Reddit post ID | `"1sa4rlx"` |
| `subreddit` | Subreddit name | `"MachineLearning"` |
| `title` | Post title | `"New paper on LLM reasoning"` |
| `author` | Username | `"researcher123"` |
| `score` | Upvotes - downvotes | `1420` |
| `upvoteRatio` | Upvote ratio | `0.97` |
| `numComments` | Total comment count | `83` |
| `selftext` | Post body text | `"We propose a new..."` |
| `url` | Link URL | `"https://arxiv.org/..."` |
| `permalink` | Reddit post URL | `"https://reddit.com/r/..."` |
| `flair` | Post flair label | `"Research"` |
| `createdAt` | Post creation time | `"2026-04-21T10:00:00Z"` |
| `extractedAt` | Extraction timestamp | `"2026-04-21T12:00:00Z"` |

#### Comments

| Field | Description | Example |
|---|---|---|
| `type` | Record type | `"comment"` |
| `id` | Comment ID | `"abc123"` |
| `postId` | Parent post ID | `"1sa4rlx"` |
| `subreddit` | Subreddit name | `"MachineLearning"` |
| `author` | Username | `"user456"` |
| `body` | Comment text | `"Great work, but..."` |
| `score` | Upvotes - downvotes | `342` |
| `depth` | Nesting level (0 = top-level) | `0` |
| `permalink` | Direct link to comment | `"https://reddit.com/..."` |
| `createdAt` | Comment creation time | `"2026-04-21T10:05:00Z"` |
| `extractedAt` | Extraction timestamp | `"2026-04-21T12:00:00Z"` |

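Posts and comments land in the same dataset, distinguished by the `type` field, so downstream code typically splits them and re-joins comments to their parent post. A small sketch using the field names from the tables above (the helper name is ours):

```python
from collections import defaultdict

def group_comments_by_post(items):
    """Split a mixed dataset into posts plus their comments, keyed by post ID."""
    posts = {i["id"]: i for i in items if i["type"] == "post"}
    comments = defaultdict(list)
    for i in items:
        if i["type"] == "comment":
            comments[i["postId"]].append(i)
    # Order each thread's comments by score, highest first
    for thread in comments.values():
        thread.sort(key=lambda c: c["score"], reverse=True)
    return posts, comments
```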
---

### Use cases

- **AI training data**: Clean text from expert communities for LLM fine-tuning
- **Sentiment analysis**: Monitor brand mentions and user opinions
- **Market research**: Track trends and discussions in niche communities
- **Competitive intelligence**: See what problems users are discussing
- **RAG pipelines**: Feed domain-specific knowledge into retrieval systems
- **Content research**: Find top-performing posts for content strategy

---

### How to use

1. Open the Actor and configure:
   - **Subreddits**: List subreddit names (e.g. `MachineLearning`, `investing`, `python`)
   - **Sort**: hot, new, top, or rising
   - **Time filter**: For "top" sort — hour, day, week, month, year, all
   - **Max posts**: Cap per subreddit
   - **Include comments**: Also extract top comments
2. Click **Start**
3. Download results as JSON, CSV, or Excel
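
If you prefer to prepare the run programmatically, the configuration in step 1 maps directly to the Actor's JSON input. A sketch that assembles it and clamps the limits to the bounds in the input schema below (the `build_input` helper is ours):

```python
def build_input(subreddits, sort="hot", time_filter="week",
                max_posts=25, include_comments=True, max_comments=10):
    """Assemble the Actor input dict, clamping limits to schema bounds."""
    valid_sorts = {"hot", "new", "top", "rising"}
    if sort not in valid_sorts:
        raise ValueError(f"sort must be one of {sorted(valid_sorts)}")
    return {
        "subreddits": list(subreddits),
        "sort": sort,
        "timeFilter": time_filter,  # only applies when sort == "top"
        "maxPostsPerSubreddit": max(1, min(500, max_posts)),
        "includeComments": include_comments,
        "maxCommentsPerPost": max(1, min(100, max_comments)),
        "proxyConfiguration": {"useApifyProxy": True},
    }
```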

---

### Example output (JSON)

```json
[
  {
    "type": "post",
    "id": "1sa4rlx",
    "subreddit": "MachineLearning",
    "title": "[D] New method achieves SOTA on reasoning benchmarks",
    "author": "ml_researcher",
    "score": 1420,
    "upvoteRatio": 0.97,
    "numComments": 83,
    "selftext": "We introduce a novel approach...",
    "url": "https://arxiv.org/abs/2504.12345",
    "permalink": "https://www.reddit.com/r/MachineLearning/comments/1sa4rlx/",
    "flair": "Research",
    "createdAt": "2026-04-21T10:00:00.000Z",
    "extractedAt": "2026-04-21T12:00:00.000Z"
  },
  {
    "type": "comment",
    "id": "kxyz789",
    "postId": "1sa4rlx",
    "subreddit": "MachineLearning",
    "author": "deep_learner",
    "body": "Impressive results. Did you test on out-of-distribution benchmarks?",
    "score": 342,
    "depth": 0,
    "permalink": "https://www.reddit.com/r/MachineLearning/comments/1sa4rlx/comment/kxyz789/",
    "createdAt": "2026-04-21T10:05:00.000Z",
    "extractedAt": "2026-04-21T12:00:00.000Z"
  }
]
````

---

### Pricing

This Actor charges **$0.002 USD per item extracted** (posts and comments each count as one item). Extracting 100 posts with 10 comments each = 1,100 items = $2.20 USD.
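
Since cost scales with posts × (1 + comments per post), a quick estimator (the helper name is ours, based on the per-item price above):

```python
PRICE_PER_ITEM_USD = 0.002  # posts and comments each count as one item

def estimate_cost(num_posts: int, comments_per_post: int = 0) -> float:
    """Estimate run cost: each post plus each extracted comment is one item."""
    items = num_posts * (1 + comments_per_post)
    return items * PRICE_PER_ITEM_USD

# e.g. 100 posts with 10 comments each -> 1,100 items
```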

---

### Keywords

reddit scraper, subreddit posts extractor, reddit comments scraper, reddit data for AI, reddit sentiment analysis, reddit thread extractor, social media scraper, reddit API scraper, NLP training data, reddit market research

---

### Legal Disclaimer

This Actor extracts **publicly available data only** from Reddit using Reddit's official public JSON API (`reddit.com/r/{subreddit}.json`), in compliance with Chilean Law 19.628 on the Protection of Private Life (*Ley 19.628 sobre Protección de la Vida Privada*).

**What this Actor does NOT collect:**

- Private messages or non-public posts
- Email addresses or personal contact information
- Data from private or restricted subreddits
- Any data not freely visible to anonymous visitors

**What this Actor collects:**

- Post titles, body text, and metadata (public content)
- Publicly visible usernames and comment text
- Engagement metrics (score, upvotes, comment counts)

All data is publicly accessible without authentication via Reddit's JSON API. Users are solely responsible for ensuring their use of this data complies with applicable laws and Reddit's terms of service.

# Actor input Schema

## `subreddits` (type: `array`):

List of subreddit names to scrape (without the r/ prefix).

## `sort` (type: `string`):

How to sort posts within each subreddit.

## `timeFilter` (type: `string`):

Time range when sort is set to Top.

## `maxPostsPerSubreddit` (type: `integer`):

Maximum number of posts to extract per subreddit.

## `includeComments` (type: `boolean`):

Also extract the top comments for each post.

## `maxCommentsPerPost` (type: `integer`):

Maximum number of top-level comments to extract per post.

## `proxyConfiguration` (type: `object`):

Datacenter proxies are free and work for most sites. Switch to Residential if you get blocked.

## Actor input object example

```json
{
  "subreddits": [
    "MachineLearning"
  ],
  "sort": "hot",
  "timeFilter": "week",
  "maxPostsPerSubreddit": 25,
  "includeComments": true,
  "maxCommentsPerPost": 10,
  "proxyConfiguration": {
    "useApifyProxy": true
  }
}
```

# Actor output Schema

## `dataset` (type: `string`):

All scraped Reddit threads and comments stored in the default dataset.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and the CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "proxyConfiguration": {
        "useApifyProxy": true
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("sheshinmcfly/reddit-thread-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "proxyConfiguration": { "useApifyProxy": True } }

# Run the Actor and wait for it to finish
run = client.actor("sheshinmcfly/reddit-thread-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "proxyConfiguration": {
    "useApifyProxy": true
  }
}' |
apify call sheshinmcfly/reddit-thread-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=sheshinmcfly/reddit-thread-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Thread Scraper",
        "description": "Extract posts and top comments from any Reddit thread or subreddit. Returns post title, author, score, URL, body text, and top-voted comments with full metadata. Ideal for sentiment analysis, research, AI training datasets, and community monitoring. No API key required.",
        "version": "1.0",
        "x-build-id": "ovMs0vTchIT2FlgSm"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/sheshinmcfly~reddit-thread-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-sheshinmcfly-reddit-thread-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/sheshinmcfly~reddit-thread-scraper/runs": {
            "post": {
                "operationId": "runs-sync-sheshinmcfly-reddit-thread-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/sheshinmcfly~reddit-thread-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-sheshinmcfly-reddit-thread-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "subreddits": {
                        "title": "Subreddits",
                        "type": "array",
                        "description": "List of subreddit names to scrape (without the r/ prefix).",
                        "items": {
                            "type": "string"
                        },
                        "default": [
                            "MachineLearning"
                        ]
                    },
                    "sort": {
                        "title": "Sort order",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising"
                        ],
                        "type": "string",
                        "description": "How to sort posts within each subreddit.",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "Time filter (only for Top)",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range when sort is set to Top.",
                        "default": "week"
                    },
                    "maxPostsPerSubreddit": {
                        "title": "Max posts per subreddit",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Maximum number of posts to extract per subreddit.",
                        "default": 25
                    },
                    "includeComments": {
                        "title": "Include comments",
                        "type": "boolean",
                        "description": "Also extract the top comments for each post.",
                        "default": true
                    },
                    "maxCommentsPerPost": {
                        "title": "Max comments per post",
                        "minimum": 1,
                        "maximum": 100,
                        "type": "integer",
                        "description": "Maximum number of top-level comments to extract per post.",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Datacenter proxies are free and work for most sites. Switch to Residential if you get blocked."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
