# Reddit Scraper (`lentic_clockss/reddit-scraper`) Actor

Scrape Reddit posts from any subreddit — search by keyword, browse new/hot/top, get full post text and comments. No login, no API key, no browser. Fast HTTP-only.

- **URL**: https://apify.com/lentic_clockss/reddit-scraper.md
- **Developed by:** [kane liu](https://apify.com/lentic_clockss) (community)
- **Categories:** Social media, Lead generation, MCP servers
- **Stats:** 1 total user, 0 monthly users, 0.0% of runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per usage

This Actor is paid per platform usage. The Actor itself is free to use; you only pay for the Apify platform usage, which becomes cheaper on higher subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-usage

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

The best way to integrate an Actor into your project depends on your stack; the official clients below are well-documented and production-ready.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).
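
If you prefer not to pull in a client library at all, the same run can be triggered with a plain HTTP POST. A minimal Python sketch (the endpoint path comes from the OpenAPI specification in the API section; the token value is a placeholder):

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"
ACTOR_ID = "lentic_clockss~reddit-scraper"  # "/" becomes "~" in API paths

def build_run_request(token: str, run_input: dict) -> urllib.request.Request:
    """Build a POST request that runs the Actor synchronously and
    returns its dataset items in the response body."""
    url = f"{API_BASE}/acts/{ACTOR_ID}/run-sync-get-dataset-items?token={token}"
    return urllib.request.Request(
        url,
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("<YOUR_API_TOKEN>", {"subreddits": ["forhire"]})
# items = json.load(urllib.request.urlopen(req))  # uncomment to actually run
```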

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper

Extract Reddit post data at scale — browse subreddit feeds, run keyword searches, and pull top comments from public Reddit JSON endpoints. No login, no OAuth, no browser, and no Reddit API key required.

### Features

- **Subreddit feed scraping** - scrape `new`, `hot`, or `top` posts from any subreddit with automatic pagination
- **Keyword search** - search within one or more subreddits using Reddit's public search endpoint
- **Top comments** - optionally fetch top-level comments for each post to capture discussion context
- **Multi-subreddit runs** - scrape several subreddits in one Actor run
- **Fast and lightweight** - HTTP-only extraction via `curl_cffi`, no browser overhead
- **Auto-deduplication** - results are deduplicated by post ID across subreddit and query combinations
- **Proxy-ready** - supports Apify proxy configuration for larger runs and rate-limit mitigation
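
The deduplication behaviour can be illustrated with a short sketch; the `postId` field name follows the Output Fields table below, and the keep-first-seen logic is an assumption about how the Actor behaves, not its actual code:

```python
def dedupe_by_post_id(records):
    """Keep the first record seen for each postId, preserving order."""
    seen = set()
    unique = []
    for record in records:
        if record["postId"] not in seen:
            seen.add(record["postId"])
            unique.append(record)
    return unique

# A post matched by two search queries appears only once in the output.
records = [
    {"postId": "abc123", "subreddit": "forhire"},
    {"postId": "abc123", "subreddit": "forhire"},  # duplicate hit
    {"postId": "def456", "subreddit": "freelance"},
]
assert len(dedupe_by_post_id(records)) == 2
```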

### Input

| Parameter         | Type             | Default  | Description                                                                                                   |
| ----------------- | ---------------- | -------- | ------------------------------------------------------------------------------------------------------------- |
| `subreddits`      | array of strings | —        | Subreddit names to scrape, without the `r/` prefix. Required.                                                 |
| `searchQueries`   | array of strings | —        | Keywords to search inside each subreddit. If empty, the Actor scrapes the subreddit feed directly.            |
| `sort`            | string           | `"new"`  | Sort order: `new`, `hot`, `top`, or `relevance`. `relevance` is only meaningful when `searchQueries` is used. |
| `timeFilter`      | string           | `"week"` | Time range filter for `top` and search results: `hour`, `day`, `week`, `month`, `year`, `all`.                |
| `maxResults`      | integer          | `100`    | Maximum posts to return per subreddit, or per subreddit + query combination. Range: 1-5000.                   |
| `includeComments` | boolean          | `false`  | Fetch top-level comments for each post. Slower but adds discussion context.                                   |
| `maxComments`     | integer          | `10`     | Maximum number of top-level comments to include per post when comments are enabled.                           |
| `proxy`           | object           | —        | Apify proxy configuration. Recommended for large runs to avoid rate limiting.                                 |

### Output Fields

| Field         | Type    | Description                                                               |
| ------------- | ------- | ------------------------------------------------------------------------- |
| `postId`      | string  | Reddit post ID                                                            |
| `subreddit`   | string  | Source subreddit name                                                     |
| `title`       | string  | Post title                                                                |
| `url`         | string  | Full Reddit post URL                                                      |
| `author`      | string  | Reddit username of the post author                                        |
| `body`        | string  | Self-post text body, empty for link posts or removed content              |
| `score`       | integer | Reddit score                                                              |
| `upvoteRatio` | number  | Upvote ratio reported by Reddit                                           |
| `numComments` | integer | Total number of comments on the post                                      |
| `flair`       | string  | Post flair text if present                                                |
| `createdAt`   | string  | Post creation time in ISO 8601 format                                     |
| `isNsfw`      | boolean | Whether the post is marked NSFW                                           |
| `isSelf`      | boolean | Whether the post is a self post                                           |
| `thumbnail`   | string  | Thumbnail URL or Reddit thumbnail marker                                  |
| `externalUrl` | string  | External destination URL for link posts                                   |
| `scrapedAt`   | string  | Time when this actor scraped the record                                   |
| `comments`    | array   | Top-level comments as objects with `author`, `body`, `score`, `createdAt` |
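
A single dataset item might look like this (illustrative values only, shaped after the table above):

```json
{
  "postId": "1abcde2",
  "subreddit": "forhire",
  "title": "[Hiring] Python developer for a scraping project",
  "url": "https://www.reddit.com/r/forhire/comments/1abcde2/hiring_python_developer/",
  "author": "example_user",
  "body": "Looking for a developer to build a small scraping pipeline.",
  "score": 12,
  "upvoteRatio": 0.93,
  "numComments": 4,
  "flair": "Hiring",
  "createdAt": "2025-01-08T12:34:56+00:00",
  "isNsfw": false,
  "isSelf": true,
  "thumbnail": "self",
  "externalUrl": "",
  "scrapedAt": "2025-01-08T13:00:00+00:00",
  "comments": [
    {
      "author": "another_user",
      "body": "Sent you a DM.",
      "score": 3,
      "createdAt": "2025-01-08T12:50:00+00:00"
    }
  ]
}
```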

### Usage Examples

#### Scrape New Posts from a Subreddit

```json
{
  "subreddits": ["forhire"],
  "sort": "new",
  "maxResults": 50
}
```

#### Search Inside Multiple Subreddits

```json
{
  "subreddits": ["forhire", "freelance", "webscraping"],
  "searchQueries": ["hiring developer", "need help building"],
  "sort": "new",
  "timeFilter": "month",
  "maxResults": 100
}
```

#### Scrape Top Posts with Comments

```json
{
  "subreddits": ["startups"],
  "sort": "top",
  "timeFilter": "week",
  "maxResults": 25,
  "includeComments": true,
  "maxComments": 5
}
```

#### Large Run with Proxy

```json
{
  "subreddits": ["forhire", "freelance", "entrepreneur", "SaaS"],
  "searchQueries": ["looking for developer", "need automation"],
  "sort": "relevance",
  "timeFilter": "week",
  "maxResults": 200,
  "proxy": {
    "useApifyProxy": true
  }
}
```

### Pricing

This Actor is designed to be lightweight and inexpensive: approximately **$2 per 1,000 results**, using a simple pricing model of a start fee plus a per-result charge.

### Notes

- **Public data only** - the Actor reads Reddit's public JSON endpoints. No login, OAuth, or private user data is used.
- **Cookie workaround** - Reddit's JSON endpoints require a cookie header to be present, but the cookie value itself does not need to be real. This Actor sends the minimal cookie `_=1`.
- **Rate limits** - Reddit applies per-IP rate limiting. Large runs should use Apify proxy rotation for stability.
- **Comments cost extra requests** - enabling `includeComments` adds one extra request per post that has comments, so large runs will be slower.
- **Result scope** - `maxResults` applies independently to each subreddit, or to each subreddit + search query combination when search is enabled.
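
The notes above can be made concrete with a small sketch of how such a request might be constructed. The endpoint shape (`/r/<subreddit>/<sort>.json` with `limit` and `t` parameters) follows Reddit's public listing API, and the cookie mirrors the workaround described above; this is an illustration, not the Actor's actual implementation:

```python
from urllib.parse import urlencode

def build_feed_url(subreddit: str, sort: str = "new",
                   limit: int = 100, time_filter: str = "week") -> str:
    """URL for a public subreddit listing, e.g. r/forhire sorted by 'top'."""
    params = {"limit": limit, "raw_json": 1}
    if sort == "top":
        params["t"] = time_filter  # time filter only applies to 'top'
    return f"https://www.reddit.com/r/{subreddit}/{sort}.json?{urlencode(params)}"

# Minimal headers: a User-Agent plus the placeholder cookie from the notes.
HEADERS = {
    "User-Agent": "Mozilla/5.0 (compatible; reddit-scraper-example)",
    "Cookie": "_=1",
}

url = build_feed_url("forhire", sort="top", time_filter="week")
```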

### Legal

This Actor extracts publicly available Reddit data from endpoints accessible to any visitor on the public web. No authentication or account access is used.

# Actor input Schema

## `subreddits` (type: `array`):

Subreddit names to scrape (without r/ prefix). Example: `["forhire", "freelance", "startups"]`.

## `searchQueries` (type: `array`):

Keywords to search within each subreddit. If empty, scrapes the subreddit feed (new/hot/top) without keyword filtering.

## `sort` (type: `string`):

How to sort results. 'new' for latest, 'hot' for trending, 'top' for highest scored, 'relevance' for search relevance (only with searchQueries).

## `timeFilter` (type: `string`):

Time range filter. Applies to 'top' sort and search results.

## `maxResults` (type: `integer`):

Maximum posts to return per subreddit (or per subreddit+query combination if searchQueries is set). Default 100.

## `includeComments` (type: `boolean`):

Fetch top comments for each post. Slower but includes discussion context. Each post requires an extra request.

## `maxComments` (type: `integer`):

Maximum number of top-level comments to include per post when includeComments is enabled.

## `proxy` (type: `object`):

Apify proxy configuration. Recommended for large runs to avoid rate limiting.

## Actor input object example

```json
{
  "subreddits": [
    "forhire",
    "freelance",
    "startups",
    "webscraping"
  ],
  "searchQueries": [
    "hiring developer",
    "need help building"
  ],
  "sort": "new",
  "timeFilter": "week",
  "maxResults": 100,
  "includeComments": false,
  "maxComments": 10
}
```

# Actor output Schema

## `posts` (type: `string`):

Dataset containing all scraped Reddit post data

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "forhire"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("lentic_clockss/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "subreddits": ["forhire"] }

# Run the Actor and wait for it to finish
run = client.actor("lentic_clockss/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```
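
Once the items are fetched, you may want to filter them client-side before further processing. A hedged sketch using field names from the Output Fields table above (`min_score` and the sample items are made up for illustration):

```python
def top_sfw_posts(items, min_score=10):
    """Drop NSFW and low-score posts, then sort by score, highest first."""
    sfw = [it for it in items
           if not it.get("isNsfw") and it.get("score", 0) >= min_score]
    return sorted(sfw, key=lambda it: it["score"], reverse=True)

items = [
    {"postId": "a1", "score": 42, "isNsfw": False},
    {"postId": "b2", "score": 7, "isNsfw": False},   # below min_score
    {"postId": "c3", "score": 99, "isNsfw": True},   # NSFW, dropped
]
print([it["postId"] for it in top_sfw_posts(items)])  # → ['a1']
```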

## CLI example

```bash
echo '{
  "subreddits": [
    "forhire"
  ]
}' |
apify call lentic_clockss/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=lentic_clockss/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper",
        "description": "Scrape Reddit posts from any subreddit — search by keyword, browse new/hot/top, get full post text and comments. No login, no API key, no browser. Fast HTTP-only.",
        "version": "0.1",
        "x-build-id": "r1LjqUmXLXQNdztAd"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/lentic_clockss~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-lentic_clockss-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/lentic_clockss~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-lentic_clockss-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/lentic_clockss~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-lentic_clockss-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "subreddits"
                ],
                "properties": {
                    "subreddits": {
                        "title": "Subreddits",
                        "type": "array",
                        "description": "Subreddit names to scrape (without r/ prefix). Example: [\"forhire\", \"freelance\", \"startups\"].",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchQueries": {
                        "title": "Search Queries",
                        "type": "array",
                        "description": "Keywords to search within each subreddit. If empty, scrapes the subreddit feed (new/hot/top) without keyword filtering.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "sort": {
                        "title": "Sort Order",
                        "enum": [
                            "new",
                            "hot",
                            "top",
                            "relevance"
                        ],
                        "type": "string",
                        "description": "How to sort results. 'new' for latest, 'hot' for trending, 'top' for highest scored, 'relevance' for search relevance (only with searchQueries).",
                        "default": "new"
                    },
                    "timeFilter": {
                        "title": "Time Filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range filter. Applies to 'top' sort and search results.",
                        "default": "week"
                    },
                    "maxResults": {
                        "title": "Max Results",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Maximum posts to return per subreddit (or per subreddit+query combination if searchQueries is set). Default 100.",
                        "default": 100
                    },
                    "includeComments": {
                        "title": "Include Comments",
                        "type": "boolean",
                        "description": "Fetch top comments for each post. Slower but includes discussion context. Each post requires an extra request.",
                        "default": false
                    },
                    "maxComments": {
                        "title": "Max Comments Per Post",
                        "minimum": 1,
                        "maximum": 100,
                        "type": "integer",
                        "description": "Maximum number of top-level comments to include per post when includeComments is enabled.",
                        "default": 10
                    },
                    "proxy": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Apify proxy configuration. Recommended for large runs to avoid rate limiting."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
