# Reddit Scraper (`scrapesmith/reddit-scraper`) Actor

Scrape Reddit posts, comments, and user profiles without API keys or login. Extract from any subreddit, keyword search, or post URL. No rate limits.

- **URL**: https://apify.com/scrapesmith/reddit-scraper.md
- **Developed by:** [Scrape Smith](https://apify.com/scrapesmith) (community)
- **Categories:** Social media
- **Stats:** 2 total users, 2 monthly users, 100% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $0.80 / 1,000 results

This Actor is paid per event and usage: you are charged a fixed price for specific events plus standard Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper — Posts, Comments, Search & Users

**Extract Reddit data without API keys, rate limits, or OAuth setup.** Scrapes public posts, comments, search results, and user profiles from any subreddit using Reddit's public JSON endpoints. No Reddit account needed.

### What does Reddit Scraper do?

Reddit Scraper lets you extract structured data from Reddit at scale — subreddit posts, full comment threads, keyword search results, and user activity history. No login, no OAuth, no API key approval needed. Paste in any Reddit URL or search keyword and get clean JSON output immediately.

### Why use Reddit Scraper?

- **Market research** — track what your target audience is saying about your product, niche, or competitors
- **Sentiment analysis** — collect posts and comments for NLP and AI pipelines
- **Content strategy** — find top-performing posts and trending topics in any community
- **Lead generation** — find users asking questions your product solves
- **Academic research** — collect public opinion data from thousands of communities
- **Brand monitoring** — track mentions across subreddits without manual searching

### How to use Reddit Scraper

1. Paste one or more Reddit URLs into **Start URLs** — subreddits, posts, user profiles, or search results all work
2. Or use the **Searches** field to search by keyword without needing a URL
3. Set your sort order, time filter, and item limits
4. Click **Run** and get your data in seconds
5. Download as JSON, CSV, or Excel from the Output tab
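
Step 5 can also be done over the API: a finished run's dataset can be exported through Apify's dataset items endpoint by choosing a `format` query parameter. A minimal sketch (the dataset ID and token shown are placeholders):

```python
# Sketch: build the export URL for a finished run's dataset.
# DATASET_ID and API_TOKEN below are placeholders, not real values.
DATASET_ID = "abc123"
API_TOKEN = "<YOUR_API_TOKEN>"

def export_url(dataset_id: str, token: str, fmt: str = "csv") -> str:
    """Return the Apify dataset items endpoint URL for the given export format."""
    return (
        f"https://api.apify.com/v2/datasets/{dataset_id}/items"
        f"?format={fmt}&token={token}"
    )

print(export_url(DATASET_ID, API_TOKEN))
```

Supported formats include `json`, `csv`, and `xlsx`; the dataset ID comes from the run object (`defaultDatasetId`), as shown in the API examples below.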

### Input

| Field | Description | Default |
|-------|-------------|---------|
| `startUrls` | Any Reddit URL (subreddit, post, user, search) | `reddit.com/r/entrepreneur` |
| `searches` | Search keywords — no URL needed | — |
| `searchCommunityName` | Restrict keyword search to a specific subreddit | — |
| `sort` | `hot`, `new`, `top`, `rising`, `controversial` | `hot` |
| `time` | `hour`, `day`, `week`, `month`, `year`, `all` | all time |
| `maxItems` | Hard cap on total items across all sources | — |
| `maxPostCount` | Max posts per subreddit or user | `100` |
| `maxComments` | Max comments per post | `100` |
| `postDateLimit` | Only scrape posts after this date (YYYY-MM-DD) | — |
| `commentDateLimit` | Only scrape comments after this date (YYYY-MM-DD) | — |
| `skipComments` | Scrape posts only, skip fetching comments | `false` |
| `skipUserPosts` | Fetch user profile only, skip their post history | `false` |
| `includeNSFW` | Include NSFW/over-18 content | `false` |

#### Input examples

**Scrape a subreddit:**
```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/entrepreneur/" }],
  "sort": "top",
  "time": "week",
  "maxPostCount": 200
}
````

**Search by keyword:**

```json
{
  "searches": ["ChatGPT alternatives", "AI tools 2025"],
  "sort": "relevance",
  "time": "month",
  "maxPostCount": 100
}
```

**Scrape a post and its comments:**

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/entrepreneur/comments/abc123/my_post/" }],
  "maxComments": 500
}
```

**Scrape a user profile:**

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/user/someusername/" }],
  "maxPostCount": 100
}
```

### Output

Each item is a post, comment, or user profile with full metadata. Download as JSON, CSV, Excel, or HTML from the Output tab.

**Sample post:**

```json
{
  "id": "t3_1rqawb9",
  "parsedId": "1rqawb9",
  "url": "https://www.reddit.com/r/Entrepreneur/comments/1rqawb9/title/",
  "username": "someuser",
  "title": "How I grew my SaaS to $10k MRR in 6 months",
  "communityName": "r/Entrepreneur",
  "body": "Here's everything I did...",
  "numberOfComments": 142,
  "upVotes": 1240,
  "upVoteRatio": 0.97,
  "flair": "Success Story",
  "isVideo": false,
  "imageUrls": [],
  "createdAt": "2026-03-10T21:58:04.000Z",
  "scrapedAt": "2026-04-11T10:00:00.000Z",
  "dataType": "post"
}
```

**Sample comment:**

```json
{
  "id": "t1_jnhqrgg",
  "parsedId": "jnhqrgg",
  "postUrl": "https://www.reddit.com/r/Entrepreneur/comments/1rqawb9/title/",
  "username": "anotheruser",
  "body": "Great writeup, thanks for sharing.",
  "upVotes": 34,
  "depth": 0,
  "communityName": "r/Entrepreneur",
  "createdAt": "2026-03-10T22:15:00.000Z",
  "dataType": "comment"
}
```

### Data fields

| Field | Description |
|-------|-------------|
| `id` | Reddit fullname ID (e.g. `t3_abc123`) |
| `parsedId` | Short post/comment ID |
| `url` | Direct URL to the post or comment |
| `username` | Author's username |
| `title` | Post title (posts only) |
| `communityName` | Subreddit (e.g. `r/entrepreneur`) |
| `body` | Post text or comment text |
| `upVotes` | Total upvotes |
| `upVoteRatio` | Upvote ratio (0–1) |
| `numberOfComments` | Comment count |
| `flair` | Post flair label |
| `isVideo` | Whether the post contains a video |
| `videoUrl` | Direct video URL if available |
| `imageUrls` | Array of image URLs |
| `createdAt` | Creation timestamp (ISO 8601) |
| `dataType` | `post`, `comment`, or `user` |

### FAQ

**Do I need a Reddit account?**
No. This scraper uses Reddit's public JSON endpoints — the same data anyone sees without logging in.

**How many results can I get?**
Up to ~1,000 posts per subreddit per sort order. Reddit's API paginates to about 1,000 items. For broader coverage, use multiple time filters (day, week, month, year) or multiple subreddit URLs.
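
The time-filter workaround can be automated by fanning one subreddit out into several run inputs, one per time filter (field names match this Actor's input schema):

```python
def inputs_per_time_filter(subreddit_url, time_filters=("day", "week", "month", "year")):
    """Build one Actor input per time filter to widen coverage past the ~1,000-item cap."""
    return [
        {
            "startUrls": [{"url": subreddit_url}],
            "sort": "top",
            "time": t,
            "maxPostCount": 1000,
        }
        for t in time_filters
    ]

runs = inputs_per_time_filter("https://www.reddit.com/r/entrepreneur/")
print(len(runs))  # → 4, one input per time filter
```

Each input can then be passed to a separate Actor run; deduplicate results on `parsedId` afterwards, since the time windows overlap.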

**What URL types are supported?**
Any public Reddit URL works: subreddit homepages, individual posts, user profiles, and search result pages. Just paste the URL as-is.

**Is scraping Reddit legal?**
This Actor only accesses publicly available data — the same data anyone can view without logging in. Always use data responsibly and in compliance with applicable laws.

**Something broken?**
Open an issue on this Actor's page and we'll fix it within 24 hours.

# Actor input Schema

## `startUrls` (type: `array`):

Paste any Reddit URL — subreddit, post, user profile, or search page. The scraper auto-detects the type.

## `searches` (type: `array`):

Search Reddit by keyword without needing a URL. Each keyword runs as a separate search. Leave empty if using Start URLs.

## `searchCommunityName` (type: `string`):

Optional. If set, keyword searches will be scoped to this subreddit only (e.g. 'entrepreneur'). Only used when Searches is filled.
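
For example, to run a keyword search scoped to a single subreddit:

```json
{
  "searches": ["pricing feedback"],
  "searchCommunityName": "entrepreneur",
  "sort": "new",
  "maxPostCount": 50
}
```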

## `sort` (type: `string`):

Sort order for subreddit posts. Hot = trending now. New = most recent. Top = highest voted. Rising = gaining traction.

## `time` (type: `string`):

Filter results by time range. Most useful when Sort is set to Top.

## `maxItems` (type: `integer`):

Hard cap on the total number of items saved across all URLs and searches combined. Leave empty for no global limit.

## `maxPostCount` (type: `integer`):

Maximum number of posts to scrape from each subreddit, search query, or user.

## `maxComments` (type: `integer`):

Maximum number of comments to scrape when visiting a post URL. Set to 0 to skip comments entirely.

## `postDateLimit` (type: `string`):

Only scrape posts published after this date (YYYY-MM-DD). Automatically sets sort to New.

## `commentDateLimit` (type: `string`):

Only scrape comments published after this date (YYYY-MM-DD).
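
Both date limits compare against each item's `createdAt` timestamp. The equivalent client-side filter (a sketch for post-processing, not this Actor's internal code) looks like:

```python
from datetime import datetime, timezone

def after_date(items, date_limit: str):
    """Keep items created on or after date_limit (YYYY-MM-DD), using createdAt."""
    cutoff = datetime.strptime(date_limit, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    kept = []
    for item in items:
        # createdAt is ISO 8601 with a trailing Z, e.g. "2026-03-10T21:58:04.000Z"
        created = datetime.fromisoformat(item["createdAt"].replace("Z", "+00:00"))
        if created >= cutoff:
            kept.append(item)
    return kept

sample = [
    {"id": "a", "createdAt": "2026-03-10T21:58:04.000Z"},
    {"id": "b", "createdAt": "2025-12-01T00:00:00.000Z"},
]
print([i["id"] for i in after_date(sample, "2026-01-01")])  # → ['a']
```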

## `skipComments` (type: `boolean`):

When enabled, only posts are scraped — comments are not fetched. Speeds up subreddit scraping significantly.

## `skipUserPosts` (type: `boolean`):

When enabled, only the user profile object is returned — their post history is not fetched.

## `includeNSFW` (type: `boolean`):

When enabled, over-18 and NSFW posts are included in results.

## Actor input object example

```json
{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/entrepreneur/"
    }
  ],
  "searches": [],
  "searchCommunityName": "",
  "sort": "hot",
  "time": "",
  "maxPostCount": 100,
  "maxComments": 100,
  "skipComments": false,
  "skipUserPosts": false,
  "includeNSFW": false
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        {
            "url": "https://www.reddit.com/r/entrepreneur/"
        }
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapesmith/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "startUrls": [{ "url": "https://www.reddit.com/r/entrepreneur/" }] }

# Run the Actor and wait for it to finish
run = client.actor("scrapesmith/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/entrepreneur/"
    }
  ]
}' |
apify call scrapesmith/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapesmith/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper",
        "description": "Scrape Reddit posts, comments, and user profiles without API keys or login. Extract from any subreddit, keyword search, or post URL. No rate limits.",
        "version": "0.0",
        "x-build-id": "2WsVZsO6eQa78qePC"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapesmith~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapesmith-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapesmith~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scrapesmith-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapesmith~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scrapesmith-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "startUrls": {
                        "title": "Start URLs",
                        "type": "array",
                        "description": "Paste any Reddit URL — subreddit, post, user profile, or search page. The scraper auto-detects the type.",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "searches": {
                        "title": "Search keywords",
                        "type": "array",
                        "description": "Search Reddit by keyword without needing a URL. Each keyword runs as a separate search. Leave empty if using Start URLs.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchCommunityName": {
                        "title": "Restrict search to subreddit",
                        "type": "string",
                        "description": "Optional. If set, keyword searches will be scoped to this subreddit only (e.g. 'entrepreneur'). Only used when Searches is filled.",
                        "default": ""
                    },
                    "sort": {
                        "title": "Sort",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Sort order for subreddit posts. Hot = trending now. New = most recent. Top = highest voted. Rising = gaining traction.",
                        "default": "hot"
                    },
                    "time": {
                        "title": "Time filter",
                        "enum": [
                            "",
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Filter results by time range. Most useful when Sort is set to Top.",
                        "default": ""
                    },
                    "maxItems": {
                        "title": "Max total items",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Hard cap on the total number of items saved across all URLs and searches combined. Leave empty for no global limit."
                    },
                    "maxPostCount": {
                        "title": "Max posts per source",
                        "minimum": 1,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "Maximum number of posts to scrape from each subreddit, search query, or user.",
                        "default": 100
                    },
                    "maxComments": {
                        "title": "Max comments per post",
                        "minimum": 0,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "Maximum number of comments to scrape when visiting a post URL. Set to 0 to skip comments entirely.",
                        "default": 100
                    },
                    "postDateLimit": {
                        "title": "Post date limit",
                        "type": "string",
                        "description": "Only scrape posts published after this date (YYYY-MM-DD). Automatically sets sort to New."
                    },
                    "commentDateLimit": {
                        "title": "Comment date limit",
                        "type": "string",
                        "description": "Only scrape comments published after this date (YYYY-MM-DD)."
                    },
                    "skipComments": {
                        "title": "Skip comments",
                        "type": "boolean",
                        "description": "When enabled, only posts are scraped — comments are not fetched. Speeds up subreddit scraping significantly.",
                        "default": false
                    },
                    "skipUserPosts": {
                        "title": "Skip user posts",
                        "type": "boolean",
                        "description": "When enabled, only the user profile object is returned — their post history is not fetched.",
                        "default": false
                    },
                    "includeNSFW": {
                        "title": "Include NSFW content",
                        "type": "boolean",
                        "description": "When enabled, over-18 and NSFW posts are included in results.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
