# Reddit MCP Server — Claude, ChatGPT, Cursor, Codex (`makework36/reddit-mcp-server`) Actor

Native Reddit MCP server for AI agents. 7 Reddit tools (search, subreddits, posts+comments, users, trending) over Streamable HTTP. Works with Claude Desktop, Cursor, ChatGPT, OpenAI Codex, Agents SDK, Windsurf. No Reddit API key. Pay per tool call.

- **URL**: https://apify.com/makework36/reddit-mcp-server.md
- **Developed by:** [deusex machine](https://apify.com/makework36) (community)
- **Categories:** MCP servers, AI, Social media
- **Stats:** 1 total user, 1 monthly user, 0.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

From $20.00 / 1,000 MCP tool calls

This Actor uses pay-per-event pricing: you are charged a fixed price for specific events plus Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit MCP Server — Native Model Context Protocol for Claude, ChatGPT, Cursor and Codex

> ⭐ **Useful?** [Leave a review](https://apify.com/makework36/reddit-mcp-server/reviews) — it takes 10 seconds and is the single biggest thing that helps other AI engineers and agent builders find this Reddit MCP server.

Give **Claude Desktop**, **ChatGPT**, **Cursor**, **OpenAI Codex**, the **OpenAI Agents SDK**, **Windsurf**, **Continue.dev**, **Zed**, **n8n**, **LangChain**, **LlamaIndex** and any other MCP-compatible agent a production Reddit toolbox — no Reddit API key, no local process, no `npx` middleware running on the user's laptop, no OAuth dance. Seven first-class Reddit tools are exposed over the **Model Context Protocol (MCP)** using **Streamable HTTP**, hosted on [Apify Standby](https://docs.apify.com/platform/actors/running/standby). You connect once via URL, your agent discovers every tool automatically, and you pay only for successful tool calls. Read the official [MCP specification](https://modelcontextprotocol.io/specification) for background.

This is a native MCP server, not a wrapper or proxy. Real JSON-RPC 2.0 over HTTP POST at `/mcp`, protocol version `2025-06-18`. The underlying scraper has been in production against `old.reddit.com` since 2024 with residential proxies and a stealth-hardened browser — so the transport changed, but the reliability did not.

### What this Reddit MCP server does

Given an MCP client (Claude Desktop, Cursor, ChatGPT, Codex, Windsurf, LangChain, a custom agent, anything) and a URL, this actor turns Reddit into a typed, callable toolbox. The agent sends `tools/list` on first connect, discovers the seven Reddit tools, and then calls `tools/call` with a structured argument object whenever it wants to search Reddit, pull a subreddit listing, fetch a post with full comments, profile a user, look up subreddit metadata or discover trending communities.

Every Reddit fetch is performed inside Apify's container, through a residential-proxy-routed stealth browser targeting `old.reddit.com`'s JSON endpoints. Images, fonts, CSS and media are blocked at the request-interception layer so each page load is only the JSON payload. The response is transformed into a flat, documented JSON shape — one level of nesting, ISO 8601 timestamps, no undefined vs null ambiguity — before being handed back to the agent as an MCP `content` block.

No Reddit developer account. No OAuth app. No refresh-token rotation. No 100-request-per-minute official-API ceiling.

### Why this actor exists

- **Native MCP, not a wrapper.** Real JSON-RPC 2.0 over HTTP POST at `/mcp`. No extra proxy process, no Smithery adapter, no `npx` dependency to keep updated on every end-user machine.
- **No Reddit API key required.** Reddit tightened free-tier access and killed many third-party clients in 2023. This actor talks to Reddit's publicly-accessible JSON endpoints instead, over residential IPs, so your agent is not gated by the official API quota.
- **Built for agents, not dashboards.** Tool schemas are strict, responses are flat, field names are stable. An LLM can reason over the response in one pass and cite post IDs, permalinks and author handles without hallucinating URLs.
- **Pay-per-tool-call.** $0.02 per successful tool call. No subscription, no seat fee. Idle MCP connections cost nothing.
- **No cold start for your users.** Apify Standby keeps a warm container alive so `tools/call` latency is dominated by the Reddit fetch, not Actor boot time.

### The 7 Reddit MCP tools

All tools return a single JSON object. Field names match the schemas exposed via `tools/list` — no undocumented fields, no breaking renames.

#### 1. `search_reddit`

Global or subreddit-scoped Reddit search.

- `query` (string, required)
- `subreddit` (string, optional — restricts to one sub, `restrict_sr=on`)
- `sort` — `relevance` (default), `hot`, `top`, `new`, `comments`
- `timeFilter` — `hour`, `day`, `week`, `month`, `year`, `all` (default)
- `limit` — 1–250, default 25

#### 2. `get_subreddit_posts`

Fetch a listing from a subreddit with pagination handled automatically.

- `subreddit` (string, required) — with or without the `r/` prefix
- `sort` — `hot` (default), `new`, `top`, `rising`, `controversial`
- `timeFilter` — applies when `sort` is `top` or `controversial`
- `limit` — 1–250, default 25

#### 3. `get_post_with_comments`

Fetch a single Reddit post plus the **entire flattened comment tree**. Reddit imposes no server-side depth cap, so this tool enforces one via `commentLimit`.

- `postId` (string) — with or without the `t3_` prefix
- `subreddit` (string, recommended when using `postId`)
- `url` (string, optional) — full Reddit URL, overrides `postId` + `subreddit` when set
- `commentSort` — `confidence` (default), `top`, `new`, `controversial`, `old`, `qa`
- `commentLimit` — 0–500, default 100 (0 means unlimited up to 500)

#### 4. `get_user_posts`

Submissions by a user.

- `username` (string, required)
- `sort` — `new` (default), `hot`, `top`, `controversial`
- `timeFilter` — applies when `sort` is `top` or `controversial`
- `limit` — 1–250, default 25

#### 5. `get_user_comments`

Comments by a user, each enriched with the parent post title and permalink for context.

- `username` (string, required)
- `sort` — `new` (default), `hot`, `top`, `controversial`
- `timeFilter` — applies when `sort` is `top` or `controversial`
- `limit` — 1–500, default 25

#### 6. `get_subreddit_info`

Metadata for a subreddit: description, subscribers, active users, creation date, submission rules, NSFW flag, icon, banner.

- `subreddit` (string, required)

#### 7. `get_trending_subreddits`

Discover popular, new or default subreddits.

- `category` — `popular` (default), `new`, `default`
- `limit` — 1–100, default 25
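Concretely, a `tools/call` request is a plain JSON-RPC 2.0 envelope wrapping the tool name and a structured argument object. A minimal Python sketch (the argument values are illustrative; the field names follow the `search_reddit` schema above):

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Wrap an MCP tool invocation in a JSON-RPC 2.0 envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Example: subreddit-scoped search, mirroring the search_reddit schema
payload = build_tool_call("search_reddit", {
    "query": "local llama inference",
    "subreddit": "LocalLLaMA",
    "sort": "top",
    "timeFilter": "week",
    "limit": 10,
})
print(json.dumps(payload, indent=2))

# To send it (requires the `requests` package and a real token):
# import requests
# r = requests.post(
#     "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN",
#     json=payload,
#     headers={"Content-Type": "application/json"},
# )
```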

### Use cases for this Reddit MCP server

- **Research assistants** — Build a Claude or ChatGPT workflow that takes a topic, searches multiple subreddits, pulls full comment trees, clusters the arguments and returns a cited summary. `search_reddit` + `get_post_with_comments` is the whole backend.
- **Sentiment tracking on your product** — Set up a scheduled agent that monitors `r/startups`, `r/saas`, `r/selfhosted` and your brand's dedicated sub for mentions, pulls the thread context, and files tickets when something looks like churn risk.
- **Social listening for marketers** — Feed a nightly agent the prompt "what did /r/ProductManagement discuss this week" and let it produce a digest with clickable permalinks.
- **Lead intelligence for B2B sales** — Profile a Reddit user your prospect mentions, read their last 12 months of comments, and draft a pre-call brief — without ever leaving Cursor.
- **Moderation and compliance tooling** — Agents that scan flagged threads, quote the context, and draft responses for human review.
- **Dataset builders for ML teams** — Combine `search_reddit` + `get_post_with_comments` in a LangChain or LlamaIndex pipeline to assemble fine-tuning datasets (text + metadata) for domain-specific LLMs.

### How to connect this Reddit MCP server

Your MCP endpoint is:

```
https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN
```

Replace `APIFY_TOKEN` with a [personal API token](https://console.apify.com/settings/integrations) (read scope is enough). The server also accepts the token in an `Authorization: Bearer <token>` header — that is sometimes cleaner if your MCP client supports arbitrary headers.
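Before wiring up a full client, a one-shot `tools/list` call is a quick sanity check. A sketch using only Python's standard library, with the token passed in the `Authorization` header rather than the URL (it assumes a valid token in the `APIFY_TOKEN` environment variable and a plain-JSON response; some Streamable HTTP servers also want `text/event-stream` in the `Accept` header):

```python
import json
import os
import urllib.request

ENDPOINT = "https://makework36--reddit-mcp-server.apify.actor/mcp"

def list_tools_request(token: str) -> urllib.request.Request:
    """Build a tools/list POST authenticated via Bearer header."""
    body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

if __name__ == "__main__":
    req = list_tools_request(os.environ["APIFY_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        tools = json.load(resp)["result"]["tools"]
        print([t["name"] for t in tools])  # expect the seven Reddit tools
```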

What follows are copy-paste configs for each major MCP client.

### How to use with Claude Desktop

Edit `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS or `%APPDATA%\Claude\claude_desktop_config.json` on Windows:

```json
{
  "mcpServers": {
    "reddit": {
      "url": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN"
    }
  }
}
```

Restart Claude Desktop. Then ask: *"What is trending on r/technology today? Include the top comments on the highest-scoring post."* Claude will discover and call the tools automatically.

### How to use with Claude Code (CLI)

One-liner:

```bash
claude mcp add reddit https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN
```

Or drop the same JSON block into `~/.claude.json` under `mcpServers`. Claude Code inherits the MCP server for every project.

### How to use with Cursor

Settings → *MCP* → *Add new MCP server*:

```json
{
  "mcpServers": {
    "reddit": {
      "url": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN"
    }
  }
}
```

Reload Cursor, open an empty composer tab, and ask Cursor to "search r/ClaudeAI for opinions on Opus 4.7 vs Sonnet 4.6 and summarize the top 10 posts of the last month." Cursor will discover the tools and stream the results back inline.

### How to use with Windsurf / Codeium

`~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "reddit": {
      "serverUrl": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN"
    }
  }
}
```

### How to use with ChatGPT Desktop (Custom Connector)

Open *Settings* → *Beta Features* → enable **Custom Connectors (MCP)**, then add a connector:

- Name: `Reddit`
- URL: `https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN`

ChatGPT will expose the 7 tools as function calls under a single connector.

### How to use with OpenAI Codex CLI

Codex CLI supports MCP servers via `~/.codex/config.toml`:

```toml
[mcp_servers.reddit]
url = "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN"
```

Restart Codex and the tools will show up in the command palette.

### How to use with OpenAI Agents SDK

Python:

```python
from agents.mcp import MCPServerStreamableHttp

reddit = MCPServerStreamableHttp(
    name="reddit",
    params={"url": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN"},
)
# pass `reddit` into Agent(mcp_servers=[reddit])
```

TypeScript:

```ts
import { MCPServerStreamableHttp } from "@openai/agents";

const reddit = new MCPServerStreamableHttp({
  name: "reddit",
  url: "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN",
});
```

### How to use with the OpenAI Assistants API

Reference the MCP server as a tool in your Assistants API call:

```json
{
  "type": "mcp",
  "server_url": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN",
  "server_label": "reddit"
}
```

### How to use with Continue.dev

`~/.continue/config.yaml`:

```yaml
mcpServers:
  - name: reddit
    type: streamable-http
    url: https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN
```

### How to use with Zed

`~/.config/zed/settings.json`:

```json
{
  "context_servers": {
    "reddit": {
      "command": null,
      "url": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN"
    }
  }
}
```

### How to use with n8n

Use the **MCP Client Tool** node. Set the endpoint URL to the actor's `/mcp` URL with your token as the query parameter, select the tool to invoke, map the input fields from the previous node. No custom credential type required.

### How to use with LangChain (Python)

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "reddit": {
        "transport": "streamable_http",
        "url": "https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN",
    }
})

tools = await client.get_tools()
```

All seven Reddit MCP tools are now available to any LangChain agent, graph or chain.

### How to use with raw JSON-RPC (curl)

```bash
curl -s https://makework36--reddit-mcp-server.apify.actor/mcp?token=APIFY_TOKEN \
  -H 'content-type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```

Useful for debugging, CI smoke tests or building your own MCP client from scratch. Read the [JSON-RPC 2.0 spec](https://www.jsonrpc.org/specification) for wire-level details.

### Example agent prompts this server enables

Real prompts that route through multiple tools without human orchestration:

- *"Research what r/LocalLLaMA thinks about running Llama 4 locally. Summarize the top 5 threads of the past week with the most upvoted counterargument in each."* → `search_reddit` + `get_post_with_comments` ×5
- *"Find Reddit discussions comparing Cursor and a rival coding-agent IDE, pull the top comments, produce a sentiment breakdown with quotes."* → `search_reddit` + `get_post_with_comments`
- *"Profile Reddit user `spez` over the past year: submission patterns, subreddits they comment in, controversial takes."* → `get_user_posts` + `get_user_comments`
- *"Which new subreddits from the past month are worth watching for startup signal?"* → `get_trending_subreddits` + `get_subreddit_info` ×N
- *"What is blowing up on r/news in the last hour? Give me the post with the most polarized comments."* → `get_subreddit_posts` (sort=`new`, timeFilter=`hour`) + `get_post_with_comments` (sort=`controversial`)

Because the tools return structured JSON, the agent can cite post IDs, permalinks, author handles and timestamps verbatim — no hallucinated URLs.

### Output example

A truncated response from `get_post_with_comments`:

```json
{
  "post": {
    "id": "1abc234",
    "subreddit": "LocalLLaMA",
    "title": "Running Llama 4 on a Mac Studio M3 Ultra",
    "author": "some_user",
    "score": 1240,
    "upvoteRatio": 0.96,
    "createdAt": "2026-04-18T14:22:11.000Z",
    "permalink": "https://www.reddit.com/r/LocalLLaMA/comments/1abc234/...",
    "selftext": "After a week of tuning, I got Llama 4 70B running at 22 tok/s on the M3 Ultra..."
  },
  "comments": [
    {
      "id": "jxyz98",
      "parentId": "t3_1abc234",
      "author": "llm_pro",
      "score": 420,
      "createdAt": "2026-04-18T14:40:02.000Z",
      "body": "22 tok/s on 70B is wild. What quantization?",
      "depth": 0
    }
  ],
  "scrapedAt": "2026-04-22T10:01:02.118Z"
}
```

All timestamps are ISO 8601. Field names are stable across versions.
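Because every timestamp is an ISO 8601 string with a trailing `Z`, converting to timezone-aware `datetime` objects takes one small helper (Python 3.11+ parses the `Z` suffix directly; the `replace` below keeps older versions working):

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    """Parse ISO 8601 timestamps like '2026-04-18T14:22:11.000Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Using the createdAt / scrapedAt values from the sample response above
created = parse_ts("2026-04-18T14:22:11.000Z")
age = parse_ts("2026-04-22T10:01:02.118Z") - created
print(created.tzinfo, age.days)  # → UTC 3
```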

### Pricing

**$0.02 per successful tool call.** That is the entire pricing model.

- `initialize`, `tools/list`, `ping` and notifications are **free** — only the events that actually hit Reddit cost money.
- Failed tool calls (Reddit 404, rate-limit, network error after retries) are **not billed**.
- Long idle MCP connections (the agent loaded the server but did not call anything yet) cost **nothing**.
- Standby compute and residential proxy bandwidth are included — no separate line items.

#### Plan guidance

| Apify plan | Recommended for | Notes |
|-----------|-----------------|-------|
| **FREE** (trial credit) | First-time evaluation, personal AI agents | ~$5 credit → ~250 tool calls to evaluate |
| **STARTER** | Personal agents, side projects, research bots | Monthly credit + proxy bandwidth included |
| **SCALE** | Production agents, moderate-traffic SaaS | Higher concurrency, more proxy bandwidth |
| **BUSINESS** | Enterprise agent products, internal tools at scale | SLAs, priority support |
| **ENTERPRISE / DIAMOND** | Large AI companies, LLM labs, high-volume agent platforms | Dedicated resources, custom terms |

**Comparison you should actually do:** price a DIY setup (Reddit OAuth app, rotating datacenter proxies to survive 429s, a server to host an MCP bridge, monitoring, refresh-token rotation) against $0.02 per answered question. For most agent workloads this actor saves weeks of setup and tens of dollars a month in proxy bills.

Token-level cost is a rounding error: even a heavy research agent making 500 Reddit calls per day lands at $10/day. Most assistants burn 10–50 calls per session, i.e. $0.20–$1.00.
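That arithmetic in one helper (the $0.02 rate comes from the pricing above; free protocol events and failed calls are excluded by definition):

```python
RATE_PER_CALL = 0.02  # USD per successful tool call

def daily_cost(successful_calls: int) -> float:
    """Estimated daily spend for a given number of billed tool calls."""
    return round(successful_calls * RATE_PER_CALL, 2)

print(daily_cost(500))  # heavy research agent: 10.0 USD/day
print(daily_cost(50))   # busy assistant session: 1.0 USD
```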

### Reddit MCP server comparison — how this compares to alternatives

Several Reddit integrations exist for AI agents. Here is how this native MCP server stacks up on the dimensions that matter most in production. All alternatives are anonymized because pricing models and transport layers change frequently.

| Feature | This MCP server | Official Reddit API + OAuth app | Reddit scraper with residential proxy | Local `npx` Reddit MCP bridge |
|---------|-----------------|-----------------------------------|----------------------------------------|--------------------------------|
| Setup time | **30 seconds** (paste URL) | Hours (OAuth app + refresh tokens) | Minutes (key + proxy config) | Minutes (install, update, restart) |
| Reddit API key required | **No** | Yes | Usually no | Sometimes |
| Transport | **Streamable HTTP + JSON-RPC 2.0** | REST | REST / custom | Local stdio |
| Works on Claude, Cursor, ChatGPT, Codex, Windsurf, Zed, LangChain, n8n | **Yes — all of them** | No — you must wrap it | No — you must wrap it | Claude Desktop only or limited |
| Rate limits | **No per-minute ceiling** | 100 req/min (free tier) | Depends on proxy quota | Depends on proxy quota |
| Residential proxy included | **Yes** | No | Optional, extra cost | Optional, extra cost |
| Cold start per session | **Warm container (Standby)** | N/A | Warm-up depends on host | Depends on local machine |
| Pay-per-success billing | **Yes ($0.02/call, failures free)** | Free but gated by quota | Typically per scrape | Typically flat |
| Output shape stable across versions | **Yes (documented schemas)** | Yes | Varies | Varies |
| Comment tree flattening and depth cap | **Yes (`commentLimit`)** | Manual | Manual | Manual |
| Works with OpenAI Assistants / Agents SDK | **Yes** | Requires wrapper | Requires wrapper | No (stdio only) |

The honest take: if you already have a Reddit OAuth app, production proxy infrastructure and an MCP bridge you maintain, keep using it. If you want an MCP server you can hand to any AI client with a single URL, this one is the fastest path.

### Architecture notes

For readers who care about the stack under the hood:

- **Transport.** JSON-RPC 2.0 over HTTP POST. Batching supported. `notifications/initialized` returns `202 Accepted`. Protocol version `2025-06-18`.
- **Runtime.** Apify Standby mode. Node.js 20 on `apify/actor-node-puppeteer-chrome:20`. Chrome + puppeteer-extra with the stealth plugin.
- **Fetch layer.** Requests hit a warm Puppeteer browser against `old.reddit.com`. Images, fonts, CSS and media are blocked at the request-interception layer, so each page load is only the JSON response body.
- **Proxy.** `RESIDENTIAL` Apify Proxy by default, with a graceful fallback to the default Apify Proxy group. Reddit aggressively 403s datacenter IPs, so residential is non-negotiable for reliability.
- **Retry policy.** 3 attempts per fetch. 404 short-circuits (propagated as a tool error). 403 / 5xx rotates the browser session and retries.
- **Response shape.** Every tool returns one JSON object. All timestamps are ISO 8601 strings. No `null` vs `undefined` ambiguity. Text fields that can be huge (`selftext`, comment `body`, subreddit `description`) are sliced to sane caps so the response fits comfortably in a 200 KB LLM context window.

### Step-by-step tutorial — your first Reddit MCP agent in 3 minutes

1. **Sign up for Apify** — go to [apify.com](https://apify.com) and create a free account. You get a $5 trial credit, good for ~250 tool calls.
2. **Grab a token** — [Apify Console → Settings → Integrations](https://console.apify.com/settings/integrations) → *Create API token*. Name it "Reddit MCP".
3. **Pick your client** — use Claude Desktop for the smoothest first-run experience. Any of the clients above work.
4. **Paste the config** — copy the Claude Desktop JSON block from earlier in this README, replace `APIFY_TOKEN` with your real token, and save.
5. **Restart the client** — Claude Desktop picks up the new MCP server on launch.
6. **Ask a Reddit question** — *"Top 5 threads in r/ChatGPT this week, plus the most-upvoted comment per thread."* The agent will discover the 7 tools, plan which ones to call, and return cited results.
7. **Check usage** — [Apify Console → Billing](https://console.apify.com/billing) shows your per-event cost so far.

### Advanced usage patterns

#### Pattern 1 — weekly community-sentiment digest

Schedule an agent to run every Monday at 08:00. Call `get_subreddit_posts` for the five subs your product cares about, sort `top` with `timeFilter=week`, pull the top 10 posts from each, fetch comments via `get_post_with_comments`, and generate a summary. Email it to the PM team.

#### Pattern 2 — crisis detection for brand

Every 15 minutes, call `search_reddit` with your brand name as the query, sort `new`, `timeFilter=hour`. If any post crosses a virality threshold (score > 200 in the first hour), fetch the full comment tree and route it to an on-call Slack channel with the `permalink` baked in.
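The virality check in this pattern is a pure function over the flat post objects the tools return. A sketch, assuming the `score` and `createdAt` fields shown in the output example (the 200-score-in-the-first-hour threshold is the one suggested above; tune it for your brand's baseline volume):

```python
from datetime import datetime, timezone

def is_viral(post: dict, score_threshold: int = 200, max_age_hours: float = 1.0) -> bool:
    """Flag posts that crossed the score threshold within their first hour."""
    created = datetime.fromisoformat(post["createdAt"].replace("Z", "+00:00"))
    age_hours = (datetime.now(timezone.utc) - created).total_seconds() / 3600
    return post["score"] > score_threshold and age_hours <= max_age_hours

# For each flagged post, fetch the full thread via get_post_with_comments
# and include post["permalink"] in the Slack alert.
```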

#### Pattern 3 — competitor user research

Pick a public Reddit user who is highly active in a niche you care about. Run `get_user_posts` + `get_user_comments` once a month, store the delta in a database, and you have a longitudinal record of what a domain expert thinks — great raw material for content, customer interviews and PMF research.

#### Pattern 4 — long-tail keyword discovery for SEO

Call `search_reddit` on your seed keyword, fetch top posts, pull comments, and extract noun phrases. Reddit's natural-language questions are an excellent source of long-tail search intent that never appears in keyword-tool databases. Feed this into an AI-SEO content planner.

#### Pattern 5 — evaluation set for RAG pipelines

Use `search_reddit` + `get_post_with_comments` to assemble Q&A-style pairs from high-quality subreddits (e.g. `r/explainlikeimfive`, `r/askscience`). Flatten post→top-answer pairs into an evaluation dataset for your RAG system. Because the response shape is flat, this transform is a few lines of Python.

### Troubleshooting

**No tools appeared in Claude Desktop after adding the server**
Check the MCP log pane: *Help* → *Logs* → *MCP*. Usually it is a quoting issue in `claude_desktop_config.json` or a missing `token=` query parameter. Hit `GET /` on the actor URL in a browser — it returns a JSON manifest listing the 7 tools. If you see them there, the server is healthy.

**I get a 401 / Unauthorized**
Your `APIFY_TOKEN` either expired, was never pasted in, or was scoped without access to this actor. Generate a fresh personal token from [Apify Console → Settings → Integrations](https://console.apify.com/settings/integrations) — the default scope includes actor calls.

**The first tool call takes ~10 seconds**
That is the Standby container warming up (browser launch plus first residential proxy handshake). Subsequent calls within the same Standby window are typically 1–3 seconds each.

**I am seeing `Reddit 403` errors**
Reddit is actively blocking the current residential exit IP. The actor will rotate and retry 3 times automatically. If it persistently fails on one sub, that sub may be quarantined or private. `get_subreddit_info` will make that explicit.

**Can I run this without a browser? It feels heavy.**
The browser layer exists because pure-fetch calls to Reddit's JSON endpoints fail on ~30% of residential IPs. Stealth Puppeteer is what brings reliability from "flaky" to "production". If you need raw HTTP calls to Reddit without MCP, use the [`makework36/reddit-scraper`](https://apify.com/makework36/reddit-scraper) actor instead in scheduled-scrape mode.

**My agent keeps calling the same tool in a loop**
MCP clients do not enforce tool-call budgets by default. Add a system prompt in your agent that caps tool calls per turn (e.g. "use at most 5 Reddit tool calls per user message") or enforce a budget at the orchestration layer.
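Enforcing that budget at the orchestration layer can be as simple as a counting wrapper around your client's call function. A sketch (`call_tool` is a stand-in for whatever `tools/call` wrapper your MCP client exposes):

```python
class ToolBudgetExceeded(RuntimeError):
    pass

class BudgetedCaller:
    """Cap the number of tool calls allowed per user turn."""

    def __init__(self, call_tool, max_calls: int = 5):
        self._call_tool = call_tool  # your MCP client's tools/call wrapper
        self.max_calls = max_calls
        self.used = 0

    def reset(self) -> None:
        """Call at the start of each user message."""
        self.used = 0

    def __call__(self, name: str, arguments: dict):
        if self.used >= self.max_calls:
            raise ToolBudgetExceeded(f"tool-call budget of {self.max_calls} exhausted")
        self.used += 1
        return self._call_tool(name, arguments)
```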

### FAQ

**Do I need a Reddit account or a Reddit API key?**
No. This MCP server fetches Reddit's public JSON endpoints using residential IPs. No OAuth, no developer app, no refresh tokens.

**Is scraping Reddit legal?**
This server accesses only publicly visible data — the same content any user sees on reddit.com. We do not bypass logins, quarantined-community warnings or private communities. As with any scraping, consult legal counsel for your specific use case and jurisdiction.

**What MCP clients are officially supported?**
Claude Desktop, Claude Code, Cursor, ChatGPT Desktop (Custom Connectors), OpenAI Codex CLI, OpenAI Agents SDK (Python and TypeScript), OpenAI Assistants API, Windsurf / Codeium, Continue.dev, Zed, n8n, LangChain and LlamaIndex via their respective MCP adapters. Any MCP client that speaks Streamable HTTP + JSON-RPC 2.0 will work.

**What is the MCP protocol version?**
`2025-06-18`. The server advertises this in its `initialize` response.

**How fresh is the data?**
Real-time. Every `tools/call` hits Reddit live. Nothing is cached between calls unless your client adds its own caching.

**Can I combine this with other MCP servers?**
Yes. MCP clients are designed to host multiple servers simultaneously — Reddit, your file system, your database, a web search server, and this one. Each server has its own tool namespace.

**What happens if Reddit changes its HTML?**
The server parses Reddit's JSON endpoints, not HTML. JSON contract changes are much rarer than HTML changes. When they do happen, the actor is updated without breaking your MCP client — just retry.

**Can I whitelist specific tools per user?**
Not at the server level in v1. Enforce it in your agent: give the agent a system prompt that allowlists the subset of tools you want exposed.

**Do you support rate-limiting per caller?**
Rate limits are enforced by Apify Standby itself (per-actor concurrency) and by Apify's proxy rotation. If you need stricter per-user quotas, add an API gateway or proxy in front of the MCP URL.

**Do you have related scrapers for other social platforms?**
Yes. See *Related scrapers* — same account publishes scrapers for Trustpilot, Airbnb, Booking.com, flights and hotels.

### Changelog

- **v1.0.0** (2026-04-21) — Initial public release. Apify Standby MCP server over Streamable HTTP. JSON-RPC 2.0, protocol version `2025-06-18`. Seven Reddit tools: `search_reddit`, `get_subreddit_posts`, `get_post_with_comments`, `get_user_posts`, `get_user_comments`, `get_subreddit_info`, `get_trending_subreddits`. Residential proxy, stealth Puppeteer against `old.reddit.com` JSON endpoints. Pay-per-event pricing at $0.02 per successful tool call.

### Related scrapers

- [Reddit Scraper — Classic Actor Interface](https://apify.com/makework36/reddit-scraper) — Same transform pipeline as this MCP server, but exposed as a traditional Apify input/output actor for scheduled bulk scraping.
- [Trustpilot Reviews Scraper](https://apify.com/makework36/trustpilot-reviews-scraper) — Reviews, ratings and business search.
- [Flight Price Scraper — Multi-source](https://apify.com/makework36/flight-price-scraper) — Flight prices across 7 sources.
- [Fast Airbnb Price Scraper](https://apify.com/makework36/fast-airbnb-price-scraper) — Fast HTTP Airbnb scraper with GPS coords.
- [Airbnb MCP Server](https://apify.com/makework36/airbnb-mcp-server) — Airbnb data as MCP tools for Claude, Cursor, Codex.
- [Hotel Price Scraper](https://apify.com/makework36) — Hotel rates from multiple sources in one run.

### Support, issues, references

- Changelog: see `CHANGELOG.md` in this repo.
- Report issues or request new tools: [Issues tab](https://apify.com/makework36/reddit-mcp-server/issues) on the actor page.
- MCP specification: [modelcontextprotocol.io/specification](https://modelcontextprotocol.io/specification).
- JSON-RPC 2.0 specification: [jsonrpc.org/specification](https://www.jsonrpc.org/specification).
- Apify Standby documentation: [docs.apify.com/platform/actors/running/standby](https://docs.apify.com/platform/actors/running/standby).

### Legal and ethics note

This actor accesses only publicly visible Reddit content through Reddit's own JSON endpoints. It does not bypass private communities, user authentication, NSFW gates tied to Reddit accounts, or paid-only subreddits. Reddit's terms of service apply to any downstream use; if you plan to publish, resell or train a commercial model on Reddit data, review Reddit's [public content policy](https://www.redditinc.com/policies/public-content-policy) and consult legal counsel for your jurisdiction. We do not store prompts, tool responses or Reddit content beyond what Apify's platform retains for per-run debugging and billing.

***

> 🙏 **Built something cool with this Reddit MCP server?** [Leaving a review](https://apify.com/makework36/reddit-mcp-server/reviews) helps the Apify algorithm surface this actor to other AI engineers and agent builders. Much appreciated.

# Actor input Schema

## `showSetupInstructions` (type: `boolean`):

When enabled, the test run outputs MCP connection details (the Standby URL you paste into your AI client) and the full list of available Reddit tools — `search_reddit`, `get_subreddit_posts`, `get_post_with_comments`, `get_user_posts`, `get_user_comments`, `get_subreddit_info`, `get_trending_subreddits` — along with their parameters and per-call pricing. Leave this enabled if you are exploring the Actor for the first time. This setting has no effect when the Actor is invoked via its Standby MCP endpoint from Claude, Cursor, or any other AI client.

## `debug` (type: `boolean`):

When enabled, the MCP server logs every incoming JSON-RPC request, tool call arguments, and Reddit fetch result. Useful when debugging integration issues with a new AI client or unexpected errors from specific Reddit tools. Leave disabled for production use — verbose logs can slow response times slightly.

## Actor input object example

```json
{
  "showSetupInstructions": true,
  "debug": false
}
```

# Actor output Schema

## `dataset` (type: `string`):

All scraped items in the default dataset (JSON).

## `csv` (type: `string`):

Default dataset formatted as CSV for spreadsheets (Google Sheets, Excel).

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input (both fields are optional; values below match the schema defaults)
const input = {
    showSetupInstructions: true,
    debug: false,
};

// Run the Actor and wait for it to finish
const run = await client.actor("makework36/reddit-mcp-server").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input (both fields are optional; values below match the schema defaults)
run_input = {
    "showSetupInstructions": True,
    "debug": False,
}

# Run the Actor and wait for it to finish
run = client.actor("makework36/reddit-mcp-server").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{}' |
apify call makework36/reddit-mcp-server --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=makework36/reddit-mcp-server",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```
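The `mcp-remote` bridge in the config above translates a stdio MCP client into Streamable HTTP requests against the Apify endpoint. Under the hood, every tool invocation is a JSON-RPC 2.0 `tools/call` message. As a rough sketch of what goes over the wire — the tool name comes from the list documented above, but the argument names (`query`, `limit`) are illustrative assumptions, not confirmed parameter names (your client discovers the real ones via `tools/list`):

```python
import json


def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` message as an MCP client would send it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Hypothetical arguments -- the actual parameter names are returned by tools/list.
payload = build_tool_call(1, "search_reddit", {"query": "mechanical keyboards", "limit": 10})
print(payload)
```

In practice your MCP client library handles this framing for you; the sketch only shows why any client that can POST JSON-RPC over Streamable HTTP can talk to the server without `mcp-remote`.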

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit MCP Server — Claude, ChatGPT, Cursor, Codex",
        "description": "Native Reddit MCP server for AI agents. 7 Reddit tools (search, subreddits, posts+comments, users, trending) over Streamable HTTP. Works with Claude Desktop, Cursor, ChatGPT, OpenAI Codex, Agents SDK, Windsurf. No Reddit API key. Pay per tool call.",
        "version": "1.0",
        "x-build-id": "WB56J6vfjCfYXpGly"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/makework36~reddit-mcp-server/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-makework36-reddit-mcp-server",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/makework36~reddit-mcp-server/runs": {
            "post": {
                "operationId": "runs-sync-makework36-reddit-mcp-server",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/makework36~reddit-mcp-server/run-sync": {
            "post": {
                "operationId": "run-sync-makework36-reddit-mcp-server",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "showSetupInstructions": {
                        "title": "Show setup instructions",
                        "type": "boolean",
                        "description": "When enabled, the test run outputs MCP connection details (the Standby URL you paste in your AI client) and the full list of available Reddit tools — search_reddit, get_subreddit_posts, get_post_with_comments, get_user_posts, get_user_comments, get_subreddit_info, get_trending_subreddits — along with their parameters and per-call pricing. Leave this enabled if you are exploring the Actor for the first time. This setting has no effect when the Actor is invoked via its Standby MCP endpoint from Claude, Cursor, or any other AI client.",
                        "default": true
                    },
                    "debug": {
                        "title": "Enable debug logging",
                        "type": "boolean",
                        "description": "When enabled, the MCP server logs every incoming JSON-RPC request, tool call arguments, and Reddit fetch result. Useful when debugging integration issues with a new AI client or unexpected errors from specific Reddit tools. Leave disabled for production use — verbose logs can slow response times slightly.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
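The paths in the spec above map onto Apify's generic Actor-run endpoints: token in the query string, the input object as the JSON body. As an illustrative sketch using only the Python standard library — it assembles the `run-sync-get-dataset-items` request described by the spec but does not send it, since a real call needs a valid API token:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.apify.com/v2"  # from the `servers` entry in the spec
ACTOR_PATH = "/acts/makework36~reddit-mcp-server/run-sync-get-dataset-items"


def build_run_request(token: str, run_input: dict) -> urllib.request.Request:
    """Assemble the POST request described by the OpenAPI spec.

    The token goes in the query string and the Actor input in the JSON body;
    actually sending the request is left to the caller.
    """
    url = f"{API_BASE}{ACTOR_PATH}?{urllib.parse.urlencode({'token': token})}"
    return urllib.request.Request(
        url,
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_run_request("<YOUR_API_TOKEN>", {"showSetupInstructions": True, "debug": False})
print(req.full_url)
```

For production use, prefer the official `apify-client` libraries shown earlier — they wrap these same endpoints and add retries and pagination.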
