# Reddit Keyword & Conversation Search Scraper 1 (`magnolia_pancake/reddit-keyword-conversation-search-scraper-1`) Actor

Extract every Reddit post & comment by keyword from exact dates. Get full nested conversations, not just recent top posts. For deep historical analysis and event tracking.

- **URL**: https://apify.com/magnolia_pancake/reddit-keyword-conversation-search-scraper-1.md
- **Developed by:** [mlih sahb](https://apify.com/magnolia_pancake) (community)
- **Categories:** Social media, Developer tools, SEO tools
- **Stats:** 1 total user, 1 monthly user, 100.0% runs succeeded, 1 bookmark
- **User rating**: 5.00 out of 5 stars

## Pricing

$10.00/month + usage

To use this Actor, you pay a monthly rental fee to the developer. The rent is deducted from your prepaid usage each month after the free trial period ends. You also pay for Apify platform usage, which gets cheaper the higher your Apify subscription plan is.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#rental-actors

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Temporal Search: Competitive Advantages & Features

### 🥊 Beating the Competition: Your Killer Features

Generic scrapers fail at three critical points. Your tool is engineered to fix them.

| **Competitor's Weakness** | **Your Killer Feature** | **Why It Matters** |
| :--- | :--- | :--- |
| **Shallow History**<br>(Can't reliably access deep, historical data) | **Exact Date-Range Search**<br>(Pulls data from any specific day, week, or month using epoch timestamps) | **You can research past events.** Analyze sentiment during a product launch, track a news cycle, or map a controversy's timeline from its origin. |
| **Flat Comments**<br>(Returns lists, losing the conversation structure) | **Complete Nested Comment Trees**<br>(Preserves *"who replied to whom"* as structured JSON objects with `replies` arrays) | **You get true context.** Essential for analyzing debates, support threads, or any conversation where reply chains matter. Flat lists are useless for real discourse analysis. |
| **Unstructured Media**<br>(Outputs plain URL strings) | **Parsed Media Objects**<br>(Images, videos, galleries are categorized with type and metadata like resolution and direct links) | **Your data is analysis-ready.** No need to manually parse `preview.redd.it` URLs. Media is pre-sorted for immediate use in reports and dashboards. |
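
To make the nested-tree claim concrete, here is a minimal sketch of walking such a tree. The `replies` array is the structure described above; the `author` and `body` field names are illustrative assumptions, not the Actor's documented output schema.

```python
def walk_comments(comment, depth=0):
    """Yield (depth, author, body) for a comment and every nested reply."""
    yield depth, comment.get("author"), comment.get("body")
    for reply in comment.get("replies", []):
        yield from walk_comments(reply, depth + 1)

# Hypothetical item shaped like the nested output described above
thread = {
    "author": "alice",
    "body": "Has anyone else seen the outage?",
    "replies": [
        {
            "author": "bob",
            "body": "Yes, it started an hour ago.",
            "replies": [
                {"author": "alice", "body": "Thanks, confirmed.", "replies": []}
            ],
        }
    ],
}

for depth, author, body in walk_comments(thread):
    print("  " * depth + f"{author}: {body}")
```

Because each level is a plain object with a `replies` list, a few lines of recursion recover the full "who replied to whom" chain; a flat list cannot.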

---

### 🎯 Targeted Use Cases

**Who This Is For:**

*   **Security Researchers:** Map the timeline and technical details of discussions around a software vulnerability in `r/netsec`.
*   **Product Managers:** Find every feature request, bug report, and complaint about a competitor in their official subreddit from the last quarter.
*   **Financial Analysts:** Track early sentiment and discussion patterns on a stock or cryptocurrency in `r/wallstreetbets` before a major market move.
*   **Academic Researchers:** Study the evolution of public discourse on topics like "AI ethics" or "climate change" in `r/science` over a defined 6-month period.

**Concrete Example:**
> *"Find **every post and comment** from `r/aws` in **April 2024** containing the keywords 'outage' or 'downtime' to perform a complete incident response analysis."*
> This is the specific, powerful capability generic scrapers lack.
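
The concrete example above maps directly onto the Actor's input fields; a run input for the `r/aws` incident analysis might look like:

```json
{
  "subreddit": "aws",
  "after": "2024-04-01",
  "before": "2024-04-30",
  "keywords": "outage,downtime"
}
```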

---

### 🚀 Next-Level Improvements to Build an Unbeatable Lead

To turn this superior core into an undeniable market lead, implement these technical features:

1.  **Sentiment Scoring**
    *   **What:** Integrate a fast NLP library (like `TextBlob` or `VADER`) to add a `sentiment_score` and `sentiment_label` field to each post and comment.
    *   **Impact:** Turns raw text data into immediate, quantifiable insight. Users can filter for "negative" posts or graph sentiment trends over time without any extra steps.

2.  **Cross-Subreddit Search**
    *   **What:** Allow users to input a *list* of subreddits to mine the same keywords across multiple communities in a single Actor run.
    *   **Impact:** Expands the addressable market from users analyzing one community to those performing competitive landscape analysis across entire ecosystems (e.g., all programming subreddits).

3.  **Export to Knowledge Graph Format**
    *   **What:** Offer an optional output that formats results as nodes (users, posts, comments) and edges (posted, replied_to) in a standard format like CSV for `Gephi` or JSON for `Neo4j`.
    *   **Impact:** Caters to advanced users and researchers, positioning your tool as the go-to source for preparing network analysis data, far beyond simple data collection.
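
Improvement 1 can be prototyped in a few lines. The sketch below uses a toy word lexicon so it stays self-contained; a real implementation would swap the lookup for VADER's `SentimentIntensityAnalyzer` or TextBlob. The `body` field name, lexicon, and thresholds are illustrative assumptions.

```python
# Toy sentiment scorer illustrating the proposed fields; in production,
# replace the lexicon lookup with VADER or TextBlob.
POSITIVE = {"great", "love", "fixed", "works", "thanks"}
NEGATIVE = {"outage", "down", "broken", "useless", "disaster"}

def add_sentiment(item):
    """Attach sentiment_score / sentiment_label to a scraped post or comment dict."""
    words = [w.strip(".,!?").lower() for w in item.get("body", "").split()]
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    item["sentiment_score"] = score
    item["sentiment_label"] = (
        "positive" if score > 0 else "negative" if score < 0 else "neutral"
    )
    return item

print(add_sentiment({"body": "The outage is a disaster, support was useless."}))
```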

**Your scraper's core is already superior for focused, historical analysis. By naming it powerfully and explicitly marketing these technical advantages, you target users who are currently frustrated by the competition's limitations.**
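
As a sketch of improvement 3, the snippet below emits Gephi-style node and edge CSVs (`Id`/`Label` and `Source`/`Target`/`Type` headers). The item fields (`id`, `author`, `parent_id`) are hypothetical, not the Actor's documented output schema.

```python
import csv
import io

# Hypothetical scraped items; field names are assumptions for illustration.
items = [
    {"id": "t3_post1", "author": "alice", "parent_id": None},
    {"id": "t1_cmt1", "author": "bob", "parent_id": "t3_post1"},
]

def to_gephi_csv(items):
    """Build Gephi-importable node and edge CSV strings from scraped items."""
    nodes, edges = io.StringIO(), io.StringIO()
    node_writer, edge_writer = csv.writer(nodes), csv.writer(edges)
    node_writer.writerow(["Id", "Label"])
    edge_writer.writerow(["Source", "Target", "Type"])
    for item in items:
        node_writer.writerow([item["id"], item["author"]])
        if item["parent_id"]:
            edge_writer.writerow([item["id"], item["parent_id"], "replied_to"])
    return nodes.getvalue(), edges.getvalue()

node_csv, edge_csv = to_gephi_csv(items)
print(node_csv)
print(edge_csv)
```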

# Actor input Schema

## `subreddit` (type: `string`):

Subreddit without r/ (e.g., 'python')

## `after` (type: `string`):

YYYY-MM-DD (e.g., 2024-01-01)

## `before` (type: `string`):

YYYY-MM-DD or leave empty

## `keywords` (type: `string`):

Comma-separated (e.g., api,scraping,tutorial)
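
The date bounds are plain YYYY-MM-DD strings, while the comparison table above notes that the date-range search itself works on epoch timestamps. For cross-checking results, the conversion can be sketched as follows (assuming the Actor reads the dates as UTC midnights, which is not documented):

```python
from datetime import datetime, timezone

def day_bounds_to_epoch(after, before):
    """Convert YYYY-MM-DD strings to UTC epoch-second bounds.

    Assumes dates are interpreted as UTC midnights (not confirmed by the Actor).
    """
    def to_epoch(day):
        dt = datetime.strptime(day, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        return int(dt.timestamp())

    return to_epoch(after), to_epoch(before)

print(day_bounds_to_epoch("2024-01-01", "2024-12-31"))  # (1704067200, 1735603200)
```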

## Actor input object example

```json
{
  "subreddit": "python",
  "after": "2024-01-01",
  "before": "2024-12-31",
  "keywords": "api,scraping"
}
````

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddit": "python",
    "after": "2024-01-01"
};

// Run the Actor and wait for it to finish
const run = await client.actor("magnolia_pancake/reddit-keyword-conversation-search-scraper-1").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "subreddit": "python",
    "after": "2024-01-01",
}

# Run the Actor and wait for it to finish
run = client.actor("magnolia_pancake/reddit-keyword-conversation-search-scraper-1").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddit": "python",
  "after": "2024-01-01"
}' |
apify call magnolia_pancake/reddit-keyword-conversation-search-scraper-1 --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=magnolia_pancake/reddit-keyword-conversation-search-scraper-1",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Keyword & Conversation Search Scraper 1",
        "description": "Extract every Reddit post & comment by keyword from exact dates. Get full nested conversations, not just recent top posts. For deep historical analysis and event tracking.",
        "version": "0.0",
        "x-build-id": "QtyLgot9rptvjY3FI"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/magnolia_pancake~reddit-keyword-conversation-search-scraper-1/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-magnolia_pancake-reddit-keyword-conversation-search-scraper-1",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/magnolia_pancake~reddit-keyword-conversation-search-scraper-1/runs": {
            "post": {
                "operationId": "runs-sync-magnolia_pancake-reddit-keyword-conversation-search-scraper-1",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/magnolia_pancake~reddit-keyword-conversation-search-scraper-1/run-sync": {
            "post": {
                "operationId": "run-sync-magnolia_pancake-reddit-keyword-conversation-search-scraper-1",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "subreddit",
                    "after",
                    "keywords"
                ],
                "properties": {
                    "subreddit": {
                        "title": "Subreddit",
                        "type": "string",
                        "description": "Subreddit without r/ (e.g., 'python')"
                    },
                    "after": {
                        "title": "Start Date",
                        "type": "string",
                        "description": "YYYY-MM-DD (e.g., 2024-01-01)"
                    },
                    "before": {
                        "title": "End Date (Optional)",
                        "type": "string",
                        "description": "YYYY-MM-DD or leave empty"
                    },
                    "keywords": {
                        "title": "Keywords",
                        "type": "string",
                        "description": "Comma-separated (e.g., api,scraping,tutorial)"
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
