# Reddit Posts Scraper (`scrapepilotapi/reddit-posts-scraper`) Actor

- **URL**: https://apify.com/scrapepilotapi/reddit-posts-scraper.md
- **Developed by:** [ScrapePilot](https://apify.com/scrapepilotapi) (community)
- **Categories:** AI, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $3.99 / 1,000 results

This Actor uses pay-per-event pricing: you are charged a fixed price for specific events plus standard Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## 🤖 Reddit Posts Scraper

> **Scrape posts and comments from Reddit by subreddit, URL, or keyword. Get structured data with automatic proxy fallback.** 📊

---

### 📖 What Is This Actor?

🟠 **Reddit Posts Scraper** is an Apify Actor that extracts **public Reddit posts and comments** in one run. You can target **subreddits**, **full URLs**, or **search keywords** and receive clean, structured JSON—perfect for research, analytics, NLP, brand monitoring, and automation.

✅ No coding required • ✅ Proxy fallback (datacenter → residential) • ✅ Retries on blocks & timeouts • ✅ Export to JSON, CSV, or API

---

### 🎯 Why Choose This Actor?

| | |
|---|---|
| ⚡ **Fast & scalable** | Handle hundreds of posts per source with parallel comment fetching |
| 🧩 **Flexible inputs** | Subreddits, URLs, or keywords—one field, multiple formats |
| 🛡️ **Reliable** | Automatic proxy fallback and retries on 403, 5xx, and timeouts |
| 📤 **Structured output** | Subreddit, title, author, score, comments, links, timestamps, and more |
| 🔧 **Tunable** | Sort order, time filter, post/comment limits, request delay, proxy |
| 😊 **Beginner-friendly** | Simple form in Apify Console; no setup or code needed |

---

### 📥 Input Parameters

Input is grouped into **four sections** in the Apify Console.

#### 📍 Where to scrape

| Field | Type | Description |
|-------|------|-------------|
| 🏷️ **Reddit URLs / Subreddits / Keywords** | List (required) | One per line: **full URLs** (e.g. `https://www.reddit.com/r/news/`), **subreddit names** (e.g. `news` or `r/news`), or **search keywords** (e.g. `artificial intelligence`). |

---

#### 📊 Sorting & time range

| Field | Type | Description |
|-------|------|-------------|
| 🔀 **Sort order** | Dropdown | **Hot** • **New** • **Top** • **Rising**. How posts are ordered. |
| ⏱️ **Time filter** | Dropdown | **Past hour** • **Past 24 hours** • **Past week** • **Past month** • **Past year** • **All time**. Only applies when sort order is **Top** or **Rising**. |

---

#### 🔢 Limits

| Field | Type | Description |
|-------|------|-------------|
| 📄 **Maximum posts per source** | Number (1–1000) | Max posts to scrape **per** subreddit/keyword. Default: **10**. |
| 💬 **Maximum comments per post** | Number (0–1000) | Max comments to fetch per post. Set to **0** to skip comments. Default: **5**. |

---

#### 🌐 Proxy & network

| Field | Type | Description |
|-------|------|-------------|
| ⏳ **Delay between requests (seconds)** | Number (0–30) | Pause between requests to reduce rate limits. A small random delay is added automatically. Default: **1**. |
| 🔐 **Proxy configuration** | Proxy picker | Choose proxies (e.g. Apify Proxy). If Reddit blocks the request, the actor falls back to **residential proxy**. Recommended for large runs. |

---

### 📤 Output (Dataset)

Results are saved to the **Reddit Posts Data** dataset. Each row is one post with the following fields:

| Column | Description |
|--------|-------------|
| 🏷️ **Subreddit** | Community name (e.g. `news`, `technology`) |
| 📝 **Title** | Post title |
| 👤 **Author** | Reddit username of the poster |
| ⬆️ **Score** | Upvotes / score |
| 💬 **# Comments** | Number of comments |
| 📅 **Posted (UTC)** | Unix timestamp (UTC) |
| 🔗 **Link to post** | Permalink to the Reddit thread |
| 📄 **Post text** | Body/selftext of the post |
| 🖼️ **Thumbnail** | Thumbnail image URL |
| 🖼️ **Image** | Main image URL (if any) |
| 💬 **Comments** | Array of comments (author, body, score, created_utc, replies) |
| 🆔 **Post ID** | Reddit post ID |
| ✅ **Success** | Whether the post was scraped successfully |
| ⚠️ **Error (if any)** | Error message if the post failed |

You can **export** the dataset as **JSON**, **CSV**, or **Excel**, or use the **Apify API** to fetch results.
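
For programmatic exports, here is a minimal sketch using the official Python client (field names are taken from the example output item further below; the dataset ID and output file name are placeholders):

```python
import csv

from apify_client import ApifyClient

# Replace with your API token and the dataset ID from the run's Storage tab
# (or run["defaultDatasetId"] if you started the run via the API).
client = ApifyClient("<YOUR_API_TOKEN>")
dataset_id = "<DATASET_ID>"

# Stream the dataset and keep a few post-level columns for a CSV export.
fields = ["subreddit", "title", "author", "score", "num_comments", "permalink"]
with open("reddit_posts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for item in client.dataset(dataset_id).iterate_items():
        writer.writerow({key: item.get(key) for key in fields})
```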

---

### 🚀 How to Use (Apify Console)

1. 🔐 **Log in** at [console.apify.com](https://console.apify.com).
2. 🔍 **Find the actor** — search for **Reddit Posts Scraper** (or open it from the store).
3. 📥 **Fill the input:**
   - Under **Where to scrape**, add subreddits, URLs, or keywords (one per line).
   - Optionally set **Sort order**, **Time filter**, **Limits**, and **Proxy & network**.
4. ▶️ **Run** — click **Start** and watch the run log.
5. 💾 **Get results** — open the **Output** tab, preview the dataset, and **Export** (JSON/CSV/Excel) or use the **API**.

---

### ✨ Key Features

- 📌 **Multiple input types** — Subreddits, full Reddit URLs, or search keywords in one list.
- 🔄 **Sort & filter** — Hot, New, Top, Rising + time range (hour to all time).
- 📊 **Scalable limits** — Up to 1000 posts per source, up to 1000 comments per post (or 0 to skip comments).
- 🛡️ **Proxy fallback** — No proxy → Datacenter → Residential if Reddit blocks.
- 🔁 **Retries** — Automatic retries on 403, 429, 5xx (e.g. UPSTREAM503/502), timeouts, and SSL/connection issues.
- 💾 **Live saving** — Data is pushed to the dataset as it’s scraped, so partial results are kept if the run stops (see the sketch after this list).
- 📤 **Structured JSON** — Ready for analytics, NLP, dashboards, and integrations (n8n, Zapier, Make, etc.).
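
To make use of live saving, one possible pattern is to start a run without waiting for it to finish and poll its default dataset. The sketch below assumes the official Python client from the [API](#api) section; the input values are illustrative only:

```python
import time

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Start the run without blocking (illustrative input values).
run = client.actor("scrapepilotapi/reddit-posts-scraper").start(
    run_input={"startUrls": ["r/news"], "maxPosts": 20, "maxComments": 0},
)
dataset = client.dataset(run["defaultDatasetId"])

# Poll the run status and read whatever has been pushed to the dataset so far.
while True:
    status = client.run(run["id"]).get()["status"]
    saved = dataset.get()["itemCount"]
    print(f"Run {status}, {saved} posts saved so far")
    if status not in ("READY", "RUNNING"):
        break
    time.sleep(10)

# Partial results (if the run stopped early) are fetched the same way as full ones.
items = dataset.list_items().items
print(f"Fetched {len(items)} items")
```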

---

### 🎯 Best Use Cases

| Use case | How this actor helps |
|----------|----------------------|
| 📊 **Market & trend research** | Pull top posts and comments by keyword or subreddit. |
| 🧠 **NLP / ML datasets** | Get clean text (title, body, comments) for training or analysis. |
| 📝 **Content & SEO** | Discover what people talk about and find content ideas. |
| 📈 **Brand monitoring** | Track mentions and sentiment across communities. |
| 📰 **Journalism & research** | Gather quotes and discussions from public threads. |
| 🔄 **Automation** | Trigger runs via API or connect to n8n, Zapier, Google Sheets (see the sketch below the table). |
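
For the automation row above, here is a sketch of triggering a run with an ad-hoc webhook so your workflow tool is notified when the run finishes. The webhook URL is a placeholder for your own receiver (e.g. an n8n or Zapier webhook trigger):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Start a run and register an ad-hoc webhook that fires when the run succeeds.
# The request URL below is a placeholder for your own automation endpoint.
client.actor("scrapepilotapi/reddit-posts-scraper").start(
    run_input={"startUrls": ["artificial intelligence"], "maxPosts": 25},
    webhooks=[
        {
            "event_types": ["ACTOR.RUN.SUCCEEDED"],
            "request_url": "https://example.com/my-automation-hook",
        }
    ],
)
```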

---

### ⚖️ Legal & Ethical Use

- ✅ **Allowed:** Scraping **publicly available** Reddit content for research, analytics, and insights.
- ❌ **Do not:** Scrape private subreddits without permission, misuse personal data, or ignore Reddit’s terms and rate limits.
- 🛡️ This actor is designed for **ethical, compliant** use of public data only.

---

### 📋 Input / Output Examples

#### 📥 Example input (JSON)

```json
{
  "startUrls": [
    "https://www.reddit.com/r/news/",
    "news",
    "artificial intelligence"
  ],
  "sortOrder": "top",
  "timeFilter": "week",
  "maxPosts": 50,
  "maxComments": 100,
  "requestDelay": 1,
  "proxyConfiguration": { "useApifyProxy": false }
}
```

#### 📤 Example output item (one post)

```json
{
  "subreddit": "news",
  "title": "Example post title",
  "author": "username",
  "score": 156,
  "num_comments": 42,
  "created_utc": 1703123456.789,
  "permalink": "https://www.reddit.com/r/news/comments/abc123/...",
  "body": "Post content...",
  "thumbnail_url": "https://...",
  "image_url": "https://...",
  "comments": [
    {
      "author": "commenter1",
      "body": "Comment text...",
      "score": 23,
      "created_utc": 1703123456.789,
      "replies": []
    }
  ],
  "post_id": "abc123",
  "success": true,
  "error_message": null
}
```
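
For NLP-style use cases, the item above flattens naturally into plain-text records. A minimal sketch follows (the function name and record layout are illustrative; the field names match the example output):

```python
# Flatten one dataset item (shaped like the example above) into text records,
# keeping the post title/body and top-level comment bodies.
def to_text_records(item: dict) -> list[dict]:
    records = [{
        "kind": "post",
        "subreddit": item.get("subreddit"),
        "text": f'{item.get("title", "")}\n\n{item.get("body", "")}'.strip(),
        "score": item.get("score"),
    }]
    for comment in item.get("comments", []):
        records.append({
            "kind": "comment",
            "subreddit": item.get("subreddit"),
            "text": comment.get("body", ""),
            "score": comment.get("score"),
        })
    return records
```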

---

### ❓ Frequently Asked Questions

| Question | Answer |
|----------|--------|
| 🆓 **Is it free?** | You can run it on Apify’s free plan for small jobs. |
| 🔌 **API-like?** | Yes — output is structured JSON; you can call the actor via Apify API. |
| 💬 **Comments included?** | Yes — set **Maximum comments per post** > 0 (or 0 to skip). |
| 📂 **Multiple subreddits?** | Yes — add as many as you want in **Reddit URLs / Subreddits / Keywords**. |
| 🛡️ **What if Reddit blocks?** | The actor uses proxy fallback (e.g. residential) and retries. |
| 👶 **Need coding?** | No — use the form in Apify Console or send JSON input via API. |

---

### 🛠️ Support & Feedback

- 🐞 **Bug reports:** Use the repository **Issues** section.
- ✨ **Custom solutions or feature requests:** 📧 **dev.scraperengine@gmail.com**

---

### ✅ Summary

🟠 **Reddit Posts Scraper** gives you **posts + comments** from Reddit by **subreddit**, **URL**, or **keyword**, with **sort order**, **time filter**, **limits**, and **proxy support**. Output is **structured**, **exportable**, and **integration-ready**—ideal for research, analytics, and automation. 🚀

# Actor input Schema

## `startUrls` (type: `array`):

📝 Enter one item per line. You can mix:
• 🌐 Full URLs — e.g. https://www.reddit.com/r/news/
• 📌 Subreddit names — e.g. news or r/news
• 🔍 Search keywords — e.g. artificial intelligence (searches Reddit)

Duplicate subreddits are merged. At least one entry is required.

## `maxPosts` (type: `integer`):

Max number of posts to scrape **per** subreddit or keyword (1–1000). If you have 3 sources and set 50, you can get up to 150 posts total.

## `maxComments` (type: `integer`):

Max comments to fetch for each post (0–1000). Set to **0** to skip comments and only get post metadata (faster).

## `sortOrder` (type: `string`):

How Reddit should sort the posts. Hot = trending now, New = latest first, Top = most upvoted, Rising = gaining traction.

## `timeFilter` (type: `string`):

Time range for results. ⚠️ Only applies when Sort order is **Top** or **Rising**. Ignored for Hot and New.

## `proxyConfiguration` (type: `object`):

Choose which proxies to use. If Reddit blocks a request, the actor automatically falls back: no proxy → datacenter → residential. ✅ Recommended for large runs or when you hit blocks.
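
If you want to route requests through Apify Proxy from the start instead of relying on the fallback chain, a possible input looks like this (shown as a Python `run_input` dict to match the client examples below; the `RESIDENTIAL` group is only an example and availability depends on your Apify plan):

```python
# Illustrative run input that enables Apify Proxy with a residential group.
run_input = {
    "startUrls": ["r/technology"],
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
    },
}
```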

## Actor input object example

```json
{
  "startUrls": [
    "news",
    "r/technology",
    "https://www.reddit.com/r/news/",
    "artificial intelligence"
  ],
  "maxPosts": 10,
  "maxComments": 5,
  "sortOrder": "top",
  "timeFilter": "week",
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        "https://www.reddit.com/r/news/",
        "news",
        "artificial intelligence"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapepilotapi/reddit-posts-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [
        "https://www.reddit.com/r/news/",
        "news",
        "artificial intelligence",
    ],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scrapepilotapi/reddit-posts-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    "https://www.reddit.com/r/news/",
    "news",
    "artificial intelligence"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scrapepilotapi/reddit-posts-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapepilotapi/reddit-posts-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Posts Scraper",
        "version": "0.1",
        "x-build-id": "sYeBgPBBLI6meD8WW"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapepilotapi~reddit-posts-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapepilotapi-reddit-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapepilotapi~reddit-posts-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scrapepilotapi-reddit-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapepilotapi~reddit-posts-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scrapepilotapi-reddit-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "startUrls"
                ],
                "properties": {
                    "startUrls": {
                        "title": "🔗 Reddit URLs / Subreddits / Keywords",
                        "type": "array",
                        "description": "📝 Enter one item per line. You can mix:\n• 🌐 Full URLs — e.g. https://www.reddit.com/r/news/\n• 📌 Subreddit names — e.g. news or r/news\n• 🔍 Search keywords — e.g. artificial intelligence (searches Reddit)\n\nDuplicate subreddits are merged. At least one entry is required.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxPosts": {
                        "title": "📄 Maximum posts per source",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Max number of posts to scrape **per** subreddit or keyword (1–1000). If you have 3 sources and set 50, you can get up to 150 posts total.",
                        "default": 10
                    },
                    "maxComments": {
                        "title": "💬 Maximum comments per post",
                        "minimum": 0,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Max comments to fetch for each post (0–1000). Set to **0** to skip comments and only get post metadata (faster).",
                        "default": 5
                    },
                    "sortOrder": {
                        "title": "📋 Sort order",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising"
                        ],
                        "type": "string",
                        "description": "How Reddit should sort the posts. Hot = trending now, New = latest first, Top = most upvoted, Rising = gaining traction.",
                        "default": "top"
                    },
                    "timeFilter": {
                        "title": "⏱️ Time filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range for results. ⚠️ Only applies when Sort order is **Top** or **Rising**. Ignored for Hot and New.",
                        "default": "week"
                    },
                    "proxyConfiguration": {
                        "title": "🔐 Proxy configuration",
                        "type": "object",
                        "description": "Choose which proxies to use. If Reddit blocks a request, the actor automatically falls back: no proxy → datacenter → residential. ✅ Recommended for large runs or when you hit blocks."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
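
If you prefer raw HTTP over the client libraries, the first endpoint in the specification above can be called directly. Here is a sketch using Python's `requests` (input values are illustrative):

```python
import requests

# Synchronous run: the token goes in the query string, the Actor input in the
# JSON body, and the response body is the array of dataset items.
resp = requests.post(
    "https://api.apify.com/v2/acts/scrapepilotapi~reddit-posts-scraper/run-sync-get-dataset-items",
    params={"token": "<YOUR_API_TOKEN>"},
    json={"startUrls": ["r/news"], "maxPosts": 10, "maxComments": 0},
    timeout=600,
)
resp.raise_for_status()
for post in resp.json():
    print(post["title"], post["permalink"])
```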
