# Reddit All-in-One Scraper | Posts, Comments & Users (`taroyamada/reddit-all-in-one-scraper`) Actor

Scrape Reddit subreddits, posts, user profiles, and search results via public JSON endpoints. Normalized output for analysis.

- **URL**: https://apify.com/taroyamada/reddit-all-in-one-scraper.md
- **Developed by:** [太郎 山田](https://apify.com/taroyamada) (community)
- **Categories:** Social media, Automation
- **Stats:** 1 total user, 0 monthly users, 0.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per event

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in the key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, API, or MCP server.
Note that Actor is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## 📡 Reddit All-in-One Scraper

Scrape Reddit subreddits, posts, comments, user profiles, and search results via public JSON endpoints. No API key needed. Clean normalized output for downstream analysis.

### Store Quickstart

Start with the **Quickstart** template (r/javascript hot posts). For deep analysis, use **Search + Comments** to include comment trees. Use **Keyword Filter** to narrow results.
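
For instance, a Search + Comments run might use an input like this (the values are illustrative; the field names come from the input schema below):

```json
{
  "sources": ["search:web scraping"],
  "maxPostsPerSource": 25,
  "includeComments": true,
  "maxCommentsPerPost": 50,
  "commentDepth": 3,
  "sort": "top",
  "time": "month"
}
```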

### Key Features

- 📡 **All source types** — Subreddits, post URLs, user profiles, and search queries (see the example after this list)
- 💬 **Comments with depth control** — Nested comment trees with configurable depth
- 🔍 **Search support** — Reddit-wide search via `search:your query`
- 🏷️ **Keyword filtering** — Filter posts by title/body keywords
- 📊 **Normalized output** — Clean, flat objects designed for analysis pipelines
- ⏱️ **Rate-limit aware** — Configurable delays, automatic 429 retry
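
All accepted source formats can be mixed in a single `sources` array; the values below are illustrative:

```json
{
  "sources": [
    "javascript",
    "https://www.reddit.com/r/webdev/",
    "https://www.reddit.com/r/javascript/comments/abc123/example_post/",
    "u/spez",
    "https://www.reddit.com/user/spez/",
    "search:machine learning"
  ]
}
```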

### Use Cases

| Who | Why |
|-----|-----|
| Market researchers | Track brand mentions and sentiment across subreddits |
| Content creators | Find trending topics and popular discussions |
| Data scientists | Collect training data with comments and metadata |
| Community managers | Monitor subreddit activity and user engagement |
| Competitive analysts | Track competitor mentions and industry trends |

### Input

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| sources | array | **required** | List of Reddit sources: a subreddit name (e.g. 'javascript'), subreddit URL, post URL, user name (e.g. 'u/spez'), user URL, or a search query prefixed with 'search:'. |
| maxPostsPerSource | integer | `25` | Maximum posts to collect from each subreddit, user, or search source. |
| includeComments | boolean | `false` | Fetch comments for each post. Increases run time. |
| maxCommentsPerPost | integer | `50` | Maximum top-level + nested comments to extract per post (when includeComments is on). |
| commentDepth | integer | `3` | How many reply levels to extract (1 = top-level only). |
| sort | string | `"hot"` | Sort order for subreddit and search listings. |
| time | string | `"all"` | Time range filter (applies when sort is 'top' or 'controversial'). |
| keywords | array | `[]` | Only include posts whose title or selftext contains at least one keyword (case-insensitive). Leave empty to include all. |

#### Input Example

```json
{
  "sources": ["javascript", "u/spez", "search:web scraping"],
  "maxPostsPerSource": 10,
  "includeComments": false,
  "sort": "hot",
  "keywords": [],
  "delivery": "dataset"
}
```

### Output

| Field | Type | Description |
|-------|------|-------------|
| `meta` | object | Run-level metadata. |
| `posts` | array | Collected posts, one flat object each. |
| `posts[].id` | string | Reddit post ID. |
| `posts[].subreddit` | string | Subreddit the post belongs to. |
| `posts[].title` | string | Post title. |
| `posts[].author` | string | Username of the post author. |
| `posts[].score` | number | Net upvote score. |
| `posts[].upvoteRatio` | number | Fraction of votes that are upvotes (0–1). |
| `posts[].numComments` | number | Number of comments on the post. |
| `posts[].createdAt` | timestamp | Post creation time (ISO 8601). |
| `posts[].url` | string (url) | Link target (external URL, or the post itself for self posts). |
| `posts[].permalink` | string (url) | Canonical reddit.com URL of the post. |
| `posts[].selftext` | string | Body text for self posts (null for link posts). |
| `posts[].isSelf` | boolean | True for text (self) posts. |
| `posts[].isNsfw` | boolean | True if the post is marked NSFW. |
| `posts[].isStickied` | boolean | True if the post is pinned in the subreddit. |
| `posts[].flair` | string | Post flair text, if any. |
| `posts[].domain` | string | Domain of the linked URL. |
| `posts[].thumbnail` | string \| null | Thumbnail URL, or null if none. |
| `posts[].awards` | number | Number of awards received. |
| `posts[].sourceType` | string | Kind of source the post came from (e.g. `subreddit`, `user`, `search`). |
| `posts[].sourceValue` | string | The source value that produced the post (e.g. `javascript`). |

#### Output Example

```json
{
  "id": "abc123",
  "subreddit": "javascript",
  "title": "New ESM features in Node 22",
  "author": "devuser",
  "score": 842,
  "upvoteRatio": 0.96,
  "numComments": 127,
  "createdAt": "2026-01-15T12:30:00.000Z",
  "url": "https://example.com/article",
  "permalink": "https://www.reddit.com/r/javascript/comments/abc123/…",
  "selftext": null,
  "isSelf": false,
  "isNsfw": false,
  "flair": "News",
  "sourceType": "subreddit",
  "sourceValue": "javascript"
}
```
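
Because each post is a flat object, it drops straight into analysis code. A minimal post-processing sketch (assumes `items` fetched via the API examples below):

```javascript
// Rank collected posts by comments-per-upvote — an illustrative engagement metric.
const ranked = items
  .filter((post) => !post.isStickied) // skip pinned posts
  .map((post) => ({
    title: post.title,
    subreddit: post.subreddit,
    engagement: post.numComments / Math.max(post.score, 1),
  }))
  .sort((a, b) => b.engagement - a.engagement);

console.table(ranked.slice(0, 10)); // top 10 by engagement
```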

### API Usage

Run this Actor programmatically using the Apify API. Replace `YOUR_API_TOKEN` with your token from [Apify Console → Settings → Integrations](https://console.apify.com/account/integrations).

#### cURL

```bash
curl -X POST "https://api.apify.com/v2/acts/taroyamada~reddit-all-in-one-scraper/run-sync-get-dataset-items?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "sources": ["javascript", "u/spez", "search:web scraping"], "maxPostsPerSource": 10, "includeComments": false, "sort": "hot", "keywords": [], "delivery": "dataset" }'
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("taroyamada/reddit-all-in-one-scraper").call(run_input={
  "sources": ["javascript", "u/spez", "search:web scraping"],
  "maxPostsPerSource": 10,
  "includeComments": false,
  "sort": "hot",
  "keywords": [],
  "delivery": "dataset"
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```

#### JavaScript / Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });
const run = await client.actor('taroyamada/reddit-all-in-one-scraper').call({
  "sources": ["javascript", "u/spez", "search:web scraping"],
  "maxPostsPerSource": 10,
  "includeComments": false,
  "sort": "hot",
  "keywords": [],
  "delivery": "dataset"
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```

### Tips & Limitations

- Use `dryRun: true` to validate your sources and keyword filters without saving results.
- For high-volume subreddits, lower `maxPostsPerSource` and schedule runs more frequently instead of doing one large run.
- If you see 429 responses, raise `delayMs` (default 1,500 ms between requests).
- With `delivery: "webhook"`, results are POSTed to `webhookUrl`; parse them on the receiver side if you route to multiple channels.
- Combine this Actor with [Article Extractor](https://apify.com/taroyamada/article-content-extractor) to get full-text bodies for link posts.
- Scheduled runs can return posts you have already collected; deduplicate by post `id` on your side (a sketch follows this list).
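
A minimal dedup sketch for scheduled runs, assuming a local `seen-ids.json` file (the file name and the whole approach are illustrative, not a built-in feature of the Actor):

```javascript
import fs from 'node:fs';
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });
const run = await client.actor('taroyamada/reddit-all-in-one-scraper').call({
  sources: ['javascript'],
  maxPostsPerSource: 50,
});

// Load IDs seen in previous runs (empty set on the first run).
const seen = new Set(
  fs.existsSync('seen-ids.json') ? JSON.parse(fs.readFileSync('seen-ids.json', 'utf8')) : [],
);

const { items } = await client.dataset(run.defaultDatasetId).listItems();
const fresh = items.filter((post) => !seen.has(post.id));

// Persist the updated seen set for the next run.
for (const post of fresh) seen.add(post.id);
fs.writeFileSync('seen-ids.json', JSON.stringify([...seen]));

console.log(`${fresh.length} new posts`);
```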

### FAQ

**Does this need a Reddit API key?**

No. It uses Reddit's public `.json` endpoints that don't require authentication.

**Rate limits?**

Reddit rate-limits unauthenticated requests. The Actor uses configurable delays (default 1.5 s) and retries on 429 responses.
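
If you still hit limits, raising `delayMs` is a reasonable first step; the schema allows 500–10,000 ms. For example:

```json
{
  "sources": ["javascript"],
  "delayMs": 3000,
  "timeoutMs": 30000
}
```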

**Can I scrape private subreddits?**

No. Only public subreddits are accessible via the public JSON endpoints.

**What about NSFW content?**

NSFW posts are included in results with `isNsfw: true`. Filter them in your pipeline if needed.
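
For example, a client-side filter over dataset items (assumes `items` from the API examples above):

```javascript
// Keep only safe-for-work posts.
const sfwPosts = items.filter((post) => !post.isNsfw);
```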

**How do I filter by keyword?**

Set the `keywords` input array. Only posts whose title or selftext contains at least one keyword (case-insensitive) are included; leave it empty to include all posts.

**Can this work with paywalled content?**

No — this Actor only reads publicly accessible Reddit JSON endpoints; paywalled or otherwise restricted content is out of scope.

### Related Actors

News & Content cluster — explore related Apify tools:

- [📰 Google News Scraper](https://apify.com/taroyamada/google-news-scraper) — Scrape Google News articles for any search query via official RSS feed.
- [📰 Article Extractor](https://apify.com/taroyamada/article-content-extractor) — Extract clean article content with title, author, publish date, images from news and blog pages.
- [📄 Website Content Extractor](https://apify.com/taroyamada/website-content-extractor) — Extract clean main content from any webpage as text, markdown, or HTML.
- [📡 RSS Feed Aggregator](https://apify.com/taroyamada/rss-feed-aggregator) — Aggregate multiple RSS and Atom feeds with keyword filtering and deduplication.
- [📰 Hacker News Scraper](https://apify.com/taroyamada/hacker-news-intelligence) — Fetch Hacker News top, new, best, ask, show, job stories via official Firebase API.
- [🚨 Reddit Keyword Monitor Alerts](https://apify.com/taroyamada/reddit-keyword-monitor-alerts) — Focused Reddit keyword and subreddit monitor built for recurring alerts, snapshot diffing, and webhook handoff.

### Cost

**Pay Per Event**:

- `actor-start`: $0.01 (flat fee per run)
- `dataset-item`: $0.001 per output item

**Example**: 1,000 items = $0.01 + (1,000 × $0.001) = **$1.01**

No subscription required — you only pay for what you use.
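
The pricing formula is simple enough to estimate in one line; a sketch:

```javascript
// Pay-per-event pricing: flat start fee plus a per-item fee (prices from the list above).
const estimateCostUsd = (itemCount) => 0.01 + itemCount * 0.001;

console.log(estimateCostUsd(1000)); // 1.01
```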

# Actor input Schema

## `sources` (type: `array`):

List of Reddit sources. Each can be a subreddit name (e.g. 'javascript'), subreddit URL, post URL, user name (e.g. 'u/spez'), user URL, or search query prefixed with 'search:' (e.g. 'search:machine learning').

## `maxPostsPerSource` (type: `integer`):

Maximum posts to collect from each subreddit, user, or search source.

## `includeComments` (type: `boolean`):

Fetch comments for each post. Increases run time.

## `maxCommentsPerPost` (type: `integer`):

Maximum top-level + nested comments to extract per post (when includeComments is on).

## `commentDepth` (type: `integer`):

How many reply levels to extract (1 = top-level only).

## `sort` (type: `string`):

Sort order for subreddit and search listings.

## `time` (type: `string`):

Time range filter (applies when sort is 'top' or 'controversial').

## `keywords` (type: `array`):

Only include posts whose title or selftext contains at least one keyword (case-insensitive). Leave empty to include all.

## `timeoutMs` (type: `integer`):

HTTP request timeout in milliseconds.

## `delayMs` (type: `integer`):

Delay between requests to avoid rate-limiting.

## `delivery` (type: `string`):

Where to send results: dataset or webhook.

## `webhookUrl` (type: `string`):

Webhook URL to POST results to (if delivery=webhook).
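
A minimal receiver sketch using Node's built-in `http` module; the delivered payload shape is an assumption here, so inspect a real delivery before relying on specific fields:

```javascript
import http from 'node:http';

// Accepts results POSTed by the Actor when delivery=webhook.
http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    const payload = JSON.parse(body); // payload shape assumed — verify against a real delivery
    console.log('Received webhook delivery:', payload);
    res.writeHead(200);
    res.end('ok');
  });
}).listen(3000);
```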

## `dryRun` (type: `boolean`):

Run without saving results (for testing).

## Actor input object example

```json
{
  "maxPostsPerSource": 25,
  "includeComments": false,
  "maxCommentsPerPost": 50,
  "commentDepth": 3,
  "sort": "hot",
  "time": "all",
  "keywords": [],
  "timeoutMs": 15000,
  "delayMs": 1500,
  "delivery": "dataset",
  "dryRun": false
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and the CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input (the schema requires at least one source)
const input = {
    sources: ['javascript'],
};

// Run the Actor and wait for it to finish
const run = await client.actor("taroyamada/reddit-all-in-one-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input (the schema requires at least one source)
run_input = {"sources": ["javascript"]}

# Run the Actor and wait for it to finish
run = client.actor("taroyamada/reddit-all-in-one-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{"sources": ["javascript"]}' |
apify call taroyamada/reddit-all-in-one-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=taroyamada/reddit-all-in-one-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit All-in-One Scraper | Posts, Comments & Users",
        "description": "Scrape Reddit subreddits, posts, user profiles, and search results via public JSON endpoints. Normalized output for analysis.",
        "version": "0.1",
        "x-build-id": "MesEM2dFoFo1eJ9ct"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/taroyamada~reddit-all-in-one-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-taroyamada-reddit-all-in-one-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/taroyamada~reddit-all-in-one-scraper/runs": {
            "post": {
                "operationId": "runs-sync-taroyamada-reddit-all-in-one-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/taroyamada~reddit-all-in-one-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-taroyamada-reddit-all-in-one-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "sources"
                ],
                "properties": {
                    "sources": {
                        "title": "Sources",
                        "type": "array",
                        "description": "List of Reddit sources. Each can be a subreddit name (e.g. 'javascript'), subreddit URL, post URL, user name (e.g. 'u/spez'), user URL, or search query prefixed with 'search:' (e.g. 'search:machine learning').",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxPostsPerSource": {
                        "title": "Max Posts Per Source",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Maximum posts to collect from each subreddit, user, or search source.",
                        "default": 25
                    },
                    "includeComments": {
                        "title": "Include Comments",
                        "type": "boolean",
                        "description": "Fetch comments for each post. Increases run time.",
                        "default": false
                    },
                    "maxCommentsPerPost": {
                        "title": "Max Comments Per Post",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Maximum top-level + nested comments to extract per post (when includeComments is on).",
                        "default": 50
                    },
                    "commentDepth": {
                        "title": "Comment Depth",
                        "minimum": 1,
                        "maximum": 10,
                        "type": "integer",
                        "description": "How many reply levels to extract (1 = top-level only).",
                        "default": 3
                    },
                    "sort": {
                        "title": "Sort",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Sort order for subreddit and search listings.",
                        "default": "hot"
                    },
                    "time": {
                        "title": "Time Filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range filter (applies when sort is 'top' or 'controversial').",
                        "default": "all"
                    },
                    "keywords": {
                        "title": "Keywords",
                        "type": "array",
                        "description": "Only include posts whose title or selftext contains at least one keyword (case-insensitive). Leave empty to include all.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "timeoutMs": {
                        "title": "Timeout (ms)",
                        "minimum": 3000,
                        "maximum": 60000,
                        "type": "integer",
                        "description": "HTTP request timeout in milliseconds.",
                        "default": 15000
                    },
                    "delayMs": {
                        "title": "Delay Between Requests (ms)",
                        "minimum": 500,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "Delay between requests to avoid rate-limiting.",
                        "default": 1500
                    },
                    "delivery": {
                        "title": "Delivery",
                        "enum": [
                            "dataset",
                            "webhook"
                        ],
                        "type": "string",
                        "description": "Where to send results: dataset or webhook.",
                        "default": "dataset"
                    },
                    "webhookUrl": {
                        "title": "Webhook URL",
                        "type": "string",
                        "description": "Webhook URL to POST results to (if delivery=webhook)."
                    },
                    "dryRun": {
                        "title": "Dry Run",
                        "type": "boolean",
                        "description": "Run without saving results (for testing).",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
