# Reddit Scraper (`maximedupre/reddit-scraper`) Actor

The best Reddit scraper, for both posts & comments.

- **URL**: https://apify.com/maximedupre/reddit-scraper.md
- **Developed by:** [Maxime](https://apify.com/maximedupre) (community)
- **Categories:** Developer tools, Social media, Automation
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $3.99 / 1,000 scraped items

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
"Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).
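As a rough sketch, the REST call works from any language with an HTTP client. Here is a minimal Python version using only the standard library; the `build_run_request` helper is illustrative, but the endpoint path matches the OpenAPI specification further down this page:

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"

def build_run_request(actor_id: str, token: str, run_input: dict) -> urllib.request.Request:
    # Runs the Actor synchronously and returns its dataset items
    # (the run-sync-get-dataset-items endpoint from the OpenAPI spec below).
    url = f"{API_BASE}/acts/{actor_id}/run-sync-get-dataset-items?token={token}"
    return urllib.request.Request(
        url,
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("maximedupre~reddit-scraper", "<YOUR_API_TOKEN>", {"query": "openai"})
# items = json.load(urllib.request.urlopen(req))  # uncomment once you have a real token
```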

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Search Reddit posts and comments with raw export-ready rows

I built Reddit Scraper for people who want live Reddit search results without bolting on their own browser automation first.

This actor searches Reddit posts and comments, keeps the raw IDs, timestamps, subreddit names, vote counts, and permalinks, then sends one normalized row per result to the dataset. It does not score, summarize, filter, or save anything to your database for you.

### Fastest way to try it

The Input tab ships with `query: "openai"` prefilled. Keep both `Include posts?` and `Include comments?` on, lower `Post result cap` plus `Comment result cap` to `10` each for a small first run, then inspect the first rows in the [Output](https://apify.com/maximedupre/reddit-scraper/output-schema) tab.

### Why people use it

- Search Reddit posts and comments in one run instead of stitching two scrapers together.
- Keep raw fields that downstream systems actually need, including post IDs, comment IDs, subreddit names, vote counts, and timestamps.
- Use one simple `query` for normal runs, or separate advanced post and comment queries when you need exact control.
- Export clean permalinks for both posts and comments.
- Feed brand monitoring, competitor research, QA review, enrichment, or your own AI pipeline without carrying extra app logic inside the actor.

### How to use in 3 simple steps

1. Open the [Input](https://apify.com/maximedupre/reddit-scraper/input-schema) tab and enter your base `query`.
2. Keep both result types on for the normal mode, or switch one off when you only want posts or only want comments.
3. Tune freshness and caps with `maxDaysOld`, `maxPostResults`, and `maxCommentResults`, then read the raw dataset rows in the [Output](https://apify.com/maximedupre/reddit-scraper/output-schema) tab or API.
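The freshness window from step 3 can be sketched in a few lines; `is_fresh` is a hypothetical helper showing the documented semantics of `maxDaysOld`, not the actor's actual code:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(row_timestamp: str, max_days_old: int, now: datetime) -> bool:
    # A row is fresh when its own Reddit timestamp falls inside the window.
    ts = datetime.fromisoformat(row_timestamp.replace("Z", "+00:00"))
    return now - ts <= timedelta(days=max_days_old)

now = datetime(2026, 3, 22, tzinfo=timezone.utc)
print(is_fresh("2026-03-20T12:00:00.000Z", 30, now))  # True
print(is_fresh("2026-01-01T00:00:00.000Z", 30, now))  # False
```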

### Inputs, defaults, and behavior

- `query` is required. The Console form prefills it with `openai` so the default starter run has a live search term ready.
- `postsQuery` and `commentsQuery` are optional advanced overrides. If you leave them empty, both searches reuse `query`.
- `shouldIncludePosts` defaults to `true`.
- `shouldIncludeComments` defaults to `true`.
- `maxDaysOld` defaults to `30`.
- `maxPostResults` defaults to `25`. Set it to `0` to keep all fresh post results inside the requested window.
- `maxCommentResults` defaults to `25`. Set it to `0` to keep all fresh comment results inside the requested window.
- `proxyConfiguration` defaults to Apify residential proxies with no country pin, which keeps Reddit search off the fragile datacenter path unless you override it. When Reddit blocks one session, the actor retries with a small fresh-session budget before giving up on that search pass.
- The actor keeps posts and comments as raw rows. It does not dedupe against your history, does not apply AI relevancy checks, and does not write to your own storage.
- If one search type fails after retries, the other successful search type can still return rows in the same run.
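The override fallback described above can be expressed as a tiny resolver; `resolve_queries` is a hypothetical helper mirroring the documented behavior, not the actor's internals:

```python
def resolve_queries(inp: dict) -> tuple[str, str]:
    # Empty or missing overrides fall back to the base `query`.
    base = inp["query"]
    return (inp.get("postsQuery") or base, inp.get("commentsQuery") or base)

print(resolve_queries({"query": "openai"}))
# ('openai', 'openai')
print(resolve_queries({"query": "OpenAI", "postsQuery": 'title:"OpenAI"'}))
# ('title:"OpenAI"', 'OpenAI')
```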

#### Input example

```json
{
  "query": "OpenAI",
  "postsQuery": "(site:openai.com OR title:\"OpenAI\" OR selftext:\"OpenAI\")",
  "commentsQuery": "(\"openai.com\" OR \"OpenAI\")",
  "shouldIncludePosts": true,
  "shouldIncludeComments": true,
  "maxDaysOld": 7,
  "maxPostResults": 25,
  "maxCommentResults": 25
}
````

### What data can Reddit Scraper extract?

See the full [Output](https://apify.com/maximedupre/reddit-scraper/output-schema) tab for the complete schema.

Every row keeps the Reddit data close to the source, so you can decide later how much filtering, scoring, or storage logic you want on top.

| Field | What you get | Why it matters |
| --- | --- | --- |
| `type` | `post` or `comment` | Split or combine result types downstream |
| `searchQuery` | The exact query used for that pass | Verify advanced post/comment overrides |
| `url` | Post or comment permalink | Review the original Reddit source quickly |
| `postId` / `commentId` | Stable Reddit identifiers | Deduplicate or join results later |
| `postDateTime` / `commentDateTime` | Raw Reddit timestamps | Sort, filter, and store without guessing |
| `subredditName` | Source subreddit | Add routing, alerting, or reporting context |
| `nbVotes` / `nbComments` | Numeric engagement fields | Prioritize higher-signal rows in your own system |
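Because every row carries stable IDs, deduplication against your own history stays in your code, not the actor's. A minimal sketch (the `dedupe_rows` helper is hypothetical):

```python
def dedupe_rows(rows: list[dict]) -> list[dict]:
    # Key comments on commentId and posts on postId, the stable Reddit IDs.
    seen: set[tuple] = set()
    unique = []
    for row in rows:
        key = (row["type"], row["commentId"] if row["type"] == "comment" else row["postId"])
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [
    {"type": "post", "postId": "post-1", "commentId": None},
    {"type": "post", "postId": "post-1", "commentId": None},  # duplicate, dropped
    {"type": "comment", "postId": "post-1", "commentId": "comment-9"},
]
print(len(dedupe_rows(rows)))  # 2
```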

#### Output example

```json
[
  {
    "type": "comment",
    "query": "OpenAI",
    "searchQuery": "(\"openai.com\" OR \"OpenAI\")",
    "url": "https://www.reddit.com/r/saas/comments/post-2/_/comment-9",
    "postId": "post-2",
    "postTitle": "Thoughts on OpenAI pricing",
    "postDateTime": "2026-03-19T10:00:00.000Z",
    "subredditName": "saas",
    "nbVotes": 35,
    "nbComments": null,
    "commentId": "comment-9",
    "commenterUsername": "alice",
    "commentDateTime": "2026-03-21T09:30:00.000Z",
    "commentText": "I switched because the API got better."
  },
  {
    "type": "post",
    "query": "OpenAI",
    "searchQuery": "(site:openai.com OR title:\"OpenAI\" OR selftext:\"OpenAI\")",
    "url": "https://www.reddit.com/r/machinelearning/comments/post-1",
    "postId": "post-1",
    "postTitle": "OpenAI launches something",
    "postDateTime": "2026-03-20T12:00:00.000Z",
    "subredditName": "machinelearning",
    "nbVotes": 120,
    "nbComments": 48,
    "commentId": null,
    "commenterUsername": null,
    "commentDateTime": null,
    "commentText": null
  }
]
```

### How much does Reddit search cost?

This actor uses price-per-result billing, so the main cost driver is how many Reddit rows you keep. A good first run is `10` post rows plus `10` comment rows, which comes to about `$0.20` at the repo-configured rate; `100` total rows is about `$1.00`. The live [Pricing](https://apify.com/maximedupre/reddit-scraper/pricing) tab is the source of truth for the exact current rate.

| Billed item | When it triggers | Repo-configured price |
| --- | --- | --- |
| Reddit result row | When one Reddit row is pushed to the dataset | `$0.01` |
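The arithmetic behind those estimates is simply rows times rate. This sketch assumes the repo-configured `$0.01` per row, which the live Pricing tab may supersede:

```python
PRICE_PER_ROW_USD = 0.01  # repo-configured rate; the Pricing tab is authoritative

def estimate_cost_usd(post_rows: int, comment_rows: int) -> float:
    # One billed event per Reddit row pushed to the dataset.
    return round((post_rows + comment_rows) * PRICE_PER_ROW_USD, 2)

print(estimate_cost_usd(10, 10))  # 0.2
print(estimate_cost_usd(50, 50))  # 1.0
```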

### Why run Reddit Scraper on Apify?

- Run it from the Console when you want a quick Reddit export without writing Playwright code.
- Call it from the Apify API when Reddit search is one step inside a larger pipeline.
- Keep datasets, run logs, and retries in one place.
- Swap from the simple public mode into advanced post and comment query overrides when your workflow gets stricter.

### FAQ

#### Can I reproduce different post and comment searches?

Yes. Leave `query` as the base label, then set `postsQuery` and `commentsQuery` to the exact Reddit search expressions you want for each pass.

#### Does this actor run AI filtering or save to my database?

No. It only returns raw scraped Reddit rows in the dataset.

#### Why are post rows missing comment fields?

That is intentional. Post rows keep `commentId`, `commenterUsername`, `commentDateTime`, and `commentText` as `null`, while comment rows fill them.

#### What happens if Reddit blocks one search type?

The actor retries blocked sessions with fresh proxy sessions first. If one search type still fails after that retry budget is exhausted, the actor logs that failure and still returns rows from the other enabled search type when that pass succeeds.

#### Can I get all available fresh results instead of the default caps?

Yes. Set `maxPostResults` and/or `maxCommentResults` to `0`.
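The cap semantics can be pictured as a one-liner, where `0` means uncapped (`apply_cap` is illustrative only):

```python
def apply_cap(rows, cap: int) -> list:
    # cap == 0 keeps every fresh row; any other value truncates.
    return list(rows) if cap == 0 else list(rows)[:cap]

print(len(apply_cap(range(100), 25)))  # 25
print(len(apply_cap(range(100), 0)))   # 100
```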

#### Where do I report a broken selector, missing field, or Reddit markup change?

Open the [Issues](https://apify.com/maximedupre/reddit-scraper/issues/open) page with the input you used and the output you expected. I use that queue for fixes and feature requests.

### Explore the rest of the collection

- **[Product Hunt Scraper](https://apify.com/maximedupre/product-hunt-scraper)** - daily Product Hunt leaderboard scraping with cache and live-crawl options, maker links, and optional email enrichment
- **[TinySeed Scraper](https://apify.com/maximedupre/tinyseed-scraper)** - TinySeed portfolio scraping with company descriptions and optional website emails
- **[Tiny Startups Scraper](https://apify.com/maximedupre/tiny-startups-scraper)** - Tiny Startups homepage scraping with promoted-card filtering and email enrichment
- **[Uneed Scraper](https://apify.com/maximedupre/uneed-scraper)** - Uneed daily ladder scraping with promoted-listing control, maker links, and optional website emails
- **[Website Emails Scraper](https://apify.com/maximedupre/website-emails-scraper)** - shallow-crawl any list of URLs and emit one row per unique email found

### Missing a feature or data?

[File an issue](https://apify.com/maximedupre/reddit-scraper/issues/open) and I'll add it in less than 24h 🫡

# Actor input Schema

## `query` (type: `string`):

🔎 Base Reddit search query. Use this for the simple mode, or keep it as the human-readable label while advanced post and comment query overrides do the exact search work.

## `postsQuery` (type: `string`):

🧠 Optional advanced query for Reddit post search only. Leave it empty to reuse Query for posts.

## `commentsQuery` (type: `string`):

🧠 Optional advanced query for Reddit comment search only. Leave it empty to reuse Query for comments.

## `shouldIncludePosts` (type: `boolean`):

📝 Search Reddit posts and include raw post rows in the dataset. Defaults to true.

## `shouldIncludeComments` (type: `boolean`):

💬 Search Reddit comments and include raw comment rows in the dataset. Defaults to true.

## `maxDaysOld` (type: `integer`):

📅 Keep only results whose own Reddit timestamp is within this many days. Defaults to 30.

## `maxPostResults` (type: `integer`):

🔢 Cap how many fresh post rows to return. Defaults to 25. Set it to 0 to keep all available post results within the freshness window.

## `maxCommentResults` (type: `integer`):

🔢 Cap how many fresh comment rows to return. Defaults to 25. Set it to 0 to keep all available comment results within the freshness window.

## `proxyConfiguration` (type: `object`):

🌐 Apify proxy settings for Reddit search requests. By default the actor uses Apify residential proxies with no country pin so Reddit search does not depend on long datacenter rotation streaks. Fill this in only when you want to override that default path.

## Actor input object example

```json
{
  "query": "openai",
  "shouldIncludePosts": true,
  "shouldIncludeComments": true,
  "maxDaysOld": 30,
  "maxPostResults": 25,
  "maxCommentResults": 25,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}
```

# Actor output Schema

## `results` (type: `string`):

Dataset of raw Reddit search rows

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "query": "openai"
};

// Run the Actor and wait for it to finish
const run = await client.actor("maximedupre/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "query": "openai" }

# Run the Actor and wait for it to finish
run = client.actor("maximedupre/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "query": "openai"
}' |
apify call maximedupre/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=maximedupre/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper",
        "description": "The best Reddit scraper, for both posts & comments.",
        "version": "0.0",
        "x-build-id": "nbew9GUFL3Kq95KMb"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/maximedupre~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-maximedupre-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/maximedupre~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-maximedupre-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/maximedupre~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-maximedupre-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "query"
                ],
                "properties": {
                    "query": {
                        "title": "Query",
                        "minLength": 1,
                        "type": "string",
                        "description": "🔎 Base Reddit search query. Use this for the simple mode, or keep it as the human-readable label while advanced post and comment query overrides do the exact search work."
                    },
                    "postsQuery": {
                        "title": "Advanced posts query",
                        "minLength": 1,
                        "type": "string",
                        "description": "🧠 Optional advanced query for Reddit post search only. Leave it empty to reuse Query for posts."
                    },
                    "commentsQuery": {
                        "title": "Advanced comments query",
                        "minLength": 1,
                        "type": "string",
                        "description": "🧠 Optional advanced query for Reddit comment search only. Leave it empty to reuse Query for comments."
                    },
                    "shouldIncludePosts": {
                        "title": "Include posts?",
                        "type": "boolean",
                        "description": "📝 Search Reddit posts and include raw post rows in the dataset. Defaults to true.",
                        "default": true
                    },
                    "shouldIncludeComments": {
                        "title": "Include comments?",
                        "type": "boolean",
                        "description": "💬 Search Reddit comments and include raw comment rows in the dataset. Defaults to true.",
                        "default": true
                    },
                    "maxDaysOld": {
                        "title": "Freshness window (days)",
                        "minimum": 0,
                        "type": "integer",
                        "description": "📅 Keep only results whose own Reddit timestamp is within this many days. Defaults to 30.",
                        "default": 30
                    },
                    "maxPostResults": {
                        "title": "Post result cap",
                        "minimum": 0,
                        "type": "integer",
                        "description": "🔢 Cap how many fresh post rows to return. Defaults to 25. Set it to 0 to keep all available post results within the freshness window.",
                        "default": 25
                    },
                    "maxCommentResults": {
                        "title": "Comment result cap",
                        "minimum": 0,
                        "type": "integer",
                        "description": "🔢 Cap how many fresh comment rows to return. Defaults to 25. Set it to 0 to keep all available comment results within the freshness window.",
                        "default": 25
                    },
                    "proxyConfiguration": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "🌐 Apify proxy settings for Reddit search requests. By default the actor uses Apify residential proxies with no country pin so Reddit search does not depend on long datacenter rotation streaks. Fill this in only when you want to override that default path.",
                        "default": {
                            "useApifyProxy": true,
                            "apifyProxyGroups": [
                                "RESIDENTIAL"
                            ]
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
