# Reddit Subreddit Posts Scraper - No API Key (`wiry_kingdom/reddit-subreddit-scraper`) Actor

Scrape any public subreddit. Posts, scores, comments, authors, awards, flairs, timestamps. No API key, no OAuth, no login. Free public Reddit JSON. For alt data, social listening, AI training datasets.

- **URL**: https://apify.com/wiry_kingdom/reddit-subreddit-scraper
- **Developed by:** [Mohieldin Mohamed](https://apify.com/wiry_kingdom) (community)
- **Categories:** Business, Developer tools
- **Stats:** 1 total user, 0 monthly users, 100% of runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per event

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Subreddit Posts Scraper

**Scrape any public subreddit for posts, scores, comments, authors, and metadata. No API key. No OAuth. No login. 100% free public Reddit JSON endpoints.**

This Actor pulls structured post data from any public subreddit using Reddit's official free JSON endpoints — no authentication required. Perfect for **alt data**, **social listening**, **sentiment analysis**, **market research**, **AI training datasets**, and **content trend tracking**.

### What does Reddit Subreddit Scraper do?

You give it a list of subreddit names (e.g. `wallstreetbets`, `webdev`, `saas`, `localllama`, `aitools`). It pulls posts ranked by **hot**, **new**, **top**, **rising**, or **controversial** — paginating automatically up to your max — and returns each post as a clean structured row with:

- **Title, author, score, upvote ratio, comment count**
- **Created timestamp** (ISO format)
- **URL** (the linked content) and **permalink** (the Reddit thread)
- **Linked domain** (useful for tracking which sites get traction)
- **Flair, awards, gilded count**
- **Body text** for self-posts
- **Thumbnail, media type**
- **Flags**: stickied, locked, NSFW, video, self-post

Try it: leave the defaults (`r/wallstreetbets`, hot, top 50), press Start, and watch the dataset fill with the current top WSB stock-picking and meme posts in seconds.
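Under the hood, Reddit's listing JSON nests each post under `data.children[].data`, using Reddit's own field names (`upvote_ratio`, `num_comments`, `created_utc`, and so on). A minimal sketch of the kind of flattening described above — illustrative only, not the Actor's actual code — might look like this:

```python
from datetime import datetime, timezone

def flatten_post(raw: dict) -> dict:
    """Map one Reddit post object (listing JSON `data.children[i].data`)
    to a flat row resembling the Actor's output. Illustrative sketch."""
    d = raw["data"]
    return {
        "subreddit": d["subreddit"],
        "title": d["title"],
        "author": d.get("author"),          # may be missing for deleted accounts
        "score": d["score"],
        "upvoteRatio": d.get("upvote_ratio"),
        "numComments": d["num_comments"],
        # Reddit reports creation time as a Unix epoch float in UTC
        "createdAt": datetime.fromtimestamp(d["created_utc"], tz=timezone.utc).isoformat(),
        "url": d.get("url"),
        "permalink": "https://www.reddit.com" + d["permalink"],
        "isSelfPost": d.get("is_self", False),
        "selftext": d.get("selftext") or None,  # empty string becomes null
    }

sample = {"data": {
    "subreddit": "wallstreetbets", "title": "NVDA earnings preview",
    "author": "stocksavvy", "score": 1234, "upvote_ratio": 0.92,
    "num_comments": 387, "created_utc": 1744727400.0,
    "url": "https://example.com/nvda", "permalink": "/r/wallstreetbets/comments/abc123/",
    "is_self": False, "selftext": "",
}}
row = flatten_post(sample)
```

The Reddit-side keys above are real listing-JSON fields; the exact output field names the Actor emits are documented in the Output section.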

### Why use Reddit Subreddit Scraper?

Reddit is one of the **most-cited alternative data sources** for hedge funds and quant traders. The 2025 Nasdaq State of Alt Data report listed Reddit sentiment as a top-5 institutional alt data source. Common users of Reddit data include:

- **Hedge funds + quant traders** — track r/wallstreetbets sentiment shifts before they hit price
- **Marketing teams** — find which posts and content domains are trending in your niche
- **Founders + indie hackers** — track r/SaaS, r/IndieHackers, r/Entrepreneur for market signals
- **AI/ML researchers** — collect training data for sentiment, summarization, and instruction-following models (Reddit famously licensed its data to Google for AI training in a deal reported at roughly $60M a year)
- **Content marketers** — find trending topics to write about
- **Social listeners** — monitor brand mentions, product feedback, complaints
- **Journalists** — break stories that emerge from niche communities first

This Actor is **dramatically simpler and cheaper** than commercial alternatives like Brandwatch ($800+/month), Sprout Social ($249/month), or PRAW + custom infrastructure. It's also faster to set up than the official Reddit API (which requires OAuth, app registration, and rate limit headaches).

### How to use

1. Click **Try for free** (or **Start**)
2. Paste subreddit names into **Subreddits** (without the `r/` prefix)
3. Pick the **sort order** (hot / new / top / rising / controversial)
4. Set **max posts per subreddit** (default 50, max 1000)
5. Click **Start**
6. Download as JSON, CSV, HTML, or Excel — or schedule daily runs

### Input

- **Subreddits** — list of subreddit names (e.g. `["wallstreetbets", "webdev", "saas"]`)
- **Sort order** — `hot` / `new` / `top` / `rising` / `controversial` (default: `hot`)
- **Time filter** — `hour` / `day` / `week` / `month` / `year` / `all` (only for `top` and `controversial`, default: `day`)
- **Max posts per subreddit** — cap (default 50, max 1000)
- **Include text** — pull `selftext` for self-posts (default: yes)
- **Min score** — filter out posts below this score (default: 0)
- **Extract domains** — parse linked URLs to get the domain (default: yes)
- **Proxy configuration** — optional, recommended for high-volume runs

### Output

```json
{
    "subreddit": "wallstreetbets",
    "title": "NVDA earnings preview - what to expect",
    "author": "stocksavvy",
    "score": 1234,
    "upvoteRatio": 0.92,
    "numComments": 387,
    "createdAt": "2026-04-15T14:30:00.000Z",
    "url": "https://example.com/nvda-earnings-preview",
    "permalink": "https://www.reddit.com/r/wallstreetbets/comments/abc123/nvda_earnings_preview/",
    "linkDomain": "example.com",
    "flair": "DD",
    "isVideo": false,
    "isSelfPost": false,
    "isOver18": false,
    "selftext": null,
    "thumbnailUrl": "https://b.thumbs.redditmedia.com/...jpg",
    "mediaType": "link",
    "awardsCount": 5,
    "gilded": 2,
    "stickied": false,
    "locked": false,
    "edited": false,
    "domain": "example.com",
    "id": "abc123",
    "extractedAt": "2026-04-15T19:00:00.000Z"
}
```

### Data table

| Field | Type | Description |
|-------|------|-------------|
| `subreddit` | string | Subreddit name |
| `title` | string | Post title |
| `author` | string | Reddit username (or `null` for deleted accounts) |
| `score` | number | Upvotes minus downvotes |
| `upvoteRatio` | number | 0.0–1.0 ratio of upvotes to total votes |
| `numComments` | number | Comment count |
| `createdAt` | string | ISO timestamp |
| `url` | string | Linked URL |
| `permalink` | string | Direct link to the Reddit thread |
| `linkDomain` | string | Domain of the linked URL (e.g. `example.com`) |
| `flair` | string | Post flair (e.g. "DD", "Discussion", "Meme") |
| `awardsCount` | number | Total Reddit awards received |
| `gilded` | number | Number of gold awards |
| `stickied` | boolean | Pinned to the top of the subreddit? |
| `locked` | boolean | Comments locked? |
| `isVideo` | boolean | Reddit-hosted video? |
| `isSelfPost` | boolean | Text-only self-post? |
| `isOver18` | boolean | NSFW flag |
| `selftext` | string | Body text (for self-posts) |
| `thumbnailUrl` | string | Thumbnail image URL |
| `mediaType` | string | Reddit's `post_hint` (link, image, video, etc.) |
| `id` | string | Reddit post ID |
| `extractedAt` | string | When this scrape happened |

### Pricing

This Actor uses Apify's **pay-per-event** pricing — extremely cheap for both small spot-checks and bulk historical pulls:

- **Actor start**: $0.01 per run
- **Per post extracted**: $0.005 per post

**Example costs:**

- Daily snapshot of r/wallstreetbets top 50 → $0.26/day ≈ $7.80/month
- Hourly check across 10 niche subreddits (50 posts each) → $2.51/run ≈ $60/day
- Bulk historical pull of 1,000 posts from one subreddit → $5.01
- Track 100 subreddits × 50 posts daily → $25.01/day ≈ $750/month

Compare to Brandwatch ($800+/month minimum), Sprout Social ($249/month), or building your own PRAW pipeline (which requires OAuth, rate limit handling, and infrastructure).

Free Apify tier members get $5/month in platform credits, which covers just under 1,000 posts per month.
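With pay-per-event pricing, a run's cost is just the start fee plus a per-post fee, so any scenario can be estimated up front. A small calculator using the prices listed above (helper names are illustrative):

```python
START_FEE = 0.01   # USD per Actor run
PER_POST = 0.005   # USD per post extracted

def run_cost(posts: int) -> float:
    """Cost in USD of a single run that extracts `posts` posts."""
    return round(START_FEE + posts * PER_POST, 2)

def monthly_cost(posts_per_run: int, runs_per_day: int, days: int = 30) -> float:
    """Cost in USD of repeating that run `runs_per_day` times a day for `days` days."""
    return round(run_cost(posts_per_run) * runs_per_day * days, 2)

daily_50 = monthly_cost(50, 1)   # daily top-50 snapshot of one subreddit
bulk = run_cost(1000)            # one bulk pull of 1,000 posts
```

Useful for budgeting before scaling up to many subreddits or high-frequency schedules.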

### Tips and advanced options

- **Schedule hourly runs** during market hours to track r/wallstreetbets / r/stocks sentiment changes in near-real-time
- **Use `sort: top` + `timeFilter: hour`** to catch breaking trending content
- **Use `sort: new`** + frequent runs to catch every post the moment it's submitted (great for keyword alerts)
- **Track multiple sub-niches at once** — pass an array of related subreddits (e.g. `["aitools", "localllama", "ChatGPT", "ClaudeAI"]`) for a complete AI tools ecosystem snapshot
- **Filter by minScore** to only get posts that are gaining traction — eliminates the long tail noise
- **Pipe into a quant model** to test Reddit sentiment as an alpha factor
- **Combine with the SEC EDGAR Filing Monitor** to correlate r/wallstreetbets buzz with SEC filings
- **Combine with the Hiring Signal Tracker** to triangulate company growth signals across multiple data sources
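As a sketch of the quant-style post-processing the tips above point toward, here is a minimal tally of ticker-like mentions across scraped post titles. This is illustrative downstream code, not part of the Actor; the regex and noise list are naive placeholders you would tune for real use:

```python
import re
from collections import Counter

# Uppercase 2-5 letter tokens, optionally cashtagged ($NVDA)
TICKER = re.compile(r"\$?\b([A-Z]{2,5})\b")
NOISE = {"THE", "AND", "FOR", "YOLO", "WSB", "DD"}  # all-caps words that are not tickers

def ticker_mentions(rows: list[dict]) -> Counter:
    """Count ticker-like tokens in post titles from the Actor's dataset rows."""
    counts: Counter = Counter()
    for row in rows:
        for sym in TICKER.findall(row["title"]):
            if sym not in NOISE:
                counts[sym] += 1
    return counts

rows = [
    {"title": "NVDA earnings preview - what to expect", "score": 1234},
    {"title": "Loaded up on NVDA and TSLA calls", "score": 88},
    {"title": "Why I sold all my TSLA", "score": 40},
]
top = ticker_mentions(rows)  # Counter({'NVDA': 2, 'TSLA': 2})
```

Weighting counts by `score` or `numComments` instead of raw frequency is a common next step when testing sentiment as an alpha factor.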

### FAQ and support

**Do I need a Reddit API key or OAuth credentials?** No. This Actor uses Reddit's free public JSON endpoints (`https://www.reddit.com/r/{subreddit}/{sort}.json`), which require no authentication. We don't even ask for your Reddit username.
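You can sanity-check those public endpoints yourself with nothing but the standard library. A sketch (the User-Agent string is a placeholder — Reddit expects a descriptive one, and unauthenticated access is rate-limited):

```python
import json
import urllib.request

def listing_url(subreddit: str, sort: str = "hot", limit: int = 50) -> str:
    """Build the public JSON endpoint URL for a subreddit listing."""
    return f"https://www.reddit.com/r/{subreddit}/{sort}.json?limit={limit}"

def fetch_titles(subreddit: str, sort: str = "hot", limit: int = 5) -> list[str]:
    """Fetch one page of a listing and return the post titles. Requires network."""
    req = urllib.request.Request(
        listing_url(subreddit, sort, limit),
        headers={"User-Agent": "my-research-script/0.1"},  # placeholder; use your own
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [child["data"]["title"] for child in payload["data"]["children"]]

# print(fetch_titles("wallstreetbets"))  # uncomment to try it live
```

This is exactly the kind of request the Actor makes on your behalf, minus the pagination, retries, and proxy rotation.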

**What about rate limits?** Reddit's public JSON endpoints allow ~60 requests per minute per IP. For large jobs (1000+ posts across many subreddits), enable Apify Proxy in the input to rotate IPs and avoid rate limits.

**Does this work for private or quarantined subreddits?** No. Only public subreddits accessible without login are supported.

**How does this compare to PRAW or the official Reddit API?** PRAW is excellent for Python developers but requires OAuth setup and rate limit handling. The official Reddit API has stricter rate limits and requires app registration. This Actor is the simplest path to clean Reddit data without any of that hassle.

**Is this legal?** Yes. Reddit's public JSON endpoints are explicitly designed for programmatic access. We respect rate limits and identify ourselves with a clear User-Agent header.

**Found a bug?** Open an issue on the Issues tab.

# Actor input Schema

## `subreddits` (type: `array`):

List of subreddit names (without the r/ prefix). Example: `["wallstreetbets", "webdev", "saas"]`

## `sort` (type: `string`):

How Reddit should rank the posts.

## `timeFilter` (type: `string`):

Time range for the 'top' and 'controversial' sort orders. Ignored for hot/new/rising.

## `maxPostsPerSubreddit` (type: `integer`):

Cap on posts to extract per subreddit. Reddit returns up to 100 per page; we paginate automatically.

## `includeText` (type: `boolean`):

Include the full text body of each post (for self-posts).

## `minScore` (type: `integer`):

Filter out posts with score below this threshold. Use 0 for no filter.

## `extractDomains` (type: `boolean`):

Parse the linked URL of each post and extract its domain (useful for tracking which sites get traction).

## `proxyConfiguration` (type: `object`):

Optional Apify Proxy. Reddit may rate-limit aggressive direct requests; enable proxy for large jobs.

## Actor input object example

```json
{
  "subreddits": [
    "wallstreetbets"
  ],
  "sort": "hot",
  "timeFilter": "day",
  "maxPostsPerSubreddit": 50,
  "includeText": true,
  "minScore": 0,
  "extractDomains": true,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# Actor output Schema

## `dataset` (type: `string`):

Dataset of scraped posts, one item per post (see the Output section above for the item structure).

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "wallstreetbets"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("wiry_kingdom/reddit-subreddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "subreddits": ["wallstreetbets"] }

# Run the Actor and wait for it to finish
run = client.actor("wiry_kingdom/reddit-subreddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "wallstreetbets"
  ]
}' |
apify call wiry_kingdom/reddit-subreddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=wiry_kingdom/reddit-subreddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Subreddit Posts Scraper - No API Key",
        "description": "Scrape any public subreddit. Posts, scores, comments, authors, awards, flairs, timestamps. No API key, no OAuth, no login. Free public Reddit JSON. For alt data, social listening, AI training datasets.",
        "version": "0.1",
        "x-build-id": "y9aj9IqvbIJQ4fI0D"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/wiry_kingdom~reddit-subreddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-wiry_kingdom-reddit-subreddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/wiry_kingdom~reddit-subreddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-wiry_kingdom-reddit-subreddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/wiry_kingdom~reddit-subreddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-wiry_kingdom-reddit-subreddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "subreddits"
                ],
                "properties": {
                    "subreddits": {
                        "title": "Subreddits to scrape",
                        "type": "array",
                        "description": "List of subreddit names (without the r/ prefix). Example: [\"wallstreetbets\", \"webdev\", \"saas\"]",
                        "default": [
                            "wallstreetbets"
                        ],
                        "items": {
                            "type": "string"
                        }
                    },
                    "sort": {
                        "title": "Sort order",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "How Reddit should rank the posts.",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "Time window (only for 'top' and 'controversial')",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range for the 'top' and 'controversial' sort orders. Ignored for hot/new/rising.",
                        "default": "day"
                    },
                    "maxPostsPerSubreddit": {
                        "title": "Max posts per subreddit",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Cap on posts to extract per subreddit. Reddit returns up to 100 per page; we paginate automatically.",
                        "default": 50
                    },
                    "includeText": {
                        "title": "Include post body / selftext",
                        "type": "boolean",
                        "description": "Include the full text body of each post (for self-posts).",
                        "default": true
                    },
                    "minScore": {
                        "title": "Minimum score (upvotes - downvotes)",
                        "type": "integer",
                        "description": "Filter out posts with score below this threshold. Use 0 for no filter.",
                        "default": 0
                    },
                    "extractDomains": {
                        "title": "Extract linked domains",
                        "type": "boolean",
                        "description": "Parse the linked URL of each post and extract its domain (useful for tracking which sites get traction).",
                        "default": true
                    },
                    "proxyConfiguration": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Optional Apify Proxy. Reddit may rate-limit aggressive direct requests; enable proxy for large jobs.",
                        "default": {
                            "useApifyProxy": false
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
