# Reddit User Posts & Comments Scraper | Bulk Export (`clearpath/reddit-user-content-scraper`) Actor

Scrape all posts and comments from any Reddit user profile. Supports sorting, time filters, and bulk username input. Up to 1,000 items per user.

- **URL**: https://apify.com/clearpath/reddit-user-content-scraper.md
- **Developed by:** [ClearPath](https://apify.com/clearpath) (community)
- **Categories:** Developer tools, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100% runs succeeded
- **User rating:** No ratings yet

## Pricing

from $1.99 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
```

```powershell
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit User Content Scraper | Posts & Comments History (2026)

<blockquote style="border-left:4px solid #FF4500;background:#FFF5F2;padding:12px 16px">
<span style="font-size:16px;font-weight:700;color:#1C1917">5,000 posts and comments in under 60 seconds</span> <span style="font-size:15px;color:#57534E">— bulk username support, sorting, and time filters.</span>
</blockquote>

Pass one username or a thousand. Get back every post, comment, or both, sorted by new, top, hot, or controversial.

<table style="width:100%">
<tr>
<td colspan="3" style="padding:10px 14px;background:#006D77;border:none;border-radius:4px 4px 0 0">
<span style="color:#FAFAF9;font-size:14px;font-weight:700;letter-spacing:0.5px">Clearpath Reddit Suite</span>
<span style="color:#EDF6F9;font-size:13px">&nbsp;&nbsp;&bull;&nbsp;&nbsp;Search, analyze, and monitor Reddit at scale</span>
</td>
</tr>
<tr>
<td style="padding:12px 16px;border:1px solid #E7E5E4;border-radius:0 0 0 4px;background:#EDF6F9;border-right:none;border-top:none;vertical-align:top;width:33%">
<span style="color:#006D77;font-size:12px;font-weight:600">&#10148; You are here</span><br>
<a href="https://apify.com/clearpath/reddit-user-content-scraper" style="color:#006D77;text-decoration:none;font-weight:700;font-size:14px">User Content Scraper</a><br>
<span style="color:#78716C;font-size:12px">Posts & comments history</span>
</td>
<td style="padding:12px 16px;border:1px solid #E7E5E4;border-right:none;border-top:none;vertical-align:top;width:33%">
<img src="https://apify-image-uploads-prod.s3.us-east-1.amazonaws.com/DSvMCAwsufMyZeLyt-actor-LA9VzVDphD8rUCFnW-fLjTXd3OC3-reddit-profile-scraper-logo.png" width="24" height="24" style="vertical-align:middle"> &nbsp;<a href="https://apify.com/clearpath/reddit-profile-scraper" style="color:#1C1917;text-decoration:none;font-weight:700;font-size:14px">Reddit Profile Scraper</a><br>
<span style="color:#78716C;font-size:12px">Bulk profile & karma lookup</span>
</td>
<td style="padding:12px 16px;border:1px solid #E7E5E4;border-radius:0 0 4px 0;border-top:none;vertical-align:top;width:33%">
<img src="https://apify-image-uploads-prod.s3.us-east-1.amazonaws.com/DSvMCAwsufMyZeLyt-actor-nckWiP0hDfAhfSqIP-a7kh9CcAjF-reddit-to-llm-logo.png" width="24" height="24" style="vertical-align:middle"> &nbsp;<a href="https://apify.com/clearpath/reddit-mcp" style="color:#1C1917;text-decoration:none;font-weight:700;font-size:14px">Reddit MCP Server</a><br>
<span style="color:#78716C;font-size:12px">Search posts & comments for AI agents</span>
</td>
</tr>
</table>

#### Copy to your AI assistant

Copy this block into ChatGPT, Claude, Cursor, or any LLM to start building with this data.

````

Reddit User Content Scraper (clearpath/reddit-user-content-scraper) on Apify scrapes posts and comments from Reddit user profiles in bulk. Returns 100+ fields per item including title, body text, score, subreddit, permalink, timestamps, upvote ratio, awards, flair, and author metadata. Supports sorting (new, hot, top, controversial) and time filters (hour, day, week, month, year, all). Input: single username, array of usernames, file URL, or uploaded CSV/TXT file. Accepts any format: plain username, u/name, @name, or profile URLs. Content type: posts only, comments only, or both. Max ~1,000 items per user (Reddit server limit). Output: JSON array, one object per post/comment. Pricing: $1.99 per 1,000 items (PPE). No Reddit API key or login required. Apify token required.

````

### Key Features

- **Full post & comment history** — scrape everything a user has posted or commented
- **Sorting & time filters** — new, hot, top, controversial + time range (hour to all time)
- **Bulk username support** — process hundreds of users in parallel
- **100+ fields per item** — title, body, score, subreddit, permalink, flair, awards, author metadata
- **No Reddit login needed** — uses public data, no API keys required

### How to Scrape Reddit User History

#### Single user, all content

```json
{
    "username": "thisisbillgates"
}
```

#### Top posts from a user

```json
{
    "username": "GovSchwarzenegger",
    "contentType": "posts",
    "sort": "top",
    "timeFilter": "all",
    "maxItemsPerUser": 50
}
```

#### Comments from multiple users

```json
{
    "usernames": ["thisisbillgates", "GovSchwarzenegger", "kn0thing"],
    "contentType": "comments",
    "sort": "new",
    "maxItemsPerUser": 100
}
```

#### Bulk from file URL

```json
{
    "usernamesFileUrl": "https://example.com/my-usernames.csv"
}
```

You can also upload a file directly using the **Upload file** field in the Apify Console.
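For the file inputs, a TXT file lists one username per line, and a CSV file needs a `username` column (which the Actor auto-detects). As a purely illustrative sketch, here is how you might build such a CSV in Python; the usernames are examples only:

```python
import csv
import io

# Build a CSV with the 'username' header column the Actor auto-detects.
# The names below are illustrative.
usernames = ["thisisbillgates", "u/GovSchwarzenegger", "kn0thing"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["username"])  # header row
for name in usernames:
    writer.writerow([name])

csv_text = buf.getvalue()
print(csv_text)
```

Host the resulting file anywhere publicly reachable and pass its URL via `usernamesFileUrl`, or upload it directly in the Console.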

### Input Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `username` | string | | A single Reddit username |
| `usernames` | string\[] | `[]` | List of usernames, `u/` prefixed names, or profile URLs |
| `contentType` | select | `overview` | `overview` (both), `posts`, or `comments` |
| `sort` | select | `new` | `new`, `hot`, `top`, `controversial` |
| `timeFilter` | select | `all` | `hour`, `day`, `week`, `month`, `year`, `all` (for top/controversial) |
| `maxItemsPerUser` | integer | `100` | Max items per user (1-1000) |
| `usernamesFileUrl` | string | | URL to a hosted .txt or .csv file |
| `usernamesFile` | string | | Upload a .txt or .csv file via the Apify Console |

### What Data Can You Extract from Reddit User History?

#### Post fields

```json
{
    "_username": "thisisbillgates",
    "_status": "found",
    "_content_type": "post",
    "title": "With all of the negative headlines dominating the news these days, it can be difficult to spot signs of progress. What makes you optimistic about the future?",
    "selftext": "",
    "subreddit": "AskReddit",
    "score": 139503,
    "num_comments": 20875,
    "created_utc": 1519761227.0,
    "permalink": "/r/AskReddit/comments/80phz7/with_all_of_the_negative_headlines_dominating_the/",
    "url": "https://www.reddit.com/r/AskReddit/comments/80phz7/...",
    "author": "thisisbillgates",
    "domain": "self.AskReddit",
    "is_self": true,
    "over_18": false,
    "upvote_ratio": 0.92,
    "id": "80phz7",
    "name": "t3_80phz7"
}
```

#### Comment fields

```json
{
    "_username": "thisisbillgates",
    "_status": "found",
    "_content_type": "comment",
    "body": "I would be glad to pass along your thoughts on this to the right person at Microsoft...",
    "subreddit": "IAmA",
    "score": 37256,
    "created_utc": 1519758516.0,
    "permalink": "/r/IAmA/comments/80ow6w/.../dux4be8/",
    "author": "thisisbillgates",
    "link_title": "I'm Bill Gates, co-chair of the Bill & Melinda Gates Foundation. Ask Me Anything.",
    "parent_id": "t1_dux2k81",
    "id": "dux4be8",
    "name": "t1_dux4be8"
}
```

Each item includes 100+ fields. The examples above show the most commonly used ones. All public fields Reddit returns are included.
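A minimal sketch of consuming these items, assuming only the fields shown in the examples above: `_content_type` distinguishes posts from comments, posts carry `title` while comments carry `body`, and `created_utc` is a Unix timestamp in seconds:

```python
from datetime import datetime, timezone

# Sample items shaped like the examples above, truncated to a few fields.
items = [
    {"_content_type": "post", "title": "Example post",
     "subreddit": "AskReddit", "score": 139503, "created_utc": 1519761227.0},
    {"_content_type": "comment", "body": "Example comment",
     "subreddit": "IAmA", "score": 37256, "created_utc": 1519758516.0},
]

for item in items:
    # Posts have 'title'; comments have 'body'.
    text = item.get("title") or item.get("body", "")
    # created_utc is seconds since the Unix epoch (UTC).
    when = datetime.fromtimestamp(item["created_utc"], tz=timezone.utc)
    print(f"[{item['_content_type']}] r/{item['subreddit']} "
          f"{when:%Y-%m-%d} ({item['score']} points): {text}")
```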

### Speed

| Users | Items/user | Time |
|-------|-----------|------|
| 1 | 100 | ~1 second |
| 1 | 1,000 | ~10 seconds |
| 10 | 100 each | ~5 seconds |
| 100 | 100 each | ~30 seconds |

Bulk speed comes from running multiple users in parallel. Rate limits are handled automatically with proxy rotation and retries.

### Pricing — Pay Per Event (PPE)

<span style="font-size:18px;font-weight:700;color:#E29578">$1.99 per 1,000 items</span>

### FAQ

**How much content can I scrape per user?**
Up to ~1,000 posts and ~1,000 comments per user. This is a Reddit server-side limit, not an Actor limit.

**Do I need a Reddit account or API key?**
No. The Actor uses publicly available data. No login, no API key, no OAuth.

**What happens with deleted or suspended accounts?**
They're included in the output with `"_status": "not_found"` so you can see exactly which usernames didn't resolve. You're only charged for found content.
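To act on this after a run, you can split the dataset by `_status`; a small sketch with made-up sample items:

```python
# Separate resolved content from usernames that didn't resolve,
# using the `_status` field described above. Items are illustrative.
items = [
    {"_username": "thisisbillgates", "_status": "found", "_content_type": "post"},
    {"_username": "some_deleted_user", "_status": "not_found"},
]

found = [i for i in items if i["_status"] == "found"]
missing = sorted({i["_username"] for i in items if i["_status"] == "not_found"})
print(f"{len(found)} items found; unresolved users: {missing}")
```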

**What input formats are supported?**
Plain username (`thisisbillgates`), prefixed (`u/name`, `/u/name`, `@name`), full URLs (`reddit.com/user/name`), and uploaded CSV/TXT files.
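The Actor parses all of these formats itself, so pre-cleaning is unnecessary; but if you want to normalize names in your own pipeline anyway, a hypothetical normalizer covering the formats listed above might look like this:

```python
import re

def normalize_username(raw: str) -> str:
    """Illustrative only: reduce the accepted input formats to a bare username."""
    raw = raw.strip()
    # Full profile URLs: reddit.com/user/name or reddit.com/u/name
    m = re.search(r"reddit\.com/(?:user|u)/([^/?#]+)", raw)
    if m:
        return m.group(1)
    # Prefix forms: u/name, /u/name, @name
    return re.sub(r"^(?:/?u/|@)", "", raw)

print(normalize_username("https://www.reddit.com/user/thisisbillgates/"))
print(normalize_username("/u/kn0thing"))
print(normalize_username("@GovSchwarzenegger"))
```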

**What's the difference between "overview" and separate posts/comments?**
Overview returns posts and comments mixed in chronological order, which is how Reddit's profile page works. Separate modes give you only posts or only comments.

**How is the data structured?**
One JSON object per post or comment. Posts have `title`, `selftext`, `num_comments`. Comments have `body`, `link_title`, `parent_id`. Both share `score`, `subreddit`, `permalink`, `created_utc`, and 90+ more fields.

### Support

- **Bugs**: Issues tab
- **Features**: Email or issues
- **Email**: max@mapa.slmail.me

### Legal Compliance

Extracts publicly available data. Users must comply with Reddit's terms of service and applicable data protection regulations (GDPR, CCPA).

***

*Bulk Reddit user history at scale. Posts, comments, or both. No login, no API key.*

# Actor input Schema

## `username` (type: `string`):

A single Reddit username, u/name, or profile URL.

## `usernames` (type: `array`):

Add multiple usernames. Use 'Bulk edit' to paste a list.

## `contentType` (type: `string`):

What to scrape from each user.

## `sort` (type: `string`):

How to sort results.

## `timeFilter` (type: `string`):

Only applies when sorting by Top or Controversial.

## `maxItemsPerUser` (type: `integer`):

Maximum posts/comments to scrape per user. Reddit limits to ~1,000 per content type.

## `usernamesFile` (type: `string`):

Upload a .txt or .csv file, or paste a URL. TXT: one username per line. CSV: auto-detects 'username' column.

## Actor input object example

```json
{
  "username": "thisisbillgates",
  "usernames": [],
  "contentType": "overview",
  "sort": "new",
  "timeFilter": "all",
  "maxItemsPerUser": 1000
}
```

# Actor output Schema

## `results` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "username": "thisisbillgates",
    "maxItemsPerUser": 1000
};

// Run the Actor and wait for it to finish
const run = await client.actor("clearpath/reddit-user-content-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "username": "thisisbillgates",
    "maxItemsPerUser": 1000,
}

# Run the Actor and wait for it to finish
run = client.actor("clearpath/reddit-user-content-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "username": "thisisbillgates",
  "maxItemsPerUser": 1000
}' |
apify call clearpath/reddit-user-content-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=clearpath/reddit-user-content-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit User Posts & Comments Scraper | Bulk Export",
        "description": "Scrape all posts and comments from any Reddit user profile. Supports sorting, time filters, and bulk username input. Up to 1,000 items per user.",
        "version": "0.0",
        "x-build-id": "hQ8py9GMuqZhPJ9bj"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/clearpath~reddit-user-content-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-clearpath-reddit-user-content-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/clearpath~reddit-user-content-scraper/runs": {
            "post": {
                "operationId": "runs-sync-clearpath-reddit-user-content-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/clearpath~reddit-user-content-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-clearpath-reddit-user-content-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "username": {
                        "title": "Reddit username",
                        "type": "string",
                        "description": "A single Reddit username, u/name, or profile URL."
                    },
                    "usernames": {
                        "title": "Multiple usernames",
                        "type": "array",
                        "description": "Add multiple usernames. Use 'Bulk edit' to paste a list.",
                        "items": {
                            "type": "string"
                        },
                        "default": []
                    },
                    "contentType": {
                        "title": "Content type",
                        "enum": [
                            "overview",
                            "posts",
                            "comments"
                        ],
                        "type": "string",
                        "description": "What to scrape from each user.",
                        "default": "overview"
                    },
                    "sort": {
                        "title": "Sort by",
                        "enum": [
                            "new",
                            "hot",
                            "top",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "How to sort results.",
                        "default": "new"
                    },
                    "timeFilter": {
                        "title": "Time filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Only applies when sorting by Top or Controversial.",
                        "default": "all"
                    },
                    "maxItemsPerUser": {
                        "title": "Max items per user",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Maximum posts/comments to scrape per user. Reddit limits to ~1,000 per content type.",
                        "default": 1000
                    },
                    "usernamesFile": {
                        "title": "Usernames file",
                        "type": "string",
                        "description": "Upload a .txt or .csv file, or paste a URL. TXT: one username per line. CSV: auto-detects 'username' column."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
