# Reddit Bulk Scrape 10000 IDs V1 — Posts, Comments, Subs, Users (`red_crawler/reddit-bulk-scrape`) Actor

Bulk-hydrate up to 10,000 Reddit posts, comments, subreddits, or users per run. Paste IDs, names, or URLs — get one full record per item. No Reddit account, OAuth, or proxy required. Mix formats freely; duplicates auto-removed. $1.99 per 1,000 results.

- **URL**: https://apify.com/red_crawler/reddit-bulk-scrape.md
- **Developed by:** [Red Crawler](https://apify.com/red_crawler) (community)
- **Categories:** Lead generation, Social media, Automation
- **Stats:** 4 total users, 3 monthly users, 100% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $1.99 / 1,000 results

This Actor is priced per event: you pay a fixed price for specific events rather than for Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in the key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
Actors are written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Bulk Scrape

![Endpoints](https://img.shields.io/badge/endpoints-4-blue) ![Auth](https://img.shields.io/badge/Reddit_account-not_required-brightgreen) ![Proxy](https://img.shields.io/badge/proxy-not_required-brightgreen) ![Pricing](https://img.shields.io/badge/pricing-pay_per_result-orange) ![Cap](https://img.shields.io/badge/per_run_cap-10000_items-lightgrey)

Hydrate up to **10000 Reddit IDs, names, or URLs in a single run**. Pick an endpoint — Posts, Comments, Subreddits, or Users — paste your list, and get one fully populated dataset record per item. **No Reddit account, OAuth, or proxy required.**

A single-input Actor: choose the endpoint, paste the list, hit **Start**.

---

### Endpoints at a glance

| # | Endpoint | Input | Cap per run | Best for |
|---|---|---|---|---|
| 1 | **Bulk Posts** | post IDs / URLs | 10000 | refreshing stored post lists, hydrating IDs / URLs |
| 2 | **Bulk Comments** | comment IDs / URLs | 10000 | comment-list hydration, archival pipelines |
| 3 | **Bulk Subreddits** | subreddit names / IDs / URLs | 10000 | community-list enrichment, niche directories |
| 4 | **Bulk Users** | usernames / IDs / URLs | 10000 | CRM enrichment, account-quality scoring |

Inputs accept the full range of formats Reddit uses for each entity:

| Entity | Accepted formats |
|---|---|
| post | full URL · prefixed `t3_1s4a4j6` · stripped ID `1s4a4j6` · short URL `https://reddit.com/comments/1s4a4j6` |
| comment | full URL · prefixed `t1_lwbnv0t` · stripped ID `lwbnv0t` |
| subreddit | name `AskReddit` · prefixed `r/AskReddit` · full ID `t5_2qh1i` · full URL `https://reddit.com/r/AskReddit` |
| user | username `spez` · prefixed `u/spez` · full ID `t2_1w72` · profile URL `https://reddit.com/user/spez` |

Separate inputs with commas **or** newlines — both work. Mix prefixed, stripped, names, and URLs freely; duplicates are removed automatically.
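The Actor performs this splitting and deduplication server-side. For reference, a minimal client-side sketch of the same normalization for post inputs (the helper name and exact rules here are illustrative, not the Actor's code):

```python
import re

def normalize_post_inputs(raw: str) -> list[str]:
    """Split a comma- or newline-separated list of post inputs and
    reduce each entry to a bare post ID, dropping duplicates."""
    ids = []
    for entry in re.split(r"[,\n]", raw):
        entry = entry.strip()
        if not entry:
            continue
        # Full or short URL: the ID follows "/comments/"
        m = re.search(r"/comments/([a-z0-9]+)", entry)
        if m:
            entry = m.group(1)
        # Prefixed fullname like "t3_1s4a4j6" -> "1s4a4j6"
        entry = re.sub(r"^t3_", "", entry)
        if entry not in ids:
            ids.append(entry)
    return ids
```

Because all three forms reduce to the same bare ID, pasting the same post as a URL, a fullname, and a stripped ID yields a single record, not three.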

---

### What you can fetch

#### 1. Bulk Posts

Hydrate a list of Reddit posts.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_posts_ids` | Comma- or newline-separated post inputs. Up to **10000**. |

**Accepted formats** — full IDs (`t3_1s4a4j6`), stripped IDs (`1s4a4j6`), full URLs (`https://www.reddit.com/r/Wordpress/comments/1s4a4j6/`), and short URLs (`https://reddit.com/comments/1s4a4j6`). Mix freely.

**Returns per post** — Reddit ID, fullname, title, body / selftext, author, author fullname, subreddit (name + prefixed + ID), score, ups / downs, upvote ratio, comment count, crosspost count, created + edited timestamps, permalink, external URL, domain, post-type flags (`is_self`, `is_video`, `over_18`, `spoiler`, `locked`, `stickied`, `pinned`, `archived`), distinguished status, removal category, link & author flair, thumbnail, media (images / video / gallery), awards, polls, crosspost source.

**Use it when** — refreshing a stored dataset (yesterday's IDs → today's score / comment counts / edits / deletions), turning a list of links into structured records, hydrating IDs from your own search results.

> **Note:** Posts are **always SFW** in this Actor. NSFW (over-18) posts are not returned, and there is no toggle. The SFW lock applies to **posts only**: comments, subreddits, and users are returned as-is, regardless of any age-gating on the parent post or community.

---

#### 2. Bulk Comments

Hydrate a list of Reddit comments.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_comments_ids` | Comma- or newline-separated comment inputs. Up to **10000**. |

**Accepted formats** — full IDs (`t1_lwbnv0t`), stripped IDs (`lwbnv0t`), and full URLs (`https://www.reddit.com/r/Wordpress/comments/1s4a4j6/comment/lwbnv0t/`). Mix freely.

**Returns per comment** — ID, fullname, parent post fullname, parent comment ID, author + author fullname, body (markdown + HTML), score / ups / downs / controversiality, created + edited timestamps, permalink, OP flag (`is_submitter`), depth, stickied / distinguished / locked / archived / saved / gilded flags, score-hidden flag, subreddit, awards.

**Use it when** — hydrating comment IDs from your own pipelines, comment archives, sentiment analysis on a known set of comments.

---

#### 3. Bulk Subreddits

Hydrate a list of subreddits.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_subreddits_ids` | Comma- or newline-separated subreddit inputs. Up to **10000**. |

**Accepted formats** — subreddit names (`AskReddit`), prefixed names (`r/AskReddit`), full IDs (`t5_2qh1i`), and full URLs (`https://reddit.com/r/AskReddit`). Mix freely.

**Returns per subreddit** — ID, fullname, display name (raw + prefixed), title, subscriber count, active user count, public + full description, created timestamp, language, type (public / private / restricted), NSFW flag, URL, header / icon / banner images, primary + key colors, submit text, allowed submission types (videos / images / polls / galleries).

**Use it when** — subreddit comparison reports, community sizing, profile-page enrichment, building niche directories.

---

#### 4. Bulk Users

Hydrate a list of Reddit users.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_users_ids` | Comma- or newline-separated user inputs. Up to **10000**. |

**Accepted formats** — usernames (`spez`), prefixed names (`u/spez`), full IDs (`t2_1w72`), and profile URLs (`https://reddit.com/user/spez`). Mix freely.

**Returns per user** — ID, name, total karma split into post / comment / award / awardee karma, account creation timestamp, employee / mod / Reddit-Gold / verified / verified-email flags, profile icon, snoovatar image, mini subreddit info, accept-followers flag, hide-from-robots flag.

**Use it when** — CRM / lead enrichment from a list of usernames, account-quality scoring, finding which accounts are still alive, batch profile lookups for influencer research.

---

### How to run

1. **Pick an endpoint** in the "What to fetch" dropdown — Bulk Posts, Bulk Comments, Bulk Subreddits, or Bulk Users.
2. **Open the matching section** and paste your IDs / names / URLs (comma- or newline-separated). **Up to 10000 entries per run** — duplicates are removed automatically.
3. **Click Start.**

The default endpoint is **Bulk Posts** with a small prefilled list, so the Actor runs out of the box.

---

### Output

Results are pushed to the Actor's default dataset, **one record per item**. View as a table or download as JSON / CSV / Excel / XML.

| Behavior | Detail |
|---|---|
| Record granularity | One dataset row per input item that resolved. |
| `endpoint` tag | Every row carries the `endpoint` field for downstream routing. |
| Column order | Most useful columns (id, title / name, score / karma, created date) placed first. |
| Unresolved IDs | Silently dropped — compare input count vs dataset row count to spot misses. |
| Mixed formats | `1s4a4j6`, `t3_1s4a4j6`, full URL all resolve to the same item. Mix freely. |
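Since unresolved IDs are dropped without an error, a quick client-side diff catches them. A minimal sketch, assuming each dataset record exposes its stripped Reddit ID under an `id` key (the exact column name may differ in your output):

```python
def find_missing_ids(inputs: list[str], dataset_items: list[dict]) -> list[str]:
    """Return the input IDs that have no matching dataset row,
    i.e. the ones the run silently dropped."""
    returned = {item.get("id") for item in dataset_items}
    return [i for i in inputs if i not in returned]
```

Run it on your normalized input list against the downloaded dataset to get the exact set of misses, rather than just a count mismatch.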

---

### Status & error reference

**Run status** *(Apify-side, shown on the run page)*

| Apify UI cue | Status | Apify message | Meaning | What to do |
|---|---|---|---|---|
| green check | `SUCCEEDED` | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset to view results. |
| red exclamation | `FAILED` | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged for failed runs. |
| red clock | `TIMED-OUT` | "The Actor timed out. You can resurrect it with a longer timeout to continue where you left off." | Run exceeded its timeout. | Re-run with a smaller list (≤10000 per run). |
| red square outline | `ABORTED` | "The Actor process was aborted. You can resurrect it to continue where you left off." | You stopped the run manually. | No charge for unpushed results. |

**Common in-run conditions** *(visible in run log)*

| Condition | Cause | Result |
|---|---|---|
| Empty result set | None of the inputs resolved (all deleted / banned / typoed). | Run `SUCCEEDED`, 0 records, no charge. |
| Some IDs dropped | Subset deleted / banned / not found. | Run `SUCCEEDED`, fewer rows than inputs. |
| NSFW posts skipped | Bulk Posts endpoint and some inputs were NSFW. | Run `SUCCEEDED`, NSFW posts excluded. |
| Validation error: `endpoint` is required | Missing `endpoint`. | Run `FAILED` immediately, no charge. |
| Validation error: list too long | More than 10000 entries. | Run `FAILED` immediately, no charge. |

---

### Common edge cases

- **Deleted / banned items** — returned with whatever metadata Reddit still exposes (often a stub with `removed_by_category`).
- **Private subreddits** — not accessible. Reddit gates them behind a logged-in account and they're skipped.
- **Quarantined content** — not returned. Reddit hides quarantined posts/communities from anonymous calls.
- **IDs that don't resolve** — silently dropped.
- **Mixed formats** — accepted, no normalization needed on your side.
- **NSFW posts** — never returned (this Actor is SFW-only for posts). Comments / subreddits / users on NSFW communities are returned normally.
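Deleted and banned items arrive mixed in with live records, so downstream pipelines usually want to separate them. A small sketch, keyed on the `removed_by_category` field mentioned above (the key name is assumed to appear on stub records; verify against your own output):

```python
def split_live_and_removed(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition dataset records into fully hydrated items and
    deleted/banned stubs, based on a truthy 'removed_by_category'."""
    removed = [i for i in items if i.get("removed_by_category")]
    live = [i for i in items if not i.get("removed_by_category")]
    return live, removed
```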

---

### Why this Actor is fast

- **Speed — large runs scale linearly and finish in minutes, not hours.** No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per item.
- **Reliability — zero browser flakiness.** No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
- **Footprint — well under the 512 MB allocation, even on full 10000-item runs.** Most browser-based scrapers need 1–4 GB.

---

### Pricing

**$1.99 per 1,000 results.** Pay-per-result — you're only charged for records actually pushed to the dataset.

| Volume | Cost |
|---|---|
| 100 records | ~$0.20 |
| 500 records | ~$1.00 |
| 1,000 records | $1.99 |
| 1,500 records | ~$2.99 |
| 10,000 records (one full run) | ~$19.90 |

| Outcome | Charged? |
|---|---|
| `SUCCEEDED` with results | Yes — $1.99 per 1,000 records pushed. |
| `SUCCEEDED` with zero records | No. |
| `FAILED` (validation / upstream) | No. |
| `ABORTED` | Only for records already pushed before you stopped. |
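Cost is linear in records pushed, so it can be estimated up front. A small sketch of the arithmetic behind the table above, prorating partial thousands (which matches the approximate figures shown):

```python
PRICE_PER_1000 = 1.99  # USD, the rate at publish time

def estimate_cost_usd(records_pushed: int) -> float:
    """Pay-per-result estimate: $1.99 per 1,000 records,
    prorated for partial thousands."""
    return round(records_pushed / 1000 * PRICE_PER_1000, 2)
```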

See the Actor's **Pricing** tab for the live rate. This README is accurate at publish time, but the Pricing tab always reflects the current price.

# Actor input schema

## `endpoint` (type: `string`):

Choose which bulk lookup to run.

## `bulk_posts_ids` (type: `string`):

Comma- or newline-separated post inputs. Up to 10000.

## `bulk_comments_ids` (type: `string`):

Comma- or newline-separated comment inputs. Up to 10000.

## `bulk_subreddits_ids` (type: `string`):

Comma- or newline-separated subreddit inputs. Up to 10000.

## `bulk_users_ids` (type: `string`):

Comma- or newline-separated user inputs. Up to 10000.

## Actor input object example

```json
{
  "endpoint": "bulk_posts",
  "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
  "bulk_comments_ids": "c60n1vq, cszv2lg",
  "bulk_subreddits_ids": "AskReddit, t5_2qh1i, r/wordpress, https://reddit.com/r/learnprogramming",
  "bulk_users_ids": "spez, kn0thing, u/AutoModerator, https://reddit.com/user/spez"
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "endpoint": "bulk_posts",
    "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
    "bulk_comments_ids": "c60n1vq, cszv2lg",
    "bulk_subreddits_ids": "AskReddit, t5_2qh1i, r/wordpress, https://reddit.com/r/learnprogramming",
    "bulk_users_ids": "spez, kn0thing, u/AutoModerator, https://reddit.com/user/spez"
};

// Run the Actor and wait for it to finish
const run = await client.actor("red_crawler/reddit-bulk-scrape").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "endpoint": "bulk_posts",
    "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
    "bulk_comments_ids": "c60n1vq, cszv2lg",
    "bulk_subreddits_ids": "AskReddit, t5_2qh1i, r/wordpress, https://reddit.com/r/learnprogramming",
    "bulk_users_ids": "spez, kn0thing, u/AutoModerator, https://reddit.com/user/spez",
}

# Run the Actor and wait for it to finish
run = client.actor("red_crawler/reddit-bulk-scrape").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "endpoint": "bulk_posts",
  "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
  "bulk_comments_ids": "c60n1vq, cszv2lg",
  "bulk_subreddits_ids": "AskReddit, t5_2qh1i, r/wordpress, https://reddit.com/r/learnprogramming",
  "bulk_users_ids": "spez, kn0thing, u/AutoModerator, https://reddit.com/user/spez"
}' |
apify call red_crawler/reddit-bulk-scrape --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=red_crawler/reddit-bulk-scrape",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Bulk Scrape 10000 IDs V1 — Posts, Comments, Subs, Users",
        "description": "Bulk-hydrate up to 10,000 Reddit posts, comments, subreddits, or users per run. Paste IDs, names, or URLs — get one full record per item. No Reddit account, OAuth, or proxy required. Mix formats freely; duplicates auto-removed. $1.99 per 1,000 results.",
        "version": "1.3",
        "x-build-id": "Q8PXd2z664qyXUhlO"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/red_crawler~reddit-bulk-scrape/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-red_crawler-reddit-bulk-scrape",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-bulk-scrape/runs": {
            "post": {
                "operationId": "runs-sync-red_crawler-reddit-bulk-scrape",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-bulk-scrape/run-sync": {
            "post": {
                "operationId": "run-sync-red_crawler-reddit-bulk-scrape",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "endpoint"
                ],
                "properties": {
                    "endpoint": {
                        "title": "What to fetch",
                        "enum": [
                            "bulk_posts",
                            "bulk_comments",
                            "bulk_subreddits",
                            "bulk_users"
                        ],
                        "type": "string",
                        "description": "Choose which bulk lookup to run.",
                        "default": "bulk_posts"
                    },
                    "bulk_posts_ids": {
                        "title": "Post IDs / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated post inputs. Up to 10000."
                    },
                    "bulk_comments_ids": {
                        "title": "Comment IDs / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated comment inputs. Up to 10000."
                    },
                    "bulk_subreddits_ids": {
                        "title": "Subreddit names / IDs / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated subreddit inputs. Up to 10000."
                    },
                    "bulk_users_ids": {
                        "title": "Usernames / IDs / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated user inputs. Up to 10000."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
