# Reddit Bulk Scrape 10000 IDs V2 — Posts, Comments, Subs, Users (`red_crawler/reddit-bulk-scrape-v2`) Actor

Bulk-scrape Reddit posts, comments, subreddits, and users in a single call. Pick one of 5 endpoints and paste up to 10000 inputs — IDs, stripped IDs, URLs, or usernames (depending on endpoint). Returns full GQL metadata as one dataset record per item. No Reddit account or proxy required.

- **URL**: https://apify.com/red_crawler/reddit-bulk-scrape-v2.md
- **Developed by:** [Red Crawler](https://apify.com/red_crawler) (community)
- **Categories:** Automation, Social media, Developer tools
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: 5.00 out of 5 stars

## Pricing

from $1.99 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in the key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Note that "Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Bulk Scrape V2

![Endpoints](https://img.shields.io/badge/endpoints-5-blue) ![Auth](https://img.shields.io/badge/Reddit_account-not_required-brightgreen) ![Proxy](https://img.shields.io/badge/proxy-not_required-brightgreen) ![Pricing](https://img.shields.io/badge/pricing-pay_per_result-orange)

Hydrate large lists of Reddit IDs in a single run — posts, comments, subreddits, and users. **5 bulk-by-ID endpoints in one actor.** **No Reddit account, OAuth, or proxy required.**

Paste up to **10000 IDs / usernames per run** and get one full record per item back in the dataset.

> **Need feeds, comment trees, or single-record lookups?** They live in the companion actor [**Reddit Scraper V2**](https://apify.com/triangular_triangle/reddit-scrape-v2) — 11 endpoints covering post comments, profile feeds, subreddit feeds, and detailed comment lookups.

---

### Endpoints at a glance

| # | Endpoint | Input | Cap per run | Best for |
|---|---|---|---|---|
| 1 | **Bulk Posts by ID** | post IDs (raw / `t3_` / URLs) | 10000 | post-list enrichment, hydrating stored IDs |
| 2 | **Bulk Comments by ID** | comment IDs (raw / `t1_` / URLs) | 10000 | comment-list hydration, archival pipelines |
| 3 | **Bulk Communities by ID** | subreddit IDs (stripped or `t5_`) | 10000 | community-list enrichment by ID |
| 4 | **Bulk Profiles by ID** | user IDs (stripped or `t2_`) | 10000 | user-list enrichment by ID |
| 5 | **Bulk Profiles by Name** | usernames / `u/name` / profile URLs | 10000 | user-list enrichment by username |

Inputs accept the most-permissive format Reddit uses for each entity:

| Entity | Accepted formats |
|---|---|
| post | full URL · prefixed `t3_1s4a4j6` · stripped ID `1s4a4j6` |
| comment | full URL · prefixed `t1_lwbnv0t` · stripped ID `lwbnv0t` |
| subreddit (by ID) | prefixed `t5_2qh1i` · stripped ID `2qh1i` |
| user (by ID) | prefixed `t2_1w72` · stripped ID `1w72` |
| user (by name) | username `spez` · prefixed `u/spez` · profile URL `https://reddit.com/user/spez` |

Separate inputs with commas **or** newlines — both work. Mix prefixed and stripped freely; duplicates are removed automatically.
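
The actor applies this normalization and de-duplication for you, but if you want to pre-clean a mixed list on your side (for example before storing it), a minimal Python sketch of the same rules might look like the following. The permalink regex is an assumption based on the standard `/comments/<id>/` URL shape:

```python
import re

def normalize_post_ids(raw: str) -> str:
    """Normalize a mixed list of post inputs (URLs, t3_ IDs, stripped IDs)
    into a comma-separated, de-duplicated string of stripped IDs."""
    items = [s.strip() for s in re.split(r"[,\n]+", raw) if s.strip()]
    seen, out = set(), []
    for item in items:
        # Full permalink: .../comments/<id>/... (assumed URL shape)
        m = re.search(r"/comments/([a-z0-9]+)", item)
        if m:
            item = m.group(1)
        # Prefixed ID: t3_1s4a4j6 -> 1s4a4j6
        if item.startswith("t3_"):
            item = item[3:]
        if item not in seen:
            seen.add(item)
            out.append(item)
    return ", ".join(out)

print(normalize_post_ids(
    "t3_1s4a4j6, 1jq3e8u\nhttps://www.reddit.com/r/Wordpress/comments/1s4a4j6/"
))
# -> "1s4a4j6, 1jq3e8u"
```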

---

### What you can fetch

#### 1. Bulk Posts by ID

Hydrate a list of post IDs to full post records in one call.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_posts_ids` | Comma- or newline-separated post inputs. Up to **10000**. |

**Accepted formats** — full IDs (`t3_1s4a4j6`), stripped IDs (`1s4a4j6`), and full URLs (`https://www.reddit.com/r/Wordpress/comments/1s4a4j6/`). Mix freely.

**Returns per post** — title, body, score, comment count, awards, flair, media (images / video / gallery), all post flags, subreddit, author, created timestamp.

**Use it when** — you have a list of post IDs (from your DB, a previous scrape, or a CSV) and want full post payloads back in one run.
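
For example, assuming your IDs sit in a one-column CSV (the `post_ids.csv` filename and `post_id` column name are hypothetical), a minimal sketch with the Apify Python client could look like this:

```python
import csv

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Read post IDs from a one-column CSV (hypothetical file and column name)
with open("post_ids.csv", newline="") as f:
    ids = [row["post_id"] for row in csv.DictReader(f)]

# One record per resolvable ID is pushed to the run's default dataset
run = client.actor("red_crawler/reddit-bulk-scrape-v2").call(run_input={
    "endpoint": "bulk_posts",
    # Newline-separated input works the same as comma-separated
    "bulk_posts_ids": "\n".join(ids[:10000]),  # stay within the 10000-per-run cap
})

# "id" and "title" are among the columns listed in the Output section
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("id"), item.get("title"))
```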

---

#### 2. Bulk Comments by ID

Hydrate a list of comment IDs to full comment records in one call.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_comments_ids` | Comma- or newline-separated comment inputs. Up to **10000**. |

**Accepted formats** — full IDs (`t1_lwbnv0t`), stripped IDs (`lwbnv0t`), and full URLs (`https://www.reddit.com/r/Wordpress/comments/1s4a4j6/comment/lwbnv0t/`). Mix freely.

**Returns per comment** — body (markdown + HTML), score, author, depth, all comment flags, parent post / parent comment IDs, awards, created + edited timestamps, permalink.

**Use it when** — comment-list hydration, archival pipelines, refreshing a stored set of comment IDs.

---

#### 3. Bulk Communities by ID

Hydrate a list of subreddit `t5_` IDs to full community records.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_communities_ids` | Comma- or newline-separated subreddit IDs. Up to **10000**. |

**Accepted formats** — full IDs (`t5_2qh1i`) and stripped IDs (`2qh1i`). Mix prefixed and stripped freely.

> **ID-only endpoint.** Reddit's bulk subreddit lookup works only by ID, not by name. To look up subreddits by name (`AskReddit`), `r/name`, or URL, use the V1 actor [**Reddit Bulk Scrape**](https://apify.com/triangular_triangle/reddit-bulk-scrape).

**Returns per subreddit** — subscriber count, public + full description, theme (banner, icon, colors), allowed submission types, NSFW flag, type (public / private / restricted), created timestamp.

**Use it when** — community-list enrichment, sidebar / theme audits, hydrating a list of subreddits stored by ID.

---

#### 4. Bulk Profiles by ID

Hydrate a list of user `t2_` IDs to full Redditor records.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_profiles_by_id_ids` | Comma- or newline-separated user IDs. Up to **10000**. |

**Accepted formats** — full IDs (`t2_1w72`) and stripped IDs (`1w72`). Mix prefixed and stripped freely.

> **ID-only endpoint.** To look up users by username, `u/name`, or profile URL, use **Bulk Profiles by Name** below.

**Returns per user** — karma split into post / comment / award / awardee, account creation date, snoovatar, banner, accepted-DMs flag, mod info, employee / verified flags, premium status.

**Use it when** — you have a list of stable `t2_` IDs (which never change, even after a username rename) and want full profile records back.

---

#### 5. Bulk Profiles by Name

Hydrate a list of usernames to full Redditor records.

**Inputs**

| Field | Notes |
|---|---|
| `bulk_profiles_names` | Comma- or newline-separated user inputs. Up to **10000**. |

**Accepted formats** — usernames (`spez`), prefixed names (`u/spez`), and profile URLs (`https://reddit.com/user/spez`). Mix freely.

**Returns per user** — same rich profile record as Bulk Profiles by ID.

**Use it when** — you have a list of usernames (from comments, mentions, a CSV) and want full profiles in one run.
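
If the usernames come from an earlier comment scrape, you might pull the authors out of that dataset and feed them straight into this endpoint. A rough sketch, assuming the comment records expose the author under an `author` field (check your actual dataset columns) and using a placeholder dataset ID:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Collect unique authors from a previous comment run (placeholder dataset ID;
# the "author" field name is an assumption)
authors = set()
for item in client.dataset("<COMMENTS_DATASET_ID>").iterate_items():
    author = item.get("author")
    if author and author not in ("[deleted]", "AutoModerator"):
        authors.add(author)

run = client.actor("red_crawler/reddit-bulk-scrape-v2").call(run_input={
    "endpoint": "bulk_profiles",
    "bulk_profiles_names": ", ".join(sorted(authors)[:10000]),
})
print("Profiles dataset:", run["defaultDatasetId"])
```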

---

### How to run

1. **Pick an endpoint** in the "What to fetch" dropdown.
2. **Open the matching section** and paste your IDs / usernames (comma- or newline-separated). Each section is independent.
3. **Click Start.**

The default endpoint is **Bulk Posts by ID**, prefilled with a small example list so the actor runs out of the box.

---

### Output

Results are pushed to the actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.

| Endpoint | Rows pushed |
|---|---|
| Bulk Posts by ID | one record per ID (up to 10000) |
| Bulk Comments by ID | one record per ID (up to 10000) |
| Bulk Communities by ID | one record per ID (up to 10000) |
| Bulk Profiles by ID | one record per ID (up to 10000) |
| Bulk Profiles by Name | one record per username (up to 10000) |

Every record carries an `endpoint` field. Most useful columns (id, title / name, score / karma, created date) are placed first. You only ever pay per record pushed to the dataset (see Pricing below).
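
Because every record carries an `endpoint` field, you can merge datasets from different runs and still split them apart downstream. A small sketch with the Python client, using placeholder dataset IDs:

```python
from collections import defaultdict

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Bucket merged records by the endpoint that produced them
buckets = defaultdict(list)
for dataset_id in ("<POSTS_RUN_DATASET_ID>", "<PROFILES_RUN_DATASET_ID>"):
    for item in client.dataset(dataset_id).iterate_items():
        buckets[item.get("endpoint", "unknown")].append(item)

for endpoint, records in buckets.items():
    print(f"{endpoint}: {len(records)} records")
```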

---

### Status & error reference

**Run status** *(Apify-side, shown on the run page)*

| Apify UI cue | Status | Apify message | Meaning | What to do |
|---|---|---|---|---|
| green check | `SUCCEEDED` | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset. |
| red exclamation | `FAILED` | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged. |
| red clock | `TIMED-OUT` | "The Actor timed out…" | Run exceeded its timeout. | Re-run with a smaller batch. |
| red square outline | `ABORTED` | "The Actor process was aborted…" | You stopped the run manually. | No charge for unpushed results. |

**Common in-run conditions** *(visible in run log)*

| Condition | Cause | Result |
|---|---|---|
| Empty result set | None of the IDs / names matched a live entity. | Run `SUCCEEDED`, 0 records, no charge. |
| Missing IDs in output | Some IDs were deleted, banned, or never existed. | Run `SUCCEEDED`; only resolvable IDs are returned. |
| Suspended account | Username / `t2_` is suspended. | Run `SUCCEEDED`, mostly-null record for that user. |
| Input list too long | More than 10000 IDs / usernames. | Run `FAILED` with a clear validation error. No charge. |
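
If your list exceeds the 10000 cap, split it into chunks of at most 10000 and start one run per chunk instead of letting a single oversized run fail validation. A minimal sketch with the Python client:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

all_ids: list[str] = []  # fill with your full list of post IDs (may exceed 10000)
CHUNK = 10000

# One run per chunk of at most 10000 IDs
for i in range(0, len(all_ids), CHUNK):
    chunk = all_ids[i:i + CHUNK]
    run = client.actor("red_crawler/reddit-bulk-scrape-v2").call(run_input={
        "endpoint": "bulk_posts",
        "bulk_posts_ids": "\n".join(chunk),
    })
    print(f"Chunk {i // CHUNK + 1}: {len(chunk)} IDs -> dataset {run['defaultDatasetId']}")
```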

---

### Common edge cases

- **Deleted / removed posts and comments** — partial metadata returned with `removed_by_category` populated.
- **Suspended / deleted accounts** — minimal data; expect most fields to be null.
- **Banned subreddits** — no record is returned for that ID.
- **ID format flexibility** — raw IDs, prefixed (`t1_`, `t3_`, `t5_`, `t2_`), and full Reddit URLs are all accepted on post / comment endpoints.
- **Username rename** — `t2_` IDs are stable; usernames are not. Use **Bulk Profiles by ID** if you need long-term-stable references.
- **Single-record + feed lookups** live in the companion actor [Reddit Scraper V2](https://apify.com/triangular_triangle/reddit-scrape-v2) — use it for post comments, profile feeds, subreddit feeds, single-record lookups, and linked-comment context.

---

### Why this actor is fast

- **Speed — a full 10000-item run completes in around 75 seconds.** No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per item.
- **Reliability — zero browser flakiness.** No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
- **Footprint — runs at 512 MB with ~4× headroom on full-size runs.**

| Run profile | Peak memory | Avg memory | Avg CPU | Peak CPU |
|---|---|---|---|---|
| Bulk by ID, 10000 items | ~95 MB (~18% of 512 MB) | ~91 MB | ~10% | ~57% |

Leave the **Memory** field at its default and you have plenty of headroom for spiky inputs, slow networks, or large lists. There's no benefit to bumping it higher.

---

### Pricing

**Pay-per-result.** You're only charged for records actually pushed to the dataset.

| Outcome | Charged? |
|---|---|
| `SUCCEEDED` with results | Yes — per record pushed. |
| `SUCCEEDED` with zero records | No. |
| `FAILED` (validation / upstream) | No. |
| `ABORTED` | Only for records already pushed before you stopped. |

See the actor's **Pricing** tab for the current per-result rate.

# Actor input schema

## `endpoint` (type: `string`):

Choose which bulk lookup to run.

## `bulk_posts_ids` (type: `string`):

Comma- or newline-separated post inputs. Up to 10000.

## `bulk_comments_ids` (type: `string`):

Comma- or newline-separated comment inputs. Up to 10000.

## `bulk_communities_ids` (type: `string`):

Comma- or newline-separated subreddit IDs. Up to 10000.

## `bulk_profiles_by_id_ids` (type: `string`):

Comma- or newline-separated user IDs. Up to 10000.

## `bulk_profiles_names` (type: `string`):

Comma- or newline-separated user inputs. Up to 10000.

## Actor input object example

```json
{
  "endpoint": "bulk_posts",
  "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
  "bulk_comments_ids": "c60n1vq, cszv2lg",
  "bulk_communities_ids": "t5_2qh1i, 2qh0u",
  "bulk_profiles_by_id_ids": "t2_1w72, 3djhw",
  "bulk_profiles_names": "spez, u/kn0thing, https://reddit.com/user/AutoModerator"
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "endpoint": "bulk_posts",
    "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
    "bulk_comments_ids": "c60n1vq, cszv2lg",
    "bulk_communities_ids": "t5_2qh1i, 2qh0u",
    "bulk_profiles_by_id_ids": "t2_1w72, 3djhw",
    "bulk_profiles_names": "spez, u/kn0thing, https://reddit.com/user/AutoModerator"
};

// Run the Actor and wait for it to finish
const run = await client.actor("red_crawler/reddit-bulk-scrape-v2").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "endpoint": "bulk_posts",
    "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
    "bulk_comments_ids": "c60n1vq, cszv2lg",
    "bulk_communities_ids": "t5_2qh1i, 2qh0u",
    "bulk_profiles_by_id_ids": "t2_1w72, 3djhw",
    "bulk_profiles_names": "spez, u/kn0thing, https://reddit.com/user/AutoModerator",
}

# Run the Actor and wait for it to finish
run = client.actor("red_crawler/reddit-bulk-scrape-v2").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "endpoint": "bulk_posts",
  "bulk_posts_ids": "1jq3e8u, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1szbpra/",
  "bulk_comments_ids": "c60n1vq, cszv2lg",
  "bulk_communities_ids": "t5_2qh1i, 2qh0u",
  "bulk_profiles_by_id_ids": "t2_1w72, 3djhw",
  "bulk_profiles_names": "spez, u/kn0thing, https://reddit.com/user/AutoModerator"
}' |
apify call red_crawler/reddit-bulk-scrape-v2 --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=red_crawler/reddit-bulk-scrape-v2",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Bulk Scrape 10000 IDs V2 — Posts, Comments, Subs, Users",
        "description": "Bulk-scrape Reddit posts, comments, subreddits, and users in a single call. Pick one of 5 endpoints and paste up to 10000 inputs — IDs, stripped IDs, URLs, or usernames (depending on endpoint). Returns full GQL metadata as one dataset record per item. No Reddit account or proxy required.",
        "version": "0.9",
        "x-build-id": "4e4jdDWkC3dojpXva"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/red_crawler~reddit-bulk-scrape-v2/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-red_crawler-reddit-bulk-scrape-v2",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-bulk-scrape-v2/runs": {
            "post": {
                "operationId": "runs-sync-red_crawler-reddit-bulk-scrape-v2",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-bulk-scrape-v2/run-sync": {
            "post": {
                "operationId": "run-sync-red_crawler-reddit-bulk-scrape-v2",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "endpoint"
                ],
                "properties": {
                    "endpoint": {
                        "title": "What to fetch",
                        "enum": [
                            "bulk_posts",
                            "bulk_comments",
                            "bulk_communities",
                            "bulk_profiles_by_id",
                            "bulk_profiles"
                        ],
                        "type": "string",
                        "description": "Choose which bulk lookup to run.",
                        "default": "bulk_posts"
                    },
                    "bulk_posts_ids": {
                        "title": "Post IDs / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated post inputs. Up to 10000."
                    },
                    "bulk_comments_ids": {
                        "title": "Comment IDs / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated comment inputs. Up to 10000."
                    },
                    "bulk_communities_ids": {
                        "title": "Community IDs (t5_)",
                        "type": "string",
                        "description": "Comma- or newline-separated subreddit IDs. Up to 10000."
                    },
                    "bulk_profiles_by_id_ids": {
                        "title": "User IDs (t2_)",
                        "type": "string",
                        "description": "Comma- or newline-separated user IDs. Up to 10000."
                    },
                    "bulk_profiles_names": {
                        "title": "Usernames / URLs",
                        "type": "string",
                        "description": "Comma- or newline-separated user inputs. Up to 10000."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
