# Reddit Scraper V1 — Subreddit Feeds, Posts, Comments (4) (`red_crawler/reddit-content-fetcher`) Actor

Scrape Reddit posts and comments by URL or subreddit name. No Reddit account or OAuth required.

- **URL**: https://apify.com/red_crawler/reddit-content-fetcher.md
- **Developed by:** [Red Crawler](https://apify.com/red_crawler) (community)
- **Categories:** Automation, SEO tools, Social media
- **Stats:** 1 total user, 0 monthly users, 100.0% runs succeeded
- **User rating**: 5.00 out of 5 stars

## Pricing

from $1.99 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actor is written with a capital "A".

## How to integrate an Actor?

Actors can be integrated into projects in any stack, with integrations that are safe, well-documented, and production-ready.
The recommended ways to integrate Actors are as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper — Subreddit Feeds, Posts, Comments

![Endpoints](https://img.shields.io/badge/endpoints-4-blue) ![Auth](https://img.shields.io/badge/Reddit_account-not_required-brightgreen) ![Proxy](https://img.shields.io/badge/proxy-not_required-brightgreen) ![Pricing](https://img.shields.io/badge/pricing-pay_per_result-orange) ![Cap](https://img.shields.io/badge/feed_cap-~1000_posts-lightgrey)

Scrape Reddit posts and comments by URL or subreddit name. Four self-contained endpoints — pull a subreddit's feed, a single post's full payload, a post's full comment tree, or a single comment's metadata. **No Reddit account, OAuth, or proxy required.**

Pick the endpoint, fill the matching section, hit **Start**.

---

### Endpoints at a glance

| # | Endpoint | Records returned | Best for |
|---|---|---|---|
| 1 | **Scrape Posts** | up to 1000 posts (subreddit feed) | niche monitoring, daily snapshots, RSS-style feeds |
| 2 | **Post Detail** | 1 record (one post) | refreshing a single post, importing a thread |
| 3 | **Scrape Comments** | up to 5000 (or uncapped) | sentiment, archives, megathread research |
| 4 | **Comment Detail** | 1 record (one comment) | quoting, refreshing one comment |

Every endpoint accepts URLs, prefixed fullnames, or raw IDs:

| Entity | Examples |
|---|---|
| post | `https://www.reddit.com/r/Wordpress/comments/1s4a4j6/` · `t3_1s4a4j6` · `1s4a4j6` |
| comment | `https://www.reddit.com/r/.../comment/lwbnv0t/` · `t1_lwbnv0t` · `lwbnv0t` |
| subreddit | `AskReddit` · `r/AskReddit` · `/r/AskReddit` · full subreddit URL |
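
All three forms resolve to the same object, so they can be used interchangeably in the input fields below. A minimal sketch, using the Post Detail endpoint and the example `r/Wordpress` post from this README (field names match the input schema further down):

```python
# Three equivalent inputs for the Post Detail endpoint: permalink, fullname, raw ID.
# Any one of them resolves to the same post.
inputs = [
    {"endpoint": "post_detail", "post_detail_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/"},
    {"endpoint": "post_detail", "post_detail_url": "t3_1s4a4j6"},
    {"endpoint": "post_detail", "post_detail_url": "1s4a4j6"},
]
```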

---

### What you can fetch

#### 1. Scrape Posts — subreddit feed

Pulls a subreddit's post feed and **streams pages** so records appear in the dataset within seconds. Pages are fetched 100 posts at a time and stitched together up to your `limit`.

**Inputs**

| Field | Type | Default | Notes |
|---|---|---|---|
| `subreddit` | string | `AskReddit` | Subreddit name (without `r/`). |
| `sort` | enum | `hot` | `best` / `hot` / `new` / `top` / `rising` / `controversial`. |
| `time_filter` | enum | *(none)* | Only used when `sort` is `top` / `controversial`. `hour` / `day` / `week` / `month` / `year` / `all`. |
| `limit` | int | `25` | 1 – 1000. |

**Returns per post** — Reddit ID, fullname, title, body / selftext, author, subreddit, score, ups / downs / upvote ratio, comment count, crosspost count, created + edited timestamps, permalink, external URL, domain, post-type flags (`is_self`, `is_video`, `over_18`, `spoiler`, `locked`, `stickied`, `pinned`, `archived`), distinguished status, removal category, link & author flair, thumbnail, media (images / video / gallery), awards.

**Use it when** — niche monitoring, daily community snapshots, content syndication (`r/programming` `hot` → RSS), bulk research, competitor watching.
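
A minimal sketch of a Scrape Posts run with the Python client (install shown above). The subreddit, sort, time window, and limit are illustrative choices, and the `title` / `score` keys printed at the end are assumptions based on the field list above:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Feed of r/programming: top posts of the last week, capped at 50 records
run_input = {
    "endpoint": "subreddit_posts",
    "subreddit": "programming",
    "sort": "top",
    "time_filter": "week",   # only honoured when sort = top / controversial
    "limit": 50,
}

run = client.actor("red_crawler/reddit-content-fetcher").call(run_input=run_input)

# Each dataset item is one post record
for post in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(post.get("score"), post.get("title"))
```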

---

#### 2. Post Detail

Full payload of a single post.

**Inputs**

| Field | Type | Notes |
|---|---|---|
| `post_detail_url` | string | URL, `t3_` fullname, or raw ID. |

**Returns** — same rich post record as Scrape Posts.

**Use it when** — single-post deep dive, refreshing one record after an edit, importing a single thread.
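
A short sketch of refreshing one post record via the Python client; the `t3_1s4a4j6` ID is the example post used throughout this README:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Refresh a single post record; the dataset will contain exactly one item
run = client.actor("red_crawler/reddit-content-fetcher").call(run_input={
    "endpoint": "post_detail",
    "post_detail_url": "t3_1s4a4j6",   # or the full permalink, or the raw ID
})
items = client.dataset(run["defaultDatasetId"]).list_items().items
post = items[0] if items else None
```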

---

#### 3. Scrape Comments — post comment tree

Comments under a single post, with control over how the tree is traversed.

**Inputs**

| Field | Type | Default | Notes |
|---|---|---|---|
| `comments_post_url` | string | *(required)* | URL, `t3_` fullname, or raw ID. |
| `comment_sort` | enum | `top` | `best` / `top` / `new` / `controversial` / `old` / `qa`. |
| `comment_mode` | enum | `custom` | `custom` (capped), `top_level` (top-level only), `all` (uncapped). |
| `comments_num` | int | `100` | 1 – 5000. Used by `custom` mode. |

**Returns per comment** — ID, fullname, parent post / parent comment IDs, author, body (markdown + HTML), score / ups / downs / controversiality, created + edited timestamps, permalink, OP flag (`is_submitter`), depth, stickied / distinguished / locked / archived / score-hidden flags, subreddit, awards.

**Use it when** — sentiment analysis, comment archives, support-ticket mining, debate / megathread research, training data.
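
A sketch of pulling a capped comment tree and ranking it locally by score. The post URL is the README's example thread; the `score` and `body` keys are assumptions based on the field list above:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

run = client.actor("red_crawler/reddit-content-fetcher").call(run_input={
    "endpoint": "post_comments",
    "comments_post_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
    "comment_sort": "top",
    "comment_mode": "custom",   # capped at comments_num; "all" walks the full tree
    "comments_num": 500,
})

comments = list(client.dataset(run["defaultDatasetId"]).iterate_items())

# Highest-scored comments first (assumes each record exposes "score" and "body")
for item in sorted(comments, key=lambda r: r.get("score") or 0, reverse=True)[:10]:
    print(item.get("score"), (item.get("body") or "")[:80])
```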

---

#### 4. Comment Detail

Full payload of a single comment.

**Inputs**

| Field | Type | Notes |
|---|---|---|
| `comment_url` | string | URL, `t1_` fullname, or raw ID. |

**Returns** — same rich comment record as Scrape Comments.

**Use it when** — pulling a quoted comment, refreshing one record after edits, citation tooling.
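
A minimal input sketch; `t1_lwbnv0t` is the example comment ID from the table at the top of this README:

```python
# Minimal Comment Detail input: only endpoint + comment_url are needed
run_input = {
    "endpoint": "comment_detail",
    "comment_url": "t1_lwbnv0t",   # or the comment permalink, or the raw ID
}
```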

---

### How to run

1. **Pick an endpoint** in the "What to fetch" dropdown.
2. **Open the matching section** and fill its fields. Each section is independent — fields outside your chosen section are ignored.
3. **Click Start.**

The default subreddit is `AskReddit` and the default test post is the public `r/Wordpress` example post, so the Actor runs out of the box.

---

### Output

Results are pushed to the Actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.

| Endpoint | Rows pushed |
|---|---|
| Scrape Posts | up to `limit` posts |
| Post Detail | 1 record |
| Scrape Comments | up to `comments_num` (or uncapped if `comment_mode=all`) |
| Comment Detail | 1 record |

Every record carries an `endpoint` field. Most useful columns (id, title, score, created, etc.) are placed first so the Table view is readable without scrolling.
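
Besides the Console export buttons, you can pull items through the API and write your own file. A sketch that dumps a finished run's dataset to CSV using only the Python standard library; the `<DATASET_ID>` placeholder, the output filename, and the column names are assumptions you should adjust to your run and the actual record keys:

```python
import csv

from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")
items = list(client.dataset("<DATASET_ID>").iterate_items())  # dataset ID from any finished run

# Keep a few readable columns first, mirroring the Table view; adjust to the actual record keys
columns = ["endpoint", "id", "title", "score", "created", "permalink"]
with open("reddit_export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
```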

---

### Status & error reference

**Run status** *(Apify-side, shown on the run page)*

| Apify UI cue | Status | Apify message | Meaning | What to do |
|---|---|---|---|---|
| green check | `SUCCEEDED` | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset. |
| red exclamation | `FAILED` | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged. |
| red clock | `TIMED-OUT` | "The Actor timed out…" | Run exceeded its timeout. | Re-run; consider lowering `limit` or using `comment_mode=custom`. |
| red square outline | `ABORTED` | "The Actor process was aborted…" | You stopped the run manually. | No charge for unpushed results. |

**Common in-run conditions** *(visible in run log)*

| Condition | Cause | Result |
|---|---|---|
| Empty result set | Subreddit empty / banned / private. | Run `SUCCEEDED`, 0 records, no charge. |
| Subreddit feed cap | Asked for more than ~1000 posts. | Run `SUCCEEDED`, capped at Reddit's pagination limit. |
| Removed post stub | Post was removed; metadata still partial. | Run `SUCCEEDED`, returns stub with `removed_by_category`. |
| `qa` sort fallback | `qa` sort outside QA-mode subs. | Run `SUCCEEDED`, falls back to `top`. |
| Validation error: post URL/ID required | Missing `post_detail_url` / `comments_post_url` on Post Detail / Scrape Comments. | Run `FAILED` immediately, no charge. |

---

### Common edge cases

- **Removed / deleted posts** return whatever metadata Reddit still exposes — often a stub with `removed_by_category`.
- **Private / quarantined subreddits** return zero records.
- **Subreddit feed cap** — Reddit caps subreddit feed pagination at ~1000 unique posts. Higher `limit` won't return more.
- **Comments `all` mode is uncapped** — long threads (10k+ comments) hit Reddit's tree size limit before our cap.
- **Comment `qa` sort** — only meaningful in QA-mode subreddits; falls back to `top` elsewhere.
- **NSFW content** — fully supported; the `over_18` flag tells you if a post is age-gated.

---

### Why this Actor is fast

- **Speed — 1–3 seconds per call, end-to-end.** Pure HTTP to Reddit's API. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per call.
- **Reliability — zero browser flakiness.** No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
- **Footprint — under 100 MB RAM per run.** Most browser-based scrapers need 1–4 GB. We're a thin async dispatcher — Reddit auth, proxy rotation, retry, and GraphQL handling all happen off-actor on our backend.

---

### Pricing

**Pay-per-result.** You're only charged for records actually pushed to the dataset.

| Outcome | Charged? |
|---|---|
| `SUCCEEDED` with results | Yes — per record pushed. |
| `SUCCEEDED` with zero records | No. |
| `FAILED` (validation / upstream) | No. |
| `ABORTED` | Only for records already pushed before you stopped. |

See the Actor's **Pricing** tab for the current per-result rate.
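
As a rough estimate, at the listed "from" rate of $1.99 per 1,000 results a run costs about a fifth of a cent per record. A minimal sketch, assuming that rate (check the Pricing tab for the current figure):

```python
PRICE_PER_RESULT = 1.99 / 1000   # listed "from" rate; see the Pricing tab for the current figure

def estimated_cost(records_pushed: int) -> float:
    """Only records actually pushed to the dataset are billed."""
    return records_pushed * PRICE_PER_RESULT

print(f"{estimated_cost(25):.2f}")    # default Scrape Posts run -> ~0.05 USD
print(f"{estimated_cost(1000):.2f}")  # full subreddit feed      -> ~1.99 USD
```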

# Actor input schema

## `endpoint` (type: `string`):

Choose which kind of Reddit content to retrieve.
## `subreddit` (type: `string`):

Subreddit name without the r/ prefix. Used by: Scrape Posts.
## `sort` (type: `string`):

Post sort order. Used by: Scrape Posts.
## `time_filter` (type: `string`):

Time window — IGNORED unless sort is set to 'top' or 'controversial'. Leave as '—' for other sort orders.
## `limit` (type: `integer`):

Max posts to fetch (1–1000). Used by: Scrape Posts.
## `post_detail_url` (type: `string`):

Reddit post permalink, t3_xxxxx, or short id. Used by: Post Detail.
## `comments_post_url` (type: `string`):

Reddit post permalink, t3_xxxxx, or short id whose comments you want. Used by: Scrape Comments.
## `comments_num` (type: `integer`):

How many comments to return (1–5000). Used by: Scrape Comments.
## `comment_sort` (type: `string`):

Comment sort order. Used by: Scrape Comments.
## `comment_mode` (type: `string`):

How to traverse the comment tree. 'custom' = full tree up to the comment count, 'top_level' = top-level comments only, 'all' = full tree with no limit.
## `comment_url` (type: `string`):

Reddit comment permalink, t1_xxxxx, or short id. Used by: Comment Detail.

## Actor input object example

```json
{
  "endpoint": "subreddit_posts",
  "subreddit": "AskReddit",
  "sort": "hot",
  "time_filter": "",
  "limit": 25,
  "post_detail_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
  "comments_post_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
  "comments_num": 100,
  "comment_sort": "top",
  "comment_mode": "custom"
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "endpoint": "subreddit_posts",
    "subreddit": "AskReddit",
    "post_detail_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
    "comments_post_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/"
};

// Run the Actor and wait for it to finish
const run = await client.actor("red_crawler/reddit-content-fetcher").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "endpoint": "subreddit_posts",
    "subreddit": "AskReddit",
    "post_detail_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
    "comments_post_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
}

# Run the Actor and wait for it to finish
run = client.actor("red_crawler/reddit-content-fetcher").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "endpoint": "subreddit_posts",
  "subreddit": "AskReddit",
  "post_detail_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
  "comments_post_url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/"
}' |
apify call red_crawler/reddit-content-fetcher --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=red_crawler/reddit-content-fetcher",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper V1 — Subreddit Feeds, Posts, Comments (4)",
        "description": "Scrape Reddit posts and comments by URL or subreddit name. No Reddit account or OAuth required.",
        "version": "1.8",
        "x-build-id": "2V4LJ3hvHO5YfPTag"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/red_crawler~reddit-content-fetcher/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-red_crawler-reddit-content-fetcher",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-content-fetcher/runs": {
            "post": {
                "operationId": "runs-sync-red_crawler-reddit-content-fetcher",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/red_crawler~reddit-content-fetcher/run-sync": {
            "post": {
                "operationId": "run-sync-red_crawler-reddit-content-fetcher",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "endpoint"
                ],
                "properties": {
                    "endpoint": {
                        "title": "What to fetch",
                        "enum": [
                            "subreddit_posts",
                            "post_detail",
                            "post_comments",
                            "comment_detail"
                        ],
                        "type": "string",
                        "description": "Choose which kind of Reddit content to retrieve.",
                        "default": "subreddit_posts"
                    },
                    "subreddit": {
                        "title": "Subreddit name",
                        "pattern": "^[A-Za-z0-9_]{1,50}$",
                        "type": "string",
                        "description": "Subreddit name without the r/ prefix. Used by: Scrape Posts."
                    },
                    "sort": {
                        "title": "Sort",
                        "enum": [
                            "best",
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Post sort order. Used by: Scrape Posts.",
                        "default": "hot"
                    },
                    "time_filter": {
                        "title": "Time filter (only for sort = top or controversial)",
                        "enum": [
                            "",
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time window — IGNORED unless sort is set to 'top' or 'controversial'. Leave as '—' for other sort orders.",
                        "default": ""
                    },
                    "limit": {
                        "title": "Limit",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Max posts to fetch (1–1000). Used by: Scrape Posts.",
                        "default": 25
                    },
                    "post_detail_url": {
                        "title": "Post URL or ID",
                        "type": "string",
                        "description": "Reddit post permalink, t3_xxxxx, or short id. Used by: Post Detail."
                    },
                    "comments_post_url": {
                        "title": "Post URL or ID",
                        "type": "string",
                        "description": "Reddit post permalink, t3_xxxxx, or short id whose comments you want. Used by: Scrape Comments."
                    },
                    "comments_num": {
                        "title": "Comments count",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "How many comments to return (1–5000). Used by: Scrape Comments.",
                        "default": 100
                    },
                    "comment_sort": {
                        "title": "Comment sort",
                        "enum": [
                            "best",
                            "top",
                            "new",
                            "controversial",
                            "old",
                            "qa"
                        ],
                        "type": "string",
                        "description": "Comment sort order. Used by: Scrape Comments.",
                        "default": "top"
                    },
                    "comment_mode": {
                        "title": "Comment mode",
                        "enum": [
                            "custom",
                            "top_level",
                            "all"
                        ],
                        "type": "string",
                        "description": "How to traverse the comment tree. 'custom' = full tree to count, 'top_level' = top comments only, 'all' = full tree no limit.",
                        "default": "custom"
                    },
                    "comment_url": {
                        "title": "Comment URL or ID",
                        "type": "string",
                        "description": "Reddit comment permalink, t1_xxxxx, or short id. Used by: Comment Detail."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
