# Reddit Thread & Comments Scraper (`datara/reddit-thread-comments-scraper`) Actor

Scrape any Reddit post and its complete comment thread — including deeply nested replies — in seconds. Supports bulk URLs, cursor-based pagination for large threads, flat or nested output, score filtering, and depth capping. Perfect for sentiment analysis, AI training data, and community research.

- **URL**: https://apify.com/datara/reddit-thread-comments-scraper.md
- **Developed by:** [Datara](https://apify.com/datara) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 1 total user, 0 monthly users, 0.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $3.00 / 1,000 results

This Actor is paid per event and usage: you are charged a fixed price for specific events plus standard Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, API, or MCP server.
"Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Thread & Comments Scraper

Extract any Reddit post and its full comment tree — including nested replies at any depth — in seconds. Supports cursor-based pagination for threads with hundreds or thousands of comments.

---

### What This Actor Does

Given one or more Reddit post URLs, this Actor:

1. Fetches the **post metadata** (title, author, score, upvote ratio, subscriber count, etc.)
2. Fetches the **full comment tree** including nested replies, paginating through all available comment pages
3. Pushes each post and comment as a clean, structured dataset record

Output records are typed (`recordType: "post"` or `"comment"`) and immediately usable in spreadsheets, databases, AI pipelines, or downstream automations.
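Once the dataset is downloaded, the two record types can be separated with a small helper. A minimal sketch; `items` stands for whatever list of dataset records you have fetched:

```python
def partition_records(items):
    """Split dataset records into post and comment records by recordType."""
    posts = [r for r in items if r.get("recordType") == "post"]
    comments = [r for r in items if r.get("recordType") == "comment"]
    return posts, comments
```

Records with any other `recordType` (for example, error records) fall out of both lists.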

---

### Use Cases

- **Sentiment analysis** — analyse how communities respond to products, brands, or announcements
- **AI training data** — collect high-quality human conversation threads for LLM fine-tuning or RLHF
- **Community research** — surface recurring themes, pain points, and opinions across subreddits
- **Qualitative market research** — understand what real users say about your category
- **Content strategy** — identify high-scoring discussions to inform editorial direction

---

### Input Fields

| Field | Type | Default | Description |
|---|---|---|---|
| `postUrl` | string | — | Single Reddit post URL to scrape |
| `postUrls` | array | `[]` | List of Reddit post URLs for bulk scraping (overrides `postUrl`) |
| `maxPages` | integer | `3` | Max comment pages to fetch per post (cursor pagination, ~25 comments/page) |
| `flattenComments` | boolean | `true` | Output comments as individual flat records (true) or with nested replies arrays (false) |
| `includePostRecord` | boolean | `true` | Include the post as a separate dataset record |
| `minCommentScore` | integer | `0` | Skip comments below this score threshold |
| `maxCommentDepth` | integer | `10` | Maximum reply nesting depth to include (0 = top-level only) |
| `maxCommentsPerPost` | integer | `200` | Cap on total comment records per post (1–5000) |

> **Bulk mode:** If `postUrls` is non-empty, the single `postUrl` field is ignored. Duplicate URLs are automatically deduplicated.

---

### Single URL Example Input

```json
{
  "postUrl": "https://www.reddit.com/r/startups/comments/1abc23/we_just_hit_10k_mrr_heres_what_worked/",
  "maxPages": 5,
  "flattenComments": true,
  "includePostRecord": true,
  "minCommentScore": 5,
  "maxCommentDepth": 5,
  "maxCommentsPerPost": 500
}
```

---

### Bulk URL Example Input

```json
{
  "postUrls": [
    "https://www.reddit.com/r/SaaS/comments/1abc11/thoughts_on_pricing_models/",
    "https://www.reddit.com/r/entrepreneur/comments/1abc22/bootstrapped_to_1m_ama/",
    "https://www.reddit.com/r/startups/comments/1abc33/why_we_shut_down/"
  ],
  "maxPages": 3,
  "flattenComments": true,
  "includePostRecord": true,
  "minCommentScore": 2,
  "maxCommentDepth": 10,
  "maxCommentsPerPost": 300
}
```

---

### Output Schema

#### Post Record (`recordType: "post"`)

| Field | Type | Description |
|---|---|---|
| `recordType` | string | Always `"post"` |
| `id` | string | Reddit short ID (e.g. `1lfbo7u`) |
| `name` | string | Reddit fullname, prefixed `t3_` (e.g. `t3_1lfbo7u`) |
| `title` | string | Post title |
| `author` | string | Username of the poster |
| `authorFullname` | string | Reddit internal author ID (e.g. `t2_16syu27ar1`) |
| `subreddit` | string | Subreddit name (without `r/`) |
| `url` | string | Full post URL |
| `score` | integer | Net vote score |
| `ups` | integer | Upvote count (fuzzy-rounded by Reddit) |
| `downs` | integer | Downvote count (almost always 0) |
| `upvoteRatio` | number | Ratio of upvotes to total votes (0–1) |
| `numComments` | integer | Total comment count as reported by Reddit |
| `subredditSubscribers` | integer | Subscriber count of the subreddit |
| `isVideo` | boolean | True if the post contains a Reddit-hosted video |
| `totalAwardsReceived` | integer | Number of Reddit awards |
| `createdUtc` | number | Unix timestamp (UTC seconds) of post creation |
| `createdAt` | string | ISO 8601 datetime of post creation |
| `scrapedAt` | string | ISO 8601 datetime when the record was scraped |

**Example post record:**

```json
{
  "recordType": "post",
  "id": "1lfbo7u",
  "name": "t3_1lfbo7u",
  "title": "What is a thing you love that lots of people hate?",
  "author": "Vetro_Nodulare2",
  "authorFullname": "t2_16syu27ar1",
  "subreddit": "AskReddit",
  "url": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/",
  "score": 47,
  "ups": 47,
  "downs": 0,
  "upvoteRatio": 0.91,
  "numComments": 353,
  "subredditSubscribers": 56146601,
  "isVideo": false,
  "totalAwardsReceived": 0,
  "createdUtc": 1750341959,
  "createdAt": "2025-06-19T14:05:59.000Z",
  "scrapedAt": "2025-06-20T09:45:12.000Z"
}
```

---

#### Comment Record (`recordType: "comment"`)

| Field | Type | Description |
|---|---|---|
| `recordType` | string | Always `"comment"` |
| `id` | string | Reddit short ID (e.g. `mymupxb`) |
| `name` | string | Reddit fullname, prefixed `t1_` (e.g. `t1_mymupxb`) |
| `postId` | string | Short ID of the parent post |
| `postUrl` | string | Full URL of the parent post |
| `author` | string | Username of the commenter |
| `authorFullname` | string | Reddit internal author ID |
| `body` | string | Plain-text comment body |
| `score` | integer | Net vote score |
| `ups` | integer | Upvote count |
| `downs` | integer | Downvote count |
| `depth` | integer | Nesting depth (0 = top-level, 1 = reply to top-level, etc.) |
| `parentId` | string | Fullname of the parent (`t3_...` if replying to post, `t1_...` if replying to comment) |
| `linkId` | string | Fullname of the parent post (always `t3_...`) |
| `subreddit` | string | Subreddit name |
| `url` | string | Full URL of this comment |
| `permalink` | string | Relative permalink path |
| `gilded` | integer | Number of times gilded |
| `stickied` | boolean | True if pinned by a moderator |
| `locked` | boolean | True if the comment thread is locked |
| `archived` | boolean | True if too old to receive votes |
| `controversiality` | integer | 0 or 1; 1 = high vote split |
| `totalAwardsReceived` | integer | Number of Reddit awards |
| `createdUtc` | number | Unix timestamp (UTC seconds) of comment creation |
| `createdAt` | string | ISO 8601 datetime of comment creation |
| `scrapedAt` | string | ISO 8601 datetime when scraped |
| `replies` | array | [Nested mode only] Child comment records |

**Example comment record (flat mode):**

```json
{
  "recordType": "comment",
  "id": "mymupxb",
  "name": "t1_mymupxb",
  "postId": "1lfbo7u",
  "postUrl": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/",
  "author": "Background-Emu-2890",
  "authorFullname": "t2_efdlposp6",
  "body": "Black cat — I have one and I love her so much!",
  "score": 75,
  "ups": 75,
  "downs": 0,
  "depth": 0,
  "parentId": "t3_1lfbo7u",
  "linkId": "t3_1lfbo7u",
  "subreddit": "AskReddit",
  "url": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/mymupxb/",
  "permalink": "/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/mymupxb/",
  "gilded": 0,
  "stickied": false,
  "locked": false,
  "archived": false,
  "controversiality": 0,
  "totalAwardsReceived": 0,
  "createdUtc": 1750342221,
  "createdAt": "2025-06-19T14:10:21.000Z",
  "scrapedAt": "2025-06-20T09:45:12.000Z"
}
```

---

### Flat vs Nested Comment Output

**Flat mode** (`flattenComments: true`, default):

- Every comment and reply is pushed as its own dataset record
- Use `depth` to understand nesting level (0 = top-level)
- Use `parentId` to reconstruct the tree (`t1_XXX` = parent comment, `t3_XXX` = direct reply to post)
- Best for: spreadsheet analysis, databases, ML pipelines, CSV export

**Nested mode** (`flattenComments: false`):

- Top-level comment records contain a `replies` array with child records embedded
- Each child also contains its own `replies` array, forming a full tree
- Best for: JSON tree processing, displaying thread structure
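If you scraped in flat mode but later need the tree, it can be rebuilt client-side from `name` and `parentId`. A sketch, assuming every record carries those two fields as documented in the schema above:

```python
def build_tree(comments):
    """Rebuild the reply tree from flat-mode comment records."""
    # Index every comment by its fullname (t1_...), with a fresh replies list.
    by_name = {c["name"]: {**c, "replies": []} for c in comments}
    roots = []
    for node in by_name.values():
        parent = by_name.get(node["parentId"])
        if parent is not None:
            parent["replies"].append(node)
        else:
            # Parent is the post itself (t3_...) or fell outside the capped depth.
            roots.append(node)
    return roots
```

Top-level comments (those whose `parentId` is the post's `t3_...` fullname) become the roots of the returned forest.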

---

### Pagination

The ScrapeCreators API uses cursor-based pagination. Each page returns approximately 25 top-level comments. Set `maxPages` to control how many pages to fetch:

- `maxPages: 1` — fast, ~25 top-level comments
- `maxPages: 5` — ~125 top-level comments + all their replies
- `maxPages: 20` — comprehensive extraction for large threads

The Actor stops paginating early if `maxCommentsPerPost` is reached or the API signals that no more pages are available.
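A rough budget for a run can be sketched from these two limits, assuming ~25 top-level comments per page. Note that replies also count toward `maxCommentsPerPost`, so the real stop point may come earlier:

```python
def approx_top_level_budget(max_pages, max_comments_per_post, per_page=25):
    """Upper bound on top-level comments fetched: pagination stops at
    whichever limit is hit first (replies also consume the per-post cap)."""
    return min(max_pages * per_page, max_comments_per_post)
```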

---

### Error Handling

- Failed URLs push an error record (`error: true`) and processing continues for remaining URLs
- Posts with no comments push a warning record
- The dataset always contains at least one record per run
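When consuming the dataset, it is worth filtering these records out before analysis. A sketch, assuming error records carry `error: true` and that warning records lack a post/comment `recordType`:

```python
def data_records(items):
    """Drop error and warning records, keeping only posts and comments."""
    return [
        r for r in items
        if not r.get("error") and r.get("recordType") in ("post", "comment")
    ]
```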

---

### Pricing

This Actor uses **Pay Per Event (PPE)** pricing:

- **$0.30 per 100 records** (each post and each comment count as one record)
- A thread with 1 post + 199 comments = 200 records ≈ $0.60
- Bulk run: 10 threads × 200 comments, plus 10 post records = 2,010 records ≈ $6.03
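The arithmetic above can be checked with a one-liner, using the $0.30-per-100-records rate stated above:

```python
def estimate_cost_usd(total_records, rate_per_100=0.30):
    """Estimate PPE cost: each post and each comment is one billable record."""
    return total_records / 100 * rate_per_100
```

For the single-thread example, `estimate_cost_usd(200)` gives $0.60.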

---

### Support

For questions or feature requests, contact the Actor's publisher via the Apify Store messaging system.

# Actor input Schema

## `postUrl` (type: `string`):

Full URL of a Reddit post to scrape. Used when scraping a single thread. Example: https://www.reddit.com/r/AskReddit/comments/abc123/my_post/

## `postUrls` (type: `array`):

List of Reddit post URLs for bulk scraping. When this array is non-empty, it overrides the single Post URL field above.

## `maxPages` (type: `integer`):

Maximum number of comment pages to fetch per post using cursor-based pagination. Each page returns ~25 top-level comments. Set to 1 for a quick grab, or higher for comprehensive extraction. Range: 1–20.

## `flattenComments` (type: `boolean`):

When enabled, all comments including nested replies are output as individual flat dataset records, each carrying its depth level and `parentId`. When disabled, top-level comment records include their replies as nested arrays.

## `includePostRecord` (type: `boolean`):

Include the post itself as a separate record (recordType: post) in the output dataset.

## `minCommentScore` (type: `integer`):

Exclude comments with a score below this threshold. Set to 0 to include all comments regardless of score.

## `maxCommentDepth` (type: `integer`):

Maximum nesting depth of replies to include. 0 = top-level only, 1 = top-level plus direct replies, and so on. The API returns a depth field on each comment.

## `maxCommentsPerPost` (type: `integer`):

Cap on the total number of comment records extracted per post across all pages. Range: 1–5000.

## Actor input object example

```json
{
  "postUrl": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/",
  "postUrls": [],
  "maxPages": 3,
  "flattenComments": true,
  "includePostRecord": true,
  "minCommentScore": 0,
  "maxCommentDepth": 10,
  "maxCommentsPerPost": 200
}
```

# Actor output Schema

## `results` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "postUrl": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/",
    "postUrls": [],
    "maxPages": 3,
    "flattenComments": true,
    "includePostRecord": true,
    "minCommentScore": 0,
    "maxCommentDepth": 10,
    "maxCommentsPerPost": 200
};

// Run the Actor and wait for it to finish
const run = await client.actor("datara/reddit-thread-comments-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "postUrl": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/",
    "postUrls": [],
    "maxPages": 3,
    "flattenComments": True,
    "includePostRecord": True,
    "minCommentScore": 0,
    "maxCommentDepth": 10,
    "maxCommentsPerPost": 200,
}

# Run the Actor and wait for it to finish
run = client.actor("datara/reddit-thread-comments-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "postUrl": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/",
  "postUrls": [],
  "maxPages": 3,
  "flattenComments": true,
  "includePostRecord": true,
  "minCommentScore": 0,
  "maxCommentDepth": 10,
  "maxCommentsPerPost": 200
}' |
apify call datara/reddit-thread-comments-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=datara/reddit-thread-comments-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Thread & Comments Scraper",
        "description": "Scrape any Reddit post and its complete comment thread — including deeply nested replies — in seconds. Supports bulk URLs, cursor-based pagination for large threads, flat or nested output, score filtering, and depth capping. Perfect for sentiment analysis, AI training data, and community research.",
        "version": "0.0",
        "x-build-id": "XCSjahUkmQPl3rJXC"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/datara~reddit-thread-comments-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-datara-reddit-thread-comments-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/datara~reddit-thread-comments-scraper/runs": {
            "post": {
                "operationId": "runs-sync-datara-reddit-thread-comments-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/datara~reddit-thread-comments-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-datara-reddit-thread-comments-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "postUrl": {
                        "title": "Reddit Post URL",
                        "type": "string",
                        "description": "Full URL of a Reddit post to scrape. Used when scraping a single thread. Example: https://www.reddit.com/r/AskReddit/comments/abc123/my_post/",
                        "default": "https://www.reddit.com/r/AskReddit/comments/1lfbo7u/what_is_a_thing_you_love_that_lots_of_people_hate/"
                    },
                    "postUrls": {
                        "title": "Reddit Post URLs (Bulk)",
                        "type": "array",
                        "description": "List of Reddit post URLs for bulk scraping. When this array is non-empty, it overrides the single Post URL field above.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxPages": {
                        "title": "Max Comment Pages",
                        "minimum": 1,
                        "maximum": 20,
                        "type": "integer",
                        "description": "Maximum number of comment pages to fetch per post using cursor-based pagination. Each page returns ~25 top-level comments. Set to 1 for a quick grab, or higher for comprehensive extraction. Range: 1–20.",
                        "default": 3
                    },
                    "flattenComments": {
                        "title": "Flatten Comments",
                        "type": "boolean",
                        "description": "When enabled, all comments including nested replies are output as individual flat dataset records, each carrying their depth level and parentId. When disabled, top-level comment records include their replies as nested arrays.",
                        "default": true
                    },
                    "includePostRecord": {
                        "title": "Include Post Record",
                        "type": "boolean",
                        "description": "Include the post itself as a separate record (recordType: post) in the output dataset.",
                        "default": true
                    },
                    "minCommentScore": {
                        "title": "Minimum Comment Score",
                        "minimum": -9999,
                        "maximum": 999999,
                        "type": "integer",
                        "description": "Exclude comments with a score below this threshold. Set to 0 to include all comments regardless of score.",
                        "default": 0
                    },
                    "maxCommentDepth": {
                        "title": "Max Comment Depth",
                        "minimum": 0,
                        "maximum": 100,
                        "type": "integer",
                        "description": "Maximum nesting depth of replies to include. 0 = top-level only, 1 = top-level plus direct replies, and so on. The API returns a depth field on each comment.",
                        "default": 10
                    },
                    "maxCommentsPerPost": {
                        "title": "Max Comments Per Post",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Cap on the total number of comment records extracted per post across all pages. Range: 1–5000.",
                        "default": 200
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
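In this part of the run object, `usage` holds raw platform event counters (compute units, dataset reads/writes, proxy transfer, and so on), while `usageUsd` gives the dollar cost attributed to each event type and `usageTotalUsd` their sum. A minimal sketch of how these fields relate, using stdlib Python only — the field names follow the schema above, but the sample values are illustrative, not real run data:

```python
import json

# A run object shaped like the schema above. Values here are made up
# for illustration; a real run returns them from the Apify API.
run_json = """
{
    "defaultDatasetId": "abc123",
    "buildNumber": "1.0.0",
    "usage": {"ACTOR_COMPUTE_UNITS": 0, "KEY_VALUE_STORE_WRITES": 1},
    "usageUsd": {"ACTOR_COMPUTE_UNITS": 0, "KEY_VALUE_STORE_WRITES": 0.00005},
    "usageTotalUsd": 0.00005
}
"""

run = json.loads(run_json)

# usageTotalUsd should match the sum of the per-event usageUsd entries
# (use a tolerance when comparing floating-point dollar amounts).
total = sum(run["usageUsd"].values())
assert abs(total - run["usageTotalUsd"]) < 1e-9

print(f"dataset={run['defaultDatasetId']} total=${total:.5f}")
# → dataset=abc123 total=$0.00005
```

Summing `usageUsd` yourself is a handy sanity check when reconciling per-event charges against the total the platform reports.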
