# Reddit Scraper (`alwaysprimedev/reddit-scraper`) Actor

Scrape Reddit posts, threads, and comments from any subreddit, search, or user — clean structured JSON, fast.

- **URL:** https://apify.com/alwaysprimedev/reddit-scraper.md
- **Developed by:** [Always Prime](https://apify.com/alwaysprimedev) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 80.0% of runs succeeded
- **User rating:** No ratings yet

## Pricing

from $2.50 / 1,000 posts

This Actor is paid per event: you are charged a fixed price for specific events, not for Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event
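At the listed rate, the cost of a run scales linearly with the number of posts. A quick back-of-the-envelope helper (an illustration only, not official billing logic):

```python
def estimate_cost_usd(posts: int, price_per_1000: float = 2.50) -> float:
    """Rough pay-per-event cost estimate at the listed per-1,000-posts rate."""
    return round(posts / 1000 * price_per_1000, 2)

print(estimate_cost_usd(50))      # the default 50-post run
print(estimate_cost_usd(10_000))  # a larger indexing run
```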

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, help developers integrate Actors into their projects: adapt to their stack and deliver integrations that are safe, well documented, and production-ready.
The recommended ways to integrate Actors are as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## 🚀 Reddit Scraper — every post, comment & thread, as clean JSON

[![Apify](https://img.shields.io/badge/Run%20on-Apify-00b96b?logo=apify&logoColor=white)](https://apify.com/)
[![Python](https://img.shields.io/badge/Built%20with-Python%203.11-3776AB?logo=python&logoColor=white)](https://www.python.org/)
[![Output: JSON · CSV · Excel](https://img.shields.io/badge/Output-JSON%20%7C%20CSV%20%7C%20Excel-orange)](#-sample-output)

> Pull **structured Reddit data at speed** — posts, comments, scores, flairs, awards, media, timestamps. No login. No code. No babysitting.

🏠 Subreddits · 🔍 Keyword search · 👤 User submissions/comments · 🔗 Custom URLs — all four sources, one input form.

---

### ⚡️ Why this scraper

- **🎯 50+ fields per post** — full title and body, score breakdown, upvote ratio, flair, awards, removal status, media URLs, edit timestamps. Nothing dropped on the floor.
- **💬 Comment threads on demand** — flip one switch and get the full comment tree per post, threaded via `parent_id` and `depth`.
- **🚄 Fast** — ~3 posts/second steady-state on default settings; ~250ms median per detail fetch.
- **🧠 Smart pagination** — stops the moment your `Max items` budget is reached. Never over-fetches, never wastes Apify Compute Units.
- **🔁 Incremental mode** — pass a `since` timestamp and only get posts newer than your last run. Perfect for daily monitoring jobs.
- **🛡️ Built-in failure budget** — if Reddit starts pushing back (challenges, hard 4xx), the Actor aborts cleanly instead of burning through your CU on a broken extractor.
- **📊 Three export formats out of the box** — JSON, CSV, Excel. Direct download links from the run page.

---

### 🚀 Quick start

1. Click **Try for free** (top-right). No code, no API key.
2. Pick a search type — Subreddit, Search, User, or paste your own URLs.
3. Hit **Start** and let it run.
4. Download as **JSON / CSV / Excel** from the run page.

---

### 📥 Input

| Field | Type | Description |
|---|---|---|
| **What to scrape** (`searchType`) | enum | `subreddit` · `search` · `user` · `urls` |
| **Subreddits** (`subreddits`) | string list | e.g. `python`, `programming` (no `r/` prefix) |
| **Search query** (`query`) | string | Keywords. Reddit operators work: `author:`, `subreddit:`, `self:yes`, `flair:`. |
| **Users** (`users`) | string list | Usernames to scrape (no `u/` prefix) |
| **User content type** (`userContent`) | enum | `submitted` (posts) or `comments` |
| **Sort by** (`sortBy`) | enum | `hot` · `new` · `top` · `rising` · `controversial` · `relevance` · `comments` |
| **Time window** (`time`) | enum | `hour` · `day` · `week` · `month` · `year` · `all` (only matters for `top`/`controversial`) |
| **Max items** (`maxItems`) | int | Stop after N posts. `0` = unlimited. Default `50`. |
| **Scrape comments** (`scrapeComments`) | bool | Fetch the comment tree for every post. Default off (cheaper for indexing). |
| **Max comments per post** (`commentDepth`) | int | Cap on comments per post (BFS). Default `200`. |
| **Only posts newer than** (`since`) | datetime | ISO 8601 cutoff for incremental runs. |
| **Concurrency** (`concurrency`) | int | Parallel fetches. Default `5`, max `25`. |
| **Start URLs** (`startUrls`) | string list | Advanced override — paste any reddit URLs and ignore the search-type builder. |

---

### 📦 Sample output

One record per post — JSON-friendly and ready to load into BigQuery / Postgres / pandas; comments, when enabled, nest under each post.

```json
{
  "id": "1t3x7ba",
  "fullname": "t3_1t3x7ba",
  "url": "https://www.reddit.com/r/Python/comments/1t3x7ba/whos_going_to_pycon_us_next_week/",
  "subreddit": "Python",
  "subreddit_prefixed": "r/Python",
  "subreddit_id": "t5_2qh0y",
  "title": "Who's going to PyCon US next week?",
  "selftext": "Me ✋ I hope to see a good number of you all in Long Beach, too! ...",
  "is_self": true,
  "domain": "self.Python",
  "post_hint": "self",
  "link_url": null,
  "author": "Loren-PSF",
  "author_fullname": "t2_so0s40st",
  "author_flair_text": ":pythonLogo: Python Software Foundation Staff",
  "distinguished": null,
  "score": 46,
  "ups": 46,
  "upvote_ratio": 0.91,
  "num_comments": 35,
  "num_crossposts": 0,
  "total_awards_received": 0,
  "gilded": 0,
  "over_18": false,
  "spoiler": false,
  "locked": false,
  "stickied": true,
  "archived": false,
  "is_video": false,
  "is_original_content": false,
  "link_flair_text": "Discussion",
  "link_flair_css_class": "discussion",
  "link_flair_background_color": "#f50057",
  "thumbnail": null,
  "preview_image_url": "https://external-preview.redd.it/FBtD3iI-OdRHdmfJbVushiwzLeMcmgTx-Ff3FnwUUg0.jpeg",
  "video_url": null,
  "removed_by_category": null,
  "removal_reason": null,
  "created_at": "2026-05-04T22:40:29+00:00",
  "edited_at": null,
  "scraped_at": "2026-05-09T13:43:47+00:00",
  "comments": [
    {
      "id": "myz2pn1",
      "parent_id": "t3_1t3x7ba",
      "depth": 0,
      "author": "vintagegeek",
      "body": "I'll be there with bells on. Looking forward to meeting people!",
      "score": 19,
      "is_submitter": false,
      "stickied": false,
      "permalink": "https://www.reddit.com/r/Python/comments/1t3x7ba/.../myz2pn1/",
      "created_at": "2026-05-04T23:01:14+00:00",
      "edited_at": null
    }
  ],
  "comments_count_scraped": 35
}
````
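For tabular destinations, the nested `comments` array flattens easily into one row per comment. A minimal sketch in plain Python (the carried-over fields are an illustrative choice):

```python
def flatten_comments(post: dict) -> list[dict]:
    """Produce one flat row per comment, carrying the parent post's id and subreddit."""
    return [
        {"post_id": post["id"], "subreddit": post["subreddit"], **comment}
        for comment in post.get("comments", [])
    ]

# Trimmed-down record in the shape shown above
post = {
    "id": "1t3x7ba",
    "subreddit": "Python",
    "comments": [{"id": "myz2pn1", "depth": 0, "score": 19}],
}
rows = flatten_comments(post)
print(rows)
```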

---

### 💡 Use cases

| Who | What for |
|---|---|
| 📈 **Market researchers** | Track sentiment, competitor mentions and product feedback across niche subreddits. |
| 🤖 **AI / ML teams** | Build training corpora from focused subreddits — clean text, threading preserved. |
| 📰 **Journalists & analysts** | Monitor breaking-story subreddits and surface trending discussions for coverage. |
| 💼 **Brand / community managers** | Find unanswered support questions about your product across Reddit, on a daily cron. |
| 🏷️ **Recruiters & talent intel** | Pull discussions in tech-job subreddits to track skill demand and salary chatter. |
| 🧑‍🔬 **Academic researchers** | Public-discourse datasets for sociolinguistics, network analysis, opinion mining. |

---

### 🧰 Tips & tricks

- 🪶 **Index-first, hydrate later.** Run with `scrapeComments: false` and `maxItems: 0` to cheaply enumerate everything. Then a second run with `startUrls` and `scrapeComments: true` only on the posts you care about.
- ⏱️ **Daily diffs.** Save the timestamp of your last successful run, then pass it as `since` next time. The Actor short-circuits old posts before fetching them.
- 🎛️ **Subreddit-scoped search.** Set `searchType: search`, fill in `query`, and add subreddits to `subreddits` — the Actor automatically scopes the search to those subreddits.
- 🔗 **Mix custom URLs.** Drop any `reddit.com/...` URL into `startUrls` (a thread, a multireddit, a sort variant) — the Actor appends `.json` itself.

---

### ❓ FAQ

**Does it need a Reddit account?** No.

**What about the new Reddit API limits?** This Actor doesn't use Reddit's Data API, so the post-2023 commercial pricing tiers don't apply.

**Can I scrape NSFW subreddits?** Yes. NSFW posts are returned with `over_18: true` so you can filter downstream.

**Will it get all comments on a huge thread?** Up to your `commentDepth` cap (default 200, max 5000), breadth-first across the tree. For Reddit's truly massive megathreads (>10K comments), Reddit itself paginates and not every comment is reachable in one fetch — that's a Reddit limitation, not the scraper's.
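The breadth-first cap can be pictured with a small sketch (an illustration of the counting order only, not the Actor's internal implementation):

```python
from collections import deque

def bfs_cap(comment_tree: list[dict], max_comments: int) -> list[dict]:
    """Keep at most max_comments, visiting all depth-0 comments before depth-1, etc."""
    kept, queue = [], deque(comment_tree)
    while queue and len(kept) < max_comments:
        node = queue.popleft()
        kept.append({k: v for k, v in node.items() if k != "replies"})
        queue.extend(node.get("replies", []))
    return kept

tree = [
    {"id": "a", "replies": [{"id": "a1", "replies": []}]},
    {"id": "b", "replies": [{"id": "b1", "replies": []}]},
]
print([c["id"] for c in bfs_cap(tree, 3)])  # top-level comments come first
```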

**What if a post is deleted while scraping?** Deleted posts come through with `author: "[deleted]"`, `selftext: "[deleted]"`, and `removed_by_category: "deleted"`. They're not skipped — you get the metadata Reddit still surfaces.
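Because deleted posts are kept rather than skipped, filtering them out downstream is a one-line list comprehension:

```python
# Two records in the output shape: one live post, one deleted one
posts = [
    {"id": "abc", "author": "someone", "removed_by_category": None},
    {"id": "def", "author": "[deleted]", "removed_by_category": "deleted"},
]

# Keep only posts that were not removed in any way (deleted, moderator, spam, ...)
live_posts = [p for p in posts if p.get("removed_by_category") is None]
print([p["id"] for p in live_posts])
```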

**How fresh is the data?** Each run fetches live data from Reddit; every record carries a `scraped_at` UTC timestamp.

---

### 📅 Changelog

#### 0.1 (initial release)

- Subreddit, search, user, and start-URL modes
- Configurable comment-tree scraping with depth cap
- Incremental `since` filter, `maxItems` cap, dedup, failure budget
- JSON / CSV / Excel exports

---

### ⚖️ Legal

This scraper accesses Reddit through public, non-authenticated requests. Reddit's robots.txt disallows automated crawling, and Reddit's User Agreement and Public Content Policy restrict automated and commercial use of Reddit content. The scraper does not bypass authentication, paywalls, or technical access controls.

By using this scraper you take on responsibility for the legality of your specific use case in your jurisdiction (including GDPR / CCPA where applicable). Use it for research, journalism, internal analytics, ML/AI training datasets, or other lawful purposes — and confirm that those purposes are compatible with Reddit's policies and any applicable law before running large-scale jobs.

Personal data scraped from Reddit (usernames, comment bodies, flair) may constitute PII under GDPR even though usernames are pseudonymous; treat the output dataset accordingly.

# Actor input Schema

## `searchType` (type: `string`):

Choose the source of posts: a subreddit's listing, a keyword search, a user's submissions/comments, or a custom list of Reddit URLs.

## `subreddits` (type: `array`):

List of subreddit names to scrape (without the r/ prefix). Used when search type is 'subreddit', or to scope a search to specific subreddits.

## `query` (type: `string`):

Keywords to search for. Required when search type is 'search'. Reddit's search supports operators like author:USERNAME, subreddit:NAME, self:yes/no, flair:NAME.

## `users` (type: `array`):

Reddit usernames to scrape (without the u/ prefix). Used when search type is 'user'.

## `userContent` (type: `string`):

When scraping a user, choose whether to fetch their submitted posts or their comments.

## `sortBy` (type: `string`):

How Reddit should order results before scraping. Available options depend on search type: subreddits use hot/new/top/rising/controversial; search adds 'relevance' and 'comments'; users use hot/new/top/controversial.

## `time` (type: `string`):

Time range filter — only matters when sort is 'top' or 'controversial'. Otherwise ignored by Reddit.

## `maxItems` (type: `integer`):

Stop after this many posts have been scraped. Set to 0 for unlimited (the scraper will paginate until Reddit runs out of results).

## `scrapeComments` (type: `boolean`):

If on, fetches each post's full comment thread (one extra request per post — slower and ~2× the cost). If off, you get post metadata only, which is much cheaper for large indexing runs.

## `commentDepth` (type: `integer`):

Cap on how many comments to keep per post (counted breadth-first across the comment tree). Only applies when 'Scrape comments' is on.

## `since` (type: `string`):

Optional ISO 8601 cutoff (e.g. 2026-01-01T00:00:00Z). Posts created before this timestamp are skipped — perfect for incremental/diff runs that only pull what's new since last time.

## `concurrency` (type: `integer`):

How many simultaneous detail fetches to run. Higher is faster but burns Reddit's per-IP rate budget — leave at 5 unless you've configured a proxy pool.

## `startUrls` (type: `array`):

Bypass the searchType-based URL builder and scrape exactly these Reddit URLs. Any reddit.com or old.reddit.com URL works — the scraper appends .json automatically.
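For illustration, one way such `.json` normalization could work while preserving query parameters (a sketch, not the Actor's actual implementation):

```python
from urllib.parse import urlsplit, urlunsplit

def to_json_url(url: str) -> str:
    """Append .json to a reddit URL's path, keeping ?sort=... style params intact."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/")
    if not path.endswith(".json"):
        path += ".json"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(to_json_url("https://www.reddit.com/r/python/top/?t=week"))
```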

## Actor input object example

```json
{
  "searchType": "subreddit",
  "subreddits": [
    "python",
    "programming"
  ],
  "query": "fastapi",
  "users": [
    "spez"
  ],
  "userContent": "submitted",
  "sortBy": "hot",
  "time": "all",
  "maxItems": 50,
  "scrapeComments": false,
  "commentDepth": 200,
  "since": "2026-01-01T00:00:00Z",
  "concurrency": 5,
  "startUrls": [
    "https://www.reddit.com/r/python/top/?t=week"
  ]
}
```

# Actor output Schema

## `posts` (type: `string`):

All scraped posts as a JSON array.

## `postsCsv` (type: `string`):

All scraped posts as a CSV — opens directly in Excel or Google Sheets.

## `postsXlsx` (type: `string`):

All scraped posts as an XLSX workbook.

## `consoleView` (type: `string`):

Browse, filter and re-export results in the Apify web console.

# API

You can run this Actor programmatically via the Apify API. Below are code examples in JavaScript, Python, and the CLI, along with the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "python",
        "programming"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("alwaysprimedev/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "subreddits": [
        "python",
        "programming",
    ],
}

# Run the Actor and wait for it to finish
run = client.actor("alwaysprimedev/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "python",
    "programming"
  ]
}' |
apify call alwaysprimedev/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=alwaysprimedev/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper",
        "description": "Scrape Reddit posts, threads, and comments from any subreddit, search, or user — clean structured JSON, fast.",
        "version": "0.1",
        "x-build-id": "r7cAxLOfrKbPze0pR"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/alwaysprimedev~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-alwaysprimedev-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/alwaysprimedev~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-alwaysprimedev-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/alwaysprimedev~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-alwaysprimedev-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "searchType": {
                        "title": "What to scrape",
                        "enum": [
                            "subreddit",
                            "search",
                            "user",
                            "urls"
                        ],
                        "type": "string",
                        "description": "Choose the source of posts: a subreddit's listing, a keyword search, a user's submissions/comments, or a custom list of Reddit URLs.",
                        "default": "subreddit"
                    },
                    "subreddits": {
                        "title": "Subreddits",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "List of subreddit names to scrape (without the r/ prefix). Used when search type is 'subreddit', or to scope a search to specific subreddits.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "query": {
                        "title": "Search query",
                        "type": "string",
                        "description": "Keywords to search for. Required when search type is 'search'. Reddit's search supports operators like author:USERNAME, subreddit:NAME, self:yes/no, flair:NAME."
                    },
                    "users": {
                        "title": "Users",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Reddit usernames to scrape (without the u/ prefix). Used when search type is 'user'.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "userContent": {
                        "title": "User content type",
                        "enum": [
                            "submitted",
                            "comments"
                        ],
                        "type": "string",
                        "description": "When scraping a user, choose whether to fetch their submitted posts or their comments.",
                        "default": "submitted"
                    },
                    "sortBy": {
                        "title": "Sort by",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial",
                            "relevance",
                            "comments"
                        ],
                        "type": "string",
                        "description": "How Reddit should order results before scraping. Available options depend on search type: subreddits use hot/new/top/rising/controversial; search adds 'relevance' and 'comments'; users use hot/new/top/controversial.",
                        "default": "hot"
                    },
                    "time": {
                        "title": "Time window",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range filter — only matters when sort is 'top' or 'controversial'. Otherwise ignored by Reddit.",
                        "default": "all"
                    },
                    "maxItems": {
                        "title": "Max items",
                        "minimum": 0,
                        "maximum": 100000,
                        "type": "integer",
                        "description": "Stop after this many posts have been scraped. Set to 0 for unlimited (the scraper will paginate until Reddit runs out of results).",
                        "default": 50
                    },
                    "scrapeComments": {
                        "title": "Scrape comments",
                        "type": "boolean",
                        "description": "If on, fetches each post's full comment thread (one extra request per post — slower and ~2× the cost). If off, you get post metadata only, which is much cheaper for large indexing runs.",
                        "default": false
                    },
                    "commentDepth": {
                        "title": "Max comments per post",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Cap on how many comments to keep per post (counted breadth-first across the comment tree). Only applies when 'Scrape comments' is on.",
                        "default": 200
                    },
                    "since": {
                        "title": "Only posts newer than",
                        "type": "string",
                        "description": "Optional ISO 8601 cutoff (e.g. 2026-01-01T00:00:00Z). Posts created before this timestamp are skipped — perfect for incremental/diff runs that only pull what's new since last time."
                    },
                    "concurrency": {
                        "title": "Concurrency",
                        "minimum": 1,
                        "maximum": 25,
                        "type": "integer",
                        "description": "How many simultaneous detail fetches to run. Higher is faster but burns Reddit's per-IP rate budget — leave at 5 unless you've configured a proxy pool.",
                        "default": 5
                    },
                    "startUrls": {
                        "title": "Start URLs (advanced)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Bypass the searchType-based URL builder and scrape exactly these Reddit URLs. Any reddit.com or old.reddit.com URL works — the scraper appends .json automatically.",
                        "items": {
                            "type": "string"
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "number",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
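In practice, the fields that matter most from this run object are `usageTotalUsd` (what the run cost), `usageUsd` (the per-event breakdown), and `defaultDatasetId` (where the scraped posts landed). A minimal sketch of reading them in Python; the `summarize_run` helper and the sample run dict are illustrative, not part of the Actor's API:

```python
# Sketch: summarize the billing fields of a finished run record.
# The dict layout follows the run-object schema above; the sample
# values are illustrative, not real billing data.

def summarize_run(run: dict) -> str:
    """Return a one-line cost summary for a run record."""
    total = run.get("usageTotalUsd", 0.0)
    usage_usd = run.get("usageUsd", {})
    # Keep only events that actually cost something.
    billed = {k: v for k, v in usage_usd.items() if v}
    parts = ", ".join(f"{k}=${v:.5f}" for k, v in sorted(billed.items()))
    return f"total=${total:.5f} ({parts or 'no billed events'})"


run = {
    "usageTotalUsd": 0.00005,
    "usageUsd": {"KEY_VALUE_STORE_WRITES": 0.00005, "DATASET_READS": 0},
    "defaultDatasetId": "abc123",  # hypothetical dataset ID
}
print(summarize_run(run))
# → total=$0.00005 (KEY_VALUE_STORE_WRITES=$0.00005)
```

With the real API, the same dict is what `apify_client.run(run_id).get()` returns, so you can feed that result straight into a helper like this and then fetch the scraped items from `defaultDatasetId`.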
