# Reddit User Profile Posts And Comments Scraper (`scrapeflow/reddit-user-profile-posts-and-comments-scraper`) Actor

- **URL**: https://apify.com/scrapeflow/reddit-user-profile-posts-and-comments-scraper.md
- **Developed by:** [ScrapeFlow](https://apify.com/scrapeflow) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $3.99 / 1,000 results

This Actor is paid per event and usage: you are charged a fixed price for specific events plus standard Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit User Profile Posts & Comments Scraper (Apify Actor)

This Apify Actor is a **production-grade Reddit user profile scraper** that extracts public posts from Reddit profiles (old.reddit.com) and optionally pulls comments for each post. It supports **usernames, profile URLs, and keyword searches** while handling proxy fallbacks, retry logic, and live dataset saving so results are preserved even if a run is interrupted. The code is fully asynchronous, function-based, and follows Apify best practices for proxy handling, rate limiting, and structured outputs.

---

### What this actor does

- Crawls **Reddit user profiles** on old.reddit.com and collects posts with detailed metadata (title, score, subreddit, permalink, created time, preview/media, flair, NSFW flags, and more).
- Supports **keyword searches** via `keyword:<term>` to gather posts matching a query on old Reddit search.
- Optionally fetches **top-level comments** for each post via Reddit’s JSON endpoint, capped by `maxComments`.
- Streams every parsed item to the Apify dataset immediately (live saving) to avoid data loss.
- Implements **proxy fallback logic**: starts direct, then falls back to Apify datacenter proxy on block, and finally to residential proxy (stays on residential after it’s used). Logs every proxy transition.
- Adds polite randomized delays and retry/backoff behavior to reduce blocks and keep runs stable.

---

### Why choose this Reddit scraper

- **Proxy-aware by design**: automatic direct → datacenter → residential fallback with clear log messages, plus sticky behavior on residential once triggered.
- **Async + resilient**: aiohttp-based concurrency with per-request retry/backoff and graceful error handling.
- **Rich parsing**: Ports robust HTML parsing from a proven standalone script, including thumbnails, flair, preview, media hints, and author flair.
- **Comments optionality**: Pull only the comments you need, up to a defined cap, to control cost and speed.
- **Live dataset writes**: Results are saved as soon as they are parsed, protecting your data if a run stops.
- **Flexible inputs**: Accepts usernames, profile URLs (old or new Reddit), and keyword searches (`keyword:term`) in bulk.
- **Sort control**: Choose `new`, `hot`, `top`, or `controversial` to match the ordering you need.
- **SEO-focused output**: Detailed, structured data suitable for analytics, monitoring, sentiment, and content curation pipelines.

---

### Key features at a glance

- Targets: Reddit profiles (submitted posts) and keyword search results on old.reddit.com.
- Inputs: bulk sources, sort selection, post/comment caps, proxy config.
- Outputs: post objects with selftext, flair, preview/media info, engagement fields, and optional comments array.
- Anti-blocking: proxy fallback, jittered delays, 3x retries with exponential-style waits, block detection on status codes 403/429/503.
- Logging: verbose Apify logs for proxy transitions, pagination progress, and parse health (with HTML previews when parsing yields zero posts).

---

### How it works (flow)

1. **Input parsing**: The actor normalizes each `startUrls` entry into one of three kinds: `user` (username or profile URL), `keyword` (`keyword:<term>`), or `url` (treated as user unless it’s clearly a search). A minimal sketch of this normalization appears after this list.
2. **Proxy setup**: Prepares Apify proxy configs for datacenter and residential. Starts in **direct** mode; on block escalates to **datacenter**, then **residential** (and remains residential after first use). If a tier is unavailable, it logs and stays at the current tier.
3. **Fetching**: Uses aiohttp with shared headers that mimic a real browser. Each page fetch includes retry/backoff and block detection (403/429/503).
4. **Parsing**: Reuses the proven HTML parsing functions from the original standalone scraper to extract rich post metadata from `div#thing_t3_*` elements.
5. **Comments (optional)**: If `maxComments > 0`, calls the Reddit JSON thread endpoint for each post (capped) and attaches a `comments` array.
6. **Live saving**: Each parsed post is pushed immediately to the dataset view defined in `.actor/actor.json`.
7. **Pagination and limits**: Continues paging until `maxPosts` per target is reached, or no posts are parsed; includes small random delays between pages.
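
The normalization in step 1 can be pictured with a short sketch. This is illustrative only; the helper name `classify_start_url` and the exact rules are assumptions, not the Actor's actual source code:

```python
import re

def classify_start_url(raw: str) -> tuple[str, str]:
    """Illustrative normalizer: returns (kind, value) where kind is
    'user', 'keyword', or 'url'. Not the Actor's actual code."""
    value = raw.strip()

    # keyword:<term> triggers old-Reddit search mode
    if value.lower().startswith("keyword:"):
        return "keyword", value.split(":", 1)[1].strip()

    # Profile URLs (old or new Reddit) -> extract the username
    m = re.search(r"reddit\.com/(?:user|u)/([^/?#]+)", value, re.IGNORECASE)
    if m:
        return "user", m.group(1)

    # Any other URL is kept as-is (the Actor treats it as a user unless it is clearly a search)
    if value.startswith("http://") or value.startswith("https://"):
        return "url", value

    # Bare username, optionally prefixed with u/
    return "user", value.removeprefix("u/")

# Examples:
# classify_start_url("u/spez")                              -> ("user", "spez")
# classify_start_url("keyword:python")                      -> ("keyword", "python")
# classify_start_url("https://old.reddit.com/user/spez")    -> ("user", "spez")
```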

---

### Input schema (actor.json)

- `startUrls` (array, required): Mixed inputs accepted.
  - Username formats: `u/example`, `example`, `https://old.reddit.com/user/example`, `https://www.reddit.com/u/example`.
  - Keyword format: `keyword:python` (scrapes old Reddit search results for that term).
- `sortOrder` (string): `new` (default) | `hot` | `top` | `controversial`.
- `maxPosts` (integer): Maximum posts per target (default 100, max 500).
- `maxComments` (integer): Maximum comments per post (default 0 to skip).
- `proxyConfiguration` (object): Standard Apify proxy config; actor starts direct and auto-falls back. Provide Apify token/proxy password for datacenter/residential usage.

Example input
```json
{
  "startUrls": [
    { "url": "https://old.reddit.com/user/spez" },
    { "url": "u/kn0thing" },
    { "url": "keyword:python" }
  ],
  "sortOrder": "new",
  "maxPosts": 50,
  "maxComments": 5,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

***

### Output format

Each dataset item represents a Reddit post with fields aligned to the original HTML scraper. Highlights:

- Identity: `id`, `name`, `permalink`, `url`, `domain`, `subreddit`, `author`.
- Content: `title`, `selftext`, `selftext_html`, `post_hint`, `is_self`, `over_18`, `spoiler`.
- Engagement: `score`, `ups`, `downs` (0), `num_comments`, `gilded`.
- Flair: `link_flair_text`, `link_flair_css_class`, `link_flair_richtext`, `author_flair_text`, `author_flair_css_class`.
- Media/preview: `thumbnail`, `thumbnail_height`, `thumbnail_width`, `preview`, `media`, `secure_media`, `media_embed`.
- Moderation/status: `stickied`, `locked`, `archived`, `distinguished`, `treatment_tags`.
- Comments: `comments` (array, only if `maxComments > 0`; each has `id`, `author`, `body`, `score`, `created_utc`, `permalink`, `replies_count`).

Dataset view (from `.actor/actor.json`) shows key columns: `author`, `title`, `subreddit`, `score`, `num_comments`, `permalink`, `created_utc`, `is_self`, `selftext`.
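
To make the field list above concrete, here is an abbreviated, illustrative dataset item. The values are made up for the example and only a subset of fields is shown:

```json
{
  "id": "abc123",
  "name": "t3_abc123",
  "permalink": "https://old.reddit.com/r/programming/comments/abc123/example_post/",
  "subreddit": "programming",
  "author": "spez",
  "title": "Example post title",
  "selftext": "Body text of a self post...",
  "is_self": true,
  "over_18": false,
  "score": 123,
  "num_comments": 45,
  "link_flair_text": "Discussion",
  "thumbnail": "self",
  "created_utc": 1700000000,
  "comments": [
    {
      "id": "def456",
      "author": "kn0thing",
      "body": "Example top-level comment",
      "score": 10,
      "created_utc": 1700000100,
      "permalink": "/r/programming/comments/abc123/example_post/def456/",
      "replies_count": 2
    }
  ]
}
```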

***

### Proxy strategy and anti-blocking

- Start direct (no proxy). If the platform blocks, escalate to **datacenter** proxy. If blocked again, escalate to **residential** and stay there.
- Block detection: HTTP 403/429/503 triggers escalation; retries include jittered sleeps (0.8–1.5s). A simplified sketch of this escalation loop follows this list.
- Residential retries: up to 3 after switching; then fail fast with an explicit error.
- Logging: every transition is logged (`Switching to datacenter...`, `Switching to residential...`, `Residential proxy retry...`).
- Tips for reliability:
  - Provide `APIFY_TOKEN` so proxy passwords resolve automatically.
  - Prefer residential proxies for higher success on Reddit; keep `User-Agent` as provided unless you must change it.
  - Use reasonable `maxPosts` and `maxComments` to limit load.
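
As a rough illustration of the escalation behaviour described above, a minimal sketch could look like the following. The proxy URLs, retry cap, and function name are placeholders; the real Actor derives proxy URLs from the Apify proxy configuration:

```python
import asyncio
import random
import aiohttp

# Illustrative escalation order: direct -> datacenter -> residential (sticky).
PROXY_TIERS = [None, "http://datacenter-proxy.example:8000", "http://residential-proxy.example:8000"]
BLOCK_STATUSES = {403, 429, 503}

async def fetch_with_escalation(session: aiohttp.ClientSession, url: str) -> str:
    tier = 0
    for _attempt in range(6):
        proxy = PROXY_TIERS[tier]
        try:
            async with session.get(url, proxy=proxy) as resp:
                if resp.status in BLOCK_STATUSES:
                    # Blocked: escalate to the next tier and stay there ("sticky")
                    tier = min(tier + 1, len(PROXY_TIERS) - 1)
                    await asyncio.sleep(random.uniform(0.8, 1.5))  # jittered backoff
                    continue
                resp.raise_for_status()
                return await resp.text()
        except aiohttp.ClientError:
            # Network or other HTTP error: retry without escalating (simplification)
            await asyncio.sleep(random.uniform(0.8, 1.5))
    raise RuntimeError(f"Failed to fetch {url} even on the highest proxy tier")
```

In the real Actor the residential tier additionally caps its retries (at 3) before failing fast with an explicit error.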

***

### Rate limits and performance

- Concurrency: a modest semaphore for targets (default 3) and connection caps on aiohttp to avoid overloading Reddit (see the sketch after this list).
- Delays: random 0.6–1.4s between pages; small jitter between retries.
- Retries: per-request retry until escalation rules are satisfied; residential retries capped at 3.
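
A minimal sketch of this throttling pattern, using the defaults above (semaphore of 3, 0.6–1.4s page delays); the target list and stop condition are placeholders:

```python
import asyncio
import random

async def scrape_target(target: str, sem: asyncio.Semaphore) -> None:
    async with sem:
        for _page in range(3):  # placeholder: the Actor pages until maxPosts or an empty page
            # ... fetch and parse one listing page for `target` here ...
            await asyncio.sleep(random.uniform(0.6, 1.4))  # polite jittered delay between pages

async def main() -> None:
    sem = asyncio.Semaphore(3)  # at most 3 targets in flight at once
    targets = ["spez", "kn0thing", "keyword:python", "another_user"]
    await asyncio.gather(*(scrape_target(t, sem) for t in targets))

if __name__ == "__main__":
    asyncio.run(main())
```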

***

### How to run on Apify console

1. Go to your actor: **Reddit User Profile Posts & Comments Scraper**.
2. Open the **Input** tab and paste your JSON (see example above). Ensure `startUrls` is provided.
3. Set `proxyConfiguration` to use Apify Proxy (datacenter or residential). Supplying `APIFY_TOKEN` or proxy password is required for proxy use.
4. Click **Run**. Watch live logs for proxy transitions, pagination, and parse status.
5. Results appear in the **Dataset** tab; export as JSON, CSV, or XLSX.

***

### How to run locally

```bash
python -m venv .venv
source .venv/bin/activate          # Windows: .venv\Scripts\activate
pip install -r requirements.txt

# Set input using local storage
mkdir -p apify_storage/key_value_stores/default
echo '{ "startUrls": [{"url": "https://old.reddit.com/user/spez"}], "maxPosts": 5, "sortOrder": "new", "maxComments": 0, "proxyConfiguration": {"useApifyProxy": false} }' > apify_storage/key_value_stores/default/INPUT.json

export APIFY_LOCAL_STORAGE_DIR="$PWD/apify_storage"   # Windows: set APIFY_LOCAL_STORAGE_DIR=%CD%\apify_storage
python -m src
```

Notes:

- Without Apify proxy credentials, you may see blocks. Provide `APIFY_TOKEN` or `APIFY_PROXY_PASSWORD` to enable proxies locally.
- Logs will show if parsing finds zero posts; a short HTML preview is emitted to help debug blocks or empty pages.

***

### Field-by-field input guidance

- **startUrls (array, required)**
  - Usernames: `u/someone`, `someone`, or full profile URLs (old/new Reddit).
  - Keywords: prefix with `keyword:` to trigger search mode on old Reddit.
  - Mix freely in one run; the actor will normalize each.
- **sortOrder (string)**
  - Use `new` for freshest posts; `top` or `controversial` for engagement-centric pulls; `hot` for trending.
- **maxPosts (int)**
  - Cap per target; actor stops when limit is reached or when no more posts parse.
- **maxComments (int)**
  - If 0, comments are skipped (fastest). Otherwise, up to this many top-level comments are fetched from Reddit JSON.
- **proxyConfiguration (object)**
  - Standard Apify proxy settings. Actor auto-escalates; residential is sticky once used. Provide token/password for proxy auth.

***

### Practical use cases (SEO-friendly)

- **Reddit brand monitoring scraper**: Track brand mentions across user posts.
- **Content research**: Harvest self posts with full text for sentiment and topic modeling.
- **Community analysis**: Analyze top posts for specific users or keywords to understand engagement patterns.
- **Lead discovery**: Identify active authors in niche subreddits via keyword-based pulls.
- **Trend tracking**: Combine `sortOrder=hot` with keyword search to follow emerging topics.
- **Dataset creation**: Build structured corpora of Reddit posts and optional comments for LLM fine-tuning or analytics.

***

### Troubleshooting and tips

- **Got “Input 'startUrls' is required”**: Ensure `startUrls` is present and non-empty in JSON.
- **Zero posts parsed**: Check logs; a preview of the HTML is printed. Likely a block—enable proxies or switch to residential.
- **Proxy errors**: Provide `APIFY_TOKEN` or `APIFY_PROXY_PASSWORD`. If groups are needed, set `apifyProxyGroups` in `proxyConfiguration`.
- **Slow runs**: Reduce `maxPosts`/`maxComments` or lower concurrency (edit semaphore if you fork).
- **Comment depth**: Only top-level comments are fetched; replies count is provided for context.

***

### FAQ

**Does this scrape private or suspended accounts?**\
No. Only publicly accessible pages on old.reddit.com are parsed.

**Can it bypass Reddit rate limits?**\
It uses respectful delays and proxy rotation, but you should keep `maxPosts` reasonable and prefer residential proxies for reliability.

**Why old.reddit.com?**\
The HTML structure is stable for scraping and matches the original parser logic for consistency with legacy outputs.

**What about pagination?**\
The actor paginates using `after` tokens from parsed posts and stops when `maxPosts` is hit or a page yields no posts.
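
For illustration, building the next listing page from the last parsed post's fullname could look like this; the helper name is an assumption, while the `count`/`after` query parameters reflect how old Reddit listings paginate:

```python
from urllib.parse import urlencode

def next_page_url(base_url: str, after_fullname: str, count: int) -> str:
    """Illustrative helper: build the next old.reddit.com listing page
    from the fullname (e.g. 't3_abc123') of the last parsed post."""
    query = urlencode({"count": count, "after": after_fullname})
    return f"{base_url}?{query}"

# Example:
# next_page_url("https://old.reddit.com/user/spez/submitted/", "t3_abc123", 25)
# -> "https://old.reddit.com/user/spez/submitted/?count=25&after=t3_abc123"
```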

**Is the output schema stable?**\
It mirrors the original script’s rich fields; minor adjustments may occur if Reddit markup changes. The dataset view highlights primary columns.

***

### Changelog (high level)

- Initial Apify actor: async aiohttp scraper, proxy fallback (direct → DC → RES), live dataset pushes, optional comments, detailed logging, and legacy field parity with the standalone scripts.

***

### Support

- For Apify platform questions: https://console.apify.com/support
- For actor-specific issues, review logs (proxy transitions, HTML preview on empty parse). If blocks persist, run with residential proxy and lower request volumes.

***

**Keywords:** Reddit user profile scraper, Reddit posts scraper, Reddit comments scraper, old.reddit.com scraper, Apify actor for Reddit, Reddit keyword scraper, Reddit proxy scraping, Reddit dataset export, Reddit sentiment data, Reddit brand monitoring, Reddit content research, Reddit analytics, Reddit lead generation.

# Actor input schema

## `startUrls` (type: `array`):

📋 Enter Reddit usernames, profile URLs, or keyword searches.

Examples:
• Username: cyPersimmon9 or u/cyPersimmon9
• Profile URL: https://www.reddit.com/user/cyPersimmon9
• Keyword search: keyword:python

💡 Tip: You can add multiple usernames or keywords to scrape them all!

## `sortOrder` (type: `string`):

📊 Choose how to sort the posts:
• 🆕 new - Most recent posts first
• 🔥 hot - Most popular posts
• ⬆️ top - Highest scored posts
• 💥 controversial - Most debated posts

## `maxPosts` (type: `integer`):

🎯 Maximum number of posts to collect per profile or keyword.

💡 Range: 1-500 posts
📌 Default: 20 posts

## `fetchSelftext` (type: `boolean`):

📝 Enable this to extract full text content from self posts.

✅ When enabled: Fetches individual post pages to get complete selftext
❌ When disabled: Only extracts metadata (faster, less data)

💡 Recommended: Enable for text analysis or content research

## `proxyConfiguration` (type: `object`):

🔒 Proxy settings for reliable scraping.

🔄 Smart fallback system:
1️⃣ Starts with direct connection
2️⃣ Falls back to datacenter proxy if blocked
3️⃣ Switches to residential proxy if needed

💡 Leave default (no proxy) for most use cases

## Actor input object example

```json
{
  "startUrls": [
    "cyPersimmon9"
  ],
  "sortOrder": "new",
  "maxPosts": 20,
  "fetchSelftext": false,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        "cyPersimmon9"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapeflow/reddit-user-profile-posts-and-comments-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": ["cyPersimmon9"],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scrapeflow/reddit-user-profile-posts-and-comments-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    "cyPersimmon9"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scrapeflow/reddit-user-profile-posts-and-comments-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapeflow/reddit-user-profile-posts-and-comments-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit User Profile Posts And Comments Scraper",
        "version": "0.1",
        "x-build-id": "IYy0seqCJcP5eivi6"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapeflow~reddit-user-profile-posts-and-comments-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapeflow-reddit-user-profile-posts-and-comments-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapeflow~reddit-user-profile-posts-and-comments-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scrapeflow-reddit-user-profile-posts-and-comments-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapeflow~reddit-user-profile-posts-and-comments-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scrapeflow-reddit-user-profile-posts-and-comments-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "startUrls"
                ],
                "properties": {
                    "startUrls": {
                        "title": "👤 Usernames / URLs / Keywords",
                        "type": "array",
                        "description": "📋 Enter Reddit usernames, profile URLs, or keyword searches.\n\nExamples:\n• Username: cyPersimmon9 or u/cyPersimmon9\n• Profile URL: https://www.reddit.com/user/cyPersimmon9\n• Keyword search: keyword:python\n\n💡 Tip: You can add multiple usernames or keywords to scrape them all!",
                        "items": {
                            "type": "string"
                        }
                    },
                    "sortOrder": {
                        "title": "🔀 Sort Order",
                        "enum": [
                            "new",
                            "hot",
                            "top",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "📊 Choose how to sort the posts:\n• 🆕 new - Most recent posts first\n• 🔥 hot - Most popular posts\n• ⬆️ top - Highest scored posts\n• 💥 controversial - Most debated posts",
                        "default": "new"
                    },
                    "maxPosts": {
                        "title": "📈 Max Posts Per Profile",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "🎯 Maximum number of posts to collect per profile or keyword.\n\n💡 Range: 1-500 posts\n📌 Default: 20 posts",
                        "default": 20
                    },
                    "fetchSelftext": {
                        "title": "📄 Fetch Self Post Content",
                        "type": "boolean",
                        "description": "📝 Enable this to extract full text content from self posts.\n\n✅ When enabled: Fetches individual post pages to get complete selftext\n❌ When disabled: Only extracts metadata (faster, less data)\n\n💡 Recommended: Enable for text analysis or content research",
                        "default": false
                    },
                    "proxyConfiguration": {
                        "title": "🌐 Proxy Configuration",
                        "type": "object",
                        "description": "🔒 Proxy settings for reliable scraping.\n\n🔄 Smart fallback system:\n1️⃣ Starts with direct connection\n2️⃣ Falls back to datacenter proxy if blocked\n3️⃣ Switches to residential proxy if needed\n\n💡 Leave default (no proxy) for most use cases"
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
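
If you are working outside the official clients, the `run-sync-get-dataset-items` endpoint defined above can be called directly over HTTP. A minimal sketch using Python's `requests` library (the timeout value is an arbitrary choice for the example):

```python
import requests

API_URL = (
    "https://api.apify.com/v2/acts/"
    "scrapeflow~reddit-user-profile-posts-and-comments-scraper/"
    "run-sync-get-dataset-items"
)

payload = {
    "startUrls": ["cyPersimmon9"],
    "proxyConfiguration": {"useApifyProxy": False},
}

# Runs the Actor synchronously and returns the dataset items as a JSON array
response = requests.post(
    API_URL,
    params={"token": "<YOUR_API_TOKEN>"},
    json=payload,
    timeout=600,
)
response.raise_for_status()
for item in response.json():
    print(item.get("title"), item.get("permalink"))
```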
