# Reddit Scraper - Neatrat ⚡ (`neatrat/reddit-scraper`) Actor

🚀 High-speed Reddit scraping. No API limits. No proxies needed.

- **URL**: https://apify.com/neatrat/reddit-scraper.md
- **Developed by:** [Neatrat](https://apify.com/neatrat) (community)
- **Categories:** Social media, Lead generation, Automation
- **Stats:** 5 total users, 1 monthly users, 100.0% runs succeeded, 13 bookmarks
- **User rating**: 5.00 out of 5 stars

## Pricing

from $2.90 / 1,000 results

This Actor is priced per event and usage: you pay a fixed price for specific events plus Apify platform usage.
Because this Actor supports Apify Store discounts, the higher your subscription plan, the lower the price.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in the key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use the [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper - Neatrat ⚡

> 🚀 **High-speed Reddit scraping. No API limits. No proxy needed. Pay only for results.**

Get fresh Reddit posts, comments, subreddits, user pages, popular feeds, leaderboards, and search results as clean structured JSON, without touching the Reddit API, without managing proxies, and without paying for compute time you didn't use.

Built for **developers, marketers, researchers, data teams, and AI agents** who just want Reddit data to show up, correct and on time.

---

### Why Neatrat's Reddit Scraper?

- ⚡ **High-speed.** Runs finish in seconds, not minutes. Tuned to be the fastest Reddit scraper on Apify.
- 🔓 **No API limits.** Skip Reddit's rate limits, quotas, and app-registration paperwork.
- 🌐 **No proxy needed.** Residential proxies are included. Nothing to buy, configure, or rotate.
- 💸 **Pay only for results.** Flat **$3 per 1,000 stored items**. No per-hour compute surprises. Blocked responses are never billed.
- 🧼 **Clean input.** Paste a Reddit link or a keyword. Done. No twenty toggles to learn.
- 🔎 **Auto URL detection.** Posts, comments, subreddits, users, searches, `r/popular`, and leaderboards all handled from one list.
- 🤖 **AI-agent friendly.** The same engine is exposed as an MCP (Model Context Protocol) server, so agents in Claude Desktop, Cursor, and VS Code can call Reddit scraping as a native tool.
- 🪶 **Lean runtime.** 512 MB is all it needs, which keeps Apify compute charges minimal on every plan.

### What you can scrape

| You give us | You get |
| --- | --- |
| A post URL | Full post with title, body, metadata, and expandable comment threads |
| A comment permalink | The comment with its ancestor context and replies |
| A subreddit URL | Post listings, with pagination and sort control |
| A user URL | Profile, submitted posts, or comment history |
| A search URL | Search results across Reddit or scoped to one subreddit |
| `r/popular` or `subreddits/leaderboard` | Trending posts or trending communities |
| Keywords | Keyword search across posts, communities, and users |

Everything comes back as **structured JSON**, one item at a time, straight to the Apify dataset. Stream it into Make, Zapier, n8n, Google Sheets, a webhook, or your own pipeline.

### Pricing

**$3 per 1,000 stored items.** That's the whole price.

- Residential proxies included.
- Live Reddit fetches included.
- Dataset delivery included.
- No hidden compute upcharge.
- Blocked responses are **never billed** - they're skipped and counted in the run summary.
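As a quick sanity check on the flat rate above, the billing math can be sketched in a few lines. This is illustrative only (the helper name is ours, not part of any API), assuming the flat $3 per 1,000 stored items:

```python
def estimate_run_cost(stored_items: int, rate_per_1000: float = 3.0) -> float:
    """Estimate the pay-per-result charge for a run.

    Only stored dataset items are billed; blocked responses are skipped
    and never enter `stored_items`.
    """
    return stored_items / 1000 * rate_per_1000

# A run that stores 2,500 items costs 2.5 * $3 = $7.50
print(estimate_run_cost(2500))
```

Platform usage (compute, storage) is billed separately per your Apify plan.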

#### Free trial for Apify users

Not on a paid plan yet? Kick the tires for free:

- **5 lifetime runs**
- **500 lifetime stored results**

When you hit the limit, the Actor exits cleanly and points you to a paid plan. No credit-card surprises, no partial charges.

---

### How to use it

You can combine direct URLs and keyword searches in the same run.

#### Option 1 - Drop in Reddit URLs

Just paste the URLs. The Actor figures out the rest.

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/programming/" },
    { "url": "https://www.reddit.com/r/programming/comments/173viwj/" },
    { "url": "https://www.reddit.com/user/spez" },
    { "url": "https://www.reddit.com/r/popular/" }
  ]
}
```

#### Option 2 - Search by keyword

```json
{
  "searchTerms": ["typescript", "bun runtime"],
  "searchTypes": ["posts", "communities"],
  "withinSubreddit": "programming",
  "searchSort": "new",
  "timeFilter": "week"
}
```

`searchTypes` accepts any combination of `"posts"`, `"communities"`, `"users"`.

#### Option 3 - Full-depth post crawl

Turn any post listing into a deep scrape that follows every post into its full comment thread:

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/generativeAI/" }],
  "crawlComments": true,
  "maxPosts": 10,
  "maxCommentsPerPost": 50
}
```

---

### Input reference

| Field | Type | Default | What it does |
| --- | --- | --- | --- |
| `startUrls` | `{ url }[]` | - | Reddit URLs to scrape. Posts, comments, subreddits, users, searches, popular, leaderboard. |
| `searchTerms` | `string[]` | - | Keywords to search for. |
| `searchTypes` | `("posts" \| "communities" \| "users")[]` | `["posts"]` | Which surfaces each keyword hits. |
| `withinSubreddit` | `string` | `null` | Restrict keyword post search to one subreddit (e.g. `programming`). |
| `searchSort` | `"relevance" \| "new" \| "comments" \| "top"` | `"new"` | Sort order for keyword post search. |
| `timeFilter` | `"all" \| "hour" \| "day" \| "week" \| "month" \| "year"` | `"all"` | Time window for searches and top/controversial sorts. |
| `postSort` | `"hot" \| "new" \| "top" \| "rising" \| "controversial"` | `"hot"` | Sort for subreddit listings when the URL doesn't specify one. |
| `crawlComments` | `boolean` | `false` | Treat post listings as discovery and fetch full comments for every post. |
| `pages` | `integer` | `1` | Listing pages to follow. For posts, this is how many extra comment expansion rounds to run. |
| `includeNsfw` | `boolean` | `true` | When `false`, NSFW items are filtered out before billing. |
| `maxItems` | `integer` | `100` | Total dataset cap for the whole run. |
| `maxPosts` | `integer` | `25` | Cap per post-style listing. |
| `maxComments` | `integer` | `100` | Global cap on nested comments stored across all full-post fetches. |
| `maxCommentsPerPost` | `integer` | `20` | Per-post cap on nested comments. |
| `maxCommunities` | `integer` | `10` | Cap for community search and leaderboard. |
| `maxUsers` | `integer` | `25` | Cap for user search. |
| `requestTimeoutSecs` | `integer` | `45` | Per-request timeout. |
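If you assemble run inputs programmatically, the defaults from the table above can be captured in a small helper. A minimal sketch, assuming the documented defaults; the `build_input` helper name is illustrative, not part of any Apify API:

```python
# Defaults as documented in the input reference table
DEFAULTS = {
    "searchTypes": ["posts"], "searchSort": "new", "timeFilter": "all",
    "postSort": "hot", "crawlComments": False, "includeNsfw": True,
    "maxItems": 100, "maxPosts": 25, "maxComments": 100,
    "maxCommentsPerPost": 20, "maxCommunities": 10, "maxUsers": 25,
    "pages": 1, "requestTimeoutSecs": 45,
}

# Fields without defaults
OPTIONAL_FIELDS = {"startUrls", "searchTerms", "withinSubreddit"}

def build_input(**overrides):
    """Merge user overrides onto the documented defaults, rejecting typos."""
    unknown = set(overrides) - set(DEFAULTS) - OPTIONAL_FIELDS
    if unknown:
        raise ValueError(f"Unknown input fields: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

run_input = build_input(
    startUrls=[{"url": "https://www.reddit.com/r/programming/"}],
    maxItems=25,
)
```

Rejecting unknown keys up front catches misspelled field names before a run is billed.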

---

### Example inputs

**Single subreddit feed**

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }],
  "maxPosts": 25,
  "maxItems": 25
}
```

**One post with deep comment expansion**

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/programming/comments/173viwj/" }
  ],
  "pages": 3,
  "maxCommentsPerPost": 200,
  "maxItems": 1
}
```

**Keyword search scoped to one subreddit**

```json
{
  "searchTerms": ["typescript"],
  "searchTypes": ["posts"],
  "withinSubreddit": "programming",
  "searchSort": "new",
  "timeFilter": "week",
  "maxPosts": 50,
  "maxItems": 50
}
```

**Discovery plus full-post crawl**

```json
{
  "startUrls": [{ "url": "https://www.reddit.com/r/generativeAI/" }],
  "crawlComments": true,
  "maxPosts": 10,
  "maxCommentsPerPost": 50,
  "maxItems": 10
}
```

**Mixed run**

```json
{
  "startUrls": [
    { "url": "https://www.reddit.com/r/popular/" },
    { "url": "https://www.reddit.com/user/spez" }
  ],
  "searchTerms": ["apify", "neatrat"],
  "searchTypes": ["posts", "communities"],
  "maxPosts": 15,
  "maxCommunities": 5,
  "maxItems": 120
}
```

---

### Output shape

Every dataset item carries a `dataType` and a `sourceType` so downstream pipelines can filter cleanly even when one run mixes post results, community previews, and user search hits.

Typical `dataType` values:

- `post`: full post with comments
- `comment-permalink`: a comment with ancestor context
- `communityDetails`: subreddit about-box
- `userProfile`: user about-box
- `postPreview`: one item from a post listing
- `commentPreview`: one item from a user comment listing
- `communityPreview`: one item from community search or leaderboard
- `userPreview`: one item from user search

Listing routes are flattened (one dataset item per preview). Single-resource routes store one item. When `crawlComments` is on, raw previews are dropped so you only pay for the full-comment post objects.
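A downstream consumer can use the `dataType` discriminator to split a mixed run's results before further processing. A minimal sketch; the sample items are illustrative (real items carry many more fields):

```python
from collections import defaultdict

def partition_by_data_type(items):
    """Group dataset items by their `dataType` discriminator."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item.get("dataType", "unknown")].append(item)
    return buckets

# Illustrative items from a mixed run
items = [
    {"dataType": "postPreview", "title": "A post from a listing"},
    {"dataType": "communityPreview", "name": "r/programming"},
    {"dataType": "postPreview", "title": "Another listing item"},
]
buckets = partition_by_data_type(items)
print(len(buckets["postPreview"]))  # → 2
```

The same pattern works on items streamed from the Apify dataset via the client libraries shown in the [API](#api) section.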

---

### Good use cases

- **Marketing**: track brand mentions, watch competitors, monitor product subreddits, find influencers.
- **Research**: pull public discussions about a topic, sample comment sentiment, build datasets.
- **Analytics**: snapshot subreddit activity over time, feed BI dashboards.
- **AI / LLM teams**: build RAG corpora from niche subreddits, keep LLM context fresh, ground agents with live Reddit signal.
- **Community & growth**: spot trending threads in your niche, catch support questions fast.

### For AI agents and MCP users

This scraper also ships as an **MCP (Model Context Protocol) server**, so AI agents in Claude Desktop, Cursor, VS Code, and other MCP-capable clients can call Reddit scraping as a native tool.

If you're building an agent and want it wired in as an MCP tool, reach out through the contact channel on the Apify store page and we'll share the MCP endpoint.

For everyone else, this Apify Actor is the simplest way to turn Reddit into clean structured data without thinking about APIs, proxies, or rate limits.

### Support

Questions, feature requests, or custom use cases? Reach out through the Apify store page and we'll get back fast.

# Actor input Schema

## `searchTerms` (type: `array`):

Keywords to search on Reddit. The options below (search targets, sort, time, community filter) only apply to this section. Independent of Direct URLs — you can use either or both; results are combined.

## `searchTypes` (type: `array`):

Which surfaces to search for each keyword. Pick any combination of posts, communities, and users.

## `withinSubreddit` (type: `string`):

Optional. When set, keyword post searches are restricted to this subreddit. Accepts `r/programming` or `programming`.

## `searchSort` (type: `string`):

Sort order for keyword post searches.

## `timeFilter` (type: `string`):

Optional time window for keyword searches and top/controversial sorts.

## `startUrls` (type: `array`):

Reddit URLs to scrape. The actor auto-detects posts, comment permalinks, subreddits, user pages, search URLs, popular, and leaderboard pages — no need to pick the right endpoint manually. Use `/r/<subreddit>/about/` to scrape a subreddit's metadata (sidebar, description, rules) instead of its posts.

## `postSort` (type: `string`):

Sort used for subreddit listing URLs when the URL itself does not already pick a sort (e.g. `/r/programming/` vs `/r/programming/top/`).

## `crawlComments` (type: `boolean`):

When enabled, the actor also expands each post's comment tree (multiple comment pages). Full post bodies are always fetched for listing items regardless of this flag. Leave off for faster scrapes that only need post content.

## `includeNsfw` (type: `boolean`):

When disabled, NSFW posts are filtered out before storage and never billed.

## `maxItems` (type: `integer`):

Global cap across the whole run. Each stored dataset item counts against this. Free Apify users are additionally capped at 500 total results across all runs.

## `maxPosts` (type: `integer`):

Cap for post-style listings: subreddit feeds, popular, user posts, and keyword post search.

## `maxComments` (type: `integer`):

Global cap on nested comments stored across all full-post fetches. Also used as the listing cap for user comment pages.

## `maxCommentsPerPost` (type: `integer`):

Per-post cap on nested comments stored inside each full post result.

## `maxCommunities` (type: `integer`):

Cap for community-style listings: community search and leaderboard.

## `maxUsers` (type: `integer`):

Cap for user search results.

## `pages` (type: `integer`):

Listing pages to follow. For posts and comment permalinks, this controls how many extra comment expansion rounds the scraper runs.

## Actor input object example

```json
{
  "searchTypes": [
    "posts"
  ],
  "searchSort": "new",
  "timeFilter": "all",
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/programming/"
    }
  ],
  "postSort": "hot",
  "crawlComments": false,
  "includeNsfw": true,
  "maxItems": 100,
  "maxPosts": 25,
  "maxComments": 100,
  "maxCommentsPerPost": 20,
  "maxCommunities": 10,
  "maxUsers": 25,
  "pages": 1
}
```

# Actor output Schema

## `overview` (type: `string`):

All scraped items with key fields: title, subreddit, author, score, comments, content, permalink, URL.

## `posts` (type: `string`):

Reddit posts and post previews only.

## `comments` (type: `string`):

Comment previews from user comment pages and comment searches.

## `communities` (type: `string`):

Subreddits from leaderboard or community search.

## `users` (type: `string`):

Users from user search or profile scrapes.

## `allFields` (type: `string`):

Full dataset with every field the scraper returned.

## `summary` (type: `string`):

Task counts, stored results, blocked/failed tasks, and error list.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        {
            "url": "https://www.reddit.com/r/programming/"
        }
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("neatrat/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "startUrls": [{ "url": "https://www.reddit.com/r/programming/" }] }

# Run the Actor and wait for it to finish
run = client.actor("neatrat/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    {
      "url": "https://www.reddit.com/r/programming/"
    }
  ]
}' |
apify call neatrat/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=neatrat/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper - Neatrat ⚡",
        "description": "🚀 High-speed Reddit scraping. No API limits. No Proxies needed.",
        "version": "0.1",
        "x-build-id": "DGNggk2QGtavHaEjN"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/neatrat~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-neatrat-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/neatrat~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-neatrat-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/neatrat~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-neatrat-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "searchTerms": {
                        "title": "Search Keywords",
                        "type": "array",
                        "description": "Keywords to search on Reddit. The options below (search targets, sort, time, community filter) only apply to this section. Independent of Direct URLs — you can use either or both; results are combined.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchTypes": {
                        "title": "Search targets",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Which surfaces to search for each keyword. Pick any combination of posts, communities, and users.",
                        "items": {
                            "type": "string",
                            "enum": [
                                "posts",
                                "communities",
                                "users"
                            ],
                            "enumTitles": [
                                "Search for posts",
                                "Search for communities",
                                "Search for users"
                            ]
                        },
                        "default": [
                            "posts"
                        ]
                    },
                    "withinSubreddit": {
                        "title": "Limit search to a community",
                        "type": "string",
                        "description": "Optional. When set, keyword post searches are restricted to this subreddit. Accepts `r/programming` or `programming`."
                    },
                    "searchSort": {
                        "title": "Sort search results by",
                        "enum": [
                            "relevance",
                            "new",
                            "comments",
                            "top"
                        ],
                        "type": "string",
                        "description": "Sort order for keyword post searches.",
                        "default": "new"
                    },
                    "timeFilter": {
                        "title": "Time range",
                        "enum": [
                            "all",
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year"
                        ],
                        "type": "string",
                        "description": "Optional time window for keyword searches and top/controversial sorts.",
                        "default": "all"
                    },
                    "startUrls": {
                        "title": "Reddit URLs",
                        "type": "array",
                        "description": "Reddit URLs to scrape. The actor auto-detects posts, comment permalinks, subreddits, user pages, search URLs, popular, and leaderboard pages — no need to pick the right endpoint manually. Use `/r/<subreddit>/about/` to scrape a subreddit's metadata (sidebar, description, rules) instead of its posts.",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "postSort": {
                        "title": "Default sort for subreddit URLs",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "Sort used for subreddit listing URLs when the URL itself does not already pick a sort (e.g. `/r/programming/` vs `/r/programming/top/`).",
                        "default": "hot"
                    },
                    "crawlComments": {
                        "title": "Scrape comments for each post",
                        "type": "boolean",
                        "description": "When enabled, the actor also expands each post's comment tree (multiple comment pages). Full post bodies are always fetched for listing items regardless of this flag. Leave off for faster scrapes that only need post content.",
                        "default": false
                    },
                    "includeNsfw": {
                        "title": "Include NSFW (18+) content",
                        "type": "boolean",
                        "description": "When disabled, NSFW posts are filtered out before storage and never billed.",
                        "default": true
                    },
                    "maxItems": {
                        "title": "Max items in dataset",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Global cap across the whole run. Each stored dataset item counts against this. Free Apify users are additionally capped at 500 total results across all runs.",
                        "default": 100
                    },
                    "maxPosts": {
                        "title": "Max posts",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Cap for post-style listings: subreddit feeds, popular, user posts, and keyword post search.",
                        "default": 25
                    },
                    "maxComments": {
                        "title": "Max comments per run",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Global cap on nested comments stored across all full-post fetches. Also used as the listing cap for user comment pages.",
                        "default": 100
                    },
                    "maxCommentsPerPost": {
                        "title": "Max comments per post",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Per-post cap on nested comments stored inside each full post result.",
                        "default": 20
                    },
                    "maxCommunities": {
                        "title": "Max communities",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Cap for community-style listings: community search and leaderboard.",
                        "default": 10
                    },
                    "maxUsers": {
                        "title": "Max users",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Cap for user search results.",
                        "default": 25
                    },
                    "pages": {
                        "title": "Pages per listing",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Listing pages to follow. For posts and comment permalinks, this controls how many extra comment expansion rounds the scraper runs.",
                        "default": 1
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
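The run object returned when you start this Actor follows the `runsResponseSchema` above. As a minimal sketch, the helper below pulls out the fields an integration typically needs: the run `status`, the `defaultDatasetId` where scraped results land, and the `usageTotalUsd` charge. The sample payload and its IDs are made up for illustration; in a real integration the dict would come from the `apify-client` library's run call.

```python
# Sketch: extracting the commonly used fields from a run response shaped
# like runsResponseSchema. Sample values below are illustrative only.

def summarize_run(response: dict) -> dict:
    """Return the fields most integrations care about from a run response."""
    run = response["data"]
    return {
        "id": run["id"],
        "status": run["status"],                    # e.g. "READY" or "SUCCEEDED"
        "dataset_id": run["defaultDatasetId"],      # dataset holding the results
        "cost_usd": run.get("usageTotalUsd", 0.0),  # total platform usage charge
    }

# Hypothetical sample matching the schema above (IDs are placeholders).
sample = {
    "data": {
        "id": "RUN_ID_PLACEHOLDER",
        "actId": "neatrat~reddit-scraper",
        "status": "SUCCEEDED",
        "defaultDatasetId": "DATASET_ID_PLACEHOLDER",
        "usageTotalUsd": 0.00005,
    }
}

print(summarize_run(sample))
```

In practice you would wait for `status` to reach `SUCCEEDED` before reading the dataset referenced by `dataset_id`.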
