# Reddit MCP Scraper Pro (Multi-Mode) (`crawlerbros/reddit-mcp-scraper-pro`) Actor

Unified Reddit scraper — run subreddit, comments and profile modes in a single execution. Advanced filters: minScore, maxAgeDays, excludeNsfw, keywordFilter, authorFilter, maxDepth. Every record is tagged with recordType so downstream pipelines can route easily. No login required.

- **URL**: https://apify.com/crawlerbros/reddit-mcp-scraper-pro.md
- **Developed by:** [Crawler Bros](https://apify.com/crawlerbros) (community)
- **Categories:** Social media, Developer tools, Automation
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded, 13 bookmarks
- **User rating**: 5.00 out of 5 stars

## Pricing

from $1.00 / 1,000 results

This Actor is priced per event plus usage: you are charged a fixed price for specific events as well as for Apify platform usage.
Since this Actor supports Apify Store discounts, the higher your subscription plan, the lower the price.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit MCP Server

A unified Apify MCP (Model Context Protocol) server for comprehensive Reddit scraping. This Actor provides a single interface to scrape subreddits, comments, and user profiles using browser automation with Playwright.

### 🚀 Features

#### Multi-Mode Scraping

This MCP server supports three scraping modes:

1. **Subreddit Mode** - Scrape posts from Reddit subreddits
2. **Comments Mode** - Scrape comments from Reddit posts
3. **Profile Mode** - Scrape user profiles and their posts

#### Key Capabilities

✅ **Unified Interface** - Single actor for all Reddit scraping needs  
✅ **Browser Automation** - Bypasses API restrictions using Playwright  
✅ **No Authentication Required** - Scrape public content without login  
✅ **Comprehensive Data** - Extract all relevant fields and metadata  
✅ **Automatic Pagination** - Load multiple pages automatically  
✅ **NSFW Support** - Automatically handles NSFW confirmation dialogs  
✅ **Structured Output** - Clean JSON data ready for AI consumption

### 📋 Input Parameters

#### Common Parameters

| Parameter | Type   | Required | Description                                          |
| --------- | ------ | -------- | ---------------------------------------------------- |
| `mode`    | string | Yes      | Scraping mode: `subreddit`, `comments`, or `profile` |

#### Subreddit Mode Parameters

| Parameter    | Type    | Default | Description                                                      |
| ------------ | ------- | ------- | ---------------------------------------------------------------- |
| `subreddits` | array   | -       | List of subreddit names (without 'r/' prefix)                    |
| `maxPosts`   | integer | `25`    | Maximum posts per subreddit (1-1000)                             |
| `sort`       | string  | `"hot"` | Sort method: `hot`, `new`, `top`, `rising`, `controversial`      |
| `timeFilter` | string  | `"day"` | Time filter for top/controversial (hour/day/week/month/year/all) |

#### Comments Mode Parameters

| Parameter       | Type    | Default | Description                            |
| --------------- | ------- | ------- | -------------------------------------- |
| `postUrls`      | array   | -       | List of Reddit post URLs to scrape     |
| `maxComments`   | integer | `100`   | Maximum comments per post (1-10000)    |
| `expandThreads` | boolean | `true`  | Automatically expand collapsed threads |

#### Profile Mode Parameters

| Parameter   | Type    | Default       | Description                                        |
| ----------- | ------- | ------------- | -------------------------------------------------- |
| `usernames` | array   | -             | List of Reddit usernames (without 'u/' prefix)     |
| `maxPosts`  | integer | `100`         | Maximum posts per user (1-1000)                    |
| `section`   | string  | `"submitted"` | Profile section: `submitted`, `comments`, `gilded`  |
| `sort`      | string  | `"new"`       | Sort method: `hot`, `new`, `top`, `rising`, `controversial` |

### 📝 Input Examples

#### Example 1: Scrape Subreddits

```json
{
  "mode": "subreddit",
  "subreddits": ["python", "programming", "webdev"],
  "maxPosts": 50,
  "sort": "hot",
  "timeFilter": "day"
}
````

#### Example 2: Scrape Comments

```json
{
  "mode": "comments",
  "postUrls": [
    "https://www.reddit.com/r/programming/comments/1abc123/interesting_discussion/",
    "https://old.reddit.com/r/python/comments/1def456/another_post/"
  ],
  "maxComments": 200,
  "expandThreads": true
}
```

#### Example 3: Scrape User Profiles

```json
{
  "mode": "profile",
  "usernames": ["spez", "example_user"],
  "maxPosts": 100,
  "section": "submitted",
  "sort": "top"
}
```

### 📊 Output Format

#### Subreddit Mode Output

Each post includes:

```json
{
  "subreddit": "python",
  "subreddit_prefixed": "r/python",
  "post_id": "1abc123",
  "post_name": "t3_1abc123",
  "title": "Interesting Python discussion",
  "author": "example_user",
  "selftext": "Post content preview...",
  "score": 456,
  "num_comments": 89,
  "url": "https://old.reddit.com/r/python/comments/...",
  "permalink": "https://old.reddit.com/r/python/comments/...",
  "domain": "self.python",
  "is_self_post": true,
  "link_flair": "Discussion",
  "thumbnail_url": null,
  "created_utc": 1747683628,
  "created_at": "2025-10-31T12:30:00",
  "is_stickied": false,
  "is_locked": false,
  "is_nsfw": false
}
```

#### Comments Mode Output

Each comment includes:

```json
{
  "comment_id": "abc123xyz",
  "comment_name": "t1_abc123xyz",
  "author": "example_user",
  "text": "This is a great discussion!",
  "score": 42,
  "awards_count": 2,
  "permalink": "https://old.reddit.com/r/...",
  "post_url": "https://old.reddit.com/r/...",
  "depth": 0,
  "parent_comment_id": null,
  "is_op": false,
  "is_edited": true,
  "is_stickied": false,
  "created_utc": 1728912645,
  "created_at": "2025-10-31T12:30:45"
}
```

#### Profile Mode Output

Profile data with posts:

```json
{
  "username": "spez",
  "post_karma": 0,
  "comment_karma": 0,
  "total_karma": 1047690,
  "account_created": "2005-06-06T04:00:00+00:00",
  "posts": [
    {
      "post_id": "abc123",
      "title": "Announcing new features",
      "author": "spez",
      "subreddit": "announcements",
      "score": 15234,
      "num_comments": 1250,
      "url": "https://old.reddit.com/...",
      "created_at": "2025-10-31T12:30:45",
      "is_stickied": true,
      "is_nsfw": false
    }
  ]
}
```
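Profile records nest the user's posts under a `posts` array; for row-oriented analysis it is often handy to flatten them first. Below is a minimal sketch based on the output shape shown above (the helper name is ours, not part of the Actor):

```python
def flatten_profile(profile):
    """Yield one flat dict per post, carrying key profile-level fields along."""
    for post in profile.get("posts", []):
        row = dict(post)
        row["username"] = profile.get("username")
        row["total_karma"] = profile.get("total_karma")
        yield row
```

Each yielded row can then be written straight to a CSV or DataFrame without nested structures.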

### 🎯 Use Cases

#### Research & Analysis

- **Sentiment Analysis** - Analyze community opinions across subreddits
- **Trend Detection** - Track emerging topics and discussions
- **User Behavior** - Study posting patterns and engagement
- **Content Analysis** - Build datasets for machine learning

#### Business Intelligence

- **Market Research** - Gather user feedback and discussions
- **Brand Monitoring** - Track mentions and sentiment
- **Competitive Analysis** - Monitor competitor discussions
- **Customer Insights** - Understand customer needs and pain points

#### AI & ML Applications

- **Training Data** - Build high-quality datasets for AI models
- **RAG Systems** - Feed Reddit content to retrieval systems
- **Chatbot Training** - Use conversations for dialogue models
- **Content Generation** - Analyze successful content patterns

### 🛠️ Local Development

#### Prerequisites

```bash
pip install -r requirements.txt
playwright install chromium
```

#### Create Input File

Create `storage/key_value_stores/default/INPUT.json`:

```json
{
  "mode": "subreddit",
  "subreddits": ["python"],
  "maxPosts": 10
}
```

#### Run Locally

```bash
cd Reddit/mcp
apify run
```

#### Check Results

Results are saved in `storage/datasets/default/`.

### 🚀 Deployment

#### Using Apify CLI

```bash
# Log in to Apify
apify login

# Push to the Apify platform
apify push
```

#### Manual Upload

1. Create a new actor on [Apify Console](https://console.apify.com/)
2. Upload all files including `Dockerfile`, `requirements.txt`, and `.actor/` directory
3. Configure input parameters
4. Run the actor

### 📚 API Integration

#### JavaScript/Node.js

```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const input = {
  mode: "subreddit",
  subreddits: ["python", "programming"],
  maxPosts: 50,
  sort: "hot",
};

const run = await client.actor("YOUR_ACTOR_ID").call(input);
const { items } = await client.dataset(run.defaultDatasetId).listItems();

console.log(`Scraped ${items.length} posts`);
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

input_data = {
    'mode': 'subreddit',
    'subreddits': ['python', 'programming'],
    'maxPosts': 50,
    'sort': 'hot'
}

run = client.actor('YOUR_ACTOR_ID').call(run_input=input_data)

for item in client.dataset(run['defaultDatasetId']).iterate_items():
    print(f"Post: {item['title']}")
    print(f"Score: {item['score']}")
```

### ⚡ Performance Tips

#### Optimize Speed

- Start with lower `maxPosts` values for testing
- Use specific subreddits instead of scraping all posts
- Disable `expandThreads` in comments mode if not needed
- Process fewer URLs/usernames per run

#### Avoid Rate Limiting

- Add delays between requests (built-in)
- Don't scrape the same content repeatedly
- Respect Reddit's servers - use reasonable limits
- Consider batching requests across multiple runs

### ⚠️ Limitations

- **Public Content Only** - Cannot scrape private subreddits or profiles
- **No Authentication** - Requires public access to content
- **Rate Limits** - Reddit may throttle excessive requests
- **Browser-Based** - Slower than direct API but more reliable
- **Dynamic Content** - Some features may change if Reddit updates layout

### 🐛 Troubleshooting

#### No Results Returned

- Verify subreddit/username/URL is correct
- Check if content is public (not private/restricted)
- Try with smaller `maxPosts` values first
- Review logs for specific error messages

#### Timeout Errors

- Content may be loading slowly
- Try with fewer items or smaller limits
- Check if Reddit is accessible from your location

#### Missing Data Fields

- Some fields may be null if not available
- Deleted content shows `[deleted]` for authors
- Hidden scores may show as 0
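Given these caveats (nullable fields, `[deleted]` authors, hidden scores surfacing as `0`), it can help to normalize records before analysis. A minimal sketch, with an illustrative helper name:

```python
def normalize(record):
    """Make the documented data quirks explicit before analysis."""
    out = dict(record)
    # Authors of deleted content come back as the literal string "[deleted]".
    out["author_deleted"] = record.get("author") in (None, "[deleted]")
    out["score"] = record.get("score") or 0
    # A zero score may simply mean Reddit was hiding it at scrape time.
    out["score_maybe_hidden"] = out["score"] == 0
    return out
```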

### 📄 License

This actor is provided as-is for scraping public Reddit data in accordance with Reddit's terms of service.

### 🔗 Related Actors

- [Reddit Subreddit Scraper](../reddit/) - Dedicated subreddit scraper
- [Reddit Comment Scraper](../reddit-comment/) - Dedicated comment scraper
- [Reddit Profile Scraper](../reddit-profile/) - Dedicated profile scraper

### 💡 Notes

- This MCP server uses browser automation to access Reddit's public interface
- Always respect Reddit's robots.txt and terms of service
- Use responsibly and avoid overwhelming Reddit's servers
- Consider implementing additional rate limiting for large-scale scraping
- The actor works best with the Apify platform's infrastructure

### 🆘 Support

For issues, questions, or feature requests, please open an issue in the repository or contact support.

***

**Made with ❤️ for the AI community | Powered by Apify**

# Actor input Schema

## `mode` (type: `string`):

Single mode (back-compat). Use `modes` for running multiple modes in one run.

## `modes` (type: `array`):

Run multiple modes in a single execution (e.g. `["subreddit", "profile"]`). When set, overrides the single `mode` field.
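For example, a combined run that scrapes both listings and profiles might look like this (field values are illustrative):

```json
{
  "modes": ["subreddit", "profile"],
  "subreddits": ["python"],
  "usernames": ["spez"],
  "maxPosts": 25
}
```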

## `subreddits` (type: `array`):

List of subreddit names without `r/` prefix.

## `maxPosts` (type: `integer`):

Max posts per subreddit or profile.

## `sort` (type: `string`):

How to sort posts on the listing.

## `timeFilter` (type: `string`):

Time range for `top` or `controversial` sorts.

## `postUrls` (type: `array`):

List of Reddit post URLs.

## `maxComments` (type: `integer`):

Max comments per post (in comments mode).

## `expandThreads` (type: `boolean`):

Click 'load more comments' buttons to expand collapsed threads.

## `usernames` (type: `array`):

Reddit usernames. Accepts plain, `u/spez`, or full URLs.

## `section` (type: `string`):

Which profile section to scrape in profile mode.

## `minScore` (type: `integer`):

Drop records with score below this. Applies to posts and comments alike.

## `maxAgeDays` (type: `integer`):

Drop records older than N days.

## `excludeNsfw` (type: `boolean`):

Drop NSFW posts.

## `keywordFilter` (type: `string`):

Only emit records whose title/content/text contains this substring (case-insensitive).

## `authorFilter` (type: `string`):

Only emit records by this author (case-insensitive substring match).

## `maxDepth` (type: `integer`):

(Comments mode) Drop comments deeper than N levels.
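Taken together, the filter fields act as a record-level gate. The Actor's actual implementation is not published, but the documented semantics can be sketched roughly as follows (the function name and option handling are illustrative):

```python
import time

def passes_filters(record, opts):
    """Illustrative sketch of the documented filter semantics."""
    if opts.get("minScore") is not None and record.get("score", 0) < opts["minScore"]:
        return False
    if opts.get("maxAgeDays") is not None:
        if time.time() - record.get("created_utc", 0) > opts["maxAgeDays"] * 86400:
            return False
    if opts.get("excludeNsfw") and record.get("is_nsfw"):
        return False
    kw = opts.get("keywordFilter")
    if kw:
        # Case-insensitive substring over title/content/text, per the docs.
        haystack = " ".join(
            str(record.get(f, "")) for f in ("title", "selftext", "text")
        ).lower()
        if kw.lower() not in haystack:
            return False
    af = opts.get("authorFilter")
    if af and af.lower() not in str(record.get("author", "")).lower():
        return False
    md = opts.get("maxDepth")
    if md is not None and record.get("depth", 0) > md:
        return False
    return True
```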

## Actor input object example

```json
{
  "mode": "subreddit",
  "modes": [],
  "subreddits": [
    "python"
  ],
  "maxPosts": 25,
  "sort": "hot",
  "timeFilter": "day",
  "postUrls": [],
  "maxComments": 100,
  "expandThreads": true,
  "usernames": [],
  "section": "submitted",
  "excludeNsfw": false
}
```
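Because every dataset item is tagged with `recordType`, a multi-mode run can be split downstream with a simple dispatch. The exact tag values are not listed in this README, so the sketch below simply keys on whatever values your run emits:

```python
from collections import defaultdict

def route_records(items):
    """Group dataset items by their recordType tag."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item.get("recordType", "unknown")].append(item)
    return buckets
```

For example, `route_records(items).keys()` shows which record types a run actually produced.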

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "python"
    ],
    "postUrls": [],
    "usernames": []
};

// Run the Actor and wait for it to finish
const run = await client.actor("crawlerbros/reddit-mcp-scraper-pro").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "subreddits": ["python"],
    "postUrls": [],
    "usernames": [],
}

# Run the Actor and wait for it to finish
run = client.actor("crawlerbros/reddit-mcp-scraper-pro").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "python"
  ],
  "postUrls": [],
  "usernames": []
}' |
apify call crawlerbros/reddit-mcp-scraper-pro --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=crawlerbros/reddit-mcp-scraper-pro",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit MCP Scraper Pro (Multi-Mode)",
        "description": "Unified Reddit scraper — run subreddit, comments and profile modes in a single execution. Advanced filters: minScore, maxAgeDays, excludeNsfw, keywordFilter, authorFilter, maxDepth. Every record is tagged with recordType so downstream pipelines can route easily. No login required.",
        "version": "1.0",
        "x-build-id": "S3acLaH9cIMfBU2BB"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/crawlerbros~reddit-mcp-scraper-pro/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-crawlerbros-reddit-mcp-scraper-pro",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/crawlerbros~reddit-mcp-scraper-pro/runs": {
            "post": {
                "operationId": "runs-sync-crawlerbros-reddit-mcp-scraper-pro",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/crawlerbros~reddit-mcp-scraper-pro/run-sync": {
            "post": {
                "operationId": "run-sync-crawlerbros-reddit-mcp-scraper-pro",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "mode": {
                        "title": "Scraping mode (single)",
                        "enum": [
                            "subreddit",
                            "comments",
                            "profile"
                        ],
                        "type": "string",
                        "description": "Single mode (back-compat). Use `modes` for running multiple modes in one run.",
                        "default": "subreddit"
                    },
                    "modes": {
                        "title": "Modes (multiple, Pro)",
                        "type": "array",
                        "description": "Run multiple modes in a single execution (e.g. `[\"subreddit\", \"profile\"]`). When set, overrides the single `mode` field.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "subreddits": {
                        "title": "Subreddit names (for subreddit mode)",
                        "type": "array",
                        "description": "List of subreddit names without `r/` prefix.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxPosts": {
                        "title": "Max posts",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Max posts per subreddit or profile.",
                        "default": 25
                    },
                    "sort": {
                        "title": "Sort posts by",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "How to sort posts on the listing.",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "Time filter (top/controversial)",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range for `top` or `controversial` sorts.",
                        "default": "day"
                    },
                    "postUrls": {
                        "title": "Post URLs (for comments mode)",
                        "type": "array",
                        "description": "List of Reddit post URLs.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxComments": {
                        "title": "Max comments",
                        "minimum": 1,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "Max comments per post (in comments mode).",
                        "default": 100
                    },
                    "expandThreads": {
                        "title": "Expand comment threads",
                        "type": "boolean",
                        "description": "Click 'load more comments' buttons to expand collapsed threads.",
                        "default": true
                    },
                    "usernames": {
                        "title": "Usernames (for profile mode)",
                        "type": "array",
                        "description": "Reddit usernames. Accepts plain, `u/spez`, or full URLs.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "section": {
                        "title": "Profile section",
                        "enum": [
                            "submitted",
                            "comments",
                            "gilded"
                        ],
                        "type": "string",
                        "description": "Which profile section to scrape in profile mode.",
                        "default": "submitted"
                    },
                    "minScore": {
                        "title": "Min score (filter)",
                        "minimum": -10000,
                        "maximum": 10000000,
                        "type": "integer",
                        "description": "Drop records with score below this. Applies to posts and comments alike."
                    },
                    "maxAgeDays": {
                        "title": "Max age in days (filter)",
                        "minimum": 1,
                        "maximum": 36500,
                        "type": "integer",
                        "description": "Drop records older than N days."
                    },
                    "excludeNsfw": {
                        "title": "Exclude NSFW",
                        "type": "boolean",
                        "description": "Drop NSFW posts.",
                        "default": false
                    },
                    "keywordFilter": {
                        "title": "Keyword filter (substring)",
                        "type": "string",
                        "description": "Only emit records whose title/content/text contains this substring (case-insensitive)."
                    },
                    "authorFilter": {
                        "title": "Author filter (substring)",
                        "type": "string",
                        "description": "Only emit records by this author (case-insensitive substring match)."
                    },
                    "maxDepth": {
                        "title": "Max comment depth",
                        "minimum": 0,
                        "maximum": 50,
                        "type": "integer",
                        "description": "(Comments mode) Drop comments deeper than N levels."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "number",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "number",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "number",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
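The `usageUsd` map in the run object breaks down per-resource charges, while `usageTotalUsd` holds the grand total, so a downstream pipeline can reconcile the two before logging or billing. A minimal Node.js sketch (no dependencies; the sample run values below are hypothetical, not real billing data):

```javascript
// Sample run record shaped like the schema above (values are hypothetical).
const run = {
    status: "READY",
    usageTotalUsd: 0.00005,
    usageUsd: {
        ACTOR_COMPUTE_UNITS: 0,
        DATASET_WRITES: 0,
        KEY_VALUE_STORE_WRITES: 0.00005,
    },
};

// Sum the per-resource USD charges and compare with the reported total,
// using a small epsilon to absorb floating-point rounding.
const summed = Object.values(run.usageUsd).reduce((acc, v) => acc + v, 0);
const consistent = Math.abs(summed - run.usageTotalUsd) < 1e-9;

console.log(`total: $${run.usageTotalUsd}, summed: $${summed}, consistent: ${consistent}`);
```

In a real integration you would read these fields from the run object returned by `apify-client` after the run finishes, rather than from a hardcoded literal.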
