# Reddit Scraper - AI Answers, Posts, Comments, Search (`openclawai/reddit-deep-scraper`) Actor

The only Reddit scraper with AI Answers + TLS fingerprinting. Scrape posts, comments, search results & subreddits. No API key, no login. AI-ready JSON output. 6 actions in 1 Actor. Browser-grade anti-detection. Parallel comment fetching. $5/1k posts.

- **URL**: https://apify.com/openclawai/reddit-deep-scraper.md
- **Developed by:** [Pika Choo](https://apify.com/openclawai) (community)
- **Categories:** Lead generation, Agents, SEO tools
- **Stats:** 3 total users, 1 monthly user, 66.7% runs succeeded, 5 bookmarks
- **User rating**: 4.85 out of 5 stars

## Pricing

from $2.00 / 1,000 posts scraped

This Actor uses pay-per-event pricing: you are charged a fixed price for specific events plus Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, used for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word Actor is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Deep Scraper — Posts, Comments, Search & AI Answers

The **only Reddit scraper on Apify with TLS fingerprinting and Reddit AI Answers**. While other scrapers use basic HTTP clients and break constantly, this one mimics a real browser — making it virtually undetectable by Reddit's anti-bot systems.

No Reddit API key. No login. No rate limit headaches. Just data.

---

### Why This Scraper Beats Every Other Option

| Feature | Other Scrapers | This Scraper |
|---------|---------------|-------------|
| Anti-detection | Basic HTTP (gets blocked) | **TLS fingerprinting (real browser)** |
| Reddit AI Answers | Not available | **Built-in** |
| Comment extraction | Limited or none | **Full nested comment trees** |
| Subreddit discovery | Not available | **Keyword-based subreddit finder** |
| When Reddit blocks you | Fails | **Auto-fallback to old.reddit.com + PullPush** |
| Rate limiting | Crashes | **Exponential backoff + smart retry** |
| Speed | Sequential | **Parallel fetching (up to 20 threads)** |

---

### 6 Powerful Actions in 1 Actor

#### 1. Scrape Subreddit
Extract posts from any subreddit with full metadata — title, body, author, upvotes, media URLs, timestamps. Optionally fetch **complete comment trees** with parallel workers.

```json
{
    "action": "scrape_subreddit",
    "subreddit": "technology",
    "sort": "top",
    "timeFilter": "week",
    "limit": 100,
    "includeComments": true
}
````

#### 2. Search Posts

Keyword search across all of Reddit or within a specific subreddit. Find every post mentioning your brand, product, or topic.

```json
{
    "action": "search_posts",
    "query": "best CRM software",
    "sort": "relevance",
    "limit": 100
}
```

#### 3. Search Comments

Find comments matching your keywords. Extracts comment IDs from Reddit's HTML search, then fetches full comment details via bulk API.

```json
{
    "action": "search_comments",
    "query": "your brand name",
    "subreddit": "technology"
}
```

#### 4. Find Subreddits by Keyword

Discover the most relevant subreddits for any topic. Returns a ranked list with mention counts, sample post titles, and activity dates. Perfect for finding where your audience hangs out.

```json
{
    "action": "search_subreddits",
    "query": "machine learning",
    "limit": 50
}
```

#### 5. Fetch Single Post

Get any Reddit post by URL with full content — title, body, author, upvotes, timestamp. Use it to enrich existing datasets or verify specific posts.

```json
{
    "action": "fetch_post",
    "postUrl": "https://www.reddit.com/r/technology/comments/abc123/post_title/"
}
```

#### 6. Reddit AI Answers (Exclusive)

Query Reddit's built-in AI answer engine. Returns markdown-formatted answers with follow-up questions and source posts/subreddits. **No other scraper on Apify has this.**

```json
{
    "action": "reddit_answers",
    "query": "What are the best tools for web scraping in 2026?"
}
```

---

### What You Get Back

#### Post data

```json
{
    "post_id": "1abc123",
    "permalink": "/r/technology/comments/1abc123/title/",
    "subreddit_name": "technology",
    "author_name": "user123",
    "title": "Example post title",
    "body": "Full post body text...",
    "media": ["https://i.redd.it/image.jpg"],
    "num_comments": 42,
    "num_upvotes": 1500,
    "post_timestamp": "2026-04-01T12:00:00Z",
    "comments": [
        {
            "author_name": "commenter1",
            "body": "This is a top-level comment",
            "media": [],
            "parent_id": "t3_1abc123"
        },
        {
            "author_name": "replier2",
            "body": "This is a nested reply",
            "media": [],
            "parent_id": "t1_abc456"
        }
    ]
}
```
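
The `parent_id` prefixes follow Reddit's "fullname" convention: `t3_` means the parent is the post itself (a top-level comment), while `t1_` means the parent is another comment (a nested reply). A minimal sketch of separating the two from the flat `comments` list shown above (the helper function is hypothetical, not part of the Actor's output):

```python
def split_comments(post):
    """Separate top-level comments (parent_id starts with `t3_`, i.e. the
    post itself) from nested replies (`t1_`, i.e. another comment)."""
    top_level, replies = [], []
    for c in post.get("comments", []):
        (top_level if c["parent_id"].startswith("t3_") else replies).append(c)
    return top_level, replies
```

With the example record above, `commenter1` lands in the top-level list and `replier2` in the replies list, which can then be attached to their parent comments by ID.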

#### Reddit AI Answer data

```json
{
    "markdown": "### Full AI-generated answer in markdown...",
    "follow_ups": ["Related question 1", "Related question 2"],
    "source_posts": ["post_id_1", "post_id_2"],
    "source_subreddits": ["subreddit1", "subreddit2"],
    "queries_remaining": 29
}
```

---

### Use Cases

**Market Research** — Monitor brand mentions, product discussions, and industry trends across thousands of subreddits.

**Sentiment Analysis** — Collect posts and comments for NLP/AI processing. AI-ready JSON output plugs directly into your pipeline.

**SEO & Content Marketing** — Reddit dominates Google search results. Find which threads rank for your keywords and what people are saying.

**Lead Generation** — Find people actively asking questions your product solves. Target them with relevant content.

**Competitor Intelligence** — Track what users say about your competitors. Identify pain points you can solve.

**AI Training Data** — Bulk export Reddit discussions for fine-tuning LLMs, training classifiers, or building datasets.

**Academic Research** — Structured data collection from Reddit for social science, NLP, and computational studies.

---

### How It Works Under the Hood

1. **TLS Fingerprinting** — Every request uses a real browser fingerprint. Reddit sees a real browser, not a bot.

2. **8 Rotating User-Agents** — Real browser user-agent strings from Windows, Mac, and Linux.

3. **Smart Fallback Chain** — If `www.reddit.com` blocks you, automatically tries `old.reddit.com`, then PullPush API as disaster recovery.

4. **Exponential Backoff** — Hit a rate limit? The scraper backs off with jitter and retries automatically. No manual intervention needed.

5. **Parallel Comment Fetching** — Configurable worker pool (1-20 threads) fetches comments concurrently with random jitter to avoid detection.
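
The backoff-with-jitter pattern from step 4 can be sketched in Python. The retry count, delay constants, and `RateLimitError` class below are illustrative assumptions, not the Actor's actual internals:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for an HTTP 429 response from Reddit."""

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0, cap=60.0):
    """Call `fetch()` and retry on rate limiting, sleeping
    base_delay * 2**attempt plus random jitter between attempts."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            delay = min(cap, base_delay * 2 ** attempt)
            # Full jitter spreads retries out so parallel workers
            # don't all hammer Reddit at the same instant.
            time.sleep(delay + random.uniform(0, delay))
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```

The jitter component is what lets the parallel workers in step 5 retry without synchronizing into a burst that would trip the rate limiter again.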

---

### Pricing

| What you scrape | Cost |
|----------------|------|
| Posts (without comments) | **$2 / 1,000 posts** |
| Posts (with full comment trees) | **$5 / 1,000 posts** |
| Search results | **$2 / 1,000 results** |
| Subreddits found | **$2 / 1,000 subreddits** |
| Single post fetch | **$5 / 1,000 fetches** |
| Reddit AI Answers | **$10 / 1,000 queries** |

You only pay for results delivered. No monthly subscription. No minimum.
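
As a worked example of the table above, a quick cost estimate can be computed per 1,000 results (a back-of-the-envelope sketch; proxy bandwidth is billed separately and not included):

```python
# USD per 1,000 results delivered, from the pricing table above.
PRICE_PER_1K = {
    "posts": 2.0,
    "posts_with_comments": 5.0,
    "search_results": 2.0,
    "subreddits": 2.0,
    "post_fetches": 5.0,
    "ai_answers": 10.0,
}

def estimate_cost(counts):
    """counts: mapping of result type -> number of results delivered."""
    return sum(PRICE_PER_1K[kind] * n / 1000 for kind, n in counts.items())

# e.g. 2,000 posts with full comment trees + 500 AI Answer queries
print(estimate_cost({"posts_with_comments": 2000, "ai_answers": 500}))  # → 15.0
```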

---

### Tips for Best Results

- Use **residential proxies** for reliable scraping — select "Residential" in the proxy configuration
- Start with a small `limit` (10-50) to test, then scale up
- Set `includeComments: false` for faster, cheaper runs when you only need post metadata
- Use `sort: "top"` with `timeFilter: "week"` to get the most popular recent content
- For AI Answers, ask natural questions like you would on Google

# Actor input Schema

## `action` (type: `string`):

Choose your action — then scroll down and fill only the fields marked for that action:

• **Scrape Subreddit** → ✅ Subreddit
• **Search Posts** → ✅ Query + ⚠️ Subreddit (optional)
• **Search Comments** → ✅ Query + ⚠️ Subreddit (optional)
• **Find Subreddits** → ✅ Query
• **Fetch Post** → ✅ Post URL
• **Reddit AI Answers** → ✅ Query

## `subreddit` (type: `string`):

**Subreddit name without the r/ prefix**

✅ Required for: Scrape Subreddit
⚠️ Optional for: Search Posts, Search Comments
❌ Ignore for: Find Subreddits, Fetch Post, Reddit AI Answers

Example: `technology` (not `r/technology`)

## `query` (type: `string`):

**Keyword to search or question to ask**

✅ Required for: Search Posts, Search Comments, Find Subreddits, Reddit AI Answers
❌ Ignore for: Scrape Subreddit, Fetch Post

Examples:
• `best web scraping tools 2026`
• `how to learn Python`
• `AI automation`

## `postUrl` (type: `string`):

**Full Reddit post URL**

✅ Required for: Fetch Single Post
❌ Ignore for: All other actions

Example: `https://www.reddit.com/r/technology/comments/abc123/my_post/`
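
Reddit post URLs follow the pattern `/r/<subreddit>/comments/<post_id>/<slug>/`, so the base-36 post ID can be recovered with a small regex if you need to cross-reference fetched posts against other data (a hypothetical helper, not part of the Actor):

```python
import re

def extract_post_id(post_url):
    """Pull the base-36 post ID out of a full Reddit post URL,
    or return None if the URL doesn't match the expected pattern."""
    m = re.search(r"/comments/([0-9a-z]+)", post_url)
    return m.group(1) if m else None

print(extract_post_id("https://www.reddit.com/r/technology/comments/abc123/my_post/"))  # → abc123
```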

## `sort` (type: `string`):

**How to sort results**

For Scrape Subreddit:
• 🔥 **hot** (trending now)
• ✨ **new** (newest first)
• 🏆 **top** (highest rated)
• 📈 **rising** (gaining traction)
• ⚡ **controversial** (most debated)

For Search:
• 🎯 **relevance** (best match)
• ✨ **new** (newest first)
• 🏆 **top** (highest rated)
• 💬 **comments** (most discussed)

## `timeFilter` (type: `string`):

**Time range for results**

⚠️ Only applies when Sort = **top** or **controversial**

Options: past hour, day, week, month, year, or all time

## `limit` (type: `integer`):

**Maximum number of posts/results to return**

⚠️ Higher = more data but slower and costs more
💰 Each post with comments costs more than posts only

Recommended:
• 🟢 Quick test: 10-25
• 🟡 Normal: 50-100
• 🔴 Large: 200-500

## `includeComments` (type: `boolean`):

**Fetch full comment trees for each post**

✅ Enabled: Get complete discussions with nested replies
❌ Disabled: Posts only (faster, cheaper)

⚠️ Only applies to: **Scrape Subreddit**
💰 Comments significantly increase cost and time
📊 A post with 500 comments = 500 API calls

## `threads` (type: `integer`):

**Number of parallel workers**

🟢 Low (1-5): Slower but safer, less proxy bandwidth
🟡 Medium (6-10): Balanced speed and reliability
🔴 High (11-20): Fastest but uses more proxy bandwidth

⚠️ Only applies to: **Scrape Subreddit**, **Find Subreddits**
💡 Higher = faster but may trigger rate limits

## `proxyConfiguration` (type: `object`):

**Reddit requires residential proxies for best results**

📌 **Default Setup (Recommended):**
• ✅ Use Apify Proxy
• 🏠 Residential proxies (best success rate)
• 💰 Costs: $8/GB

💡 **Alternative:**
• 🏢 BUYPROXIES94952 (free datacenter, 5 IPs)
• ⚠️ Higher chance of Reddit blocks
• 💵 Free but less reliable

## Actor input object example

```json
{
  "action": "scrape_subreddit",
  "subreddit": "technology",
  "query": "best web scraping tools 2026",
  "postUrl": "https://www.reddit.com/r/technology/comments/1sdjh66/example/",
  "sort": "hot",
  "timeFilter": "week",
  "limit": 50,
  "includeComments": false,
  "threads": 10,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}
```

# Actor output Schema

## `results` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "action": "scrape_subreddit",
    "subreddit": "technology",
    "sort": "hot",
    "timeFilter": "week",
    "limit": 50,
    "includeComments": false,
    "threads": 10,
    "proxyConfiguration": {
        "useApifyProxy": true,
        "apifyProxyGroups": [
            "RESIDENTIAL"
        ]
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("openclawai/reddit-deep-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "action": "scrape_subreddit",
    "subreddit": "technology",
    "sort": "hot",
    "timeFilter": "week",
    "limit": 50,
    "includeComments": False,
    "threads": 10,
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
    },
}

# Run the Actor and wait for it to finish
run = client.actor("openclawai/reddit-deep-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "action": "scrape_subreddit",
  "subreddit": "technology",
  "sort": "hot",
  "timeFilter": "week",
  "limit": 50,
  "includeComments": false,
  "threads": 10,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}' |
apify call openclawai/reddit-deep-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=openclawai/reddit-deep-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper - AI Answers, Posts, Comments, Search",
        "description": "The only Reddit scraper with AI Answers + TLS fingerprinting. Scrape posts, comments, search results & subreddits. No API key, no login. AI-ready JSON output. 6 actions in 1   Actor. Browser-grade anti-detection. Parallel comment fetching. $5/1k posts.",
        "version": "1.0",
        "x-build-id": "4p6fySU4cnGaMFKTc"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/openclawai~reddit-deep-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-openclawai-reddit-deep-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/openclawai~reddit-deep-scraper/runs": {
            "post": {
                "operationId": "runs-sync-openclawai-reddit-deep-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/openclawai~reddit-deep-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-openclawai-reddit-deep-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "action"
                ],
                "properties": {
                    "action": {
                        "title": "🎯 What do you want to scrape?",
                        "enum": [
                            "scrape_subreddit",
                            "search_posts",
                            "search_comments",
                            "search_subreddits",
                            "fetch_post",
                            "reddit_answers"
                        ],
                        "type": "string",
                        "description": "Choose your action — then scroll down and fill only the fields marked for that action:\n\n• **Scrape Subreddit** → ✅ Subreddit\n• **Search Posts** → ✅ Query + ⚠️ Subreddit (optional)\n• **Search Comments** → ✅ Query + ⚠️ Subreddit (optional)\n• **Find Subreddits** → ✅ Query\n• **Fetch Post** → ✅ Post URL\n• **Reddit AI Answers** → ✅ Query",
                        "default": "scrape_subreddit"
                    },
                    "subreddit": {
                        "title": "📝 Subreddit Name",
                        "type": "string",
                        "description": "**Subreddit name without the r/ prefix**\n\n✅ Required for: Scrape Subreddit\n⚠️ Optional for: Search Posts, Search Comments\n❌ Ignore for: Find Subreddits, Fetch Post, Reddit AI Answers\n\nExample: `technology` (not `r/technology`)"
                    },
                    "query": {
                        "title": "🔍 Search Query / Question",
                        "type": "string",
                        "description": "**Keyword to search or question to ask**\n\n✅ Required for: Search Posts, Search Comments, Find Subreddits, Reddit AI Answers\n❌ Ignore for: Scrape Subreddit, Fetch Post\n\nExamples:\n• `best web scraping tools 2026`\n• `how to learn Python`\n• `AI automation`"
                    },
                    "postUrl": {
                        "title": "🔗 Post URL",
                        "type": "string",
                        "description": "**Full Reddit post URL**\n\n✅ Required for: Fetch Single Post\n❌ Ignore for: All other actions\n\nExample: `https://www.reddit.com/r/technology/comments/abc123/my_post/`"
                    },
                    "sort": {
                        "title": "📊 Sort Order",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial",
                            "relevance",
                            "comments"
                        ],
                        "type": "string",
                        "description": "**How to sort results**\n\nFor Scrape Subreddit:\n• 🔥 **hot** (trending now)\n• ✨ **new** (newest first)\n• 🏆 **top** (highest rated)\n• 📈 **rising** (gaining traction)\n• ⚡ **controversial** (most debated)\n\nFor Search:\n• 🎯 **relevance** (best match)\n• ✨ **new** (newest first)\n• 🏆 **top** (highest rated)\n• 💬 **comments** (most discussed)",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "⏰ Time Filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "**Time range for results**\n\n⚠️ Only applies when Sort = **top** or **controversial**\n\nOptions: past hour, day, week, month, year, or all time",
                        "default": "week"
                    },
                    "limit": {
                        "title": "🔢 Max Results",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "**Maximum number of posts/results to return**\n\n⚠️ Higher = more data but slower and costs more\n💰 Each post with comments costs more than posts only\n\nRecommended:\n• 🟢 Quick test: 10-25\n• 🟡 Normal: 50-100\n• 🔴 Large: 200-500",
                        "default": 50
                    },
                    "includeComments": {
                        "title": "💬 Include Comments",
                        "type": "boolean",
                        "description": "**Fetch full comment trees for each post**\n\n✅ Enabled: Get complete discussions with nested replies\n❌ Disabled: Posts only (faster, cheaper)\n\n⚠️ Only applies to: **Scrape Subreddit**\n💰 Comments significantly increase cost and time\n📊 A post with 500 comments = 500 API calls",
                        "default": false
                    },
                    "threads": {
                        "title": "⚡ Concurrency (Threads)",
                        "minimum": 1,
                        "maximum": 20,
                        "type": "integer",
                        "description": "**Number of parallel workers**\n\n🟢 Low (1-5): Slower but safer, less proxy bandwidth\n🟡 Medium (6-10): Balanced speed and reliability\n🔴 High (11-20): Fastest but uses more proxy bandwidth\n\n⚠️ Only applies to: **Scrape Subreddit**, **Find Subreddits**\n💡 Higher = faster but may trigger rate limits",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "🌐 Proxy Configuration",
                        "type": "object",
                        "description": "**Reddit requires residential proxies for best results**\n\n📌 **Default Setup (Recommended):**\n• ✅ Use Apify Proxy\n• 🏠 Residential proxies (best success rate)\n• 💰 Costs: $8/GB\n\n💡 **Alternative:**\n• 🏢 BUYPROXIES94952 (free datacenter, 5 IPs)\n• ⚠️ Higher chance of Reddit blocks\n• 💵 Free but less reliable",
                        "default": {
                            "useApifyProxy": true,
                            "apifyProxyGroups": [
                                "RESIDENTIAL"
                            ]
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
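
The schema above describes the run object returned by the platform: `defaultDatasetId` points at the dataset holding the scraped results, `usageUsd` breaks platform usage down per event type, and `usageTotalUsd` is the overall cost. A minimal sketch of working with that shape, using an illustrative hand-written `run` dict (the field values here are made up, not real API output):

```python
# Hypothetical run object matching the schema above; values are illustrative.
run = {
    "defaultDatasetId": "abc123",
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,  # the only non-zero event here
    },
}

# usageTotalUsd aggregates the per-event usageUsd entries, so summing them
# should reproduce the total (up to floating-point tolerance).
computed_total = sum(run["usageUsd"].values())
assert abs(computed_total - run["usageTotalUsd"]) < 1e-9

# defaultDatasetId is what you pass to the dataset endpoints to fetch results.
print(run["defaultDatasetId"])
print(f"total usage: ${run['usageTotalUsd']:.5f}")
```

In a real integration you would obtain `run` from the API (for example via `apify-client`'s run endpoints) rather than constructing it by hand; the point is only which fields to read once you have it.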
