# Reddit Scraper — Posts, Comments & Sentiment Analysis (`dltik/reddit-scraper`) Actor

Scrape Reddit posts, comments, user profiles, and search results. Built-in AFINN-165 sentiment analysis, nested comment threads, universal URL input. HTTP-only, no browser needed. 256MB memory. $2 per 1,000 results.

- **URL**: https://apify.com/dltik/reddit-scraper.md
- **Developed by:** [dltik](https://apify.com/dltik) (community)
- **Categories:** Developer tools, Automation, SEO tools
- **Stats:** 2 total users, 1 monthly users, 50.0% runs succeeded, 3 bookmarks
- **User rating**: No ratings yet

## Pricing

From $2.00 / 1,000 results scraped

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper — Posts, Comments, Subreddits & Sentiment Analysis

Extract Reddit posts, comments, user profiles, and search results with built-in **AFINN-165 sentiment analysis**. HTTP-only, no browser needed. **256MB memory. $2 per 1,000 results. 50% cheaper than the market leader.**

Paste any Reddit URL or search by keyword. Auto-detects subreddits, posts, users, and search queries.

---

### What can this Reddit Scraper do?

- **Scrape subreddits** — extract hot, new, top, rising, or controversial posts from any public subreddit
- **Extract comments** — nested comment threads with depth tracking, author, score, and sentiment
- **Search Reddit** — keyword search across all of Reddit with time and score filtering
- **Scrape user profiles** — karma, account age, bio, recent posts
- **Sentiment analysis** — built-in AFINN-165 scoring with negation detection (no external API)
- **Filter and sort** — by score, time range, NSFW, sort order
- **Universal URL input** — paste any Reddit URL, automatically detected and processed

---

### Why this Reddit Scraper?

| Feature | This Actor | trudax ($4/1K) | harshmaur ($2/1K) |
|---------|-----------|----------------|-------------------|
| Posts + comments + profiles | Yes | Yes | Yes |
| **Sentiment analysis** | **Built-in AFINN-165** | No | No |
| Nested comment depth | Yes | Yes | Yes |
| Memory | **256MB** | 2GB | 2GB |
| Browser needed | **No (HTTP-only)** | Yes | Yes |
| Price per 1,000 | **$2** | $4 | $2 |
| Success rate | **99%+** | 91% | 98% |

---

### Quick start

#### Scrape a subreddit

```json
{
  "subreddits": ["technology", "programming"],
  "maxResults": 50,
  "sortBy": "hot"
}
```

#### Search Reddit for a topic

```json
{
  "searchTerms": "artificial intelligence",
  "maxResults": 100,
  "sortBy": "top",
  "timeFilter": "month",
  "enableSentiment": true
}
```

#### Scrape a post with all comments

```json
{
  "urls": ["https://reddit.com/r/AskReddit/comments/abc123/your_post_title/"],
  "includeComments": true,
  "commentsPerPost": 100,
  "enableSentiment": true
}
```

#### Scrape a user profile + their posts

```json
{
  "urls": ["https://reddit.com/user/spez"],
  "maxResults": 20
}
```

#### Bulk URL processing

```json
{
  "urls": [
    "https://reddit.com/r/technology",
    "https://reddit.com/r/AskReddit/comments/abc123/title/",
    "https://reddit.com/user/example",
    "https://reddit.com/search?q=startup"
  ],
  "enableSentiment": true
}
```

***

### Input parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `urls` | array | — | Any Reddit URLs (subreddit, post, user, search — auto-detected) |
| `subreddits` | array | — | Subreddit names without r/ (e.g. `["technology", "askreddit"]`) |
| `searchTerms` | string | — | Search Reddit by keyword |
| `maxResults` | integer | `50` | Max posts per source (1-1000) |
| `includeComments` | boolean | `false` | Fetch comments for each post |
| `commentsPerPost` | integer | `20` | Max comments per post (1-500) |
| `sortBy` | enum | `hot` | `hot`, `new`, `top`, `rising`, `controversial` |
| `timeFilter` | enum | `week` | `hour`, `day`, `week`, `month`, `year`, `all` |
| `enableSentiment` | boolean | `false` | Add AFINN-165 sentiment scoring |
| `minScore` | integer | `0` | Only include posts with this many upvotes or more |
| `proxyConfig` | object | Residential | Proxy settings (residential proxy required) |

***

### What data do you get from Reddit?

#### Post output

```json
{
  "type": "post",
  "id": "abc123",
  "url": "https://www.reddit.com/r/technology/comments/abc123/title/",
  "subreddit": "technology",
  "title": "Post title here",
  "body": "Full selftext content...",
  "body_html": "<div>...<p>HTML version</p>...</div>",
  "author": "username",
  "author_fullname": "t2_abc123",
  "score": 1234,
  "upvote_ratio": 0.95,
  "num_comments": 56,
  "created_utc": "2026-03-31T10:00:00Z",
  "permalink": "/r/technology/comments/abc123/title/",
  "link_url": "https://example.com/article",
  "domain": "example.com",
  "is_self": false,
  "is_video": false,
  "is_nsfw": false,
  "is_locked": false,
  "is_stickied": false,
  "flair": "Discussion",
  "awards": 3,
  "media_url": null,
  "thumbnail": "https://...",
  "sentiment_score": 2.3,
  "sentiment_label": "positive"
}
```

#### Comment output

```json
{
  "type": "comment",
  "id": "xyz789",
  "post_id": "abc123",
  "subreddit": "technology",
  "body": "This is a great point!",
  "author": "commenter",
  "score": 45,
  "created_utc": "2026-03-31T11:00:00Z",
  "depth": 0,
  "parent_id": "t3_abc123",
  "is_op": false,
  "is_stickied": false,
  "awards": 0,
  "controversiality": 0,
  "sentiment_score": 3.0,
  "sentiment_label": "positive"
}
```

#### User profile output

```json
{
  "type": "profile",
  "username": "spez",
  "display_name": "spez",
  "bio": "CEO of Reddit",
  "avatar_url": "https://...",
  "karma_post": 12345,
  "karma_comment": 67890,
  "karma_total": 80235,
  "created_utc": "2005-06-06T00:00:00Z",
  "is_gold": true,
  "is_mod": true,
  "is_verified": true,
  "profile_url": "https://www.reddit.com/user/spez"
}
```

***

### Sentiment analysis

When `enableSentiment: true`, each post and comment gets:

| Field | Type | Description |
|-------|------|-------------|
| `sentiment_score` | float | Average AFINN-165 score per word (-5 to +5) |
| `sentiment_label` | string | `positive` (>0.5), `negative` (<-0.5), `neutral` |

- Built-in AFINN-165 lexicon (~300 words)
- Negation detection: "not good" scores as negative
- No external API calls — runs entirely locally
- Works best for English text
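
The lexicon scoring described above can be sketched in a few lines of Python. This is an illustrative sketch, not the Actor's implementation, and `AFINN_SAMPLE` is a tiny stand-in for the real ~300-word lexicon:

```python
# Minimal sketch of AFINN-style scoring with negation detection.
# AFINN_SAMPLE is a small illustrative stand-in for the full lexicon.
AFINN_SAMPLE = {"good": 3, "great": 3, "love": 3, "bad": -3, "terrible": -3, "hate": -3}
NEGATIONS = {"not", "no", "never"}

def sentiment(text: str) -> tuple[float, str]:
    words = text.lower().split()
    scores = []
    for i, word in enumerate(words):
        if word in AFINN_SAMPLE:
            score = AFINN_SAMPLE[word]
            # Negation detection: flip the sign when the previous token negates.
            if i > 0 and words[i - 1] in NEGATIONS:
                score = -score
            scores.append(score)
    # Average score per scored word, as in the sentiment_score field.
    avg = sum(scores) / len(scores) if scores else 0.0
    label = "positive" if avg > 0.5 else "negative" if avg < -0.5 else "neutral"
    return avg, label
```

With this toy lexicon, `"not good"` flips `good` (+3) to -3.0 and is labeled `negative`, matching the thresholds in the table above.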

***

### How much does it cost to scrape Reddit?

**$0.002 per result** = **$2 per 1,000 results**. 50% cheaper than the market leader.

| Scenario | Results | Cost | Time |
|----------|---------|------|------|
| 1 subreddit, 50 hot posts | 50 | $0.10 | ~5s |
| 3 subreddits, 100 posts each | 300 | $0.60 | ~30s |
| Search + 20 comments per post | 500 | $1.00 | ~60s |
| Brand monitoring, 1000 posts | 1,000 | $2.00 | ~120s |

Compute cost is negligible at 256MB memory.
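
Since billing is a flat per-event price, estimating a job's cost is simple arithmetic. A quick illustrative helper (not part of any Apify API):

```python
PRICE_PER_RESULT = 0.002  # $2.00 per 1,000 results

def estimate_cost(num_results: int) -> float:
    """Estimated charge in USD for a run producing num_results items."""
    return round(num_results * PRICE_PER_RESULT, 2)
```

For example, 300 posts across 3 subreddits comes to $0.60, as in the table above.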

***

### Use cases

- **Market research** — analyze what Reddit users say about your industry, products, or competitors
- **Brand monitoring** — track mentions of your brand across subreddits with sentiment scoring
- **Content research** — find trending topics and popular content formats in your niche
- **Competitor analysis** — monitor competitor mentions and user sentiment
- **Social listening** — track conversations about keywords, products, or events
- **Lead generation** — find users asking for recommendations in your product category
- **Academic research** — collect Reddit data for social media studies
- **SEO research** — discover what questions people ask on Reddit (for content creation)

***

### API integration

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("dltik/reddit-scraper").call(run_input={
    "subreddits": ["technology"],
    "maxResults": 100,
    "sortBy": "top",
    "timeFilter": "week",
    "enableSentiment": True,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["type"] == "post":
        print(f"[{item['sentiment_label']}] {item['title']} ({item['score']} pts)")
```

#### curl

```bash
curl -X POST "https://api.apify.com/v2/acts/dltik~reddit-scraper/runs" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "searchTerms": "startup funding",
    "maxResults": 50,
    "enableSentiment": true
  }'
```

***

### Technical details

- **HTTP-only** — uses Reddit's public `.json` API, no browser or Chromium needed
- **256MB memory** — lightest Reddit scraper on Apify (competitors use 2-4GB)
- **Cursor pagination** — automatic `after` pagination for large result sets
- **Rate limiting** — built-in 1s delay between requests per Reddit guidelines
- **Residential proxy** — required because Reddit blocks datacenter IPs (included by default)
- **Nested comments** — full thread structure with depth tracking
- **Error handling** — per-URL error isolation, 429 retry with backoff, private/deleted detection
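
The cursor pagination and rate limiting described above can be sketched as follows. This is a sketch, not the Actor's code: `get_json` is a hypothetical injected HTTP helper (e.g. wrapping `requests.get(...).json()`), while the URL shape and `after` cursor follow Reddit's public listing endpoint:

```python
import time
from typing import Callable

def fetch_listing(get_json: Callable[[str, dict], dict], subreddit: str,
                  sort: str = "hot", max_results: int = 100) -> list[dict]:
    """Page through Reddit's public .json listing using the `after` cursor."""
    url = f"https://www.reddit.com/r/{subreddit}/{sort}.json"
    posts, after = [], None
    while len(posts) < max_results:
        params = {"limit": min(100, max_results - len(posts))}
        if after:
            params["after"] = after
        data = get_json(url, params)["data"]
        posts.extend(child["data"] for child in data["children"])
        after = data.get("after")
        if not after:      # Reddit returns null once the listing is exhausted
            break
        time.sleep(1)      # 1 s politeness delay between requests
    return posts[:max_results]
```

Injecting the HTTP helper keeps the pagination logic independent of networking and retry concerns.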

***

### FAQ

**Why do I need a proxy?**
Reddit blocks requests from datacenter IPs. Residential proxies (included by default via Apify) route requests through real residential IPs.

**Can I scrape private subreddits?**
No. Only public subreddits and posts are accessible without authentication.

**How many results can I get?**
Up to 1,000 per subreddit or search. Reddit's API limits listings to ~1,000 items. For more coverage, use multiple queries with different sort/time combinations.
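
One way to stretch past that cap, sketched here using this Actor's documented `sortBy` and `timeFilter` values, is to fan a single subreddit out into several run inputs:

```python
from itertools import product

def input_variants(subreddit: str) -> list[dict]:
    """Generate run inputs covering several sort/time combinations."""
    sorts = ["top", "controversial"]
    times = ["day", "week", "month", "year", "all"]
    return [
        {"subreddits": [subreddit], "sortBy": s, "timeFilter": t, "maxResults": 1000}
        for s, t in product(sorts, times)
    ]
```

The listings overlap, so deduplicate the merged datasets on the post `id` field.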

**Is the sentiment analysis accurate?**
AFINN-165 is a well-established lexicon-based approach. It captures overall sentiment direction and works best for English text. For production NLP, export the data and use a specialized tool.

**How fast is it?**
~5 seconds for 50 posts, ~60 seconds for 500 posts with comments. No browser startup overhead.

**Can I use this with Make/Zapier/n8n?**
Yes. Use Apify's built-in integrations or webhooks to trigger workflows when a run completes.

***

### Connect with Make, Zapier & n8n

This Actor integrates with any automation platform via the Apify API.

#### Make (Integromat)

1. Add an **Apify module** in your Make scenario
2. Select **Run Actor** and choose this Actor
3. Configure the input (paste your JSON)
4. Add a **Get Dataset Items** module to retrieve results
5. Connect to Google Sheets, HubSpot, Slack, or any other app

#### Zapier

1. Use the **Apify integration** on Zapier
2. Set trigger: **Actor Run Finished**
3. Action: **Get Dataset Items**
4. Send results to your CRM, email tool, or spreadsheet

#### n8n

1. Add an **HTTP Request** node to call the Apify API
2. POST to `https://api.apify.com/v2/acts/dltik~reddit-scraper/runs`
3. Wait for completion, then fetch dataset items
4. Route results to any n8n node

#### Webhooks

Set up a webhook to get notified when a run finishes:

```python
run = client.actor("dltik/reddit-scraper").call(
    run_input={...},
    webhooks=[{
        "eventTypes": ["ACTOR.RUN.SUCCEEDED"],
        "requestUrl": "https://your-webhook-url.com"
    }]
)
```

***

### Other scrapers by dltik

| Actor | What it does | Price |
|-------|-------------|-------|
| [Google Maps Email Extractor](https://apify.com/dltik/google-maps-email-extractor) | Extract emails, phones, WhatsApp from Google Maps businesses | $3/1K |
| [Facebook Ads Scraper](https://apify.com/dltik/facebook-ads-scraper) | Scrape Meta Ad Library — ad copy, creatives, CTA links | $1/1K |
| [TikTok Scraper](https://apify.com/dltik/tiktok-scraper) | Scrape profiles, videos, hashtags, search, trending | $1/1K |
| [TikTok Video Downloader](https://apify.com/dltik/tiktok-video-downloader) | Download TikTok videos without watermark | $5/1K |
| [Trustpilot Scraper](https://apify.com/dltik/trustpilot-scraper) | Scrape reviews, ratings, company profiles with sentiment | $0.50/1K |

# Actor input Schema

## `urls` (type: `array`):

Paste any Reddit URL — subreddit, post, user profile, or search. Auto-detected. Examples: 'https://reddit.com/r/technology', 'https://reddit.com/user/spez'

## `subreddits` (type: `array`):

Subreddit names to scrape (without r/), e.g. `['technology', 'askreddit', 'programming']`

## `searchTerms` (type: `string`):

Search Reddit for posts matching these keywords.

## `maxResults` (type: `integer`):

Maximum number of posts to extract per subreddit/search/URL.

## `includeComments` (type: `boolean`):

Fetch top comments for each post. Adds nested comment threads with author, score, and depth.

## `commentsPerPost` (type: `integer`):

Max comments to fetch per post (if includeComments is true).

## `sortBy` (type: `string`):

How to sort results.

## `timeFilter` (type: `string`):

Time range for 'top' and 'controversial' sorts.

## `enableSentiment` (type: `boolean`):

Add AFINN-165 sentiment scoring to each post and comment. Adds `sentiment_score` and `sentiment_label` (positive/negative/neutral) fields.

## `minScore` (type: `integer`):

Only include posts with at least this many upvotes. 0 = no filter.

## `proxyConfig` (type: `object`):

Reddit blocks datacenter IPs. Use Apify residential proxies for reliable access. Required for production use.

## Actor input object example

```json
{
  "maxResults": 50,
  "includeComments": false,
  "commentsPerPost": 20,
  "sortBy": "hot",
  "timeFilter": "week",
  "enableSentiment": false,
  "minScore": 0,
  "proxyConfig": {
    "useApifyProxy": true,
    "apifyProxyGroups": [
      "RESIDENTIAL"
    ]
  }
}
```

# Actor output Schema

## `results` (type: `string`):

Dataset containing Reddit posts, comments, and user profiles with sentiment scores, upvotes, awards, and nested comment threads.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {};

// Run the Actor and wait for it to finish
const run = await client.actor("dltik/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {}

# Run the Actor and wait for it to finish
run = client.actor("dltik/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{}' |
apify call dltik/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=dltik/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper — Posts, Comments & Sentiment Analysis",
        "description": "Scrape Reddit posts, comments, user profiles, and search results. Built-in AFINN-165 sentiment analysis, nested comment threads, universal URL input. HTTP-only, no browser needed. 256MB memory. $2 per 1,000 results.",
        "version": "1.0",
        "x-build-id": "OCLe0tHMLluYtz6mN"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/dltik~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-dltik-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/dltik~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-dltik-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/dltik~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-dltik-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "urls": {
                        "title": "Reddit URLs",
                        "type": "array",
                        "description": "Paste any Reddit URL — subreddit, post, user profile, or search. Auto-detected. Examples: 'https://reddit.com/r/technology', 'https://reddit.com/user/spez'",
                        "items": {
                            "type": "string"
                        }
                    },
                    "subreddits": {
                        "title": "Subreddits",
                        "type": "array",
                        "description": "Subreddit names to scrape (without r/). e.g. ['technology', 'askreddit', 'programming']",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchTerms": {
                        "title": "Search keywords",
                        "type": "string",
                        "description": "Search Reddit for posts matching these keywords."
                    },
                    "maxResults": {
                        "title": "Max results",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Maximum number of posts to extract per subreddit/search/URL.",
                        "default": 50
                    },
                    "includeComments": {
                        "title": "Include comments",
                        "type": "boolean",
                        "description": "Fetch top comments for each post. Adds nested comment threads with author, score, and depth.",
                        "default": false
                    },
                    "commentsPerPost": {
                        "title": "Comments per post",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Max comments to fetch per post (if includeComments is true).",
                        "default": 20
                    },
                    "sortBy": {
                        "title": "Sort by",
                        "enum": [
                            "hot",
                            "new",
                            "top",
                            "rising",
                            "controversial"
                        ],
                        "type": "string",
                        "description": "How to sort results.",
                        "default": "hot"
                    },
                    "timeFilter": {
                        "title": "Time filter",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range for 'top' and 'controversial' sorts.",
                        "default": "week"
                    },
                    "enableSentiment": {
                        "title": "Sentiment analysis",
                        "type": "boolean",
                        "description": "Add AFINN-165 sentiment scoring to each post and comment. Adds sentiment_score and sentiment_label (positive/negative/neutral) fields.",
                        "default": false
                    },
                    "minScore": {
                        "title": "Minimum score",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Only include posts with at least this many upvotes. 0 = no filter.",
                        "default": 0
                    },
                    "proxyConfig": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Reddit blocks datacenter IPs. Use Apify residential proxies for reliable access. Required for production use.",
                        "default": {
                            "useApifyProxy": true,
                            "apifyProxyGroups": [
                                "RESIDENTIAL"
                            ]
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
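To make the `usage` fields concrete: each key in `usageUsd` is the dollar cost of the matching counter in `usage`, and `usageTotalUsd` is their sum. The sketch below builds a run object with hypothetical example values (mirroring the examples in the schema above, not real billing data) and checks that relationship.

```python
# Illustrative only: a run object shaped like the schema above,
# populated with the schema's example values (not real billing data).
run = {
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_READS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_READS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,  # the only non-zero cost here
        "KEY_VALUE_STORE_LISTS": 0,
        "REQUEST_QUEUE_READS": 0,
        "REQUEST_QUEUE_WRITES": 0,
    },
}

# usageTotalUsd should equal the sum of the per-resource usageUsd entries.
total = sum(run["usageUsd"].values())
assert abs(total - run["usageTotalUsd"]) < 1e-12
print(f"total cost: ${total:.5f}")
```

In a real integration you would read these fields from the run object returned by the API client (e.g. `client.run(run_id).get()` in `apify-client` for Python) instead of constructing the dict by hand.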
