# Reddit Scraper + AI Lead Finder & Sentiment Analysis (`buseta/reddit-scraper`) Actor

Scrape Reddit posts and comments with AI-powered lead discovery. AI scores every post for your business, suggests replies, and analyzes comment sentiment. Search by subreddit, keyword, or URL. No Reddit API key needed. $2/1K posts.

- **URL**: https://apify.com/buseta/reddit-scraper.md
- **Developed by:** [buseta](https://apify.com/buseta) (community)
- **Categories:** AI, Agents, Lead generation
- **Stats:** 1 total user, 0 monthly users, 0.0% runs succeeded, 1 bookmark
- **User rating**: No ratings yet

## Pricing

from $2.00 / 1,000 posts scraped

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper + AI Lead Finder & Sentiment Analysis

Scrape Reddit posts and comments from any subreddit with AI-powered lead discovery. The AI reads every post, scores how relevant it is for your business, and writes a suggested reply you can post. The cheapest Reddit scraper on Apify at **$2 per 1,000 posts**.

### What You Get

For each post: title, content, author, score, comment count, URL, flair, timestamp.

Plus optional AI features:
- **AI Lead Finder** — Scores every post for your business. Tells you which ones to reply to and writes the reply for you.
- **AI Subreddit Summary** — What's the community talking about? Top topics, sentiment, opportunities.
- **AI Comment Sentiment** — What does the discussion actually say? Positive/negative split, key takeaways.

### Who Is This For?

- **Sales teams & agencies** — Monitor subreddits for people asking for your service. Get notified with a ready-to-post reply.
- **SaaS founders** — Find threads where people complain about competitors. Show up with your solution.
- **Content marketers** — Discover trending topics and recurring questions in your niche.
- **Brand managers** — Track what people say about your brand or product across subreddits.
- **Market researchers** — Analyze community sentiment, trends, and pain points at scale.

### 3 Scraping Modes

#### 1. Subreddit Feed
Get the latest posts from one or more subreddits.
```json
{
    "mode": "subreddit",
    "subreddits": ["leadgeneration", "sales", "coldoutreach"],
    "sort": "new",
    "time_filter": "week",
    "max_posts": 50
}
````

#### 2. Keyword Search

Search for specific topics within subreddits or across all of Reddit.

```json
{
    "mode": "search",
    "subreddits": ["webdesign", "smallbusiness"],
    "search_query": "need a website",
    "sort": "new",
    "time_filter": "month",
    "max_posts": 30
}
```

#### 3. Post URLs

Scrape specific posts with full comment threads.

```json
{
    "mode": "post_urls",
    "post_urls": [
        "https://reddit.com/r/LeadGeneration/comments/1sc839m/web_designer_looking_for_reliable_lead_gen_methods/",
        "https://reddit.com/r/solar/comments/1s49ir2/thinking_about_solar_again_still_worth_it_in_2026/"
    ],
    "max_comments_per_post": 50
}
```

### Use Case Examples

#### Find Leads for Your Business

You run a web design agency. Monitor r/smallbusiness, r/webdesign, and r/Entrepreneur for people who need websites. AI scores each post and writes your reply.

```json
{
    "mode": "subreddit",
    "subreddits": ["smallbusiness", "webdesign", "Entrepreneur"],
    "sort": "new",
    "time_filter": "week",
    "max_posts": 50,
    "ai_lead_finder": true,
    "ai_business_description": "I build websites for small businesses. I help them get found online with modern, mobile-friendly sites."
}
```

**What you get per post:**

```json
{
    "title": "Just started my bakery, do I really need a website?",
    "ai_relevance_score": 92,
    "ai_opportunity_type": "direct_prospect",
    "ai_reason": "Business owner questioning whether they need a website — perfect prospect for web design pitch",
    "ai_suggested_reply": "short answer yes. most people search 'bakery near me' on google and if you dont have a site you wont show up. a simple one-page site with your menu, hours, and location is usually enough to start. happy to help if you need pointers"
}
```

#### Monitor Competitors

You sell a CRM tool and want to find threads where people complain about HubSpot or Salesforce.

```json
{
    "mode": "search",
    "subreddits": ["sales", "smallbusiness", "SaaS"],
    "search_query": "hubspot alternative OR salesforce frustrated OR crm recommendation",
    "sort": "new",
    "time_filter": "month",
    "max_posts": 30,
    "ai_lead_finder": true,
    "ai_business_description": "I built a simple CRM for small sales teams. Half the price of HubSpot with less bloat."
}
```

#### Analyze a Community

You're launching a product and want to understand what r/fitness talks about before creating content.

```json
{
    "mode": "subreddit",
    "subreddits": ["fitness"],
    "sort": "top",
    "time_filter": "month",
    "max_posts": 100,
    "ai_subreddit_summary": true
}
```

**AI Subreddit Summary output:**

```json
{
    "type": "ai_subreddit_summary",
    "top_topics": ["home gym setups", "protein recommendations", "beginner programs", "injury recovery", "cutting vs bulking"],
    "sentiment": "Generally positive and supportive community, with frustration around misinformation from social media fitness influencers",
    "recurring_questions": ["What program should I start with?", "How much protein do I actually need?", "Is Planet Fitness worth it?"],
    "opportunities": "High demand for beginner-friendly content. Supplement and home gym equipment recommendations get heavy engagement."
}
```

#### Track Brand Sentiment

Your brand just launched a new product. See what Reddit thinks.

```json
{
    "mode": "search",
    "search_query": "iPhone 17",
    "sort": "new",
    "time_filter": "week",
    "max_posts": 30,
    "get_comments": true,
    "max_comments_per_post": 30,
    "ai_comment_sentiment": true
}
```

**Per-post sentiment output:**

```json
{
    "ai_comment_sentiment": {
        "positive_pct": 55,
        "negative_pct": 35,
        "neutral_pct": 10,
        "key_takeaways": ["Camera praised as best upgrade", "Battery life complaints dominating", "Price seen as too high for incremental update"],
        "discussion_summary": "Mixed reception. Camera improvements well-received but most commenters feel the price increase isn't justified by the spec bump."
    }
}
```
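
Per-post sentiment blocks like the one above can be aggregated client-side into a single brand-level split. A minimal sketch in Python, using the field names shown in the output example (the `average_sentiment` helper is mine, not part of the Actor):

```python
def average_sentiment(items):
    """Average the per-post ai_comment_sentiment splits across dataset items."""
    splits = [i["ai_comment_sentiment"] for i in items if i.get("ai_comment_sentiment")]
    if not splits:
        return None
    n = len(splits)
    return {
        key: round(sum(s[key] for s in splits) / n, 1)
        for key in ("positive_pct", "negative_pct", "neutral_pct")
    }

posts = [
    {"ai_comment_sentiment": {"positive_pct": 55, "negative_pct": 35, "neutral_pct": 10}},
    {"ai_comment_sentiment": {"positive_pct": 65, "negative_pct": 25, "neutral_pct": 10}},
]
print(average_sentiment(posts))
# {'positive_pct': 60.0, 'negative_pct': 30.0, 'neutral_pct': 10.0}
```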

### Output Fields

| Field | Description |
|-------|-------------|
| `subreddit` | Subreddit name |
| `post_id` | Reddit post ID |
| `title` | Post title |
| `author` | Post author |
| `content` | Post text content |
| `url` | Full Reddit URL |
| `external_url` | Link URL (for link posts) |
| `score` | Upvotes |
| `upvote_ratio` | Upvote percentage |
| `num_comments` | Total comment count |
| `flair` | Post flair/tag |
| `created_at` | Post timestamp (UTC) |
| `comments` | Array of comment objects (if enabled) |
| `ai_relevance_score` | Lead relevance 0-100 (if AI Lead Finder enabled) |
| `ai_opportunity_type` | `direct_prospect` / `recommendation_request` / `competitor_discussion` / `industry_discussion` / `not_relevant` |
| `ai_reason` | Why this score was given |
| `ai_suggested_reply` | Ready-to-post Reddit reply |
| `ai_comment_sentiment` | Comment sentiment analysis (if enabled) |

### Pricing

| Event | Price |
|-------|-------|
| Post scraped | $2.00 / 1,000 |
| Comment scraped | $0.10 / 1,000 |
| AI Lead Finder | $20.00 / 1,000 posts |
| AI Subreddit Summary | $1.00 / 1,000 posts |
| AI Comment Sentiment | $20.00 / 1,000 posts |
| Platform usage | Free |

#### Typical Run Costs

| Use Case | Posts | Comments | AI | Total Cost |
|----------|-------|----------|-----|-----------|
| Daily lead monitoring (3 subs × 25 posts) | $0.15 | — | — | **$0.15** |
| Lead finding with AI (75 posts + AI scorer) | $0.15 | — | $1.50 | **$1.65** |
| Competitor monitoring (50 posts + 25 comments each) | $0.10 | $0.13 | — | **$0.23** |
| Full brand analysis (100 posts + comments + sentiment) | $0.20 | $0.25 | $2.00 | **$2.45** |
| Subreddit trend report (200 posts + AI summary) | $0.40 | — | $0.20 | **$0.60** |
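
The totals above follow directly from the per-event prices. A small Python sketch of the same arithmetic, useful for budgeting a run before launching it (the `estimate_cost` helper is mine; prices are the ones from the table):

```python
# USD per 1,000 events, from the pricing table above.
PRICES = {
    "post": 2.00,
    "comment": 0.10,
    "ai_lead_finder": 20.00,
    "ai_subreddit_summary": 1.00,
    "ai_comment_sentiment": 20.00,
}

def estimate_cost(posts, comments=0, lead_finder=False,
                  subreddit_summary=False, comment_sentiment=False):
    """Estimate total run cost in USD from event counts and enabled AI features."""
    total = posts * PRICES["post"] / 1000
    total += comments * PRICES["comment"] / 1000
    if lead_finder:
        total += posts * PRICES["ai_lead_finder"] / 1000
    if subreddit_summary:
        total += posts * PRICES["ai_subreddit_summary"] / 1000
    if comment_sentiment:
        total += posts * PRICES["ai_comment_sentiment"] / 1000
    return round(total, 2)

print(estimate_cost(75, lead_finder=True))          # 1.65
print(estimate_cost(200, subreddit_summary=True))   # 0.6
```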

### Tips

- **AI Lead Finder works best with a specific business description** — "I sell X to Y" beats "marketing agency"
- **Search mode is powerful** — use OR operators: "need website OR looking for developer OR who built your site"
- **Sort by "new" for lead finding** — fresh posts get more engagement when you reply early
- **Sort by "top" for market research** — see what the community cares about most
- **No proxy needed for small runs** — Reddit's JSON API is lenient. Enable proxy for 500+ posts
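
The OR-operator tip is easy to automate when you monitor many phrases. A minimal sketch (the `build_or_query` helper is mine; quoting multi-word phrases keeps them matched as exact phrases, which Reddit search supports):

```python
def build_or_query(phrases):
    """Join search phrases with OR, quoting any multi-word phrase."""
    return " OR ".join(f'"{p}"' if " " in p else p for p in phrases)

query = build_or_query(["need website", "looking for developer", "who built your site"])
print(query)
# "need website" OR "looking for developer" OR "who built your site"
```

The result can be passed directly as the `search_query` input in 'search' mode.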

### Keywords

Reddit scraper, Reddit posts scraper, Reddit comments scraper, subreddit scraper, Reddit lead generation, Reddit monitoring, Reddit sentiment analysis, Reddit market research, Reddit brand monitoring, Reddit API alternative, Reddit data extraction, subreddit trends, Reddit outreach tool, AI lead finder Reddit, cold outreach Reddit

# Actor input Schema

## `mode` (type: `string`):

How to find posts.

• Subreddit — Get latest posts from one or more subreddits
• Search — Search for keywords within specific subreddits or across all of Reddit
• Post URLs — Scrape specific posts by URL (with comments)

## `subreddits` (type: `array`):

List of subreddit names (without r/). Used in 'subreddit' and 'search' modes.

## `search_query` (type: `string`):

Keyword to search for. Used in 'search' mode. Searches within the specified subreddits, or across all of Reddit if no subreddit is set.

## `post_urls` (type: `array`):

List of Reddit post URLs to scrape. Used in `post_urls` mode. Comments are always fetched for these posts.

## `sort` (type: `string`):

How to sort posts.

## `time_filter` (type: `string`):

Time range for top posts or search results.

## `max_posts` (type: `integer`):

Maximum number of posts to scrape per subreddit.

## `get_comments` (type: `boolean`):

Fetch comments for each post. Charged at $0.10 per 1,000 comments.

## `max_comments_per_post` (type: `integer`):

Maximum number of top-level comments to scrape per post.

## `ai_lead_finder` (type: `boolean`):

AI reads every post and scores how relevant it is for your business. Returns a relevance score (0-100), opportunity type, reason, and a suggested reply you can post. Requires 'Business description' below. Charged at $20 per 1,000 posts analyzed.

## `ai_business_description` (type: `string`):

Tell the AI what you sell or what you're looking for. Example: 'I sell web design services to local businesses' or 'I'm looking for people who need help with solar panel installation'. Max 300 characters.

## `ai_subreddit_summary` (type: `boolean`):

After scraping, AI summarizes the subreddit: top topics, sentiment, recurring questions, and opportunities. Charged at $1 per 1,000 posts analyzed.

## `ai_comment_sentiment` (type: `boolean`):

AI analyzes comments on each post: discussion summary, sentiment split, key takeaways. Requires 'Scrape comments' to be enabled. Charged at $20 per 1,000 posts analyzed.

## `proxy_config` (type: `object`):

Proxy settings. Reddit works fine without proxy for small runs.

## Actor input object example

```json
{
  "mode": "subreddit",
  "subreddits": [
    "leadgeneration"
  ],
  "post_urls": [],
  "sort": "new",
  "time_filter": "week",
  "max_posts": 25,
  "get_comments": false,
  "max_comments_per_post": 25,
  "ai_lead_finder": false,
  "ai_subreddit_summary": false,
  "ai_comment_sentiment": false,
  "proxy_config": {
    "useApifyProxy": true
  }
}
```

# Actor output Schema

## `results` (type: `string`):

Reddit posts and comments data

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "mode": "subreddit",
    "subreddits": [
        "leadgeneration"
    ],
    "post_urls": [],
    "max_posts": 25,
    "max_comments_per_post": 25,
    "proxy_config": {
        "useApifyProxy": true
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("buseta/reddit-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "mode": "subreddit",
    "subreddits": ["leadgeneration"],
    "post_urls": [],
    "max_posts": 25,
    "max_comments_per_post": 25,
    "proxy_config": { "useApifyProxy": True },
}

# Run the Actor and wait for it to finish
run = client.actor("buseta/reddit-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "mode": "subreddit",
  "subreddits": [
    "leadgeneration"
  ],
  "post_urls": [],
  "max_posts": 25,
  "max_comments_per_post": 25,
  "proxy_config": {
    "useApifyProxy": true
  }
}' |
apify call buseta/reddit-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=buseta/reddit-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Scraper + AI Lead Finder & Sentiment Analysis",
        "description": "Scrape Reddit posts and comments with AI-powered lead discovery. AI scores every post for your business, suggests replies, and analyzes comment sentiment. Search by subreddit, keyword, or URL. No Reddit API key needed. $2/1K posts.",
        "version": "1.0",
        "x-build-id": "3OAn1Shselnlh0Xz1"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/buseta~reddit-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-buseta-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/buseta~reddit-scraper/runs": {
            "post": {
                "operationId": "runs-sync-buseta-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/buseta~reddit-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-buseta-reddit-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "Scrape mode",
                        "enum": [
                            "subreddit",
                            "search",
                            "post_urls"
                        ],
                        "type": "string",
                        "description": "How to find posts.\n\n• Subreddit — Get latest posts from one or more subreddits\n• Search — Search for keywords within specific subreddits or across all of Reddit\n• Post URLs — Scrape specific posts by URL (with comments)",
                        "default": "subreddit"
                    },
                    "subreddits": {
                        "title": "Subreddits",
                        "type": "array",
                        "description": "List of subreddit names (without r/). Used in 'subreddit' and 'search' modes."
                    },
                    "search_query": {
                        "title": "Search query",
                        "type": "string",
                        "description": "Keyword to search for. Used in 'search' mode. Searches within the specified subreddits, or across all of Reddit if no subreddit is set."
                    },
                    "post_urls": {
                        "title": "Post URLs",
                        "type": "array",
                        "description": "List of Reddit post URLs to scrape. Used in 'post_urls' mode. Always fetches comments for these."
                    },
                    "sort": {
                        "title": "Sort by",
                        "enum": [
                            "new",
                            "hot",
                            "top",
                            "rising"
                        ],
                        "type": "string",
                        "description": "How to sort posts.",
                        "default": "new"
                    },
                    "time_filter": {
                        "title": "Time filter (for 'top' and 'search' sort)",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Time range for top posts or search results.",
                        "default": "week"
                    },
                    "max_posts": {
                        "title": "Max posts",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Maximum number of posts to scrape per subreddit.",
                        "default": 25
                    },
                    "get_comments": {
                        "title": "Scrape comments",
                        "type": "boolean",
                        "description": "Fetch comments for each post. Charged at $0.10 per 1,000 comments.",
                        "default": false
                    },
                    "max_comments_per_post": {
                        "title": "Max comments per post",
                        "minimum": 1,
                        "maximum": 500,
                        "type": "integer",
                        "description": "Maximum number of top-level comments to scrape per post.",
                        "default": 25
                    },
                    "ai_lead_finder": {
                        "title": "AI Lead Finder",
                        "type": "boolean",
                        "description": "AI reads every post and scores how relevant it is for your business. Returns a relevance score (0-100), opportunity type, reason, and a suggested reply you can post. Requires 'Business description' below. Charged at $20 per 1,000 posts analyzed.",
                        "default": false
                    },
                    "ai_business_description": {
                        "title": "Your business description (for AI Lead Finder)",
                        "maxLength": 300,
                        "type": "string",
                        "description": "Tell the AI what you sell or what you're looking for. Example: 'I sell web design services to local businesses' or 'I'm looking for people who need help with solar panel installation'. Max 300 characters."
                    },
                    "ai_subreddit_summary": {
                        "title": "AI Subreddit Summary",
                        "type": "boolean",
                        "description": "After scraping, AI summarizes the subreddit: top topics, sentiment, recurring questions, and opportunities. Charged at $1 per 1,000 posts analyzed.",
                        "default": false
                    },
                    "ai_comment_sentiment": {
                        "title": "AI Comment Sentiment (per post)",
                        "type": "boolean",
                        "description": "AI analyzes comments on each post: discussion summary, sentiment split, key takeaways. Requires 'Scrape comments' to be enabled. Charged at $20 per 1,000 posts analyzed.",
                        "default": false
                    },
                    "proxy_config": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Proxy settings. Reddit works fine without proxy for small runs."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
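The run object described by the schema above carries per-event usage accounting (`usage`, `usageUsd`, `usageTotalUsd`) alongside references to the run's default storages. As a minimal sketch of consuming those fields, here is how you might summarize the cost of a finished run, the field names come from the schema above, but the sample data is illustrative, not real Actor output:

```python
# Sketch: summarizing the usage fields of an Apify run object.
# Field names (usageTotalUsd, usageUsd, defaultDatasetId, defaultKeyValueStoreId)
# follow the response schema above; the sample dict is hypothetical example data.

def summarize_run_usage(run: dict) -> dict:
    """Return a compact summary of a run's cost and storage references."""
    usage_usd = run.get("usageUsd", {})
    # Keep only the events that actually incurred a nonzero cost.
    billed_events = {event: cost for event, cost in usage_usd.items() if cost}
    return {
        "totalUsd": run.get("usageTotalUsd", 0.0),
        "billedEvents": billed_events,
        "datasetId": run.get("defaultDatasetId"),
        "keyValueStoreId": run.get("defaultKeyValueStoreId"),
    }

if __name__ == "__main__":
    sample_run = {
        "usageTotalUsd": 0.00005,
        "usageUsd": {"ACTOR_COMPUTE_UNITS": 0, "KEY_VALUE_STORE_WRITES": 0.00005},
        "defaultDatasetId": "abc123",
        "defaultKeyValueStoreId": "def456",
    }
    print(summarize_run_usage(sample_run))
```

In a real integration you would obtain `run` from the Apify client (for example, the value returned when calling an Actor), then use `datasetId` to fetch the scraped items from the default dataset.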
