# Reddit Lead Monitor: Subreddit and Keyword Alert Feed (`scrapemint/reddit-lead-monitor`) Actor

Watches subreddits for posts that match your keywords, upvote floor, and age window. Dedupes across runs so you only get new matches. Output JSON, CSV, or Excel. For SaaS founders, marketers, and support teams hunting leads and brand mentions on Reddit.

- **URL**: https://apify.com/scrapemint/reddit-lead-monitor.md
- **Developed by:** [Kennedy Mutisya](https://apify.com/scrapemint) (community)
- **Categories:** SEO tools, Agents, AI
- **Stats:** 2 total users, 1 monthly user, 100.0% of runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per usage

This Actor is paid per platform usage. The Actor itself is free to use; you only pay for the Apify platform usage, which gets cheaper on higher subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-usage

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in the key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
Actor is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Scraper and Subreddit Lead Monitor

Watch any subreddit or search query for new posts that match your keywords, upvote floor, and age window. Export post ID, title, body, author, flair, permalink, upvotes, comment count, and timestamp. Dedupe across runs so you only ever see new matches.

Built for SaaS founders, demand gen marketers, community managers, customer support teams, and brand monitors who need Reddit data without the Reddit API OAuth dance or a $200 per month mention tool.

---

### Who uses this Reddit monitor

```mermaid
flowchart TD
    A[SaaS founders] -->|Find users complaining<br/>about competitors| D[Reddit Lead<br/>Feed]
    B[Demand gen] -->|Track category<br/>conversations| D
    C[Support teams] -->|Catch bug reports<br/>and rage posts| D
    E[Community PMs] -->|See every mention<br/>of your product| D
    D --> F[Outbound lead list]
    D --> G[Brand mention tracker]
    D --> H[Support ticket triage]
````

| Role | What this Reddit scraper unlocks |
|---|---|
| **SaaS founder** | Every post in r/SaaS, r/startups, r/Entrepreneur that mentions your category, fed into a lead list |
| **Demand gen marketer** | Category conversations surfaced daily, so your SDR team reaches out within hours |
| **Support team** | Rage posts and bug reports against your product caught before they hit 500 upvotes |
| **Community manager** | Mentions of your brand on every subreddit, not just the one you run |
| **Market researcher** | Raw user language for positioning, onboarding copy, and interview prep |

***

### How the Reddit scraper works

```mermaid
flowchart LR
    A[Subreddits +<br/>Search queries] --> B[Reddit public<br/>JSON listings]
    B --> C[Paginate via after tokens]
    C --> D{Filter}
    D -->|Keywords match| E[Push to dataset]
    D -->|Upvote floor| E
    D -->|Age window| E
    E --> F[KV store SEEN_IDS]
    F -->|Next run| G[Skip already seen]
    G --> E
```

Pass a list of subreddits and, optionally, a list of keywords. The Actor hits Reddit's public JSON endpoints (the same ones the web UI uses), paginates through listings via `after` tokens, filters locally on your keywords, upvote floor, comment count, and post age, then pushes matching posts to the dataset.
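The listing-pagination loop can be sketched as follows. This is an illustration of Reddit's public listing mechanics under the description above, not the Actor's actual source; the function names are mine:

```python
import json
import urllib.parse
import urllib.request

def extract_page(listing):
    """Pull the post objects and the pagination cursor out of one listing response."""
    data = listing["data"]
    posts = [child["data"] for child in data["children"]]
    return posts, data["after"]  # the cursor is None on the last page

def fetch_posts(subreddit, sort="new", max_posts=300):
    """Walk a subreddit's public JSON listing until max_posts or the final page."""
    posts, after = [], None
    while len(posts) < max_posts:
        params = {"limit": 100}
        if after:
            params["after"] = after  # cursor from the previous page
        url = (f"https://www.reddit.com/r/{subreddit}/{sort}.json?"
               + urllib.parse.urlencode(params))
        # Reddit rejects requests with default library user agents
        req = urllib.request.Request(url, headers={"User-Agent": "lead-monitor-sketch/0.1"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            page, after = extract_page(json.load(resp))
        posts.extend(page)
        if after is None:
            break
    return posts[:max_posts]
```

The keyword, upvote, and age filters then run locally over the returned post dicts, which is why there is no per-keyword cost.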

Every post ID it pushes is stored in the key-value store under `SEEN_IDS`. On the next run, already-seen IDs are skipped. Schedule this Actor every 15 minutes and you get a deduplicated feed of new matching posts, nothing else.
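Reduced to its core, that dedupe step is a persisted set of IDs. A minimal sketch (the `SEEN_IDS` key comes from the Actor's docs; the helper name is mine):

```python
def filter_new(posts, seen_ids):
    """Keep only posts whose IDs are not in seen_ids, then record the new IDs.

    In the Actor, seen_ids is loaded from (and saved back to) the key-value
    store under SEEN_IDS between runs; here it is just an in-memory set.
    """
    fresh = [post for post in posts if post["id"] not in seen_ids]
    seen_ids.update(post["id"] for post in fresh)
    return fresh
```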

No OAuth, no client ID, no developer app. The public JSON endpoints require no credentials.

***

### Reddit mention tools vs this scraper

```mermaid
flowchart LR
    subgraph Paid[F5Bot, Brand24, Mention]
        A1[$49 to $299 per month]
        A2[Seat licensed]
        A3[Keyword cap per plan]
    end
    subgraph Actor[This actor]
        B1[Pay per post]
        B2[Unlimited keywords]
        B3[You own the data]
    end
    Paid -.-> X[Pick based on<br/>your needs]
    Actor --> X
```

| Feature | Reddit mention SaaS | This actor |
|---|---|---|
| Pricing | $49 to $299 per month, flat | Pay per post, first 50 per run free |
| Keyword cap | 3 to 50 per plan tier | Unlimited |
| Subreddit targeting | Global only, hard to scope | Any subreddit or list of subreddits |
| Data ownership | Vendor hosts behind a login | Raw JSON in your Apify account |
| Scheduling | Built in, hourly at best | Apify Scheduler every 1 minute |
| Dedup across runs | Yes, but vendor controlled | Yes, stored in your key value store |
| Export format | CSV or email digest | JSON, CSV, Excel, or API |

***

### Quick start

Watch r/SaaS and r/startups for posts mentioning "scraper" or "scraping tool", posted in the last 24 hours:

```bash
curl -X POST "https://api.apify.com/v2/acts/scrapemint~reddit-lead-monitor/run-sync-get-dataset-items?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "subreddits": ["SaaS", "startups"],
    "keywords": ["scraper", "scraping tool"],
    "sortBy": "new",
    "maxAgeHours": 24,
    "minUpvotes": 0,
    "dedupe": true
  }'
```

Track brand mentions across all of Reddit, last 7 days, only posts with 10+ upvotes:

```json
{
  "searchQueries": ["YourBrandName", "yourbrand.com"],
  "sortBy": "new",
  "maxAgeHours": 168,
  "minUpvotes": 10,
  "maxPostsTotal": 500
}
```

Watch a single subreddit on the "hot" sort, no keyword filter, just the top of the feed:

```json
{
  "subreddits": ["Entrepreneur"],
  "sortBy": "hot",
  "maxPostsPerSource": 50
}
```

***

### What one post record looks like

```json
{
  "postId": "1c8xyzq",
  "fullId": "t3_1c8xyzq",
  "subreddit": "SaaS",
  "subredditPrefixed": "r/SaaS",
  "title": "Looking for a cheap Reddit scraper, any recs?",
  "selftext": "Our mention tool costs $299 a month and I only need...",
  "author": "bootstrapdev",
  "url": "https://www.reddit.com/r/SaaS/comments/1c8xyzq/...",
  "permalink": "https://www.reddit.com/r/SaaS/comments/1c8xyzq/looking_for_a_cheap_reddit_scraper/",
  "domain": "self.SaaS",
  "flair": "Question",
  "upvotes": 42,
  "upvoteRatio": 0.96,
  "numComments": 18,
  "score": 42,
  "createdAt": "2026-04-19T10:14:00.000Z",
  "isSelf": true,
  "over18": false,
  "matchedKeywords": ["scraper"],
  "sourceKind": "subreddit",
  "sourceValue": "SaaS",
  "scrapedAt": "2026-04-19T19:30:00.000Z"
}
```

Every row: post ID, subreddit, title, body, author, permalink, upvotes, comment count, flair, created timestamp, matched keywords, and which subreddit or query surfaced it.

***

### Pricing

First 50 posts per run are free. After that you pay per post extracted. No seat licenses. No tier gating. A 500-post run lands under $1 on the Apify free plan.

***

### FAQ

**Can this scrape any subreddit?**
Yes, any public subreddit. Private subreddits require authentication, which this Actor does not support.

**How many posts per subreddit?**
Reddit lists up to 1000 posts per sort. This Actor paginates up to `maxPostsPerSource` (default 100). Raise it to 1000 for full listing depth.

**How do I track brand mentions across all of Reddit?**
Use `searchQueries` instead of `subreddits`. Add your brand name, domain, and common misspellings. The Actor hits Reddit's global search endpoint.
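For illustration, the global search listing uses the same `after` cursor mechanics as a subreddit listing, just a different path. The URL shape below is Reddit's public `search.json` endpoint; the helper name is mine:

```python
from urllib.parse import urlencode

def search_url(query, sort="new", limit=100, after=None):
    """Build a Reddit global-search listing URL; pass `after` to page forward."""
    params = {"q": query, "sort": sort, "limit": limit}
    if after:
        params["after"] = after  # cursor from the previous page of results
    return "https://www.reddit.com/search.json?" + urlencode(params)
```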

**Is scraping Reddit legal?**
Reddit serves these JSON endpoints publicly, without authentication, to the same web UI everyone uses. Whether and how you may use the data depends on Reddit's current user agreement and your jurisdiction, so review the terms and respect rate limits, especially for commercial use.

**Does it dedupe so I do not get the same post twice?**
Yes. Post IDs are stored in the key value store under `SEEN_IDS`. Every run skips IDs already seen. Set `dedupe: false` to disable.

**Can I run it on a schedule?**
Yes. Use the Apify Scheduler to run every 15 minutes, every hour, or on your own cron. Pair it with a webhook to push new matches straight to Slack or your CRM.
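For example, a webhook consumer that forwards new matches to Slack might format dataset items like this. The field names follow the record example above; the webhook URL is a placeholder you would get from Slack when creating an incoming webhook:

```python
import json
import urllib.request

def to_slack_payload(post):
    """Turn one dataset item into a Slack incoming-webhook payload."""
    return {
        "text": (f"New match in r/{post['subreddit']}: "
                 f"<{post['permalink']}|{post['title']}> "
                 f"({post['upvotes']} upvotes, {post['numComments']} comments)")
    }

def notify_slack(post, webhook_url):
    """POST the payload to a Slack incoming webhook (URL is a placeholder)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(to_slack_payload(post)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30)
```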

**What about comments?**
This Actor pulls posts, not comments. A separate Reddit Comment Monitor Actor handles comment-level monitoring.

**Does it work for NSFW subreddits?**
Set `includeNSFW: true`. Off by default for business use cases.

***

### Related actors by Scrapemint

- **Upwork Opportunity Alert** for freelance lead generation
- **Trustpilot Brand Reputation** for DTC and ecommerce brands
- **Google Reviews Intelligence** for local businesses
- **Yelp Review Intelligence** for restaurants and service businesses
- **TripAdvisor Review Intelligence** for hotels and attractions
- **Amazon Review Intelligence** for product reviews and listings
- **App Store Review Scraper** for mobile apps on iOS and Android
- **Indeed Company Review Intelligence** for employer branding

Stack these to cover every public conversation surface one brand touches.

# Actor input Schema

## `subreddits` (type: `array`):

List of subreddit names (without r/). Example: SaaS, startups, smallbusiness. Leave empty if using searchQueries instead.

## `searchQueries` (type: `array`):

Reddit search queries across all of Reddit. Use instead of or alongside subreddits. Example: best scraping tool, alternative to AppFollow.

## `keywords` (type: `array`):

Only posts whose title or body contains any of these keywords are kept. Case insensitive. Leave empty to keep all posts from the listed subreddits.
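The filter described here amounts to a case-insensitive substring match over title and body. A sketch of equivalent logic (illustrative only, not the Actor's source):

```python
def keyword_filter(posts, keywords):
    """Keep posts whose title or body contains any keyword (case-insensitive).

    An empty keyword list keeps every post, matching the input's behavior.
    """
    if not keywords:
        return posts
    kept = []
    for post in posts:
        text = (post.get("title", "") + " " + post.get("selftext", "")).lower()
        matched = [kw for kw in keywords if kw.lower() in text]
        if matched:
            # record which keywords hit, like the matchedKeywords output field
            kept.append({**post, "matchedKeywords": matched})
    return kept
```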

## `sortBy` (type: `string`):

New returns the latest posts in the subreddit. Hot returns currently trending posts. Top returns the highest scoring posts in the timeWindow.

## `timeWindow` (type: `string`):

Only applies when sortBy is 'top'. Window for Reddit's top ranking.

## `maxPostsPerSource` (type: `integer`):

Reddit returns up to 100 posts per listing page. Pagination pulls more. This is the per source cap.

## `maxAgeHours` (type: `integer`):

Skip posts older than this many hours. 0 keeps all posts regardless of age.

## `minUpvotes` (type: `integer`):

Skip posts with fewer than this many upvotes. 0 keeps everything.

## `minComments` (type: `integer`):

Skip posts with fewer than this many comments. 0 keeps everything.

## `includeNSFW` (type: `boolean`):

Keep NSFW (over 18) posts. Off by default for business use.

## `dedupe` (type: `boolean`):

Skip post IDs pushed on previous runs. Stores IDs in the key-value store under `SEEN_IDS`. Turn off to return every matching post on every run.

## `maxPostsTotal` (type: `integer`):

Hard cap on posts pushed to the dataset per run. Controls cost.

## `proxyConfiguration` (type: `object`):

Apify proxy settings. Reddit's public JSON API is usually accessible without a proxy, but residential proxies help if rate limited.

## Actor input object example

```json
{
  "subreddits": [
    "SaaS",
    "startups"
  ],
  "sortBy": "new",
  "timeWindow": "day",
  "maxPostsPerSource": 100,
  "maxAgeHours": 24,
  "minUpvotes": 0,
  "minComments": 0,
  "includeNSFW": false,
  "dedupe": true,
  "maxPostsTotal": 200
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "subreddits": [
        "SaaS",
        "startups"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapemint/reddit-lead-monitor").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "subreddits": [
        "SaaS",
        "startups",
    ] }

# Run the Actor and wait for it to finish
run = client.actor("scrapemint/reddit-lead-monitor").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "subreddits": [
    "SaaS",
    "startups"
  ]
}' |
apify call scrapemint/reddit-lead-monitor --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapemint/reddit-lead-monitor",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Lead Monitor: Subreddit and Keyword Alert Feed",
        "description": "Watches subreddits for posts that match your keywords, upvote floor, and age window. Dedupes across runs so you only get new matches. Output JSON, CSV, or Excel. For SaaS founders, marketers, and support teams hunting leads and brand mentions on Reddit.",
        "version": "0.1",
        "x-build-id": "7mLwxVGhsYITQEjlA"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapemint~reddit-lead-monitor/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapemint-reddit-lead-monitor",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapemint~reddit-lead-monitor/runs": {
            "post": {
                "operationId": "runs-sync-scrapemint-reddit-lead-monitor",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapemint~reddit-lead-monitor/run-sync": {
            "post": {
                "operationId": "run-sync-scrapemint-reddit-lead-monitor",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "subreddits": {
                        "title": "Subreddits to watch",
                        "type": "array",
                        "description": "List of subreddit names (without r/). Example: SaaS, startups, smallbusiness. Leave empty if using searchQueries instead.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchQueries": {
                        "title": "Search queries (optional)",
                        "type": "array",
                        "description": "Reddit search queries across all of Reddit. Use instead of or alongside subreddits. Example: best scraping tool, alternative to AppFollow.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "keywords": {
                        "title": "Keywords (client side filter)",
                        "type": "array",
                        "description": "Only posts whose title or body contains any of these keywords are kept. Case insensitive. Leave empty to keep all posts from the listed subreddits.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "sortBy": {
                        "title": "Sort",
                        "enum": [
                            "new",
                            "hot",
                            "top",
                            "rising"
                        ],
                        "type": "string",
                        "description": "New returns the latest posts in the subreddit. Hot returns currently trending posts. Top returns the highest scoring posts in the timeWindow.",
                        "default": "new"
                    },
                    "timeWindow": {
                        "title": "Time window (for top sort)",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "Only applies when sortBy is 'top'. Window for Reddit's top ranking.",
                        "default": "day"
                    },
                    "maxPostsPerSource": {
                        "title": "Max posts per subreddit or query",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Reddit returns up to 100 posts per listing page. Pagination pulls more. This is the per source cap.",
                        "default": 100
                    },
                    "maxAgeHours": {
                        "title": "Max age in hours",
                        "minimum": 0,
                        "maximum": 8760,
                        "type": "integer",
                        "description": "Skip posts older than this many hours. 0 keeps all posts regardless of age.",
                        "default": 24
                    },
                    "minUpvotes": {
                        "title": "Minimum upvotes",
                        "minimum": 0,
                        "maximum": 1000000,
                        "type": "integer",
                        "description": "Skip posts with fewer than this many upvotes. 0 keeps everything.",
                        "default": 0
                    },
                    "minComments": {
                        "title": "Minimum comments",
                        "minimum": 0,
                        "maximum": 100000,
                        "type": "integer",
                        "description": "Skip posts with fewer than this many comments. 0 keeps everything.",
                        "default": 0
                    },
                    "includeNSFW": {
                        "title": "Include NSFW posts",
                        "type": "boolean",
                        "description": "Keep NSFW (over 18) posts. Off by default for business use.",
                        "default": false
                    },
                    "dedupe": {
                        "title": "Deduplicate across runs",
                        "type": "boolean",
                        "description": "Skip post IDs pushed on previous runs. Stores IDs in the key value store under SEEN_IDS. Turn off to return every matching post on every run.",
                        "default": true
                    },
                    "maxPostsTotal": {
                        "title": "Maximum posts per run",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Hard cap on posts pushed to the dataset per run. Controls cost.",
                        "default": 200
                    },
                    "proxyConfiguration": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Apify proxy settings. Reddit's public JSON API is usually accessible without a proxy, but residential proxies help if rate limited."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
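The `usageUsd` object above itemizes the run's cost per platform event, and `usageTotalUsd` is their sum. A minimal sketch of reading these fields, using the illustrative values from the schema's examples; in practice you would get the run object from the Python client (for instance via `client.run(run_id).get()`) rather than hard-coding it:

```python
# A run object shaped like the schema above; values are the schema's examples.
run = {
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_READS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_READS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
        "KEY_VALUE_STORE_LISTS": 0,
        "REQUEST_QUEUE_READS": 0,
        "REQUEST_QUEUE_WRITES": 0,
        "DATA_TRANSFER_INTERNAL_GBYTES": 0,
        "DATA_TRANSFER_EXTERNAL_GBYTES": 0,
        "PROXY_RESIDENTIAL_TRANSFER_GBYTES": 0,
        "PROXY_SERPS": 0,
    },
}

# The itemized USD amounts add up to the reported total (within float tolerance).
total = sum(run["usageUsd"].values())
assert abs(total - run["usageTotalUsd"]) < 1e-9

print(f"Run cost: ${total:.5f}")  # prints "Run cost: $0.00005"
```

This is useful for monitoring: after each monitor run you can log `usageTotalUsd`, or drill into `usageUsd` to see which event type (here, key-value store writes) drives the cost.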
