# ®️ Reddit Posts Intelligence Scraper (`lokki/reddit-posts-intelligence-scraper`) Actor

Posts-only Reddit scraper using public .json endpoints. Extracts posts and adds lead intent, sentiment, virality, quality, keyword matches, and RAG markdown.

- **URL**: https://apify.com/lokki/reddit-posts-intelligence-scraper.md
- **Developed by:** [Ian Dikhtiar](https://apify.com/lokki) (community)
- **Categories:** Agents, Automation, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% of runs succeeded
- **User rating**: No ratings yet

## Pricing

from $5.00 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Lead Intel Scraper

Reddit is where buyers complain before they book demos.

People ask for alternatives, rant about broken tools, compare competitors, describe painful workflows, and reveal exactly what they want next. The problem is that Reddit is noisy as hell.

This actor finds the useful posts and ranks them.

It scrapes public Reddit posts through Reddit's `.json` endpoints, then enriches every post with lead intent, urgency, sentiment, virality, quality, keyword matches, and RAG-ready markdown.

No Reddit OAuth. No official Reddit API key. No browser crawling.

### What this is really for

Use it when you want to find posts like:

- “What’s the best alternative to Apollo?”
- “I need a tool that can enrich leads without getting blocked.”
- “Has anyone found a cheaper way to monitor brand mentions?”
- “Our CRM is a mess — what should we switch to?”
- “Looking for software that handles outbound and compliance.”

Those are not just posts. They are demand signals.

### Who uses it

- **Founders** looking for early customers, competitor gaps, and raw market pain
- **Growth teams** monitoring Reddit for high-intent conversations
- **Sales teams** finding people actively asking for recommendations
- **Market researchers** collecting voice-of-customer data without manually scrolling Reddit
- **Content teams** discovering topics people actually care about
- **AI teams** building clean Reddit datasets for RAG, classification, and analysis

### What you get in each row

| Category | Fields | Why it matters |
|---|---|---|
| Reddit post | title, text, author, subreddit, permalink, timestamp | The actual post and source context |
| Engagement | score, upvote ratio, comments count | Shows whether the post has traction |
| Lead signals | lead intent score, urgency, matched keywords, signal explanations | Tells you which posts deserve attention first |
| Text signals | sentiment, quality score, spam/noise penalties | Helps separate useful pain from garbage |
| AI-ready text | RAG markdown with metadata | Ready for LLMs, embeddings, alerts, or CRM enrichment |

### The intelligence layer

#### Lead intent score

Every post gets a `lead_intent_score` from 0 to 100.

The score rises when a post looks commercially useful: recommendation requests, alternative searches, buying language, pain/problem language, strong keyword matches, and meaningful engagement.

High scores usually mean: “someone should look at this.”
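
Once the dataset is downloaded, filtering on this score is a one-liner. A minimal sketch, assuming rows follow the example output shape shown later in this README (the threshold of 80 is illustrative):

```python
# Keep only rows whose lead_intent_score meets a threshold.
# Assumes each row carries an "intelligence" object as in the example output.

def strong_leads(rows, min_score=80):
    """Return rows whose lead_intent_score is at or above min_score."""
    return [
        row for row in rows
        if row.get("intelligence", {}).get("lead_intent_score", 0) >= min_score
    ]

rows = [
    {"title": "Need a CRM alternative", "intelligence": {"lead_intent_score": 92}},
    {"title": "Meme post", "intelligence": {"lead_intent_score": 10}},
]
print([r["title"] for r in strong_leads(rows)])  # ['Need a CRM alternative']
```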

#### Lead urgency

Each post is labeled `low`, `medium`, or `high` urgency.

This makes it easy to route the best posts into Slack, a spreadsheet, a CRM, a lead review queue, or an LLM workflow.
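
A minimal routing sketch: group rows by the `lead_urgency` label and hand each bucket to a different destination. The destinations in the comments are illustrative; the urgency labels come from the Actor.

```python
# Group enriched rows into buckets by lead_urgency for downstream routing.
from collections import defaultdict

def route_by_urgency(rows):
    buckets = defaultdict(list)
    for row in rows:
        urgency = row.get("intelligence", {}).get("lead_urgency", "low")
        buckets[urgency].append(row)
    return buckets

rows = [
    {"title": "Switching CRMs this week", "intelligence": {"lead_urgency": "high"}},
    {"title": "Mild curiosity", "intelligence": {"lead_urgency": "low"}},
]
buckets = route_by_urgency(rows)
# buckets["high"]  -> Slack alert or CRM
# buckets["medium"] -> lead review queue
# buckets["low"]   -> spreadsheet for later
```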

#### Signal explanations

The actor does not just score posts silently. It tells you why a post was interesting.

Example signals:

- buying/recommendation intent
- pain/problem language
- fast engagement velocity
- negative sentiment risk
- possible spam/low-quality content

#### Sentiment and quality

Reddit contains gold and garbage in the same thread.

The actor scores sentiment and quality so you can find useful complaints, product feedback, and competitor frustration without drowning in memes, spam, and low-effort posts.

#### Virality velocity

A post with five comments in ten minutes can matter more than a post with fifty comments from last year.

Virality velocity helps surface discussions that are moving now.
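
The Actor's exact formula is not documented here, but a plausible reading of `virality_velocity_per_hour` is comments per hour since the post was created, which is easy to sanity-check yourself:

```python
# Illustrative sketch: comments per hour elapsed since post creation.
import time

def velocity_per_hour(num_comments, created_utc, now=None):
    """Engagement rate normalized by post age in hours."""
    now = now if now is not None else time.time()
    hours = max((now - created_utc) / 3600.0, 1e-9)  # avoid division by zero
    return num_comments / hours

# 5 comments in 10 minutes outruns 50 comments over a year:
recent = velocity_per_hour(5, created_utc=0, now=600)          # 30.0 per hour
old = velocity_per_hour(50, created_utc=0, now=365 * 24 * 3600)  # ~0.0057 per hour
```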

#### RAG-ready markdown

Every row includes `rag_markdown`, a clean markdown document containing the post title, body, subreddit, author, and source URL.

Use it for:

- vector databases
- LLM summarization
- lead qualification
- category research
- alerts and dashboards
- downstream enrichment
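
For the vector-database use case, a minimal preprocessing sketch: split each `rag_markdown` document into overlapping fixed-size chunks before embedding. The chunk sizes here are illustrative, and the embedding/upsert step is left to whatever store you use:

```python
# Split a rag_markdown document into overlapping chunks for embedding.
def chunk_markdown(text, size=800, overlap=100):
    """Return fixed-size character chunks with overlap between neighbors."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

rag_markdown = "# Post title\n\nBody text...\n\nSource: https://reddit.com/..."
chunks = chunk_markdown(rag_markdown, size=40, overlap=10)
# Each chunk would then be embedded and upserted into your vector store.
```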

### Example: find competitor alternatives

```json
{
  "queries": ["Apollo alternative", "best lead generation tool", "need sales intelligence software"],
  "subreddits": ["SaaS", "sales", "Entrepreneur"],
  "sort": "relevance",
  "time": "year",
  "maxResults": 100,
  "keywords": ["recommend", "alternative", "looking for", "need", "tool", "software"],
  "negativeKeywords": ["crypto", "casino", "airdrop", "giveaway"],
  "dropNegativeKeywordMatches": true,
  "excludeOver18": true
}
```

### Example output

```json
{
  "type": "post",
  "title": "Evaluating B2B lead generation tool - compliance friendly, enterprise ready",
  "author": "Additional-Pop8840",
  "subreddit": "Entrepreneur",
  "score": 2,
  "num_comments": 21,
  "permalink": "https://www.reddit.com/r/Entrepreneur/comments/...",
  "intelligence": {
    "lead_intent_score": 100,
    "lead_urgency": "high",
    "sentiment_label": "neutral",
    "virality_velocity_per_hour": 0.049,
    "quality_score": 100,
    "matched_keywords": ["tool"],
    "signals": ["buying/recommendation intent", "pain/problem language"]
  }
}
```

### Good search ideas

Try phrases that sound like real Reddit posts:

| Goal | Search examples |
|---|---|
| Find alternatives | `competitor alternative`, `switching from competitor`, `best alternative to competitor` |
| Find pain | `struggling with CRM`, `outbound is not working`, `lead data problem` |
| Find buyers | `need a tool for`, `looking for software`, `what should I use for` |
| Find feedback | `is product worth it`, `has anyone tried product`, `product review` |
| Find trends | `AI tool for sales`, `automated prospecting`, `brand monitoring reddit` |

### Input guide

| Input | Best use |
|---|---|
| `queries` | Search Reddit by buyer phrases, competitor names, pain points, or product categories |
| `subreddits` | Focus on communities where your buyers hang out |
| `startUrls` | Scrape specific Reddit URLs directly |
| `keywords` | Boost lead scoring for your preferred intent phrases |
| `negativeKeywords` | Penalize or remove noisy topics |
| `minLeadIntentScore` | Save only stronger leads; try 60+ or 80+ |
| `maxResults` | Control output size |

Use at least one of `queries`, `subreddits`, or `startUrls`.

### Data source

The actor uses public Reddit `.json` endpoints. It does not require Reddit OAuth or an official Reddit API key.

Reddit may throttle or block some cloud traffic, so Apify Proxy is enabled by default. The actor also uses delays, retries, pagination controls, and graceful partial results.
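
For context, Reddit exposes public JSON by appending `.json` to its listing and search URLs. A sketch of the endpoint shapes an approach like this relies on (parameter names match Reddit's public search API; this is background, not the Actor's internal code):

```python
# Build public Reddit .json endpoint URLs for a subreddit listing and a search.
from urllib.parse import urlencode

def subreddit_listing_url(subreddit, sort="new", limit=100):
    return f"https://www.reddit.com/r/{subreddit}/{sort}.json?{urlencode({'limit': limit})}"

def search_url(query, sort="relevance", time_filter="year", limit=100):
    params = {"q": query, "sort": sort, "t": time_filter, "limit": limit}
    return "https://www.reddit.com/search.json?" + urlencode(params)

print(subreddit_listing_url("SaaS"))
print(search_url("Apollo alternative"))
```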

### What this actor does not do

- It does not crawl full comment trees.
- It does not access private Reddit content.
- It does not recover deleted, gated, quarantined, or unavailable posts.
- It does not pretend heuristic scoring is perfect qualification.

This is intentionally posts-only. A dedicated comments actor is a cleaner, separate product.

# Actor input Schema

## `queries` (type: `array`):

Keywords or phrases to search across Reddit posts. Use phrases your buyers would actually write.

## `subreddits` (type: `array`):

Optional. Limit search to specific communities like SaaS, Entrepreneur, sales, marketing, SEO, ecommerce. Leave empty to search all Reddit.

## `startUrls` (type: `array`):

Optional. Paste Reddit listing, search, subreddit, or post URLs. The actor automatically uses the public .json version and saves post rows only.

## `maxResults` (type: `integer`):

Maximum number of enriched Reddit post rows to save.

## `sort` (type: `string`):

Use `relevance` for research, `new` for monitoring, `top`/`hot` for trending posts, `comments` for active discussions.

## `time` (type: `string`):

How far back Reddit should look for top/search results.

## `keywords` (type: `array`):

Posts containing these terms get a higher lead-intent score. Examples: recommend, alternative, looking for, need, tool, software, compare.

## `negativeKeywords` (type: `array`):

Terms that lower quality or can remove posts entirely. Examples: crypto, casino, giveaway, referral, airdrop.

## `minLeadIntentScore` (type: `integer`):

Optional. Set 60+ for stronger leads, 80+ for very high-intent posts. Leave empty to save everything and sort later.

## `dropNegativeKeywordMatches` (type: `boolean`):

If enabled, posts matching any noise/spam keyword are skipped instead of just penalized.

## `excludeOver18` (type: `boolean`):

Skip posts marked `over_18` by Reddit.

## `maxPagesPerSource` (type: `integer`):

Maximum paginated Reddit pages to process per search/subreddit/source URL.

## `proxyConfiguration` (type: `object`):

Keep Apify Proxy enabled. Reddit often blocks cloud datacenter traffic without it.

## `minDelayMs` (type: `integer`):

Minimum delay between requests in milliseconds. Higher is safer for large runs.

## `outputRaw` (type: `boolean`):

Turn on only for debugging or custom parsing. It makes rows much larger.

## Actor input object example

```json
{
  "queries": [
    "best CRM alternative",
    "need a lead generation tool",
    "Apollo alternative"
  ],
  "subreddits": [
    "SaaS",
    "Entrepreneur",
    "sales"
  ],
  "maxResults": 100,
  "sort": "relevance",
  "time": "year",
  "keywords": [
    "recommend",
    "alternative",
    "looking for",
    "need",
    "tool",
    "software",
    "compare"
  ],
  "negativeKeywords": [
    "crypto",
    "casino",
    "airdrop",
    "giveaway",
    "referral"
  ],
  "dropNegativeKeywordMatches": false,
  "excludeOver18": true,
  "maxPagesPerSource": 5,
  "proxyConfiguration": {
    "useApifyProxy": true
  },
  "minDelayMs": 850,
  "outputRaw": false
}
```

# Actor output Schema

## `posts` (type: `string`):

Default dataset containing public Reddit posts enriched with lead intent, urgency, sentiment, virality, quality, keyword matches, and RAG-ready markdown.

## `highIntentPosts` (type: `string`):

Console dataset view for browsing posts, source context, engagement, and lead intelligence quickly.

## `postText` (type: `string`):

Console dataset view focused on the original Reddit title/body and source URL.

## `aiReadyRows` (type: `string`):

Console dataset view focused on URLs, source metadata, scores, and RAG-ready markdown.

## `summary` (type: `string`):

Machine-readable summary with actor name, row type, pushed count, sources, source success/failure counts, and intelligence fields.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "queries": [
        "best CRM alternative",
        "need a lead generation tool",
        "Apollo alternative"
    ],
    "subreddits": [
        "SaaS",
        "Entrepreneur",
        "sales"
    ],
    "keywords": [
        "recommend",
        "alternative",
        "looking for",
        "need",
        "tool",
        "software",
        "compare"
    ],
    "negativeKeywords": [
        "crypto",
        "casino",
        "airdrop",
        "giveaway",
        "referral"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("lokki/reddit-posts-intelligence-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "queries": [
        "best CRM alternative",
        "need a lead generation tool",
        "Apollo alternative",
    ],
    "subreddits": [
        "SaaS",
        "Entrepreneur",
        "sales",
    ],
    "keywords": [
        "recommend",
        "alternative",
        "looking for",
        "need",
        "tool",
        "software",
        "compare",
    ],
    "negativeKeywords": [
        "crypto",
        "casino",
        "airdrop",
        "giveaway",
        "referral",
    ],
}

# Run the Actor and wait for it to finish
run = client.actor("lokki/reddit-posts-intelligence-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "queries": [
    "best CRM alternative",
    "need a lead generation tool",
    "Apollo alternative"
  ],
  "subreddits": [
    "SaaS",
    "Entrepreneur",
    "sales"
  ],
  "keywords": [
    "recommend",
    "alternative",
    "looking for",
    "need",
    "tool",
    "software",
    "compare"
  ],
  "negativeKeywords": [
    "crypto",
    "casino",
    "airdrop",
    "giveaway",
    "referral"
  ]
}' |
apify call lokki/reddit-posts-intelligence-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=lokki/reddit-posts-intelligence-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "®️ Reddit Posts Intelligence Scraper",
        "description": "Posts-only Reddit scraper using public .json endpoints. Extracts posts and adds lead intent, sentiment, virality, quality, keyword matches, and RAG markdown.",
        "version": "1.0",
        "x-build-id": "cZBkXg8HV0zM4eEkB"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/lokki~reddit-posts-intelligence-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-lokki-reddit-posts-intelligence-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/lokki~reddit-posts-intelligence-scraper/runs": {
            "post": {
                "operationId": "runs-sync-lokki-reddit-posts-intelligence-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/lokki~reddit-posts-intelligence-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-lokki-reddit-posts-intelligence-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "queries": {
                        "title": "Search phrases",
                        "type": "array",
                        "description": "Keywords or phrases to search across Reddit posts. Use phrases your buyers would actually write.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "subreddits": {
                        "title": "Target subreddits",
                        "type": "array",
                        "description": "Optional. Limit search to specific communities like SaaS, Entrepreneur, sales, marketing, SEO, ecommerce. Leave empty to search all Reddit.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "startUrls": {
                        "title": "Or paste Reddit URLs",
                        "type": "array",
                        "description": "Optional. Paste Reddit listing, search, subreddit, or post URLs. The actor automatically uses the public .json version and saves post rows only.",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "maxResults": {
                        "title": "Maximum posts",
                        "minimum": 1,
                        "maximum": 100000,
                        "type": "integer",
                        "description": "Maximum number of enriched Reddit post rows to save.",
                        "default": 100
                    },
                    "sort": {
                        "title": "Sort by",
                        "enum": [
                            "relevance",
                            "new",
                            "hot",
                            "top",
                            "comments"
                        ],
                        "type": "string",
                        "description": "Use relevance for research, new for monitoring, top/hot for trending posts, comments for active discussions.",
                        "default": "relevance"
                    },
                    "time": {
                        "title": "Time window",
                        "enum": [
                            "day",
                            "week",
                            "month",
                            "year",
                            "all",
                            "hour"
                        ],
                        "type": "string",
                        "description": "How far back Reddit should look for top/search results.",
                        "default": "year"
                    },
                    "keywords": {
                        "title": "High-intent words/phrases",
                        "type": "array",
                        "description": "Posts containing these terms get a higher lead-intent score. Examples: recommend, alternative, looking for, need, tool, software, compare.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "negativeKeywords": {
                        "title": "Noise/spam words",
                        "type": "array",
                        "description": "Terms that lower quality or can remove posts entirely. Examples: crypto, casino, giveaway, referral, airdrop.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "minLeadIntentScore": {
                        "title": "Only keep posts above lead score",
                        "minimum": 0,
                        "maximum": 100,
                        "type": "integer",
                        "description": "Optional. Set 60+ for stronger leads, 80+ for very high-intent posts. Leave empty to save everything and sort later."
                    },
                    "dropNegativeKeywordMatches": {
                        "title": "Drop posts with noise/spam words",
                        "type": "boolean",
                        "description": "If enabled, posts matching any noise/spam keyword are skipped instead of just penalized.",
                        "default": false
                    },
                    "excludeOver18": {
                        "title": "Exclude NSFW posts",
                        "type": "boolean",
                        "description": "Skip posts marked over_18 by Reddit.",
                        "default": true
                    },
                    "maxPagesPerSource": {
                        "title": "Pages per source",
                        "minimum": 1,
                        "maximum": 100,
                        "type": "integer",
                        "description": "Maximum paginated Reddit pages to process per search/subreddit/source URL.",
                        "default": 5
                    },
                    "proxyConfiguration": {
                        "title": "Proxy",
                        "type": "object",
                        "description": "Keep Apify Proxy enabled. Reddit often blocks cloud datacenter traffic without it.",
                        "default": {
                            "useApifyProxy": true
                        }
                    },
                    "minDelayMs": {
                        "title": "Delay between Reddit requests",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Minimum delay between requests in milliseconds. Higher is safer for large runs.",
                        "default": 850
                    },
                    "outputRaw": {
                        "title": "Include raw Reddit JSON",
                        "type": "boolean",
                        "description": "Turn on only for debugging or custom parsing. It makes rows much larger.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
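The `usageTotalUsd` and `usageUsd` fields of the run object make it easy to check what a given run cost and which platform events were actually charged. A minimal Python sketch of reading those fields is below; the `run` dict is hypothetical sample data shaped like the schema above, not a live API response (with `apify-client`, `client.run(run_id).get()` returns an object of this shape):

```python
# Sketch: post-processing the usage fields of an Actor run object.
# The `run` dict below is hypothetical example data matching the schema,
# not a real API response.

def summarize_usage(run: dict) -> dict:
    """Return the run's total cost and only the non-zero per-event USD charges."""
    usage_usd = run.get("usageUsd", {})
    return {
        "totalUsd": run.get("usageTotalUsd", 0.0),
        "chargedEvents": {name: usd for name, usd in usage_usd.items() if usd > 0},
    }

run = {
    "usageTotalUsd": 0.00005,
    "usage": {"ACTOR_COMPUTE_UNITS": 0, "KEY_VALUE_STORE_WRITES": 1},
    "usageUsd": {"ACTOR_COMPUTE_UNITS": 0, "KEY_VALUE_STORE_WRITES": 0.00005},
}

print(summarize_usage(run))
```

Filtering out the zero-valued entries keeps the summary readable, since most runs touch only a few of the billable event types listed in the schema.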
