# Reddit Intelligence API (`midwest_united/reddit-intelligence`) Actor

Research any topic on Reddit. Returns sentiment analysis, theme clusters, key concerns, and ranked threads as structured JSON. Ready for AI agents and LLM pipelines. No API keys required.

- **URL**: https://apify.com/midwest_united/reddit-intelligence
- **Developed by:** [Jim Giganti](https://apify.com/midwest_united) (community)
- **Categories:** AI, Developer tools, Lead generation
- **Stats:** 2 total users, 2 monthly users, 50.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $59.00 / 1,000 threads analyzed

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Intelligence API

Research any topic on Reddit and get structured analysis your AI agent can reason over directly. Input a topic, product name, or company — get back sentiment, theme clusters, key concerns, notable quotes, and ranked threads as clean JSON.

No API keys required. No login. Uses Reddit's public data only.

---

### Who this is for

**AI agent developers** — Give your agent the ability to research public opinion on any topic in one tool call. The `intelligence` output mode returns a single structured record with a plain-English summary, sentiment score, and clustered themes. Your agent reads it and reasons. No parsing, no post-processing.

**Developers building research tools** — Product feedback pipelines, brand monitoring, market research dashboards, competitive intelligence tools. Pull structured Reddit signal on any topic and pipe it into your stack.

**LLM pipeline builders** — Feed clean, ranked, relevance-scored Reddit content into RAG systems, evaluation datasets, or fine-tuning pipelines. The `raw` mode returns explicit nulls on every field — no undefined surprises.

---

### What makes this different from a Reddit scraper

Most Reddit scrapers return a raw dump — post text, upvotes, comment text. You still have to figure out what it means. This Actor does that work for you:

- **Relevance scoring** — posts are ranked by a composite signal (engagement quality, discussion depth, recency, content richness, community signals). Low-quality and controversial posts are filtered before they reach your output.
- **Sentiment analysis** — weighted across posts and comments, with negation handling. Knows that "not great" is different from "great."
- **Theme clustering** — groups discussion into eight topic-agnostic categories (pricing, performance, usability, support, features, alternatives, experience, security). Each cluster has its own sentiment score.
- **Notable quotes** — extracts high-signal short excerpts (40–280 chars) from highly-voted comments. The kind of thing you'd actually want to cite.
- **Narrative summary** — one plain-English sentence synthesizing the overall finding. Designed to be the first thing an AI agent reads.

---

### Output modes

#### `intelligence` (default — recommended for AI agents)

Returns **one record** containing the full analysis. This is the MCP-ready mode. An agent calls the tool, reads the `intelligence.summary` field first, then drills into `clusters`, `keyConcerns`, and `notableQuotes` as needed.

```json
{
  "query": "Rust programming language",
  "mode": "intelligence",
  "fetchedAt": "2026-04-04T14:23:11.000Z",
  "intelligence": {
    "overallSentiment": "positive",
    "sentimentScore": 0.52,
    "sentimentBreakdown": { "positive": 38, "negative": 9, "neutral": 21 },
    "summary": "Reddit discussion about \"Rust programming language\" is generally positive across 14 threads from 2024-02-01 to 2026-03-28. The most discussed theme is personal experience (positive sentiment). Key concerns include: slow, difficult, compile.",
    "clusters": [
      { "id": "experience", "label": "Personal experience", "mentionCount": 47, "sentiment": "positive", "sentimentScore": 0.41 },
      { "id": "performance", "label": "Performance & reliability", "mentionCount": 31, "sentiment": "positive", "sentimentScore": 0.58 },
      { "id": "usability", "label": "Ease of use", "mentionCount": 28, "sentiment": "mixed", "sentimentScore": -0.12 }
    ],
    "keyConcerns": [
      { "term": "slow", "count": 12 },
      { "term": "difficult", "count": 9 },
      { "term": "compile", "count": 7 }
    ],
    "keyPraises": [
      { "term": "fast", "count": 24 },
      { "term": "reliable", "count": 18 },
      { "term": "great", "count": 15 }
    ],
    "notableQuotes": [
      {
        "text": "Once it clicks, you never want to go back. The compiler is your pair programmer.",
        "commentScore": 847,
        "sentimentScore": 0.8,
        "postTitle": "Six months with Rust — honest review"
      }
    ]
  },
  "coverage": {
    "threadCount": 14,
    "topSubreddits": [
      { "subreddit": "rust", "threadCount": 6 },
      { "subreddit": "programming", "threadCount": 4 },
      { "subreddit": "learnprogramming", "threadCount": 3 }
    ],
    "dateRange": { "oldest": "2024-02-01", "newest": "2026-03-28" }
  },
  "topThreads": [
    {
      "id": "1abc23",
      "subreddit": "rust",
      "title": "Six months with Rust — honest review",
      "url": "https://reddit.com/r/rust/comments/1abc23/six_months_with_rust_honest_review/",
      "score": 2341,
      "numComments": 187,
      "relevanceScore": 84.2,
      "createdAt": "2026-01-15T09:42:00.000Z"
    }
  ]
}
````

#### `threads` — one record per post

Returns ranked posts with metadata, signal quality rating, and top comments. Use this when you want to display or browse results rather than feed them into an agent.

```json
{
  "rank": 1,
  "id": "1abc23",
  "subreddit": "rust",
  "title": "Six months with Rust — honest review",
  "url": "https://reddit.com/r/rust/comments/1abc23/",
  "score": 2341,
  "upvoteRatio": 0.97,
  "numComments": 187,
  "createdAt": "2026-01-15T09:42:00.000Z",
  "relevanceScore": 84.2,
  "signalQuality": "high",
  "topComments": [
    {
      "author": "some_rustacean",
      "body": "Once it clicks, you never want to go back. The compiler is your pair programmer.",
      "score": 847
    }
  ]
}
```

#### `raw` — explicit nulls, no analysis

Pure data. Every field present on every record, null when absent. No internal scoring fields. Use this for custom pipelines that do their own processing.
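Because every key is guaranteed present (with `null` where data is absent), downstream code can index fields directly. A minimal sketch, using a hypothetical sample shaped like raw-mode output:

```python
# With `raw` mode's explicit nulls, every key exists on every record,
# so pipelines can index fields directly instead of guarding each access.
# `records` here is a hypothetical sample, not real Actor output.
records = [
    {"id": "1abc23", "title": "Six months with Rust", "score": 2341, "flair": None},
    {"id": "2def45", "title": "Rust vs Go benchmarks", "score": 812, "flair": "Discussion"},
]

# Safe direct indexing: a missing value is None, never a KeyError.
flaired = [r["id"] for r in records if r["flair"] is not None]
print(flaired)  # ['2def45']
```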

---

### Input

| Field | Type | Default | Description |
|---|---|---|---|
| `query` | string | — | Topic, product, company, or question to research. Required unless `subreddit` is set. |
| `subreddit` | string | — | Restrict to a specific subreddit (without `r/`). Can be combined with `query`. |
| `subreddits` | string\[] | — | Search across multiple subreddits. Used with `query`. |
| `outputMode` | string | `intelligence` | One of `intelligence`, `threads`, or `raw`. |
| `timeRange` | string | `year` | One of `hour`, `day`, `week`, `month`, `year`, or `all`. |
| `maxPosts` | integer | `30` | Posts to fetch before scoring (5–50). More = broader coverage, higher cost. |
| `maxResults` | integer | `15` | Records in output after scoring (1–50). |
| `budgetUsd` | number | — | Optional spend cap. Actor exits cleanly when reached. |

At least one of `query`, `subreddit`, or `subreddits` is required.

---

### Pricing

| Event | Price | When |
|---|---|---|
| `reddit-thread-analyzed` | $0.05 / thread | Intelligence mode — per thread analyzed |
| `reddit-record-returned` | $0.01 / record | Threads and raw modes — per post returned |

A typical intelligence run analyzing 15 threads costs **$0.75**. A threads run returning 15 posts costs **$0.15**.

Use the `budgetUsd` input to cap spend on scheduled runs.
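The arithmetic above can be wrapped in a quick pre-run estimator. A sketch using the per-event prices from the table (the helper itself is hypothetical, not part of the Actor):

```python
# Per-event prices from the pricing table above.
PRICE_PER_THREAD = 0.05   # reddit-thread-analyzed (intelligence mode)
PRICE_PER_RECORD = 0.01   # reddit-record-returned (threads/raw modes)

def estimate_cost_usd(output_mode: str, count: int) -> float:
    """Rough spend estimate for a run processing `count` threads or records."""
    price = PRICE_PER_THREAD if output_mode == "intelligence" else PRICE_PER_RECORD
    return round(count * price, 2)

print(estimate_cost_usd("intelligence", 15))  # 0.75
print(estimate_cost_usd("threads", 15))       # 0.15
```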

---

### Using with AI agents

#### Claude Desktop / Claude Code (MCP)

Add Apify to your MCP config, then ask Claude directly:

```
What do developers think about Bun vs Node.js? Use the Reddit Intelligence API to find out.
```

Claude will call the Actor automatically, read the `intelligence.summary`, and reason over the clusters and concerns in context.

#### LangGraph

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("YOUR_USERNAME/reddit-intelligence").call(
    run_input={
        "query": "Next.js vs Remix 2025",
        "outputMode": "intelligence",
        "timeRange": "year",
        "maxPosts": 30,
    }
)

result = client.dataset(run["defaultDatasetId"]).list_items().items[0]

# Feed directly into your LLM context
summary = result["intelligence"]["summary"]
clusters = result["intelligence"]["clusters"]
concerns = result["intelligence"]["keyConcerns"]
```

#### CrewAI

```python
from crewai_tools import ApifyActorsTool

reddit_research = ApifyActorsTool(actor_name="YOUR_USERNAME/reddit-intelligence")

result = reddit_research.run(run_input={
    "query": "Supabase vs Firebase",
    "outputMode": "intelligence",
    "timeRange": "year",
})
```

#### n8n / Make

Set the Actor ID to `YOUR_USERNAME/reddit-intelligence`, pass JSON input, and connect the dataset output to any downstream node. Works out of the box with Apify's native n8n and Make integrations.

---

### Common use cases

**Product research before building** — Before starting a new project, ask what developers are saying about the tools you're considering. Get a structured view of real-world pain points and praise in under a minute.

**Competitive intelligence** — Run queries on your competitors and your own product. Compare sentiment scores and cluster themes to understand where you're winning and where users are frustrated.

**Market research for AI applications** — Feed Reddit signal into your RAG pipeline as grounded, community-sourced context. Useful for applications that need to answer questions about public opinion.

**Brand monitoring** — Schedule runs on your product name weekly. Track sentiment score over time. The `dateRange` field on coverage lets you see if you're looking at recent or historical discussion.

**Due diligence** — Researching a vendor, tool, or technology before committing? Pull the last year of Reddit discussion and get a structured view of what practitioners actually think.

---

### Technical notes

**No authentication required.** Uses Reddit's public JSON API endpoints — the same data available at `reddit.com/search.json`. No Reddit account, no OAuth, no API keys.

**Rate limiting.** The actor enforces a 1.1-second minimum between requests to stay within Reddit's unauthenticated rate limits. Retries use exponential backoff with jitter. Rate-limit responses (429) back off longer.
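The retry policy described above can be sketched as follows. The constants and helper names are illustrative assumptions, not the Actor's internals:

```python
import random
import time

MIN_INTERVAL = 1.1  # seconds between unauthenticated Reddit requests

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def fetch_with_retry(fetch, max_attempts: int = 5):
    """Call `fetch` until it returns HTTP 200, backing off between attempts."""
    for attempt in range(max_attempts):
        status, body = fetch()
        if status == 200:
            return body
        time.sleep(backoff_delay(attempt))  # repeated 429s wait out longer delays
    raise RuntimeError("gave up after repeated rate-limit responses")
```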

**Relevance scoring.** Posts are scored on five weighted factors before reaching your output: engagement quality (upvote ratio × log score), discussion depth, recency, content richness, and community signals (awards, virality). Posts below the minimum relevance threshold are filtered. Controversial posts with low upvote ratios are penalized.
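As an illustration of how a composite might combine those five factors (the weights and scaling here are assumptions, not the Actor's actual formula):

```python
import math
from datetime import datetime, timezone

def relevance_score(score, upvote_ratio, num_comments, created_at, body_len, awards=0):
    """Toy composite of the five factors named above; weights are illustrative."""
    engagement = upvote_ratio * math.log1p(max(score, 0))   # engagement quality
    depth = math.log1p(num_comments)                        # discussion depth
    age_days = (datetime.now(timezone.utc) - created_at).days
    recency = max(0.0, 1.0 - age_days / 365.0)              # decays over a year
    richness = min(1.0, body_len / 1000.0)                  # content richness
    community = math.log1p(awards)                          # community signals
    raw = 0.4 * engagement + 0.3 * depth + 2.0 * recency + 1.0 * richness + 0.2 * community
    return round(raw * 10, 1)

# A fresh, heavily discussed post outranks a stale, thin one.
```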

**Sentiment analysis.** Weighted across all text units (titles, post bodies, comments). Higher-voted content carries more weight. Negation handling ("not great" scores differently from "great"). No LLM dependency — pure signal processing.
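A minimal sketch of negation-aware scoring (the lexicon and the one-token negation window are illustrative; the Actor's analyzer is more elaborate):

```python
# Tiny illustrative lexicon; the Actor's vocabulary is far larger.
LEXICON = {"great": 1.0, "fast": 0.8, "reliable": 0.8, "slow": -0.8, "difficult": -0.6}
NEGATORS = {"not", "never", "no"}

def sentiment(text: str) -> float:
    """Sum lexicon scores, flipping polarity after a negator ("not great")."""
    tokens = text.lower().split()
    total = 0.0
    for i, tok in enumerate(tokens):
        score = LEXICON.get(tok.strip(".,!?"))
        if score is None:
            continue
        if i > 0 and tokens[i - 1] in NEGATORS:
            score = -score  # "not great" scores opposite to "great"
        total += score
    return total

print(sentiment("great"))      # 1.0
print(sentiment("not great"))  # -1.0
```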

**Budget cap.** If `budgetUsd` is set, the actor estimates spend before pushing records and truncates output cleanly if the cap would be exceeded. The run exits with a log message rather than an error.
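That behavior can be sketched as a guarded push loop (the names and the cents bookkeeping are illustrative, not the Actor's code):

```python
PRICE_PER_RECORD_CENTS = 1  # $0.01 per record, tracked in cents to avoid float drift

def push_within_budget(records, budget_usd=None):
    """Push records until the next one would exceed the spend cap."""
    pushed, spent_cents = [], 0
    budget_cents = None if budget_usd is None else int(round(budget_usd * 100))
    for record in records:
        if budget_cents is not None and spent_cents + PRICE_PER_RECORD_CENTS > budget_cents:
            break  # cap reached: truncate output and exit cleanly
        pushed.append(record)
        spent_cents += PRICE_PER_RECORD_CENTS
    return pushed
```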

---

### Issues and feedback

Found a bug or have a feature request? Open an issue on the Issues tab. Response time is typically within 24 hours.

Common requests welcome: additional output fields, new theme cluster categories, language filtering, support for specific subreddit types.

# Actor input schema

## `query` (type: `string`):

Topic, product name, company, or question to research across Reddit. Example: 'Next.js vs Remix', 'Notion alternatives', 'is Rust worth learning'. Leave blank if using Subreddit only.

## `subreddit` (type: `string`):

Focus results on a specific subreddit. Enter without the r/ prefix — e.g. 'programming', 'reactjs', 'MachineLearning'. Can be combined with a query or used alone.

## `subreddits` (type: `array`):

Search across multiple subreddits at once. Enter one subreddit name per line, without r/ prefix. Used with a query — ignored if Subreddit is also set.

## `outputMode` (type: `string`):

intelligence — one record with full analysis: sentiment, clusters, concerns, notable quotes. Best for AI agents and research pipelines.

threads — one record per post with ranking, top comments, and signal quality. Best for display or browsing.

raw — one record per post with all fields and explicit nulls. Best for custom data pipelines.

## `timeRange` (type: `string`):

How far back to search. 'year' covers most meaningful discussion for most topics. Use 'all' for evergreen topics or niche communities with lower post volume.

## `maxPosts` (type: `integer`):

Number of posts to retrieve before scoring and filtering. More posts means broader coverage but slower runs and higher cost. The scorer trims this down to the highest-signal results before output.

## `maxResults` (type: `integer`):

Maximum number of records pushed to the dataset after scoring. In intelligence mode this caps the threads analyzed. In threads/raw modes this caps the records returned.

## `budgetUsd` (type: `number`):

Optional. If set, the actor stops pushing records once estimated PPE spend reaches this amount and exits cleanly. Useful for scheduled runs with predictable cost. Leave blank for no cap.

## Actor input object example

```json
{
  "outputMode": "intelligence",
  "timeRange": "year",
  "maxPosts": 30,
  "maxResults": 15
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and the CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {};

// Run the Actor and wait for it to finish
const run = await client.actor("midwest_united/reddit-intelligence").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {}

# Run the Actor and wait for it to finish
run = client.actor("midwest_united/reddit-intelligence").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{}' |
apify call midwest_united/reddit-intelligence --silent --output-dataset

```
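For projects without a client library, the same synchronous run can be made over plain HTTPS against the `run-sync-get-dataset-items` endpoint documented in the OpenAPI specification. A standard-library sketch (the `run_actor` helper is an illustrative wrapper):

```python
import json
from urllib import request

# Synchronous run that returns dataset items directly in the response.
API_URL = (
    "https://api.apify.com/v2/acts/"
    "midwest_united~reddit-intelligence/run-sync-get-dataset-items"
)

def run_actor(run_input: dict, token: str) -> list:
    """POST the input JSON and return the resulting dataset items."""
    req = request.Request(
        f"{API_URL}?token={token}",
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=300) as resp:
        return json.load(resp)  # dataset items as a JSON array

# items = run_actor({"query": "Supabase vs Firebase"}, "<YOUR_API_TOKEN>")
```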

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=midwest_united/reddit-intelligence",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Intelligence API",
        "description": "Research any topic on Reddit. Returns sentiment analysis, theme clusters, key concerns, and ranked threads as structured JSON. Ready for AI agents and LLM pipelines. No API keys required.",
        "version": "1.0",
        "x-build-id": "dC8wfyj3mtqF3t2fm"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/midwest_united~reddit-intelligence/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-midwest_united-reddit-intelligence",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/midwest_united~reddit-intelligence/runs": {
            "post": {
                "operationId": "runs-sync-midwest_united-reddit-intelligence",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/midwest_united~reddit-intelligence/run-sync": {
            "post": {
                "operationId": "run-sync-midwest_united-reddit-intelligence",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "query": {
                        "title": "Search query",
                        "type": "string",
                        "description": "Topic, product name, company, or question to research across Reddit. Example: 'Next.js vs Remix', 'Notion alternatives', 'is Rust worth learning'. Leave blank if using Subreddit only."
                    },
                    "subreddit": {
                        "title": "Subreddit",
                        "type": "string",
                        "description": "Focus results on a specific subreddit. Enter without the r/ prefix — e.g. 'programming', 'reactjs', 'MachineLearning'. Can be combined with a query or used alone. Do not include the r/ prefix."
                    },
                    "subreddits": {
                        "title": "Subreddit list",
                        "type": "array",
                        "description": "Search across multiple subreddits at once. Enter one subreddit name per line, without r/ prefix. Used with a query — ignored if Subreddit is also set.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "outputMode": {
                        "title": "Output mode",
                        "enum": [
                            "intelligence",
                            "threads",
                            "raw"
                        ],
                        "type": "string",
                        "description": "intelligence — one record with full analysis: sentiment, clusters, concerns, notable quotes. Best for AI agents and research pipelines.\n\nthreads — one record per post with ranking, top comments, and signal quality. Best for display or browsing.\n\nraw — one record per post with all fields and explicit nulls. Best for custom data pipelines.",
                        "default": "intelligence"
                    },
                    "timeRange": {
                        "title": "Time range",
                        "enum": [
                            "hour",
                            "day",
                            "week",
                            "month",
                            "year",
                            "all"
                        ],
                        "type": "string",
                        "description": "How far back to search. 'year' covers most meaningful discussion for most topics. Use 'all' for evergreen topics or niche communities with lower post volume.",
                        "default": "year"
                    },
                    "maxPosts": {
                        "title": "Max posts to fetch",
                        "minimum": 5,
                        "maximum": 50,
                        "type": "integer",
                        "description": "Number of posts to retrieve before scoring and filtering. More posts means broader coverage but slower runs and higher cost. The scorer trims this down to the highest-signal results before output.",
                        "default": 30
                    },
                    "maxResults": {
                        "title": "Max results in output",
                        "minimum": 1,
                        "maximum": 50,
                        "type": "integer",
                        "description": "Maximum number of records pushed to the dataset after scoring. In intelligence mode this caps the threads analyzed. In threads/raw modes this caps the records returned.",
                        "default": 15
                    },
                    "budgetUsd": {
                        "title": "Spend cap (USD)",
                        "minimum": 0.01,
                        "type": "number",
                        "description": "Optional. If set, the actor stops pushing records once estimated PPE spend reaches this amount and exits cleanly. Useful for scheduled runs with predictable cost. Leave blank for no cap."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
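
The `usageUsd` and `usageTotalUsd` fields of a finished run record are useful for cost monitoring: the per-event costs in `usageUsd` should add up to `usageTotalUsd`. A minimal sketch of reading them in Python — the `run` dict below is a hypothetical example shaped like the schema above, not real API output:

```python
# Hypothetical run record shaped like the schema above (trimmed to a few fields).
run = {
    "buildNumber": "1.0.0",
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
        "DATASET_WRITES": 0,
    },
}

# Per-event USD costs should sum to the reported total.
total = sum(run["usageUsd"].values())
assert abs(total - run["usageTotalUsd"]) < 1e-9

# Keep only the events that actually incurred cost on this run.
billed = {event: usd for event, usd in run["usageUsd"].items() if usd > 0}
print(billed)  # {'KEY_VALUE_STORE_WRITES': 5e-05}
```

When integrating via `apify-client`, the same fields appear on the run object returned after the run finishes, so the filter above works unchanged on live data.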
