# Reddit Email Scraper (`scrapio/reddit-email-scraper`) Actor

Reddit Email Scraper helps you collect emails shared publicly on Reddit. Use the data for partnerships, lead follow-ups, and direct outreach. Fast, scalable scraping with clean, export-ready results.

- **URL**: https://apify.com/scrapio/reddit-email-scraper.md
- **Developed by:** [Scrapio](https://apify.com/scrapio) (community)
- **Categories:** Lead generation, Automation, Developer tools
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded, no bookmarks yet
- **User rating**: No ratings yet

## Pricing

$14.99/month + usage

To use this Actor, you pay a monthly rental fee to the developer. The rent is subtracted from your prepaid usage every month after the free trial period. You also pay for Apify platform usage, which gets cheaper on higher Apify subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#rental-actors

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### **Reddit** Email Scraper 📱

The **Reddit** Email Scraper allows users to extract a variety of data points from **Reddit**, focusing primarily on email addresses. It scans posts, comments, and user profiles to identify and collect publicly available email information.

The tool delivers data in a structured format, making it easy to analyze and use for various purposes. In addition to email addresses, the scraper can gather supplementary data, such as usernames, post titles, and timestamps, to provide context and enhance the value of the extracted information.

By automating the data collection process, the **Reddit** email extraction tool eliminates the need for manual effort, saving users significant time and resources. It is particularly useful for businesses, researchers, and marketers who need to analyze **Reddit** data or connect with specific audiences. The extracted data is delivered in a clean, organized format, ready for immediate use in campaigns, studies, or other applications.

Reddit Email Scraper is a powerful tool designed to extract email addresses from Reddit with precision and efficiency. It helps users gather valuable contact information from Reddit posts, comments, and profiles for various purposes.

With Reddit being a hub for discussions across countless topics, this email scraper enables businesses and researchers to tap into a vast pool of potential leads. It automates the extraction process, saving time and effort compared to manual methods.

The Reddit email extraction tool is ideal for marketers, recruiters, and data analysts seeking to connect with specific communities. It ensures accurate and reliable results while adhering to ethical guidelines.

### Support and feedback

- **Bug reports**: Open a ticket in the repository Issues section
- **Custom features**: Contact our enterprise support team
  *Email: hello.scrapio@gmail.com*

### Extractable Data Table 📊
| Data Type | Description |
| --- | --- |
| Email addresses | Extract publicly available email addresses from Reddit posts, comments, and profiles. |
| Usernames | Collect Reddit usernames associated with the extracted email addresses. |
| Post titles | Retrieve the titles of posts where email addresses are found. |
| Post content | Extract the content of posts containing email addresses for additional context. |
| Timestamps | Capture the date and time of posts or comments containing email addresses. |
| Subreddit names | Identify the subreddit where the email address was found. |
| Comment content | Extract the text of comments that include email addresses. |
| Profile information | Gather public profile details associated with Reddit users. |
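To illustrate the kind of matching involved, here is a minimal, hypothetical sketch of how publicly posted email addresses can be pulled from a block of text with a regular expression. This is illustrative only; the Actor's internal implementation is not published.

```python
import re

# Simplified pattern for publicly posted email addresses (sketch only).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def find_emails(text):
    """Return the unique email addresses found in a block of text, sorted."""
    return sorted(set(EMAIL_RE.findall(text)))

print(find_emails("Reach me at jane.doe@example.com or bob@test.org!"))
# → ['bob@test.org', 'jane.doe@example.com']
```

Real-world extraction also has to handle obfuscations like `name @ domain.com`, which is why naive patterns can produce false positives.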

### Key Features of **Reddit** Email Scraper

Here are the **standout features** that make the **Reddit** Email Scraper a **top-tier tool** for **marketers**, **agencies**, and **researchers**:

- ⭐ **Automated** extraction of email addresses from **Reddit** posts, comments, and profiles
- ⭐ Ability to filter results by subreddit keywords or specific user profiles
- ⭐ User-friendly interface with customizable settings for tailored scraping tasks
- ⭐ Support for large-scale data scraping with high accuracy and efficiency
- ⭐ Export data in various formats including CSV and JSON for easy analysis
- ⭐ **Advanced** search options to target specific communities or topics on **Reddit**
- ⭐ Compliance with ethical guidelines to ensure responsible data usage
- ⭐ Real-time scraping capabilities to capture the latest data from **Reddit**
- ⭐ Detailed logs and error reporting for better transparency and troubleshooting
- ⭐ **Secure** and private data handling to protect user information

### How to use **Reddit** Email Scraper 🚀

Follow this **simple, step-by-step guide** to start extracting **Reddit** emails today:

1. ✅ **Sign up** for the **Reddit** Email Scraper and **log in** to your account
2. ✅ Enter the keywords or subreddit names you want to target for email extraction
3. ✅ Customize the scraping settings, such as filters or output format preferences
4. ✅ **Start** the scraper and allow it to scan **Reddit** for the specified data
5. ✅ Monitor the progress of the scraping process in real-time on the dashboard
6. ✅ Once the scraping is complete, download the extracted data in your preferred format
7. ✅ **Review** the data to ensure it meets your requirements and is ready for use
8. ✅ Use the extracted email addresses for your outreach campaigns or research purposes
9. ✅ Save your scraping settings for future use to streamline recurring tasks
10. ✅ Contact support if you encounter any issues or need assistance with the tool

### Use Cases 🎯

Marketing and Outreach
🎯 **Identify** potential leads based on specific subreddits or discussions
🎯 Gather email addresses for targeted email marketing campaigns
🎯 Expand your customer base by reaching out to niche communities

Recruitment and Hiring
🎯 **Find** potential candidates by extracting emails from job-related subreddits
🎯 Connect with professionals discussing industry-specific topics
🎯 Streamline your recruitment process with automated email collection

Academic Research
🎯 **Collect** data for studies on online communities and user behavior
🎯 **Analyze** email activity patterns within specific subreddits
🎯 **Use** extracted data for surveys or academic outreach

Business Development
🎯 **Identify** potential partners or collaborators in relevant subreddits
🎯 Gather contact information for networking or business opportunities
🎯 Enhance your business outreach strategy with targeted email lists

### Why choose us? 💎

Our **Reddit** Email Scraper stands out as the **best** **Reddit** scraper for emails due to its accuracy, efficiency, and ease of use. It is designed to cater to a wide range of users, from marketers to researchers, ensuring that everyone can benefit from its powerful features.

The tool is built with **advanced** algorithms that deliver precise results while minimizing errors. We prioritize user privacy and data security, ensuring that all extracted information is handled responsibly.

Additionally, our email scraper for **Reddit** is highly customizable, allowing users to tailor their scraping tasks to specific needs. We provide excellent customer support to assist users at every step, making the process seamless and hassle-free.

The tool is also **scalable**, capable of handling both small and large data scraping projects with ease. By choosing our **Reddit** email scraping service, you gain access to a **reliable** and efficient solution that saves time and resources.

Trust us to help you extract emails from **Reddit** ethically and effectively.

### **Reddit** Email Scraper Scalability 📈

Our **Reddit** Email Scraper is designed to handle projects of all sizes, making it suitable for both individual users and large organizations. Whether you need to extract a few dozen emails or thousands, the tool can scale to meet your requirements.

It is equipped with robust infrastructure to ensure smooth performance even during **extensive** data scraping tasks. The automated **Reddit** email scraper can process large volumes of data without compromising accuracy or speed.

Our tool also supports batch processing and scheduling, allowing you to manage multiple scraping tasks **efficient**ly. With its scalability, you can focus on your goals while the scraper handles the heavy lifting.

This makes it the best **Reddit** scraper for emails, capable of adapting to your growing needs.

### **Reddit** Email Scraper Legal Guidelines ⚖️

**Yes**, scraping **Reddit** is **legal** as long as you follow **ethical** and **compliant** practices. The **Reddit** Email Scraper extracts only **publicly available** information from **public** **Reddit** profiles, making it **safe** and **compliant** for **research**, **marketing**, and **analysis**.

#### Legal & Ethical Guidelines
⚖️ **Ensure** that you comply with **Reddit**'s terms of service when using the **Reddit** Email Scraper
⚖️ **Only** extract publicly available data and avoid accessing private or restricted information
⚖️ **Do not** use the extracted email addresses for spamming or unsolicited communication
⚖️ **Obtain** consent from individuals before using their email addresses for marketing purposes
⚖️ **Avoid** scraping sensitive or personal information that is not intended for public use
⚖️ Stay informed about local data privacy laws and regulations that may apply to your use case
⚖️ **Use** the **Reddit** email extraction tool responsibly and ethically at all times
⚖️ Respect the rights and privacy of **Reddit** users while conducting data scraping activities

### Input Parameters 🧩
📦 Example Input (JSON)
```json
{
  "keywords": ["Reddit Email Scraper"],
  "platform": "Reddit",
  "location": "",
  "emailDomains": [],
  "maxEmails": 20,
  "engine": "legacy"
}
```

### Input Table

| Parameter | Description |
| --- | --- |
| keywords | Keywords to find relevant profiles |
| platform | Platform to scrape (Reddit) |
| location | Optional location filter (leave empty to search globally) |
| emailDomains | Optional filter for specific email domains |
| maxEmails | Maximum emails to collect per keyword (default 20) |
| engine | Scraping engine (legacy) |
| proxyConfiguration | Optional proxy settings |

### Output Format 📤

📝 Example Output (JSON)

```json
[
  {
    "network": "Reddit",
    "keyword": "Reddit Email Scraper",
    "title": "Google's Single-Benefit Marketing Strategy for Chrome ...",
    "description": "✓For years, once we created a Gmail account, we couldn't change the username (the part before @ gmail.com ). ... Grand Rapids Marketing Co. Read more",
    "url": "https://www.linkedin.com/posts/phill-agnew_heres-how-google-marketed-chrome-browser-activity-7404878510214914048-dLxI",
    "email": "before@gmail.com"
  }
]
```

### Output Table

| Field | Description |
| --- | --- |
| network | Identifies Reddit as the source |
| keyword | Keyword that produced the result |
| title | Title of the post or profile where the email was found |
| description | Public text snippet containing the contact info |
| url | Direct link to the source page |
| email | Extracted email address |
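After exporting the dataset, a short post-processing step is often useful. The sketch below uses hypothetical sample items shaped like the output above to deduplicate results by email address and write them to CSV:

```python
import csv

# Hypothetical sample items shaped like the Actor's output table above.
items = [
    {"network": "Reddit", "keyword": "marketing", "title": "Post A",
     "url": "https://www.reddit.com/r/startups/a", "email": "founder@example.com"},
    {"network": "Reddit", "keyword": "marketing", "title": "Post B",
     "url": "https://www.reddit.com/r/startups/b", "email": "founder@example.com"},
]

# Keep only the first occurrence of each email address.
seen, unique = set(), []
for item in items:
    if item["email"] not in seen:
        seen.add(item["email"])
        unique.append(item)

# Write the deduplicated rows to a CSV file.
with open("emails.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["network", "keyword", "title", "url", "email"])
    writer.writeheader()
    writer.writerows(unique)

print(len(unique))  # → 1 (duplicate email dropped)
```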

### FAQ ❓

#### What is Reddit **Email Scraper**?

Reddit Email Scraper is a tool designed to extract email addresses and related data from Reddit posts, comments, and profiles.

#### Is Reddit **Email Scraper** **legal** to use?

**Yes**, as long as you comply with Reddit's terms of service and use the tool ethically to extract **publicly available** data.

#### Can I target **specific** subreddits with this tool?

**Yes**, the Reddit Email Scraper allows you to filter results by subreddit names or keywords.

#### What formats can I **export** the data in?

You can export the extracted data in various formats, including **CSV** and **JSON**.

#### Does the tool support **large-scale** data scraping?

**Yes**, the Reddit Email Scraper is designed to handle both small and large-scale data scraping tasks efficiently.

#### Is my data **secure** while using this tool?

**Yes**, we prioritize user privacy and ensure that all data is handled **securely** and responsibly.

#### Can I use this tool for email marketing?

**Yes**, but you must obtain consent from individuals before using their email addresses for marketing purposes.

#### Does the tool provide real-time scraping?

**Yes**, the **automated** Reddit email scraper can capture the latest data from Reddit in real-time.

#### What kind of support do you offer?

We provide excellent customer support to assist you with any issues or questions you may have.

#### Can I customize the scraping settings?

**Yes**, the tool allows you to customize settings such as filters, keywords, and output formats.

#### How accurate is the data **extracted**?

The Reddit Email Scraper uses advanced algorithms to ensure high accuracy and reliable results.

#### Can I use this tool for academic research?

**Yes**, the tool is ideal for collecting data for academic studies and research purposes.

#### What happens if I encounter an error during scraping?

The tool provides detailed logs and error reporting to help you troubleshoot and resolve issues.

#### Is there a **limit** to the number of emails I can **extract**?

The tool is scalable and can handle a large number of email extractions based on your requirements.

#### Do I need technical expertise to use this tool?

**No**, the Reddit Email Scraper is **user-friendly** and suitable for both beginners and advanced users.

# Actor input Schema

## `keywords` (type: `array`):

List of keywords to search for on Reddit (e.g., `['marketing', 'founder', 'business']`). The Actor will search Google for Reddit profiles/posts containing these keywords and extract email addresses.

## `platform` (type: `string`):

Select platform.

## `location` (type: `string`):

Optional: Add location to search query (e.g., 'London', 'New York'). Leave empty to search globally.

## `emailDomains` (type: `array`):

Optional: Filter results to only include emails from specific domains (e.g., `['@gmail.com', '@outlook.com']`). Leave empty to collect all email domains.
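The domain filter behaves like a simple suffix check on each extracted address. A minimal sketch (illustrative only; the Actor's exact matching rules are not published):

```python
def matches_domains(email, domains):
    """Keep an email only if it ends with one of the given domain suffixes.
    An empty filter list keeps everything, mirroring 'leave empty to collect all'."""
    return not domains or any(email.lower().endswith(d.lower()) for d in domains)

print(matches_domains("jane@gmail.com", ["@gmail.com"]))    # True
print(matches_domains("jane@outlook.com", ["@gmail.com"]))  # False
print(matches_domains("jane@outlook.com", []))              # True
```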

## `maxEmails` (type: `integer`):

Maximum number of emails to collect per keyword (default: 20).

## `engine` (type: `string`):

Choose scraping engine. 🚀 Cost Effective (New): Uses residential proxies with async requests for faster, cheaper scraping. 🔧 Legacy: Uses `GOOGLE_SERP` proxy with traditional selectors - more reliable but slower and more expensive.

## `proxyConfiguration` (type: `object`):

Choose which proxies to use. By default, no proxy is used. If Google rejects or blocks the request, the Actor will automatically fall back to a datacenter proxy, then a residential proxy, with 3 retries.

## Actor input object example

```json
{
  "keywords": [
    "marketing"
  ],
  "platform": "Reddit",
  "location": "",
  "emailDomains": [
    "@gmail.com"
  ],
  "maxEmails": 20,
  "engine": "legacy",
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```
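Before starting a run, you can sanity-check an input object against the schema's constraints (`keywords` is required; `maxEmails` must be between 1 and 5000). A minimal client-side sketch; the Apify platform performs its own authoritative validation:

```python
def validate_input(run_input):
    """Minimal client-side check mirroring the published input schema (sketch only)."""
    if not run_input.get("keywords"):
        raise ValueError("'keywords' is required and must be a non-empty list")
    max_emails = run_input.get("maxEmails", 20)  # schema default is 20
    if not 1 <= max_emails <= 5000:
        raise ValueError("'maxEmails' must be between 1 and 5000")
    return run_input

print(validate_input({"keywords": ["marketing"], "maxEmails": 20}))
# → {'keywords': ['marketing'], 'maxEmails': 20}
```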

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "keywords": [
        "marketing"
    ],
    "emailDomains": [
        "@gmail.com"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapio/reddit-email-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "keywords": ["marketing"],
    "emailDomains": ["@gmail.com"],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scrapio/reddit-email-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "keywords": [
    "marketing"
  ],
  "emailDomains": [
    "@gmail.com"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scrapio/reddit-email-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapio/reddit-email-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Email Scraper",
        "description": "Reddit Email Scraper helps you collect emails shared publicly on Reddit. Use the data for partnerships, lead follow-ups, and direct outreach. Fast, scalable scraping with clean, export-ready results.",
        "version": "0.1",
        "x-build-id": "jzJ2JoPV5vsJDZmcY"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapio~reddit-email-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapio-reddit-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapio~reddit-email-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scrapio-reddit-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapio~reddit-email-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scrapio-reddit-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "keywords"
                ],
                "properties": {
                    "keywords": {
                        "title": "Keywords",
                        "type": "array",
                        "description": "List of keywords to search for on Reddit (e.g., ['marketing', 'founder', 'business']). The actor will search Google for Reddit profiles/posts containing these keywords and extract email addresses.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "platform": {
                        "title": "Platform",
                        "enum": [
                            "Reddit"
                        ],
                        "type": "string",
                        "description": "Select platform.",
                        "default": "Reddit"
                    },
                    "location": {
                        "title": "Location Filter",
                        "type": "string",
                        "description": "Optional: Add location to search query (e.g., 'London', 'New York'). Leave empty to search globally.",
                        "default": ""
                    },
                    "emailDomains": {
                        "title": "Email Domains Filter",
                        "type": "array",
                        "description": "Optional: Filter results to only include emails from specific domains (e.g., ['@gmail.com', '@outlook.com']). Leave empty to collect all email domains.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxEmails": {
                        "title": "Maximum Emails per Keyword",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Maximum number of emails to collect per keyword (default: 20).",
                        "default": 20
                    },
                    "engine": {
                        "title": "Engine",
                        "enum": [
                            "legacy"
                        ],
                        "type": "string",
                        "description": "Choose scraping engine. 🚀 Cost Effective (New): Uses residential proxies with async requests for faster, cheaper scraping. 🔧 Legacy: Uses GOOGLE_SERP proxy with traditional selectors - more reliable but slower and more expensive.",
                        "default": "legacy"
                    },
                    "proxyConfiguration": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Choose which proxies to use. By default, no proxy is used. If Google rejects or blocks the request, the actor will automatically fallback to datacenter proxy, then residential proxy with 3 retries."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
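The `usage` and `usageUsd` objects in the run record above break down what a run consumed and what each dimension cost, with `usageTotalUsd` giving the combined price. A minimal sketch of inspecting these fields on a run record you have already fetched (the field names follow the schema above; the sample values and helper function names are illustrative, not real billing data):

```python
# Sketch: reading the usage section of an Actor run record.
# `sample_run` mimics the run schema above; values are made up for illustration.
sample_run = {
    "buildNumber": "1.0.0",
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
        "DATA_TRANSFER_EXTERNAL_GBYTES": 0,
    },
}

def usage_breakdown(run: dict) -> dict:
    """Return only the usage dimensions that actually cost something."""
    return {k: v for k, v in run.get("usageUsd", {}).items() if v > 0}

def total_matches(run: dict, tolerance: float = 1e-9) -> bool:
    """Check that the per-dimension USD costs add up to usageTotalUsd."""
    total = sum(run.get("usageUsd", {}).values())
    return abs(total - run.get("usageTotalUsd", 0)) < tolerance

print(usage_breakdown(sample_run))  # {'KEY_VALUE_STORE_WRITES': 5e-05}
print(total_matches(sample_run))    # True
```

With the official Python client, the same record comes back from the run's resource endpoint after a call such as `apify_client.actor("scrapio/reddit-email-scraper").call(...)`, so the helpers above can be applied directly to the returned dictionary.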
