# Twitter B2B Email Scraper (`scraper-engine/twitter-b2b-email-scraper`) Actor

Twitter B2B Email Scraper 📧🐦 extracts business emails, profile details, and contact data from Twitter accounts. Ideal for lead generation, outreach, and market research. Fast, scalable, and built for reliable B2B data collection and automation workflows. 🚀📊

- **URL**: https://apify.com/scraper-engine/twitter-b2b-email-scraper.md
- **Developed by:** [Scraper Engine](https://apify.com/scraper-engine) (community)
- **Categories:** Lead generation, Social media, Automation
- **Stats:** 2 total users, 1 monthly user, 100.0% of runs succeeded, 1 bookmark
- **User rating**: No ratings yet

## Pricing

$19.99/month + usage

To use this Actor, you pay a monthly rental fee to the developer. The rent is subtracted from your prepaid usage every month after the free trial period. You also pay for Apify platform usage, which gets cheaper on higher Apify subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#rental-actors

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).
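
As a minimal sketch of the REST approach, the snippet below builds a request against the `run-sync-get-dataset-items` endpoint from the OpenAPI specification in the [API](#api) section, using only the Python standard library. The token placeholder must be replaced before the request is actually sent:

```python
import json
import urllib.request

# Replace '<YOUR_API_TOKEN>' with your real Apify API token.
API_TOKEN = "<YOUR_API_TOKEN>"

# Endpoint taken from the Actor's OpenAPI specification below.
url = (
    "https://api.apify.com/v2/acts/scraper-engine~twitter-b2b-email-scraper"
    f"/run-sync-get-dataset-items?token={API_TOKEN}"
)
payload = json.dumps({"keywords": ["marketing"], "maxEmails": 20}).encode("utf-8")

request = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending the request needs a valid token, so the call is left commented out:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
print(request.full_url)
```

The same request shape works from any language with an HTTP client; only the token and the JSON body change.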

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### **Twitter** Email Scraper 📱

The **Twitter** B2B Email Scraper extracts a variety of valuable data from **Twitter** profiles, including email addresses, usernames, profile names, and other public information.

By using this tool, businesses can quickly gather contact details for targeted outreach and lead generation. The scraper retrieves only publicly available data, in line with **Twitter**'s terms of service.

Users can also customize their data extraction to focus on specific keywords or hashtags, making it well suited to businesses looking to streamline their **Twitter** marketing efforts.

With its advanced features, the scraper ensures that you get accurate and up-to-date information for your campaigns.

Twitter B2B Email Scraper is a powerful tool designed to help businesses extract email addresses from Twitter profiles efficiently. It simplifies the process of finding business contacts for lead generation and marketing purposes.

With the growing importance of social media in professional networking, tools like Twitter email extraction software are essential for businesses. This scraper enables users to gather valuable contact information from Twitter users in just a few clicks.

Our Twitter business contact extractor is perfect for B2B lead generation, helping you connect with potential clients and partners. It is designed to save time and effort by automating the email extraction process.

### Support and feedback

- **Bug reports**: Open a ticket in the repository Issues section
- **Custom features**: Contact our enterprise support team
  *Email: dev.scraperengine@gmail.com*

### Extractable Data Table 📊
| Data Type | Description |
| --- | --- |
| Email addresses | Extract publicly available email addresses from Twitter profiles. |
| Usernames | Retrieve Twitter usernames for identifying and contacting users. |
| Profile names | Collect the display names of Twitter users for better personalization. |
| Bio information | Gather publicly visible bio details to understand user interests. |
| Location | Extract location data shared on user profiles for geographic targeting. |
| Website links | Retrieve website URLs linked in Twitter profiles for additional contact options. |
| Followers and following count | Collect follower metrics to assess user influence. |
| Tweets and hashtags | Extract recent tweets and hashtags for content analysis and trends. |

### Key Features of **Twitter** Email Scraper

Here are the **standout features** that make the **Twitter** Email Scraper a **top-tier tool** for **marketers**, **agencies**, and **researchers**:

- ⭐ Extracts publicly available email addresses from **Twitter** profiles quickly and efficiently
- ⭐ Supports advanced filtering options to target specific keywords, hashtags, or industries
- ⭐ Provides accurate and up-to-date data for B2B lead generation and marketing campaigns
- ⭐ User-friendly interface designed for both beginners and advanced users
- ⭐ Offers batch processing to extract data from multiple profiles simultaneously
- ⭐ Ensures compliance with **Twitter**'s terms of service and ethical data collection practices
- ⭐ Includes export options for CSV, Excel, and other formats for easy data management
- ⭐ Regular updates to ensure compatibility with **Twitter**'s platform changes
- ⭐ Customizable scraping settings to meet specific business needs
- ⭐ High-speed data extraction to save time and maximize productivity
- ⭐ Secure and reliable tool with data privacy measures in place
- ⭐ Comprehensive support and documentation for a seamless user experience

### How to use **Twitter** Email Scraper 🚀

Follow this **simple, step-by-step guide** to start extracting **Twitter** emails today:

1. ✅ Log in to the **Twitter** B2B Email Scraper with your account credentials
2. ✅ Enter the keywords, hashtags, or profile URLs you want to target for email extraction
3. ✅ Set your filtering options, such as location, industry, or follower count
4. ✅ Configure the scraper settings to specify the data types you want to extract
5. ✅ Start the scraping process and monitor the progress in real time
6. ✅ Once the scraping is complete, review the extracted data in the dashboard
7. ✅ Export the data to your preferred format, such as CSV or Excel, for further use
8. ✅ Use the extracted data for your B2B lead generation or marketing campaigns

### Use Cases 🎯

B2B Lead Generation
🎯 **Identify** potential clients by extracting business emails from **Twitter** profiles
🎯 Reach out to prospects with personalized email campaigns

Recruitment and Talent Sourcing
🎯 **Find** and contact potential candidates through their public **Twitter** profiles
🎯 **Use** bio and location data to target professionals in specific industries

Social Media Marketing
🎯 Gather contact details of influencers and businesses for collaborations
🎯 **Analyze** tweets and hashtags to identify trending topics and opportunities

Market Research
🎯 **Collect** data on competitors and industry trends from **Twitter** profiles
🎯 **Use** location and bio details to understand target demographics

### Why choose us? 💎

Our **Twitter** B2B Email Scraper is designed to provide businesses with a **reliable** and efficient solution for extracting emails from **Twitter** profiles. With **advanced** filtering options, you can target specific keywords, hashtags, or industries to find the most relevant contacts.

The tool is easy to use, making it accessible for both beginners and experienced marketers. We prioritize accuracy and compliance, ensuring that all data collected is publicly available and adheres to **Twitter**'s terms of service.

Our scraper is regularly updated to stay compatible with **Twitter**'s platform changes, so you can rely on it for consistent performance. Additionally, we offer comprehensive support and documentation to help you make the most of the tool.

Whether you're focused on B2B lead generation, recruitment, or market research, our email scraper for **Twitter** is the ideal solution. Choose us to save time, streamline your data collection, and achieve your business goals efficiently.

### **Twitter** Email Scraper Scalability 📈

The **Twitter** B2B Email Scraper is built to handle both small-scale and **large-scale** data extraction projects. With batch processing capabilities, you can extract data from multiple profiles simultaneously, saving time and effort.

The tool is optimized for high-speed performance, ensuring that even large datasets are processed quickly and efficiently. Whether you're targeting a handful of profiles or thousands, the scraper adapts to your needs without compromising on accuracy.

Our **customizable** settings allow you to scale your data collection efforts as your business grows. Additionally, the scraper supports exporting data in various formats, making it easy to integrate with your existing tools and workflows.

With its robust infrastructure, the **Twitter** B2B Email Scraper is a scalable solution for businesses of all sizes.

### **Twitter** Email Scraper Legal Guidelines ⚖️

Scraping publicly available **Twitter** data is generally permissible, provided you follow ethical and compliant practices. The **Twitter** Email Scraper extracts only publicly available information from public **Twitter** profiles, which makes it suitable for research, marketing, and analysis when used responsibly.

#### Legal & Ethical Guidelines
⚖️ **Ensure** that all data extracted is publicly available and complies with **Twitter**'s terms of service
⚖️ **Do not** use the scraper to access private or restricted information on **Twitter** profiles
⚖️ **Avoid** using the tool for spam or unsolicited email campaigns, as this violates ethical guidelines
⚖️ Respect user privacy and do not share extracted data without proper consent
⚖️ **Use** the scraper only for legitimate business purposes, such as B2B lead generation or market research
⚖️ Stay updated on **Twitter**'s policies to ensure ongoing compliance with their platform rules
⚖️ Limit the frequency of scraping to avoid overloading **Twitter**'s servers or triggering account restrictions
⚖️ Always verify the accuracy of extracted data before using it for business purposes
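
One simple way to honor the frequency guideline above is to throttle your own requests client-side. The helper below is a hypothetical sketch (not part of the Actor) that spaces out iterations, e.g. keyword batches or run triggers:

```python
import time

def throttled(iterable, min_interval_secs=2.0):
    """Yield items from `iterable` no faster than one per `min_interval_secs`."""
    last = 0.0
    for item in iterable:
        wait = min_interval_secs - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        yield item

# Example: space out keyword batches instead of firing them all at once.
for keyword in throttled(["marketing", "founder"], min_interval_secs=0.1):
    print(keyword)
```

Any equivalent rate limiter works; the point is to keep request frequency modest regardless of how the Actor is invoked.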

### Input Parameters 🧩
📦 Example Input (JSON)
```json
{
  "keywords": ["Twitter B2B Email Scraper"],
  "platform": "Twitter",
  "location": "",
  "maxEmails": 20,
  "engine": "legacy"
}
```

### Input Table

| Parameter | Description |
| --- | --- |
| keywords | Keywords used to find relevant profiles |
| platform | Platform to scrape (Twitter) |
| location | Optional location filter; leave empty to search globally |
| emailDomains | Optional list of email domains to keep |
| maxEmails | Maximum emails to collect per keyword (default 20) |
| engine | Engine type (legacy) |
| proxyConfiguration | Optional proxy settings |

### Output Format 📤

📝 Example Output (JSON)

```json
[
  {
    "network": "Twitter",
    "keyword": "Twitter B2B Email Scraper",
    "title": "Google's Single-Benefit Marketing Strategy for Chrome ...",
    "description": "✓For years, once we created a Gmail account, we couldn't change the username (the part before @ gmail.com ). ... Grand Rapids Marketing Co. Read more",
    "url": "https://www.linkedin.com/posts/phill-agnew_heres-how-google-marketed-chrome-browser-activity-7404878510214914048-dLxI",
    "email": "before@gmail.com"
  }
]
```

### Output Table

| Field | Description |
| --- | --- |
| network | Identifies Twitter as the source |
| keyword | Keyword that produced the result |
| title | Title of the page where the email was found |
| description | Public text snippet containing contact info |
| url | Link to the source page |
| email | Extracted email address |
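
Dataset items in the shape above can be post-processed locally, for example to deduplicate addresses and keep only certain domains before export. A minimal sketch (field names taken from the example output; the sample records are invented for illustration):

```python
import csv
import io

def clean_emails(items, allowed_domains=None):
    """Deduplicate dataset items by email address, optionally keeping only
    addresses that end with one of `allowed_domains`."""
    seen = set()
    cleaned = []
    for item in items:
        email = (item.get("email") or "").strip().lower()
        if not email or email in seen:
            continue
        if allowed_domains and not email.endswith(tuple(allowed_domains)):
            continue
        seen.add(email)
        cleaned.append(item)
    return cleaned

# Hypothetical sample items mimicking the output format above.
items = [
    {"network": "Twitter", "keyword": "marketing", "email": "jane@gmail.com"},
    {"network": "Twitter", "keyword": "marketing", "email": "jane@gmail.com"},
    {"network": "Twitter", "keyword": "marketing", "email": "bob@corp.io"},
]
rows = clean_emails(items, allowed_domains=["@gmail.com"])

# Write the surviving rows to CSV, mirroring the Actor's export options.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["network", "keyword", "email"])
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

The same cleanup can run directly on items fetched via the dataset API shown in the [API](#api) section.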

### FAQ ❓

#### What is the Twitter B2B **Email Scraper**?

It is a tool designed to extract **publicly available** email addresses and other data from Twitter profiles for B2B lead generation and marketing.

#### How does the scraper ensure compliance with Twitter's terms of service?

The scraper only extracts **publicly available** data and adheres to ethical data collection practices.

#### Can I use the scraper to access **private** Twitter data?

**No**, the scraper is designed to extract only **publicly available** information from Twitter profiles.

#### What formats can I **export** the extracted data to?

You can export the data to formats such as **CSV** and Excel for easy integration with your workflows.

#### Is the scraper suitable for **large-scale** data extraction?

**Yes**, the scraper is optimized for high-speed performance and can handle large datasets efficiently.

#### Can I customize the data extraction process?

**Yes**, you can set filters based on keywords, hashtags, location, and other criteria to target specific data.

#### Is the tool user-friendly for **beginners**?

**Yes**, the scraper features an intuitive interface designed for users of all skill levels.

#### How often is the scraper updated?

The scraper is regularly updated to ensure compatibility with Twitter's platform changes.

#### What kind of support is available for users?

We provide comprehensive support and documentation to assist users in making the most of the tool.

#### Can I use the scraper for purposes other than B2B **lead generation**?

**Yes**, the scraper can also be used for recruitment, market research, and social media marketing.

#### Does the tool store the extracted data?

**No**, the tool does not store any data. All extracted information is available for download and remains under your control.

#### Is there a **limit** to the number of profiles I can scrape?

The scraper supports both small-scale and large-scale data extraction, depending on your requirements.

#### How do I start using the Twitter B2B **Email Scraper**?

Simply log in, set your filters, and start the scraping process to extract the data you need.

#### Can I try the scraper before purchasing?

**Yes**, we offer a free trial to help you evaluate the tool's features and performance.

#### Is the Twitter B2B **Email Scraper** **secure**?

**Yes**, the tool is designed with data privacy and security measures to protect your information.

# Actor input Schema

## `keywords` (type: `array`):

List of keywords to search for on Twitter (e.g., \['marketing', 'founder', 'business']). The Actor will search Google for Twitter profiles/posts containing these keywords and extract email addresses.

## `platform` (type: `string`):

Select platform.

## `location` (type: `string`):

Optional: Add location to search query (e.g., 'London', 'New York'). Leave empty to search globally.

## `emailDomains` (type: `array`):

Optional: Filter results to only include emails from specific domains (e.g., \['@gmail.com', '@outlook.com']). Leave empty to collect all email domains.

## `maxEmails` (type: `integer`):

Maximum number of emails to collect per keyword (default: 20).

## `engine` (type: `string`):

Choose scraping engine. 🚀 Cost Effective (New): Uses residential proxies with async requests for faster, cheaper scraping. 🔧 Legacy: Uses GOOGLE\_SERP proxy with traditional selectors - more reliable but slower and more expensive.

## `proxyConfiguration` (type: `object`):

Choose which proxies to use. By default, no proxy is used. If Google rejects or blocks the request, the Actor automatically falls back to a datacenter proxy, then to a residential proxy, with 3 retries.
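
For example, to route requests through Apify residential proxies, you might pass a configuration like the following (this uses the standard Apify proxy-configuration shape; verify the available proxy groups on your account):

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```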

## Actor input object example

```json
{
  "keywords": [
    "marketing"
  ],
  "platform": "Twitter",
  "location": "",
  "emailDomains": [
    "@gmail.com"
  ],
  "maxEmails": 20,
  "engine": "legacy",
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "keywords": [
        "marketing"
    ],
    "emailDomains": [
        "@gmail.com"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scraper-engine/twitter-b2b-email-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "keywords": ["marketing"],
    "emailDomains": ["@gmail.com"],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scraper-engine/twitter-b2b-email-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "keywords": [
    "marketing"
  ],
  "emailDomains": [
    "@gmail.com"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scraper-engine/twitter-b2b-email-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scraper-engine/twitter-b2b-email-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Twitter B2b Email Scraper",
        "description": "Twitter B2B Email Scraper 📧🐦 extracts business emails, profile details, and contact data from Twitter accounts. Ideal for lead generation, outreach, and market research. Fast, scalable, and built for reliable B2B data collection and automation workflows. 🚀📊",
        "version": "0.1",
        "x-build-id": "LZfh76jDsjQJwsQZY"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scraper-engine~twitter-b2b-email-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scraper-engine-twitter-b2b-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scraper-engine~twitter-b2b-email-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scraper-engine-twitter-b2b-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scraper-engine~twitter-b2b-email-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scraper-engine-twitter-b2b-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "keywords"
                ],
                "properties": {
                    "keywords": {
                        "title": "Keywords",
                        "type": "array",
                        "description": "List of keywords to search for on Twitter (e.g., ['marketing', 'founder', 'business']). The actor will search Google for Twitter profiles/posts containing these keywords and extract email addresses.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "platform": {
                        "title": "Platform",
                        "enum": [
                            "Twitter"
                        ],
                        "type": "string",
                        "description": "Select platform.",
                        "default": "Twitter"
                    },
                    "location": {
                        "title": "Location Filter",
                        "type": "string",
                        "description": "Optional: Add location to search query (e.g., 'London', 'New York'). Leave empty to search globally.",
                        "default": ""
                    },
                    "emailDomains": {
                        "title": "Email Domains Filter",
                        "type": "array",
                        "description": "Optional: Filter results to only include emails from specific domains (e.g., ['@gmail.com', '@outlook.com']). Leave empty to collect all email domains.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxEmails": {
                        "title": "Maximum Emails per Keyword",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Maximum number of emails to collect per keyword (default: 20).",
                        "default": 20
                    },
                    "engine": {
                        "title": "Engine",
                        "enum": [
                            "legacy"
                        ],
                        "type": "string",
                        "description": "Choose scraping engine. 🚀 Cost Effective (New): Uses residential proxies with async requests for faster, cheaper scraping. 🔧 Legacy: Uses GOOGLE_SERP proxy with traditional selectors - more reliable but slower and more expensive.",
                        "default": "legacy"
                    },
                    "proxyConfiguration": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Choose which proxies to use. By default, no proxy is used. If Google rejects or blocks the request, the actor will automatically fallback to datacenter proxy, then residential proxy with 3 retries."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
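The `usage` and `usageUsd` objects in the run record give a per-resource cost breakdown, and `usageTotalUsd` is their total. A minimal sketch of reading this breakdown from a run object, assuming the field names above (the sample values here are hypothetical, shaped like the schema's examples):

```javascript
// Sum the per-resource USD charges reported for an Actor run.
// Field names follow the `usageUsd` object in the schema above.
function sumUsageUsd(run) {
  return Object.values(run.usageUsd ?? {}).reduce((total, usd) => total + usd, 0);
}

// Hypothetical sample run object, shaped like the schema's examples.
const run = {
  usageTotalUsd: 0.00005,
  usageUsd: {
    ACTOR_COMPUTE_UNITS: 0,
    KEY_VALUE_STORE_WRITES: 0.00005,
    DATASET_WRITES: 0,
  },
};

console.log(sumUsageUsd(run)); // → 0.00005, matching run.usageTotalUsd
```

In a real integration you would obtain the run object from the API (for example via the JavaScript client's `run()` or `lastRun()` helpers) rather than constructing it by hand; summing `usageUsd` yourself is only useful as a sanity check against `usageTotalUsd`.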
