# Twitter B2b Email Scraper (`scrapier/twitter-b2b-email-scraper`) Actor

- **URL**: https://apify.com/scrapier/twitter-b2b-email-scraper.md
- **Developed by:** [Scrapier](https://apify.com/scrapier) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

$24.99/month + usage

To use this Actor, you pay a monthly rental fee to the developer. After the free trial period, the rent is subtracted from your prepaid usage every month. You also pay for Apify platform usage, which gets cheaper with higher Apify subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#rental-actors

## What's an Apify Actor?

Actors are software tools running on the Apify platform, used for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash

# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### **Twitter** Email Scraper 📱

**Twitter** B2B Email Scraper is a tool for **extract**ing email addresses and other essential **contact** information from public **Twitter** profiles, including user bio details and other **data** points relevant to B2B marketing.

The tool gathers publicly available information efficiently and accurately. With this **data**, businesses can build tailored outreach campaigns, streamline their lead generation process, and enhance their B2B marketing efforts.

By focusing on accurate, up-to-date, and actionable **data**, the scraper helps companies automate their **Twitter** lead generation strategies and leverage **Twitter** for B2B networking and marketing.

### Support and feedback

- **Bug reports**: Open a ticket in the repository Issues section
- **Custom features**: Contact our enterprise support team
  *Email: scrapier.io@gmail.com*

### Extractable Data Table 📊
| Data Type | Description |
| --- | --- |
| Email addresses | Extract publicly available email addresses from Twitter profiles for B2B outreach. |
| Usernames | Retrieve Twitter usernames to identify and engage with potential leads. |
| Profile bios | Capture bio information to understand user interests and professional details. |
| Follower counts | Access follower statistics to assess the influence of potential leads. |
| Location data | Extract location details provided in profiles for geographic targeting. |
| Website links | Collect website URLs shared in Twitter bios for additional contact opportunities. |
| Profile images | Download profile images for visual identification of users. |
| Tweet content | Scrape recent tweets to analyze user activity and interests. |

### Key Features of **Twitter** Email Scraper

Here are the **standout features** that make the **Twitter** Email Scraper a **top-tier tool** for **marketers**, **agencies**, and **researchers**:

- ⭐ Automates the extraction of email addresses from public **Twitter** profiles
- ⭐ Supports bulk data scraping for efficient lead generation at scale
- ⭐ Provides accurate and up-to-date contact information for B2B marketing campaigns
- ⭐ Includes filters to target specific industries, locations, or user demographics
- ⭐ Ensures compliance with **Twitter**'s terms of service and ethical data usage practices
- ⭐ Offers customizable scraping parameters for tailored data collection
- ⭐ Integrates seamlessly with CRM tools and marketing platforms
- ⭐ Features a user-friendly interface for easy operation by non-technical users
- ⭐ Delivers fast and reliable performance for time-sensitive projects
- ⭐ Includes robust data export options in formats like CSV and JSON
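
To illustrate the export options above, here is a minimal Python sketch that converts JSON dataset items into a CSV file. The sample records are made up; the field names follow the example output shown later in this README:

```python
import csv

# Sample items in the shape this Actor's JSON export uses (illustrative data).
items = [
    {
        "network": "Twitter",
        "keyword": "Twitter B2B Email Scraper",
        "title": "Example profile",
        "url": "https://twitter.com/example",
        "email": "contact@example.com",
    },
]

# Write the same records as CSV, one column per output field.
with open("emails.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["network", "keyword", "title", "url", "email"]
    )
    writer.writeheader()
    writer.writerows(items)
```

The same conversion is available directly from the Apify platform's dataset export UI; this snippet is only useful when post-processing downloaded JSON locally.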

### How to use **Twitter** Email Scraper 🚀

Follow this **simple, step-by-step guide** to start extracting **Twitter** emails today:

1. ✅ **Sign up** for an account on the **Twitter** B2B Email Scraper platform
2. ✅ Log in to the dashboard and navigate to the scraping tool section
3. ✅ Enter the target keywords, hashtags, or profile URLs for data extraction
4. ✅ Set your scraping parameters, such as location, industry, or follower count
5. ✅ **Start** the scraping process and monitor its progress in real-time
6. ✅ Once completed, review the extracted data in the results section
7. ✅ **Export** the data to your preferred format such as CSV or JSON
8. ✅ **Integrate** the exported data into your CRM or marketing tools for further use
9. ✅ Refine your scraping criteria and repeat the process as needed
10. ✅ Contact customer support for assistance with advanced features or troubleshooting

### Use Cases 🎯

B2B Lead Generation
🎯 **Identify** potential clients by extracting email addresses from **Twitter** profiles
🎯 Build a database of leads for targeted outreach campaigns

Market Research
🎯 **Analyze** user bios and tweets to understand industry trends
🎯 Gather data on competitors and their follower demographics

Networking Opportunities
🎯 Discover professionals in your industry for collaboration
🎯 **Use** location-based filters to find local business contacts

Content Marketing
🎯 **Identify** influencers and thought leaders in your niche
🎯 Extract tweet content for sentiment analysis and content ideas

### Why choose us? 💎

**Twitter** B2B Email Scraper stands out as a **reliable** and efficient tool for extracting valuable data from **Twitter** profiles. Our software is designed with **user-friendly** features that cater to both technical and non-technical users.

By leveraging **advanced** scraping technology, we ensure accurate and up-to-date data for your B2B marketing efforts. Our tool supports bulk data extraction, saving you time and effort in gathering leads.

We prioritize compliance with **Twitter**'s terms of service and ethical data usage practices, giving you peace of mind. Additionally, our platform offers seamless integration with CRM tools, making it easy to manage and utilize the extracted data.

Whether you're a marketer, sales professional, or researcher, our scraper is tailored to meet your specific needs. Choose **Twitter** B2B Email Scraper for a **reliable**, **scalable**, and efficient solution to your lead generation challenges.

### **Twitter** Email Scraper Scalability 📈

**Twitter** B2B Email Scraper is built to handle data extraction at scale, making it suitable for businesses of all sizes. Whether you're targeting a small niche audience or a broad market, our tool can accommodate your needs.

The software supports bulk scraping, allowing you to gather large volumes of data quickly and **efficient**ly. Our **advanced** algorithms ensure that the extracted data remains accurate and relevant, even when scaling up operations.

Additionally, the platform is designed to handle high-demand tasks without compromising performance. With flexible scraping parameters, you can customize the tool to meet your specific requirements.

This scalability makes **Twitter** B2B Email Scraper an ideal choice for growing businesses and enterprises looking to expand their reach.

### **Twitter** Email Scraper Legal Guidelines ⚖️

Scraping **publicly available** **Twitter** data is generally **legal** as long as you follow **ethical** and **compliant** practices. The **Twitter** Email Scraper extracts only **publicly available** information from **public** **Twitter** profiles, which helps keep its use **compliant** for **research**, **marketing**, and **analysis**.

#### Legal & Ethical Guidelines
⚖️ **Ensure** compliance with **Twitter**'s terms of service when using the scraper
⚖️ **Only** extract data that is publicly available on **Twitter** profiles
⚖️ **Avoid** scraping sensitive or private information without user consent
⚖️ **Use** the extracted data solely for legitimate business purposes
⚖️ **Do not** share or sell the extracted data to unauthorized third parties
⚖️ Follow all applicable data protection and privacy laws in your jurisdiction
⚖️ Inform users about your data collection practices if required by law
⚖️ Regularly review and update your scraping practices to maintain compliance

### Input Parameters 🧩
📦 Example Input (JSON)
```json
{
  "keywords": ["Twitter B2B Email Scraper"],
  "location": "",
  "maxEmails": 20,
  "platform": "Twitter",
  "engine": "legacy"
}
```

### Input Table

| Parameter | Description |
| --- | --- |
| keywords | Keywords to find relevant profiles (required) |
| platform | Platform to scrape (Twitter) |
| location | Optional location filter; leave empty to search globally |
| emailDomains | Optional filter for specific email domains |
| maxEmails | Maximum emails to collect per keyword (default 20) |
| proxyConfiguration | Optional proxy settings |
| engine | Engine type (legacy) |

### Output Format 📤

📝 Example Output (JSON)

```json
[
  {
    "network": "Twitter",
    "keyword": "Twitter B2B Email Scraper",
    "title": "Google's Single-Benefit Marketing Strategy for Chrome ...",
    "description": "✓For years, once we created a Gmail account, we couldn't change the username (the part before @ gmail.com ). ... Grand Rapids Marketing Co. Read more",
    "url": "https://www.linkedin.com/posts/phill-agnew_heres-how-google-marketed-chrome-browser-activity-7404878510214914048-dLxI",
    "email": "before@gmail.com"
  }
]
```

### Output Table

| Field | Description |
| --- | --- |
| network | Identifies Twitter as the source |
| keyword | Keyword that triggered the result (Twitter B2B Email Scraper) |
| title | Profile title or username |
| description | Public bio snippet with contact info |
| url | Direct Twitter profile link |
| email | Extracted email address |

### FAQ ❓

#### What is Twitter B2B **Email Scraper**?

Twitter B2B Email Scraper is a tool designed to extract email addresses and other data from public Twitter profiles for B2B marketing.

#### How does the scraper work?

The scraper uses advanced algorithms to collect **publicly available** data from Twitter profiles based on your specified criteria.

#### Is the tool compliant with Twitter's terms of service?

**Yes**, the scraper is designed to ensure **compliance** with Twitter's terms of service and ethical data usage practices.

#### What data can I **extract** using this tool?

You can extract email addresses, usernames, profile bios, follower counts, location data, website links, and more.

#### Is the **extract**ed data accurate?

**Yes**, the tool ensures that the extracted data is accurate and up-to-date based on **publicly available** information.

#### Can I use the tool for bulk data scraping?

**Yes**, the scraper supports bulk data extraction for efficient lead generation at scale.

#### What **export** formats are available?

The extracted data can be exported in formats such as **CSV** and **JSON** for easy integration with other tools.

#### Do I need technical skills to use the scraper?

**No**, the tool is designed with a **user-friendly** interface that requires no technical expertise.

#### Is **customer support** available?

**Yes**, our customer support team is available to assist you with any issues or questions.

#### Can I customize the scraping parameters?

**Yes**, the tool allows you to customize parameters such as keywords, locations, and industries for tailored data collection.

#### What industries can benefit from this tool?

Any industry that relies on B2B marketing and lead generation can benefit from the scraper.

#### Is the tool suitable for small businesses?

**Yes**, the scraper is scalable and suitable for **businesses** of all sizes, including small **businesses**.

#### How frequently is the data updated?

The extracted data is updated in real-time based on the latest **publicly available** information on Twitter.

#### Can I scrape **private** Twitter profiles?

**No**, the tool only extracts data from public Twitter profiles to ensure **compliance** with privacy laws.

#### What happens if I encounter issues with the scraper?

You can contact our support team for troubleshooting and assistance with any issues you encounter.

# Actor input Schema

## `keywords` (type: `array`):

List of keywords to search for on Twitter (e.g., `['marketing', 'founder', 'business']`). The Actor will search Google for Twitter profiles/posts containing these keywords and extract email addresses.

## `platform` (type: `string`):

Select platform.

## `location` (type: `string`):

Optional: Add location to search query (e.g., 'London', 'New York'). Leave empty to search globally.

## `emailDomains` (type: `array`):

Optional: Filter results to only include emails from specific domains (e.g., `['@gmail.com', '@outlook.com']`). Leave empty to collect all email domains.

## `maxEmails` (type: `integer`):

Maximum number of emails to collect per keyword (default: 20).

## `engine` (type: `string`):

Choose the scraping engine. 🚀 Cost Effective (New): uses residential proxies with async requests for faster, cheaper scraping. 🔧 Legacy: uses the `GOOGLE_SERP` proxy with traditional selectors; more reliable but slower and more expensive.

## `proxyConfiguration` (type: `object`):

Choose which proxies to use. By default, no proxy is used. If Google rejects or blocks the request, the Actor will automatically fall back to a datacenter proxy, then a residential proxy, with 3 retries.
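
For example, to route requests through Apify Proxy from the start rather than relying on the automatic fallback, you can pass a proxy configuration object like the following. The field names are the standard Apify proxy-configuration fields; whether this particular Actor honors proxy groups is an assumption:

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```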

## Actor input object example

```json
{
  "keywords": [
    "marketing"
  ],
  "platform": "Twitter",
  "location": "",
  "emailDomains": [
    "@gmail.com"
  ],
  "maxEmails": 20,
  "engine": "legacy",
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "keywords": [
        "marketing"
    ],
    "emailDomains": [
        "@gmail.com"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapier/twitter-b2b-email-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "keywords": ["marketing"],
    "emailDomains": ["@gmail.com"],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scrapier/twitter-b2b-email-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "keywords": [
    "marketing"
  ],
  "emailDomains": [
    "@gmail.com"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scrapier/twitter-b2b-email-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapier/twitter-b2b-email-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Twitter B2b Email Scraper",
        "version": "0.1",
        "x-build-id": "eHp7xrgmrlaoWGT7U"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapier~twitter-b2b-email-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapier-twitter-b2b-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapier~twitter-b2b-email-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scrapier-twitter-b2b-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapier~twitter-b2b-email-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scrapier-twitter-b2b-email-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "keywords"
                ],
                "properties": {
                    "keywords": {
                        "title": "Keywords",
                        "type": "array",
                        "description": "List of keywords to search for on Twitter (e.g., ['marketing', 'founder', 'business']). The actor will search Google for Twitter profiles/posts containing these keywords and extract email addresses.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "platform": {
                        "title": "Platform",
                        "enum": [
                            "Twitter"
                        ],
                        "type": "string",
                        "description": "Select platform.",
                        "default": "Twitter"
                    },
                    "location": {
                        "title": "Location Filter",
                        "type": "string",
                        "description": "Optional: Add location to search query (e.g., 'London', 'New York'). Leave empty to search globally.",
                        "default": ""
                    },
                    "emailDomains": {
                        "title": "Email Domains Filter",
                        "type": "array",
                        "description": "Optional: Filter results to only include emails from specific domains (e.g., ['@gmail.com', '@outlook.com']). Leave empty to collect all email domains.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxEmails": {
                        "title": "Maximum Emails per Keyword",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Maximum number of emails to collect per keyword (default: 20).",
                        "default": 20
                    },
                    "engine": {
                        "title": "Engine",
                        "enum": [
                            "legacy"
                        ],
                        "type": "string",
                        "description": "Choose scraping engine. 🚀 Cost Effective (New): Uses residential proxies with async requests for faster, cheaper scraping. 🔧 Legacy: Uses GOOGLE_SERP proxy with traditional selectors - more reliable but slower and more expensive.",
                        "default": "legacy"
                    },
                    "proxyConfiguration": {
                        "title": "Proxy Configuration",
                        "type": "object",
                        "description": "Choose which proxies to use. By default, no proxy is used. If Google rejects or blocks the request, the actor will automatically fallback to datacenter proxy, then residential proxy with 3 retries."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
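The schema above describes the run object returned by the Apify API, including the `usage` counters, the `usageTotalUsd` total, and the default storage IDs. As a minimal sketch of how those fields might be consumed, the snippet below picks the billing-related values out of a run object shaped like this schema. The sample dict is illustrative only, not real API output; in practice you would obtain the run object from the Apify client (for example via `client.run(run_id).get()` in the Python `apify-client`).

```python
# Illustrative run object matching the schema above (values are placeholders).
run = {
    "defaultDatasetId": "abc123",
    "usageTotalUsd": 0.00005,
    "usage": {
        "ACTOR_COMPUTE_UNITS": 0,
        "KEY_VALUE_STORE_WRITES": 1,
        "DATASET_WRITES": 0,
    },
}

# Keep only the usage counters the run actually consumed (non-zero values).
consumed = {name: count for name, count in run["usage"].items() if count}

print(f"Run cost: ${run['usageTotalUsd']}, consumed: {consumed}")
```

Summing or filtering the `usage` map like this is a convenient way to see which platform resources a run was billed for before drilling into `usageUsd` for the per-resource dollar breakdown.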
