# Twitter X Posts Scraper (`scrapemesh/twitter-x-posts-scraper`) Actor

🐦 Twitter X Posts Scraper extracts public posts at scale—text, timestamps, likes, reposts, replies, hashtags, mentions, media & links. 🔎 Ideal for social listening, trend tracking, sentiment & competitor analysis. ⚡ Structured, ready-to-use data for marketing & research.

- **URL**: https://apify.com/scrapemesh/twitter-x-posts-scraper.md
- **Developed by:** [ScrapeMesh](https://apify.com/scrapemesh) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded
- **User rating**: No ratings yet

## Pricing

$19.99/month + usage

To use this Actor, you pay a monthly rental fee to the developer. The rent is subtracted from your prepaid usage every month after the free trial period. You also pay for Apify platform usage, which gets cheaper with higher Apify subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#rental-actors

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

A Twitter Posts Scraper helps you extract tweets, media, engagement stats, and author details from public Twitter/X posts at scale. Whether you're using an Apify Twitter post scraper, a Python script, or a GitHub tool, it streamlines Twitter post scraping for research, analytics, and automation—making data collection faster, cleaner, and far more efficient.

### 🌟 What does Twitter Posts Scraper do? 

A Twitter Posts Scraper (also known as a Twitter tweet scraper, X Twitter scraper, or Twitter post scraper API) lets you automatically extract public tweet data from Twitter/X. It captures post text, media, timestamps, engagement metrics, author info, and more—perfect for analysis, reporting, or automation.

This tool is essential for researchers, developers, and marketers who need fast, structured tweet data without manual copying. Whether you rely on Twitter posts scraper GitHub, Twitter scraper Python, or Apify Twitter post scraper, it provides a streamlined way to scrape Twitter posts while maintaining accuracy and scalability.

* * *

### 📦 What Can Twitter Posts Scraper Extract?

Below is an organized table of all the data types your Twitter Posts Scraper can retrieve.

#### 📊 Extractable Tweet Data

| Data Type | Description |
| --- | --- |
| Post Text | Full text of the tweet |
| Post URL | Direct link to the tweet |
| Timestamp | Exact posting time |
| Author Name / Handle | Profile details of the tweet owner |
| Followers / Following | Author’s audience metrics |
| Engagement Stats | Likes, replies, quotes, reposts |
| Media (images/videos/GIFs) | Extracted attachments |
| Hashtags & Mentions | Structured keyword data |
| Views Count | Total post impressions |
| Quoted / Replied Post Data | Contextual conversation info |
| Tagged Users | Any user tagged in the post |
| Profile Image Link | Author’s profile photo |
| Bookmarks | Bookmark count when available |

* * *

#### ✨ Key Features of Twitter Posts Scraper

A modern Twitter Posts Scraper delivers far more than simple text extraction. Below are its standout features:

#### ⭐ Feature Highlights

*   🔍 Full Tweet Extraction — Capture text, media, links, and engagement data from any public tweet.
*   🧵 Quoted & Replied Post Detection — Extract context-rich conversation threads automatically.
*   ⚙️ Bulk Scraping Support — Add thousands of tweet URLs for large-scale research or monitoring.
*   📸 Media-Aware Processing — Works seamlessly with images, GIFs, and videos.
*   💾 Flexible Export — Download results in JSON, CSV, or Excel formats.
*   🔁 Automatic Retry System — Handles timeouts and network errors without losing your progress.
*   🚨 Resurrection for Failed Runs — Continue scraping from the last checkpoint after interruptions.
*   🧩 Integrations-Ready — Works smoothly with APIs, automation workflows, CRMs, and analytics tools.
*   🧪 Developer Friendly — Perfect for Twitter scraper Python, GitHub-based scrapers, or automation apps.
*   🔒 Proxy Support — Helps you stay within rate limits and reduces blocking.
*   🖥️ Multi-Platform Options — Use it via Apify UI, API, browser plugins, or Twitter scraper Chrome extensions.

* * *

### 🛠️ How to Use Twitter Posts Scraper

Here’s a simple, user-friendly step-by-step process to run your scraper.

#### 📘 Step-by-Step Guide

1.  Log in to Apify
    Create a free account or sign in to run the scraper.
2.  Select the Actor
    Search for Twitter Posts Scraper in the Apify Store.
3.  Enter Input Data
    Add your profile URLs, usernames, or tweet URLs to the `startUrls` field.
    Example: `https://x.com/FabrizioRomano/status/1683559267524136962`
4.  Choose Optional Settings
    *   Include original post
    *   Set results limit
    *   Extract media and user tags
    *   Add extended author metadata
5.  Run the Scraper
    Processing begins immediately, extracting tweets and conversation details.
6.  Download Results
    Export results as JSON, Excel, or CSV. Useful for Python scripts, BI tools, and dashboards.

This workflow integrates smoothly with Twitter post scraping, GitHub apps, or API-based pipelines.
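Once you have the exported JSON, converting it to CSV for spreadsheets or BI tools takes only the standard library. This is a minimal sketch; the field names come from the Output Format example further below, and any columns you pick beyond those are up to you.

```python
import csv
import io
import json

# Sample items in the shape shown in the Output Format section below.
items_json = json.dumps([
    {
        "user_posted": "elonmusk",
        "url": "https://x.com/elonmusk/status/1988877569597260072",
        "likes": 1729,
        "replies": 554,
        "reposts": 368,
    },
])

items = json.loads(items_json)
fields = ["user_posted", "url", "likes", "replies", "reposts"]

buf = io.StringIO()
# extrasaction="ignore" skips any fields not listed in `fields`
writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
```

The same approach works on a file downloaded from the dataset UI: load it with `json.load`, then write rows with `csv.DictWriter`.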

* * *

### 🎯 Use Cases

A Twitter Posts Scraper is essential for data-driven decision-making across multiple industries.

#### 🌍 Popular Use Cases

*   📈 Social Media Analytics — Track engagement patterns, hashtags, and influencers.
*   📰 Journalism & Research — Gather factual tweet data for news stories or academic reports.
*   🧭 Trend Monitoring — Observe trending topics, memes, and conversations in real-time.
*   🧲 Lead Generation & Market Research — Analyze audience reactions and sentiments.
*   🤖 AI Training Data — Build supervised datasets for NLP and sentiment models.
*   📊 Competitor Intelligence — Understand posting habits and content strategies.
*   🛡️ Brand Monitoring — Track mentions, replies, and customer feedback.

Using a Twitter tweet scraper or free Twitter scraper solution can significantly improve productivity and data accuracy.

* * *

### 💙 Why Choose Us?

We provide an industry-leading Twitter Posts Scraper, trusted by researchers, analysts, and enterprise teams.

#### 🌟 Top Reasons to Choose Our Tool

*   ⚡ Fast & Scalable — Scrapes thousands of posts quickly and reliably.
*   🧹 Clean Structured Output — Requires zero post-processing.
*   🔐 Safe & Ethical — Designed to respect Twitter/X’s public data ecosystem.
*   💰 Cost Efficient — Low compute usage and high performance.
*   🤝 Excellent Support — Direct access to expert help whenever needed.
*   🧩 Integrations-Ready — Works with APIs, Python scripts, GitHub automation, and BI tools.
*   🧪 Developer Approved — Ideal for twitter scraper GitHub, twitter scraper python, or browser-based workflows.

* * *

### 📈 How Many Results Can You Scrape with Twitter Posts Scraper?

One of the strongest advantages of our Twitter Posts Scraper is its ability to run at scale—whether you need 10 tweets or 100,000 tweets.

#### 🚀 Performance & Scalability Highlights

*   Extract hundreds of tweets per minute, depending on rate limits.
*   Automatically detects and extracts:
    *   text
    *   media
    *   hashtags
    *   impressions
    *   author details
*   Optimized compute consumption (often 0.02–0.03 CU per 100 tweets).
*   Works with bulk lists of tweet URLs, making it ideal for enterprise data collection.
*   Designed to handle multi-threaded scraping with minimal errors.
*   Compatible with tools like twitter scraper Chrome, Python pipelines, and GitHub workflows.

From sentiment analysis to market intelligence, this scraper handles heavy workloads with speed and accuracy.
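The 0.02–0.03 CU per 100 tweets figure quoted above makes it easy to budget a large run. A back-of-envelope sketch (the helper function is illustrative, not part of the Actor):

```python
def estimate_compute_units(tweet_count: int, cu_per_100_tweets: float) -> float:
    """Rough compute-unit estimate from the per-100-tweets rate quoted above."""
    return tweet_count / 100 * cu_per_100_tweets

# A 100,000-tweet run at 0.02–0.03 CU per 100 tweets:
low = estimate_compute_units(100_000, 0.02)
high = estimate_compute_units(100_000, 0.03)
print(f"Estimated usage: {low:.0f}–{high:.0f} CU")
```

Actual consumption varies with proxy use, media extraction, and retries, so treat the estimate as a planning range rather than a guarantee.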

* * *

### ⚖️ Is it Legal to Scrape Twitter Posts?

Scraping public Twitter/X data is generally legal as long as it follows ethical and platform-compliant guidelines.

#### 📚 Compliance & Policy Notes

*   ✔ Allowed to scrape public information only
*   ✔ Must follow Twitter scraping policy and Terms of Service
*   ❌ Avoid scraping private accounts or bypassing protections
*   ✔ Use scrapers responsibly and avoid sending excessive requests
*   ❌ Never misuse scraped data or violate privacy regulations

To answer common questions:

*   “Does Twitter allow scraping?” — Public data scraping is allowed when compliant.
*   “Is Twitter scraping legal?” — Yes, when used ethically and within local laws.
*   “What is Twitter scraping?” — It’s the automated extraction of publicly available Twitter data for research and analysis.

* * *

### 🔧 Input Parameters

#### 🧩 JSON Example
```json
{
  "startUrls": [
    { "url": "https://x.com/elonmusk" },
    { "url": "https://x.com/BarackObama" },
    { "url": "elonmusk" },
    { "url": "@username" }
  ],
  "sortOrder": "recent",
  "maxTweets": 100,
  "maxComments": 0,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

### 📤 Output Format

#### 🧩 JSON Example

```json
[
  {
    "id": "1988877569597260072",
    "url": "https://x.com/elonmusk/status/1988877569597260072",
    "user_posted": "elonmusk",
    "name": "Elon Musk",
    "description": "@tetsuoai Long press on any image to turn it into a video in less than 30 seconds https://t.co/Nsp7Ba0flp",
    "date_posted": "2025-11-13T07:52:18.000Z",
    "likes": 1729,
    "replies": 554,
    "reposts": 368,
    "quotes": 38,
    "views": "1399060",
    "bookmarks": 213,
    "is_verified": true,
    "followers": 229031060,
    "following": 1226,
    "posts_count": 89153,
    "profile_image_link": "https://pbs.twimg.com/profile_images/1983681414370619392/oTT3nm5Z_normal.jpg",
    "biography": "",
    "hashtags": null,
    "tagged_users": ["tetsuoai"],
    "photos": null,
    "videos": [
      "https://video.twimg.com/amplify_video/1988877511368019968/vid/avc1/576x856/34pcJSQSXqqM4JRQ.mp4?tag=23"
    ],
    "quoted_post": {
      "data_posted": null,
      "description": null,
      "post_id": null,
      "profile_id": null,
      "profile_name": null,
      "url": null,
      "videos": null
    },
    "external_url": null,
    "input": {
      "url": "https://x.com/elonmusk/status/1988877569597260072/"
    }
  }
]
```
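One quirk worth noting in the sample above: `views` arrives as a string while the other engagement counts are integers, so convert it before doing arithmetic. A minimal sketch computing an engagement rate from these fields:

```python
# Fields taken from the sample output item above.
item = {
    "likes": 1729,
    "replies": 554,
    "reposts": 368,
    "quotes": 38,
    "views": "1399060",  # note: views is a string in the sample output
}

interactions = item["likes"] + item["replies"] + item["reposts"] + item["quotes"]
views = int(item["views"])  # parse the string before dividing
engagement_rate = interactions / views if views else 0.0
print(f"{engagement_rate:.4%}")  # → 0.1922%
```

Guarding against `views` being zero (or missing) keeps the computation safe for low-visibility posts.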

### ❓ FAQ

#### 1️⃣ What is a Twitter Posts Scraper?

A tool that extracts public tweets, media, and engagement details automatically.

#### 2️⃣ Is Twitter scraping legal?

Yes—public data scraping is allowed when following ethical and policy guidelines.

#### 3️⃣ Can I scrape tweets for free?

Yes—many free Twitter posts scraper options exist, though with limitations.

#### 4️⃣ Do you need coding skills to use it?

No. You can use an online scraper or Apify interface without coding.

#### 5️⃣ Can this work with Python?

Absolutely—ideal for twitter scraper python automation.

#### 6️⃣ Can I extract images and videos?

Yes, all media is included in the output.

#### 7️⃣ Does this tool support bulk scraping?

Yes—you can submit large batches of tweet URLs in a single run.

#### 8️⃣ Is it safe to use?

Yes, when complying with Twitter scraping policy and laws.

### 🎉 Conclusion

A powerful Twitter Posts Scraper simplifies the entire process of collecting tweets, analyzing engagement, and building datasets—all while following Twitter scraping policy guidelines. Whether you prefer online tools, Python scripts, APIs, or GitHub scrapers, it provides a reliable, scalable, and compliant way to extract high-quality Twitter data for research, marketing, and business insights.

# Actor input Schema

## `startUrls` (type: `array`):

What to scrape — add one value per line ✍️

✅ Examples:
• Profile: `https://x.com/username` or `https://twitter.com/username`
• Username: `username` or `@username`
• Keyword / search-style input (as supported by the actor)

📌 Tip: use clear handles or full profile URLs for best results.

## `sortOrder` (type: `string`):

How to order scraped tweets before saving 📊

• `recent` — newest first 🕐
• `oldest` — oldest first 📅
• `popular` — most liked first ❤️

## `maxTweets` (type: `integer`):

Upper limit of tweets to collect per profile / input 🎯

⏱️ Higher values take longer.
📎 Allowed range matches this input schema (see min/max below).

## `proxyConfiguration` (type: `object`):

Apify proxy settings for Twitter / X 🌐

🔓 Default: no Apify proxy.
🔄 The actor can step through fallback proxies if requests fail (datacenter, then residential retries).

💡 Enable when you see blocks, timeouts, or empty results.

## Actor input object example

```json
{
  "startUrls": [
    "elonmusk"
  ],
  "sortOrder": "recent",
  "maxTweets": 10,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```
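Before submitting a run, it can help to check the input object against the constraints listed in the schema above (`startUrls` required, `sortOrder` one of `recent`/`popular`/`oldest`, `maxTweets` between 1 and 100 with a default of 10). The helper below is an illustrative pre-flight check, not part of the Actor:

```python
VALID_SORT_ORDERS = {"recent", "popular", "oldest"}

def validate_input(actor_input: dict) -> dict:
    """Check an input object against the schema constraints and apply defaults."""
    if not actor_input.get("startUrls"):
        raise ValueError("startUrls is required and must be a non-empty array")
    sort_order = actor_input.get("sortOrder", "recent")
    if sort_order not in VALID_SORT_ORDERS:
        raise ValueError(f"sortOrder must be one of {sorted(VALID_SORT_ORDERS)}")
    max_tweets = actor_input.get("maxTweets", 10)
    if not 1 <= max_tweets <= 100:
        raise ValueError("maxTweets must be between 1 and 100")
    return {**actor_input, "sortOrder": sort_order, "maxTweets": max_tweets}

checked = validate_input({"startUrls": ["elonmusk"]})
print(checked)  # defaults applied: sortOrder "recent", maxTweets 10
```

Catching bad input locally avoids paying for a run that the platform would reject or that would return no results.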

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        "elonmusk"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scrapemesh/twitter-x-posts-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": ["elonmusk"],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scrapemesh/twitter-x-posts-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    "elonmusk"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scrapemesh/twitter-x-posts-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scrapemesh/twitter-x-posts-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Twitter X Posts Scraper",
        "description": "🐦 Twitter X Posts Scraper extracts public posts at scale—text, timestamps, likes, reposts, replies, hashtags, mentions, media & links. 🔎 Ideal for social listening, trend tracking, sentiment & competitor analysis. ⚡ Structured, ready-to-use data for marketing & research.",
        "version": "0.1",
        "x-build-id": "zGESvdKJxDrW36hXb"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scrapemesh~twitter-x-posts-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scrapemesh-twitter-x-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scrapemesh~twitter-x-posts-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scrapemesh-twitter-x-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scrapemesh~twitter-x-posts-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scrapemesh-twitter-x-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "startUrls"
                ],
                "properties": {
                    "startUrls": {
                        "title": "🔗 Twitter URLs, Usernames, or Keywords",
                        "type": "array",
                        "description": "What to scrape — add one value per line ✍️\n\n✅ Examples:\n• Profile: `https://x.com/username` or `https://twitter.com/username`\n• Username: `username` or `@username`\n• Keyword / search-style input (as supported by the actor)\n\n📌 Tip: use clear handles or full profile URLs for best results.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "sortOrder": {
                        "title": "📊 Sort Order",
                        "enum": [
                            "recent",
                            "popular",
                            "oldest"
                        ],
                        "type": "string",
                        "description": "How to order scraped tweets before saving 📊\n\n• `recent` — newest first 🕐\n• `oldest` — oldest first 📅\n• `popular` — most liked first ❤️",
                        "default": "recent"
                    },
                    "maxTweets": {
                        "title": "🎯 Max Tweets per User",
                        "minimum": 1,
                        "maximum": 100,
                        "type": "integer",
                        "description": "Upper limit of tweets to collect per profile / input 🎯\n\n⏱️ Higher values take longer.\n📎 Allowed range matches this input schema (see min/max below).",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "🛡️ Proxy Configuration",
                        "type": "object",
                        "description": "Apify proxy settings for Twitter / X 🌐\n\n🔓 Default: no Apify proxy.\n🔄 The actor can step through fallback proxies if requests fail (datacenter, then residential retries).\n\n💡 Enable when you see blocks, timeouts, or empty results."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
