# Alibaba Scraper (`scraperx/alibaba-scraper`) Actor


- **URL**: https://apify.com/scraperx/alibaba-scraper.md
- **Developed by:** [ScraperX](https://apify.com/scraperx) (community)
- **Categories:** Automation
- **Stats:** 2 total users, 1 monthly user, 100% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $5.00 / 1,000 results

This Actor is paid per event and usage. You are charged both the fixed price for specific events and for Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform and cover all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
Note that "Actor" is written with a capital "A".

## How to integrate an Actor?

This section helps developers integrate Actors into their projects.
Adapt these options to your stack to deliver integrations that are safe, well-documented, and production-ready.
The recommended ways to integrate Actors are as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
```

```powershell
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### Alibaba Scraper

Alibaba Scraper is a fast, reliable Alibaba product scraper that collects structured product listings from public trade/search pages. It reads Alibaba’s embedded `window.__page__data_sse*._offer_list` JSON to extract titles, prices, supplier info, images, and more, making it ideal for teams that need an Alibaba data extractor to scale cataloging, price checks, and supplier research. Built for marketers, data analysts, and researchers, this Alibaba scraping tool handles bulk URLs and streams results to a live dataset so you can scrape Alibaba product listings efficiently at scale.

### What data / output can you get?

This Actor outputs clean, structured product rows to your dataset as each page completes. The fields come directly from Alibaba’s offer-list JSON and include product, pricing, and supplier metadata.

| Data type | Description | Example value |
| --- | --- | --- |
| title | Product title shown in results | "Men’s Waterproof Hiking Jacket" |
| productUrl | Direct link to the product page | "https://www.alibaba.com/product-detail/1600981574830.html" |
| price | Displayed price or price range | "$12.50 - $18.90" |
| companyName | Supplier/company name | "Guangzhou Outdoor Gear Co., Ltd." |
| countryCode | Supplier country code | "CN" |
| mainImage | Main result image URL | "https://s.alicdn.com/abc/main.jpg" |
| multiImage | Array of additional image URLs | ["https://s.alicdn.com/abc/1.jpg","https://s.alicdn.com/abc/2.jpg"] |
| reviewScore | Average rating score | "4.8" |
| reviewCount | Number of reviews | "125" |
| soldOrder | Orders sold count | "560" |
| badges | Seller or listing badges | ["Trade Assurance","Verified"] |
| loopSellingPoints | Selling point highlights | ["Waterproof","Breathable"] |

You can export results to JSON, CSV, or Excel directly from the Dataset tab in Apify.

### Key features

Get dependable Alibaba product data extraction without manual copy-paste. Results stream into your dataset in near real-time, with smart request handling for consistency.

| Feature | Description |
| --- | --- |
| 🚀 Bulk URL processing | Add many Alibaba search or category URLs and process them in one run. |
| 📈 Max Items control | Set how many result pages to open per URL via maxItems (1–5000). Legacy maxPages is supported. |
| 📡 Live dataset streaming | Products are pushed to the dataset as soon as each page finishes loading. |
| 🧭 Browser-like requests | Uses curl_cffi with Chrome impersonation for stable, browser-like HTTP behavior. |
| 🛡 Optional Apify Proxy | Enable proxyConfiguration.useApifyProxy for a managed fallback when needed. |
| 🧩 Robust JSON parsing | Reads the embedded `window.__page__data_sse*._offer_list` JSON for structured, consistent results. |
| ⚙️ Sensible concurrency | Runs multiple page loads (up to 8 in parallel) for balanced speed and stability. |
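To illustrate the parsing approach the feature table describes, here is a rough sketch of pulling the embedded offer-list JSON out of a page's HTML. The sample HTML, the numeric variable suffix, and the inner `offerList` key are invented for this example, since the real payload shape isn't documented here.

```python
import json
import re

# Illustrative only: the sample HTML and the inner "offerList" key are
# invented for this sketch; real Alibaba pages may differ.
SAMPLE_HTML = """
<script>
window.__page__data_sse10 = {"_offer_list": {"offerList": [{"title": "Men's Waterproof Hiking Jacket", "price": "$12.50 - $18.90"}]}};
</script>
"""

def extract_offer_list(html: str) -> list:
    # Find `window.__page__data_sse<suffix> = {...};` and parse the JSON object.
    match = re.search(r"window\.__page__data_sse\w*\s*=\s*(\{.*?\})\s*;", html, re.DOTALL)
    if not match:
        return []
    data = json.loads(match.group(1))
    return data.get("_offer_list", {}).get("offerList", [])

for offer in extract_offer_list(SAMPLE_HTML):
    print(offer["title"], "->", offer["price"])
```

Reading the embedded JSON directly, rather than scraping rendered HTML, is what makes the output fields consistent across pages.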

### How to use Alibaba Scraper - step by step

1. Sign in to Apify and open the Alibaba Scraper Actor.
2. Paste one or more Alibaba search or category URLs into the Page URLs (bulk) field (string list). You can add plain strings or objects with a url key.
3. Set Max Items to control how many result pages to open per URL (default 10, up to 5000). Start small (e.g., 1–3) to preview.
4. Optionally open Proxy configuration and enable useApifyProxy if your network conditions require a managed proxy fallback.
5. Click Start. The run log will show progress; items are pushed to the dataset live as each page completes.
6. Monitor the Dataset tab to see products populate in real time.
7. Export your results in JSON, CSV, or Excel for analysis or downstream use.

Pro Tip: Need to automate at scale? Trigger this actor programmatically and pipe the dataset to your analytics stack for ongoing price monitoring or catalog updates using your preferred workflow tools.

### Use cases

| Use case name | Description |
| --- | --- |
| Supplier discovery for sourcing | Aggregate suppliers across multiple Alibaba searches to shortlist vendors faster with companyName, countryCode, and badges. |
| Price monitoring for category trends | Track price ranges (price) across pages to spot shifts, compare listings, and inform purchasing decisions. |
| Product catalog building for e‑commerce | Collect titles, images, and product links (title, mainImage, productUrl) to seed catalogs and product research. |
| Market research & benchmarking | Analyze reviewScore, reviewCount, and soldOrder to benchmark demand and quality across similar products. |
| Bulk data ingestion for analytics | Use bulk URLs and live dataset streaming to build datasets for dashboards and BI without manual scraping. |
| Data enrichment for internal tools | Combine productUrl and supplierHref with your systems for enrichment or lead qualification workflows. |
| Academic or non‑profit research | Export structured Alibaba product data for studies on market availability, pricing, and supply chains. |

### Why choose Alibaba Scraper?

Built for precision, automation, and reliability — this Alibaba catalog scraper focuses on structured output and smooth bulk runs.

- ✅ Accurate, structured output from Alibaba’s embedded `_offer_list` JSON
- 🌍 Works on public pages without login, ideal for global research
- ⚡ Scales across many URLs with live result streaming to your dataset
- 💻 Developer-friendly: predictable fields for easy pipelines and data joins
- 🔒 Ethical-by-design: targets public listing data only
- 💰 Cost visibility: pay-per-result via charged event row_result
- 🔗 Easy exports: download JSON, CSV, or Excel from the dataset

In short, compared with unstable browser extensions, it’s a dependable Alibaba product scraper purpose-built for consistent data extraction and automation.

### Is it legal / ethical to use Alibaba Scraper?

Yes, when used responsibly. This Actor collects data from publicly visible Alibaba listing pages only and does not access private or authenticated content.

Guidelines for compliant use:
- Scrape public information only and avoid personal data.
- Respect platform terms and applicable laws (e.g., GDPR, CCPA).
- Use proxy settings responsibly if required by your environment.
- Validate your use case with your legal team for edge cases.

### Input parameters & output format

#### Example JSON input
```json
{
  "urls": [
    { "url": "https://www.alibaba.com/trade/search?keywords=jacket&page=1" },
    "https://www.alibaba.com/trade/search?keywords=backpack&page=1"
  ],
  "maxItems": 3,
  "proxyConfiguration": { "useApifyProxy": false }
}
```

- Plain strings in `urls` are supported, as well as objects with a `url` property.
- `maxItems` controls how many result pages per URL are fetched (alias: `maxPages`).
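A minimal sketch of how mixed `urls` entries could be flattened to plain URL strings before processing. The helper name is ours, not part of the Actor.

```python
# Hypothetical normalizer mirroring the accepted input shapes: plain URL
# strings and objects with a "url" key are both flattened to strings.
def normalize_urls(entries: list) -> list:
    urls = []
    for entry in entries:
        if isinstance(entry, str):
            urls.append(entry)
        elif isinstance(entry, dict) and "url" in entry:
            urls.append(entry["url"])
        else:
            raise ValueError(f"Unsupported urls entry: {entry!r}")
    return urls

mixed = [
    {"url": "https://www.alibaba.com/trade/search?keywords=jacket&page=1"},
    "https://www.alibaba.com/trade/search?keywords=backpack&page=1",
]
print(normalize_urls(mixed))
```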

#### Parameters

| Field | Required | Description |
| --- | --- | --- |
| urls | Yes | List of Alibaba URLs to process. Accepts plain strings or objects like { "url": "https://..." }. One URL per line. |
| maxItems | No | How many result pages to open per URL. Default 10, min 1, max 5000. Legacy alias maxPages is also accepted. |
| proxyConfiguration | No | Optional Apify proxy settings. Set useApifyProxy to true to enable managed proxy fallback if needed. |
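The documented default, range, and legacy alias for `maxItems` can be expressed as a small hypothetical resolver (the function is ours, for illustration only):

```python
# Hypothetical resolver for the documented behavior: default 10, clamped
# to the 1-5000 range, with the legacy "maxPages" alias as a fallback.
def resolve_max_items(actor_input: dict) -> int:
    value = actor_input.get("maxItems", actor_input.get("maxPages", 10))
    return max(1, min(5000, int(value)))

print(resolve_max_items({}))                  # default -> 10
print(resolve_max_items({"maxPages": 9999}))  # legacy alias, clamped -> 5000
print(resolve_max_items({"maxItems": 0}))     # clamped -> 1
```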

#### Example JSON output

```json
[
  {
    "badges": ["Trade Assurance", "Verified"],
    "certifications": ["CE"],
    "chatToken": "",
    "companyId": "1234567890",
    "companyLogo": "https://s.alicdn.com/logo/comp.png",
    "companyName": "Guangzhou Outdoor Gear Co., Ltd.",
    "contactSupplier": "https://www.alibaba.com/contact/supplier/abc",
    "countryCode": "CN",
    "customGroup": "",
    "displayStarLevel": "5",
    "eurl": "https://www.alibaba.com/abc/eurl",
    "goldSupplierYears": "6",
    "id": "offer_1600981574830",
    "isShowAd": false,
    "loopSellingPoints": ["Waterproof", "Breathable"],
    "lyb": false,
    "mainImage": "https://s.alicdn.com/images/main.jpg",
    "moq": "2 pieces",
    "moqV2": "2",
    "multiImage": [
      "https://s.alicdn.com/images/1.jpg",
      "https://s.alicdn.com/images/2.jpg"
    ],
    "pcLoopSellingPoints": ["In stock", "Fast dispatch"],
    "price": "$12.50 - $18.90",
    "productId": "1600981574830",
    "productScore": "92",
    "productUrl": "https://www.alibaba.com/product-detail/1600981574830.html",
    "reviewCount": "125",
    "reviewScore": "4.8",
    "shippingScore": "A",
    "showAddToCart": false,
    "showCrown": false,
    "soldOrder": "560",
    "supplierHomeHref": "https://guangzhou-gear.en.alibaba.com",
    "supplierHref": "https://www.alibaba.com/supplier/abc",
    "supplierService": "On-time delivery",
    "supplierServiceScore": "A",
    "title": "Men’s Waterproof Hiking Jacket",
    "tmlid": "",
    "trackInfo": ""
  }
]
```

Each dataset item corresponds to a single product result. Fields may be empty when not present on the page; arrays default to `[]` and booleans to `false`.
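For downstream processing, here is a hedged example of turning the `price` string into numeric bounds. It assumes plain `$min - $max` or single-value formatting, as in the sample output above; thousands separators and non-USD formats are not handled.

```python
import re

# Hedged helper: assumes prices look like "$12.50 - $18.90" or "$7.99";
# thousands separators and non-USD formats are not handled.
def parse_price_range(price: str):
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", price or "")]
    if not numbers:
        return None, None
    return min(numbers), max(numbers)

print(parse_price_range("$12.50 - $18.90"))  # (12.5, 18.9)
print(parse_price_range("$7.99"))            # (7.99, 7.99)
```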

### FAQ

#### Do I need to log in or add cookies to scrape?

No. The Actor fetches public Alibaba listing pages without login. It uses browser-like HTTP requests with Chrome impersonation to read the embedded offer-list JSON.

#### How much does it cost to run?

This Actor uses pay-per-event pricing. You’re charged a small start fee per run plus $0.005 per result (charged event `row_result`). See the run details on Apify for your exact totals.
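A quick lower-bound estimate of the per-result charge. The fixed start fee and Apify platform usage are deliberately excluded, since their exact amounts aren't listed here.

```python
# Lower-bound estimate using only the $0.005 per-result charge; the fixed
# start fee and Apify platform usage are not included.
PRICE_PER_RESULT_USD = 0.005

def estimated_result_cost(num_results: int) -> float:
    return round(num_results * PRICE_PER_RESULT_USD, 2)

print(estimated_result_cost(1000))  # 5.0
print(estimated_result_cost(200))   # 1.0
```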

#### How many pages can I scrape per URL?

You can set `maxItems` anywhere from 1 to 5000 pages per URL. Start low to validate output, then scale as needed. The legacy alias `maxPages` is also supported.

#### Can I run it on many URLs at once?

Yes. Paste multiple Alibaba search or category URLs into the `urls` input. The Actor processes them with sensible parallelism and pushes results to the dataset live.

#### What fields does it extract?

It captures product and supplier metadata from Alibaba’s `_offer_list` JSON, including title, productUrl, price, companyName, countryCode, mainImage, multiImage, reviewScore, reviewCount, soldOrder, badges, and more, as shown in the output example.

#### Does it work as an Alibaba scraper Python integration?

Yes. You can trigger this Apify Actor via the Apify API from Python or any other language, then download the dataset as JSON, CSV, or Excel for downstream processing.

#### Is this an Alibaba scraper Chrome extension?

No. This is a cloud-based Apify Actor. It runs headless on Apify's servers, streams results to a dataset, and avoids the instability of manual browser extensions.

#### Can I use proxies?

Yes. Proxies are optional. Set proxyConfiguration.useApifyProxy to true to enable Apify’s managed proxy fallback when needed for more stable access.

### Closing CTA / Final thoughts

Alibaba Scraper is built for fast, structured Alibaba product data extraction at scale. With bulk URL input, maxItems control up to 5000 pages per URL, and live dataset streaming, it’s ideal for marketers, researchers, and analysts who need reliable cataloging, price tracking, and supplier discovery. Developers can integrate runs and exports into pipelines programmatically for automation. Start extracting cleaner, structured product data — and turn Alibaba’s public listings into actionable insights.

# Actor input Schema

## `urls` (type: `array`):

**Add every Alibaba page you want to collect from** — one URL per line. 📝

✅ **Works well:** search results, category browse pages, and similar listing views you open in a normal browser.

🧭 **Tip:** Copy the address bar from Chrome or Edge after you’ve set filters (keywords, category, etc.) — those settings travel with the link.

📦 **Bulk:** Paste many URLs at once; each is processed in order.

## `maxItems` (type: `integer`):

How many **result pages** to open **per URL** in your list (each page lists many products). 📚

📊 **Dataset updates live** — rows appear in the output table as each page finishes.

🎯 Try **1–3** first, then raise when you’re happy.

⏱️ Higher values mean more requests and longer runs.

## `proxyConfiguration` (type: `object`):

**Optional** — leave off for a simple start. 🔓

When enabled here, Apify’s managed proxies can help with **stable access** in challenging network conditions. Settings apply through the standard Apify proxy picker below.

🙈 **Privacy:** Your search URLs and results stay in your Apify run; configure proxies only if your workflow needs them.

💬 **Not sure?** Keep defaults and only change this if a run fails or your team asks for proxy usage.

## Actor input object example

```json
{
  "urls": [
    "https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&categoryId=127734135&has4Tab=true&keywords=Men%27s+Jackets&originKeywords=Men%27s+Jackets&productId=1600981574830&tab=all&&page=1&spm=undefined.pagination.0"
  ],
  "maxItems": 10,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "urls": [
        "https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&categoryId=127734135&has4Tab=true&keywords=Men%27s+Jackets&originKeywords=Men%27s+Jackets&productId=1600981574830&tab=all&&page=1&spm=undefined.pagination.0"
    ],
    "proxyConfiguration": {
        "useApifyProxy": false
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("scraperx/alibaba-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "urls": ["https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&categoryId=127734135&has4Tab=true&keywords=Men%27s+Jackets&originKeywords=Men%27s+Jackets&productId=1600981574830&tab=all&&page=1&spm=undefined.pagination.0"],
    "proxyConfiguration": { "useApifyProxy": False },
}

# Run the Actor and wait for it to finish
run = client.actor("scraperx/alibaba-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "urls": [
    "https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&categoryId=127734135&has4Tab=true&keywords=Men%27s+Jackets&originKeywords=Men%27s+Jackets&productId=1600981574830&tab=all&&page=1&spm=undefined.pagination.0"
  ],
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}' |
apify call scraperx/alibaba-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scraperx/alibaba-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Alibaba Scraper",
        "description": "Alibaba Scraper",
        "version": "0.1",
        "x-build-id": "IQjDDmAEQJflPPoCy"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scraperx~alibaba-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scraperx-alibaba-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scraperx~alibaba-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scraperx-alibaba-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scraperx~alibaba-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scraperx-alibaba-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "urls"
                ],
                "properties": {
                    "urls": {
                        "title": "🔗 Page URLs (bulk)",
                        "type": "array",
                        "description": "**Add every Alibaba page you want to collect from** — one URL per line. 📝\n\n✅ **Works well:** search results, category browse pages, and similar listing views you open in a normal browser.\n\n🧭 **Tip:** Copy the address bar from Chrome or Edge after you’ve set filters (keywords, category, etc.) — those settings travel with the link.\n\n📦 **Bulk:** Paste many URLs at once; each is processed in order.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxItems": {
                        "title": "🔢 Max Items",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "How many **result pages** to open **per URL** in your list (each page lists many products). 📚\n\n📊 **Dataset updates live** — rows appear in the output table as each page finishes.\n\n🎯 Try **1–3** first, then raise when you’re happy.\n\n⏱️ Higher values mean more requests and longer runs.",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "🛡️ Proxy configuration",
                        "type": "object",
                        "description": "**Optional** — leave off for a simple start. 🔓\n\nWhen enabled here, Apify’s managed proxies can help with **stable access** in challenging network conditions. Settings apply through the standard Apify proxy picker below.\n\n🙈 **Privacy:** Your search URLs and results stay in your Apify run; configure proxies only if your workflow needs them.\n\n💬 **Not sure?** Keep defaults and only change this if a run fails or your team asks for proxy usage."
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
