# Mistral AI Models Scraper (`automation-lab/mistral-models-scraper`) Actor

Scrape all Mistral AI models — API identifiers, context window, capabilities, categories, and deprecation info from docs.mistral.ai.

- **URL**: https://apify.com/automation-lab/mistral-models-scraper.md
- **Developed by:** [Stas Persiianenko](https://apify.com/automation-lab) (community)
- **Categories:** AI
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per event

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.
Since this Actor supports Apify Store discounts, the higher your subscription plan, the lower the price.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Mistral AI Models Scraper

Extract a complete, structured list of all Mistral AI models — including API identifiers, context window sizes, capabilities, open-weight status, deprecation info, and model categories — straight from [docs.mistral.ai](https://docs.mistral.ai/getting-started/models/models_overview/).

### 🤖 What does it do?

This Actor scrapes the official Mistral AI documentation and returns a structured dataset of every Mistral model — from flagship frontier models like **Mistral Large 3** and **Magistral** to specialized tools like **Codestral**, **Voxtral**, and legacy models. For each model you get:

- 🆔 **API identifier** — the exact string to use in your API calls (e.g. `mistral-small-2603`)
- 📅 **Version** — release date code (e.g. `26.03`)
- 🪟 **Context window** — how many tokens the model can process
- 🏷️ **Latest alias** — the `mistral-large-latest` style pointer alias
- 📂 **Category** — Featured / Generalist / Specialist / Other / Legacy
- ✅ **Open-weight status** — whether model weights are publicly available
- ⚠️ **Deprecation info** — deprecation date, retirement date, and recommended replacement

The Actor combines two scraping strategies: it parses the **React Server Component (RSC) payload** embedded in the overview page for comprehensive legacy model data, and fetches individual model card pages for active model details. No Playwright, no browser — pure HTTP.

### 👥 Who is it for?

**AI developers and engineers** comparing Mistral models for API integration who need to know which model ID to call and what the context limits are.

**LLM cost analysts** tracking Mistral's model lineup to calculate token costs and choose the right tier for their workloads.

**AI researchers** monitoring new releases, deprecations, and open-weight model availability from Mistral AI.

**DevOps and MLOps teams** maintaining API integrations who need programmatic access to the current model list to keep configurations up to date.

**AI comparison tools** that aggregate model specs across providers (Groq, DeepInfra, Fireworks, Together AI, etc.) to give users a unified view.

### 💡 Why use this scraper?

Mistral doesn't provide a public unauthenticated REST API to list all models — their `/v1/models` endpoint requires an API key. This Actor fetches the same data that's publicly visible on the documentation website: no API key needed, no rate limits to worry about.

You get **all 59+ models** in one clean dataset: current models, deprecated models (with their replacement recommendations), retired models, and everything in between. Scheduling the Actor daily keeps your tooling automatically in sync when Mistral releases a new model or retires an old one.

### 📊 Data you will extract

| Field | Description | Example |
|-------|-------------|---------|
| `modelId` | Primary API identifier | `mistral-small-2603` |
| `modelName` | Human-readable name | `Mistral Small 4` |
| `description` | Short model description | `Our powerful hybrid model...` |
| `version` | Release version code | `26.03` |
| `apiIdentifiers` | All API name aliases (comma-separated) | `mistral-small-2603, mistral-small-latest` |
| `latestAlias` | The `-latest` pointer alias | `mistral-small-latest` |
| `category` | Model category | `Generalist` |
| `section` | Section on the docs page | `Frontier Models` |
| `isOpenWeight` | Whether model weights are public | `true` |
| `contextLength` | Context window size | `256k` |
| `inputCapabilities` | Supported input types | `text, image` |
| `outputCapabilities` | Supported output types | `text` |
| `features` | Supported API features | `function-calling, structured-outputs` |
| `status` | Active / Deprecated / Retired | `Active` |
| `deprecationDate` | When deprecation starts | `March 31, 2026` |
| `retirementDate` | When model is retired | `April 30, 2026` |
| `replacementModel` | Recommended replacement | `Mistral Nemo 12B` |
| `modelUrl` | Link to model card | `https://docs.mistral.ai/models/...` |
| `scrapedAt` | ISO timestamp of scrape | `2026-04-26T09:00:00.000Z` |
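Since `apiIdentifiers` arrives as a comma-separated string, a small helper can split it back into a list. The helper name and the sample record below are illustrative, not part of the Actor's output:

```python
def split_identifiers(record):
    """Split the comma-separated apiIdentifiers field into a clean list."""
    raw = record.get("apiIdentifiers") or ""
    return [part.strip() for part in raw.split(",") if part.strip()]

# Illustrative record in the shape of the table above:
record = {"apiIdentifiers": "mistral-small-2603, mistral-small-latest"}
print(split_identifiers(record))
# ['mistral-small-2603', 'mistral-small-latest']
```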

### 💰 How much does it cost to scrape Mistral AI models?

This is a very lightweight Actor. It makes approximately 60 HTTP requests (one overview page + one model card per model). No proxies needed. No browser rendering.

| Tier | Active models only (~23) | All models (~59) |
|------|--------------------------|------------------|
| Free | ~$0.012 | ~$0.023 |
| Bronze | ~$0.011 | ~$0.020 |
| Diamond | ~$0.007 | ~$0.009 |

The $0.005 start fee covers the overview page fetch. Each model extracted costs a fraction of a cent. A daily scheduled run costs under **$1/month**.

> ℹ️ You can run this Actor on Apify's **Free plan** — the default input will complete well within the free compute limits. Start by clicking **Try for free** on the Actor's Store page.

### 🚀 How to use this Actor

#### Step 1 — Open the Actor

Go to [Mistral AI Models Scraper](https://apify.com/automation-lab/mistral-models-scraper) on Apify Store.

#### Step 2 — Configure input

The Actor works with zero configuration. Click **Start** to run with defaults.

To exclude deprecated/retired legacy models, uncheck **Include deprecated/legacy models**.

#### Step 3 — Run and download

Click **Start** and the Actor completes in under 60 seconds. Download your data as JSON, CSV, or Excel from the **Dataset** tab.

#### Step 4 — Schedule for freshness

Use Apify's scheduling to run daily or weekly, so your downstream tooling always has the latest Mistral model list.

### ⚙️ Input parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `includeDeprecated` | Boolean | `true` | Include legacy, deprecated, and retired models in output |
| `maxConcurrency` | Integer | `5` | Parallel model card page fetches (1–20) |
| `maxRequestRetries` | Integer | `3` | Retry attempts for failed HTTP requests |

### 📤 Output example

```json
{
  "modelId": "mistral-small-2603",
  "modelName": "Mistral Small 4",
  "description": "Our powerful hybrid model unifying instruct, reasoning, and coding capabilities in a single model. 119B parameters with 6.5B active.",
  "version": "26.03",
  "apiIdentifiers": "mistral-small-2603, mistral-small-latest",
  "latestAlias": "mistral-small-latest",
  "category": "Generalist",
  "section": "Frontier Models",
  "isOpenWeight": true,
  "contextLength": "256k",
  "inputCapabilities": null,
  "outputCapabilities": null,
  "features": null,
  "status": "Active",
  "deprecationDate": null,
  "retirementDate": null,
  "replacementModel": null,
  "modelUrl": "https://docs.mistral.ai/models/model-cards/mistral-small-4-0-26-03",
  "scrapedAt": "2026-04-26T09:05:22.373Z"
}
````

### 🧠 Tips and tricks

- **Filter to active models only** — set `includeDeprecated: false` to get just the 23 currently active models. This is faster and cheaper.
- **Finding the right model ID** — the `apiIdentifiers` field contains all valid API names. Use the versioned ID (e.g. `mistral-small-2603`) for stable integrations; use the `latestAlias` (e.g. `mistral-small-latest`) if you always want the newest version.
- **Checking for deprecations** — sort or filter by `status` to find models entering deprecation. The `replacementModel` field tells you where to migrate.
- **Context window comparisons** — filter for models where `contextLength` equals `256k` to find all long-context options.
- **Scheduling for monitoring** — schedule daily runs and use the `scrapedAt` timestamp to compare consecutive runs for changes.
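The filtering tips above can be sketched in plain Python over downloaded dataset items. The records below are illustrative samples, not real scraper output:

```python
# Sketch: filtering scraped Mistral model records (sample data is illustrative).
items = [
    {"modelId": "mistral-small-2603", "status": "Active", "contextLength": "256k"},
    {"modelId": "mistral-nemo-2407", "status": "Deprecated", "contextLength": "128k"},
]

# Active models are safe to call today.
active = [m for m in items if m["status"] == "Active"]
# Long-context options: filter on the contextLength string as displayed.
long_context = [m for m in items if m["contextLength"] == "256k"]

print([m["modelId"] for m in active])        # ['mistral-small-2603']
print([m["modelId"] for m in long_context])  # ['mistral-small-2603']
```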

### 🔗 Integrations

#### 🤖 Build a model comparison tool

Run this Actor alongside [Groq Models Scraper](https://apify.com/automation-lab/groq-models-scraper) and [DeepInfra Models Scraper](https://apify.com/automation-lab/deepinfra-models-scraper). Push all datasets into a single database and build an always-fresh cross-provider model catalog that your team can query by context window, feature support, or cost.

#### 📊 Feed into a Google Sheet

Use [Apify's Google Sheets integration](https://apify.com/store?search=google+sheets) to automatically push updated model data to a spreadsheet. Share it with your team so everyone knows which Mistral model IDs are active.

#### 🔔 Alert on model deprecations

Use Apify's scheduling + webhooks to run this Actor daily. Compare the latest output against the previous run (use the Apify Dataset API to fetch the last N runs). Fire a Slack or email notification whenever a model's `status` changes from `Active` to `Deprecated`.
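The run-to-run comparison step can be sketched as a small pure-Python diff. The function name and sample records are hypothetical, assuming the items of two consecutive runs have already been fetched:

```python
def find_status_changes(previous, current):
    """Return (modelId, old_status, new_status) tuples for models whose
    status changed between two scraper runs (lists of dataset items)."""
    prev_by_id = {m["modelId"]: m["status"] for m in previous}
    changes = []
    for m in current:
        old = prev_by_id.get(m["modelId"])
        if old is not None and old != m["status"]:
            changes.append((m["modelId"], old, m["status"]))
    return changes

# Illustrative records, not real output:
yesterday = [{"modelId": "mistral-nemo-2407", "status": "Active"}]
today = [{"modelId": "mistral-nemo-2407", "status": "Deprecated"}]
print(find_status_changes(yesterday, today))
# [('mistral-nemo-2407', 'Active', 'Deprecated')]
```

A non-empty result is the trigger condition for the Slack or email alert.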

#### 🛠️ Keep API configs in sync

Integrate this Actor into your CI/CD pipeline. Before deploying, fetch the current model list and validate that your configured model IDs still exist and are not deprecated. Fail the build if a model is found to be retiring within 30 days.
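A minimal sketch of that validation step, assuming the scraped items are already fetched. The function name is hypothetical, and dates are parsed in the docs format shown in the output table (e.g. `April 30, 2026`):

```python
from datetime import datetime, timedelta

def check_models(configured_ids, items, today, window_days=30):
    """Return a list of problems: unknown model IDs, and models whose
    retirementDate falls within `window_days` of `today`."""
    by_id = {m["modelId"]: m for m in items}
    problems = []
    for model_id in configured_ids:
        model = by_id.get(model_id)
        if model is None:
            problems.append(f"{model_id}: not found in scraped model list")
            continue
        retirement = model.get("retirementDate")
        if retirement:
            # Dates use the docs format, e.g. "April 30, 2026".
            retires = datetime.strptime(retirement, "%B %d, %Y")
            if retires - today <= timedelta(days=window_days):
                problems.append(f"{model_id}: retires on {retirement}")
    return problems

# Illustrative record; in CI you would fail the build if problems is non-empty.
items = [{"modelId": "mistral-nemo-2407", "retirementDate": "April 30, 2026"}]
print(check_models(["mistral-nemo-2407"], items, today=datetime(2026, 4, 10)))
# ['mistral-nemo-2407: retires on April 30, 2026']
```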

### 🔌 API usage

#### Node.js

```js
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/mistral-models-scraper').call({
    includeDeprecated: true,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} Mistral AI models`);
items.filter(m => m.status === 'Active').forEach(m => {
    console.log(`${m.modelName}: ${m.apiIdentifiers} (${m.contextLength})`);
});
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

run = client.actor("automation-lab/mistral-models-scraper").call(run_input={
    "includeDeprecated": True
})

items = client.dataset(run["defaultDatasetId"]).list_items().items
active = [m for m in items if m["status"] == "Active"]
print(f"Found {len(active)} active Mistral models")
for model in active:
    print(f"{model['modelName']}: {model['apiIdentifiers']}")
```

#### cURL

```bash
# Start the Actor
curl -X POST "https://api.apify.com/v2/acts/automation-lab~mistral-models-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"includeDeprecated": true}'

# Fetch results (replace DATASET_ID with the "defaultDatasetId" from the response above)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN"
```

### 🤖 MCP (Model Context Protocol) integration

Use this Actor directly inside **Claude**, **Cursor**, **VS Code**, or any MCP-compatible AI assistant to query Mistral model data in natural language.

#### Claude Code / CLI setup

```bash
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/mistral-models-scraper"
```

#### Claude Desktop / Cursor / VS Code (JSON config)

```json
{
  "mcpServers": {
    "apify": {
      "type": "http",
      "url": "https://mcp.apify.com?tools=automation-lab/mistral-models-scraper",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}
```

#### Example prompts to try

- *"List all active Mistral models with their API identifiers and context windows"*
- *"Which Mistral models are being deprecated in 2026 and what should I migrate to?"*
- *"Find all open-weight Mistral models I can run locally"*
- *"Compare Mistral Small 4 and Mistral Large 3 context window sizes"*
- *"Which Mistral models support function calling?"*

### ⚖️ Legality

This Actor scrapes publicly available information from [docs.mistral.ai](https://docs.mistral.ai) — the official Mistral AI documentation website. The data is the same as what you'd see visiting the page in a browser. No authentication is required or bypassed. The Actor respects the site's server by using reasonable concurrency limits.

Always review the [Mistral AI Terms of Service](https://mistral.ai/terms/) and [Privacy Policy](https://mistral.ai/privacy/) before using scraped data in commercial products.

### ❓ FAQ

#### What models does this Actor scrape?

All models listed on the [Mistral AI models overview page](https://docs.mistral.ai/getting-started/models/models_overview/), including active frontier models, specialist models, other models, and the full legacy/deprecated history. As of April 2026, that's 59+ models.

#### Does this include API pricing data?

Pricing data for legacy models is partially available in the underlying RSC data, but is not currently included in the output schema. The output focuses on model identification, capabilities, and lifecycle data. For pricing, check [docs.mistral.ai](https://docs.mistral.ai/getting-started/models/models_overview/) directly.

#### Why do some models have `null` for contextLength?

Some specialist models (audio transcription, OCR, TTS) don't have a traditional token context window and don't display one on their model cards. For those, `contextLength` will be `null`.

#### The actor returned fewer than 59 models — what happened?

Mistral regularly adds new models to their catalog. If a new model card page returns an error on first fetch, the Actor will retry up to `maxRequestRetries` times. If the page structure changes significantly, the Actor may skip some models. Check the Actor logs for warnings about failed fetches.

#### I need the context window in tokens, not "128k"

The Actor returns the context length as displayed on the Mistral docs page (e.g. `128k`, `256k`, `32k`). To convert: `128k = 128,000 tokens`. No rounding or conversion is applied.
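That conversion can be done with a small helper. The function name is hypothetical, and it returns `None` for models without a context window (e.g. OCR/TTS models, where `contextLength` is `null`):

```python
def context_tokens(context_length):
    """Convert a docs-style context string like '128k' to a token count.
    Returns None when contextLength is missing (e.g. OCR/TTS models)."""
    if not context_length:
        return None
    s = context_length.strip().lower()
    if s.endswith("k"):
        # "128k" -> 128 * 1000 = 128000 tokens
        return int(float(s[:-1]) * 1000)
    return int(s)

print(context_tokens("128k"))  # 128000
print(context_tokens("256k"))  # 256000
print(context_tokens(None))    # None
```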

### 🔗 Related scrapers

- [Groq Models Scraper](https://apify.com/automation-lab/groq-models-scraper) — All models available on Groq's API with speeds and pricing
- [DeepInfra Models Scraper](https://apify.com/automation-lab/deepinfra-models-scraper) — DeepInfra model catalog with pricing
- [Fireworks AI Models Scraper](https://apify.com/automation-lab/fireworks-ai-scraper) — Fireworks AI model list
- [Cloudflare Workers AI Models Scraper](https://apify.com/automation-lab/cloudflare-workers-ai-scraper) — Cloudflare AI model catalog

# Actor input Schema

## `includeDeprecated` (type: `boolean`):

If checked, the results include all deprecated and retired models in addition to currently active models.

## `maxConcurrency` (type: `integer`):

Number of model card pages to fetch in parallel. Lower values are gentler to the server.

## `maxRequestRetries` (type: `integer`):

Number of retry attempts for failed HTTP requests.

## Actor input object example

```json
{
  "includeDeprecated": true,
  "maxConcurrency": 5,
  "maxRequestRetries": 3
}
```

# Actor output Schema

## `overview` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "includeDeprecated": true,
    "maxConcurrency": 5,
    "maxRequestRetries": 3
};

// Run the Actor and wait for it to finish
const run = await client.actor("automation-lab/mistral-models-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "includeDeprecated": True,
    "maxConcurrency": 5,
    "maxRequestRetries": 3,
}

# Run the Actor and wait for it to finish
run = client.actor("automation-lab/mistral-models-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "includeDeprecated": true,
  "maxConcurrency": 5,
  "maxRequestRetries": 3
}' |
apify call automation-lab/mistral-models-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=automation-lab/mistral-models-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Mistral AI Models Scraper",
        "description": "Scrape all Mistral AI models — API identifiers, context window, capabilities, categories, and deprecation info from docs.mistral.ai.",
        "version": "0.1",
        "x-build-id": "mc8Wi768FjMSA66F8"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/automation-lab~mistral-models-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-automation-lab-mistral-models-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/automation-lab~mistral-models-scraper/runs": {
            "post": {
                "operationId": "runs-sync-automation-lab-mistral-models-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/automation-lab~mistral-models-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-automation-lab-mistral-models-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "includeDeprecated": {
                        "title": "🗂️ Include deprecated/legacy models",
                        "type": "boolean",
                        "description": "If checked, the results include all deprecated and retired models in addition to currently active models.",
                        "default": true
                    },
                    "maxConcurrency": {
                        "title": "Max concurrent requests",
                        "minimum": 1,
                        "maximum": 20,
                        "type": "integer",
                        "description": "Number of model card pages to fetch in parallel. Lower values are gentler to the server.",
                        "default": 5
                    },
                    "maxRequestRetries": {
                        "title": "Max request retries",
                        "minimum": 1,
                        "maximum": 10,
                        "type": "integer",
                        "description": "Number of retry attempts for failed HTTP requests.",
                        "default": 3
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
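
The `usage`, `usageUsd`, and `usageTotalUsd` fields above describe a run's resource accounting: `usage` holds raw event counts, `usageUsd` the per-event cost in USD, and `usageTotalUsd` their sum. The sketch below shows how you might read these fields from a run object shaped like the schema; the payload is illustrative sample data, not real billing output.

```python
# Minimal sketch: sum the per-event USD breakdown of an Apify run object
# shaped like the schema above and check it against usageTotalUsd.
# The `run` dict is illustrative sample data, not a real API response.
import math

run = {
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_READS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_READS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
        "KEY_VALUE_STORE_LISTS": 0,
        "REQUEST_QUEUE_READS": 0,
        "REQUEST_QUEUE_WRITES": 0,
        "DATA_TRANSFER_INTERNAL_GBYTES": 0,
        "DATA_TRANSFER_EXTERNAL_GBYTES": 0,
        "PROXY_RESIDENTIAL_TRANSFER_GBYTES": 0,
        "PROXY_SERPS": 0,
    },
}

# The per-event USD costs should add up to the reported total.
total = sum(run["usageUsd"].values())
assert math.isclose(total, run["usageTotalUsd"])
print(f"Total usage: ${total:.5f} USD")
```

In a real integration you would fetch the run via the API client (e.g. `client.run(run_id).get()` in `apify-client` for Python) and apply the same arithmetic to monitor per-run costs.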
