# NVIDIA NGC Model Catalog Scraper (`automation-lab/nvidia-ngc-scraper`) Actor

Scrape 900+ GPU-optimized AI/ML models from the NVIDIA NGC catalog. Filter by keyword, application category, or framework. Returns model name, publisher, framework, precision, version, size, labels, and catalog URL.

- **URL**: https://apify.com/automation-lab/nvidia-ngc-scraper.md
- **Developed by:** [Stas Persiianenko](https://apify.com/automation-lab) (community)
- **Categories:** AI, Developer tools
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per event

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.
Because this Actor supports Apify Store discounts, the price decreases as your subscription plan tier increases.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## NVIDIA NGC Model Catalog Scraper

### What does it do?

The **NVIDIA NGC Model Catalog Scraper** extracts structured data from [NVIDIA's NGC catalog](https://catalog.ngc.nvidia.com/models) — the official repository of GPU-optimized AI/ML models published by NVIDIA and its partners. With 900+ pre-trained models across every major AI domain, the NGC catalog is the definitive source for production-ready deep learning models optimized for NVIDIA hardware.

This Actor fetches every model's key metadata: name, publisher, application category, ML framework, precision type, model format, version, file size, labels, description, and catalog URL — all delivered as clean structured JSON, ready to integrate with your pipelines.

No API key or authentication is required. The Actor calls NVIDIA's public REST API directly.

### Who is it for?

🔬 **AI/ML Researchers** who need to audit the NGC catalog for models in their domain (NLP, computer vision, speech, healthcare) or track when new models are published.

🏗️ **MLOps Engineers** who want to automate model discovery, maintain an internal registry of available NVIDIA models, or set up scheduled monitoring for new additions to the catalog.

📊 **Data Scientists** building model comparison dashboards, benchmarking frameworks, or exploring what pre-trained models are available for their use case before committing to training from scratch.

🧑‍💼 **Product Managers & Technical Writers** at AI companies who need up-to-date competitor model intelligence or want to document which NVIDIA models are available for their product.

🤖 **AI Automation Engineers** who want to feed the NGC catalog into AI agents, RAG pipelines, or knowledge bases that need to reason about available GPU-optimized models.

### Why use it?

The NVIDIA NGC catalog doesn't offer an export feature. You can browse models in the web UI one by one, but there's no CSV download, no bulk API explorer, and no way to filter the full catalog programmatically without writing your own API client.

This Actor handles pagination (38+ pages), applies client-side filtering by keyword, category, and framework, and normalizes the raw API response into clean, flat JSON suitable for spreadsheets, databases, or downstream AI pipelines, typically in under a minute.
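The pagination-plus-cap loop described above can be sketched as follows. The `fetch_page` callable and its page layout are illustrative assumptions, not the Actor's actual internals:

```python
def collect_models(fetch_page, max_results):
    """Page through results until max_results is reached or pages run out.

    fetch_page(page_index) is assumed to return a list of model dicts,
    and an empty list once the catalog is exhausted. Illustrative only.
    """
    results = []
    page = 0
    while len(results) < max_results:
        batch = fetch_page(page)
        if not batch:
            break  # catalog exhausted before hitting the cap
        results.extend(batch[: max_results - len(results)])
        page += 1
    return results
```

Stopping as soon as the cap is hit is what makes filtered runs cheaper: no further pages are requested once `maxResults` models have been collected.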

### What data does it extract?

| Field | Description | Example |
|-------|-------------|---------|
| `name` | Model slug identifier | `bertlargeuncased` |
| `displayName` | Human-readable model name | `BERT Large Uncased` |
| `publisher` | Publisher organization | `NVIDIA`, `Meta`, `MONAI` |
| `orgName` | NGC organization name | `nvidia` |
| `teamName` | Team within the org | `nemo`, `riva`, `tao` |
| `application` | Application category | `Speech To Text`, `Classification` |
| `framework` | ML framework | `PyTorch with NeMo`, `TensorRT` |
| `precision` | Model precision | `FP32`, `FP16`, `AMP`, `OTHER` |
| `modelFormat` | Model format | `SavedModel`, `TLT`, `RIVA`, `Bundle` |
| `latestVersion` | Latest version string | `1.0.0`, `deployable_v2.0` |
| `latestVersionSizeBytes` | Model file size in bytes | `1248444838` |
| `latestVersionSizeMb` | Model file size in MB | `1190.61` |
| `labels` | Tags and keywords | `["NLP", "BERT", "PyTorch"]` |
| `shortDescription` | Brief model description | `BERT Large Uncased trained on...` |
| `isPublic` | Whether the model is public | `true` |
| `canGuestDownload` | Whether guests can download | `true` |
| `logoUrl` | Logo image URL | `https://...` |
| `builtBy` | Who built the model | `aiapps`, `NVIDIA` |
| `catalogUrl` | Direct link to model page | `https://catalog.ngc.nvidia.com/...` |
| `createdDate` | Model creation date (ISO 8601) | `2021-03-10T03:31:51.797Z` |
| `updatedDate` | Last update date (ISO 8601) | `2024-11-12T17:56:32.338Z` |

### How much does it cost to scrape the NVIDIA NGC catalog?

The Actor uses pay-per-event pricing: you only pay for the models you actually extract. There is a small one-time start fee per run, plus a per-model charge.

Typical costs:
- **20 models** (single keyword search): ~$0.025
- **100 models** (one category): ~$0.11
- **Full catalog** (~926 models): ~$0.94
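Those figures are consistent with roughly a $0.005 start fee plus about $0.001 per model. The constants below are back-of-envelope assumptions for budgeting only; the authoritative prices are on the Actor's pricing tab:

```python
START_FEE_USD = 0.005   # assumed one-time start fee per run (illustrative)
PER_MODEL_USD = 0.001   # assumed charge per extracted model (illustrative)

def estimate_cost_usd(model_count: int) -> float:
    """Rough run-cost estimate under the assumed fee structure."""
    return round(START_FEE_USD + PER_MODEL_USD * model_count, 3)

for count in (20, 100, 926):
    print(count, estimate_cost_usd(count))
```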

All models are retrieved via NVIDIA's public REST API — no browser, no proxy required. Runs complete in seconds to a few minutes depending on result count.

#### Free plan estimate

New Apify accounts include free monthly compute credits. At typical pricing, you can scrape hundreds of NGC models per month within the free tier.

### How to use this Actor

#### Step 1: Configure your search

Open the Actor and fill in the **Search keyword** field (optional). For example, type `bert` to find all BERT-related models, or leave it blank to retrieve the full catalog.

#### Step 2: Apply category or framework filters (optional)

- **Application category**: filter to a specific domain like `Speech To Text`, `Classification`, `Object Detection`, or `Healthcare`.
- **ML Framework**: filter to a specific framework like `PyTorch`, `NeMo`, `TensorRT`, `MONAI`, or `TAO Toolkit`.

Both filters are case-insensitive substring matches.
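In other words, a model passes a filter when the filter string appears anywhere in the corresponding field, ignoring case. A minimal sketch of that matching rule (field names follow the output schema; the function itself is illustrative):

```python
def passes_filters(model: dict, application: str = "", framework: str = "") -> bool:
    """Case-insensitive substring match; an empty filter matches everything."""
    def contains(value, needle):
        return needle.lower() in (value or "").lower()
    return ((not application or contains(model.get("application"), application))
            and (not framework or contains(model.get("framework"), framework)))

model = {"application": "Speech To Text", "framework": "PyTorch with NeMo"}
print(passes_filters(model, framework="nemo"))           # True
print(passes_filters(model, application="translation"))  # False
```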

#### Step 3: Set your result limit

Set **Max results** to the number of models you want. Use a large number (e.g. `10000`) to retrieve all matching models without a cap.

#### Step 4: Run and download

Click **Start** and wait for the run to complete (usually under 60 seconds). Download results as JSON, CSV, or Excel from the **Dataset** tab.

### Input parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `searchQuery` | String | `""` | Filter by keyword (searches name, display name, description) |
| `application` | String | `""` | Filter by application category (e.g. `Classification`, `Speech To Text`) |
| `framework` | String | `""` | Filter by ML framework (e.g. `PyTorch`, `NeMo`, `TensorRT`) |
| `maxResults` | Integer | `100` | Maximum number of models to return |
| `maxRequestRetries` | Integer | `3` | Retry attempts for failed API requests |

### Output example

```json
{
  "name": "bertlargeuncased",
  "displayName": "Bertlargeuncased",
  "publisher": "NVIDIA",
  "orgName": "nvidia",
  "teamName": "nemo",
  "application": "OTHER",
  "framework": "PyTorch with NeMo",
  "precision": "AMP",
  "modelFormat": "SavedModel",
  "latestVersion": "1.0.0rc1",
  "latestVersionSizeBytes": 1248444838,
  "latestVersionSizeMb": 1190.61,
  "labels": ["NLP", "Natural Language Processing", "BERT", "Bertlargeuncased"],
  "shortDescription": "BERT Large Uncased trained on English Wikipedia and BookCorpus",
  "isPublic": true,
  "canGuestDownload": true,
  "logoUrl": "https://assets.nvidiagrid.net/ngc/logos/Nemo.png",
  "builtBy": "",
  "catalogUrl": "https://catalog.ngc.nvidia.com/orgs/nvidia/models/bertlargeuncased",
  "createdDate": "2021-03-10T03:31:51.797Z",
  "updatedDate": "2023-04-04T19:23:11.786Z"
}
```

### Tips & tricks

🔍 **Combine filters for precision**: Use `searchQuery: "conformer"` + `framework: "NeMo"` + `application: "Speech"` to narrow down to exactly the models you need.

📅 **Monitor for new models**: Schedule this actor to run weekly and compare the output against your previous snapshot. New models show up with a recent `createdDate`.
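A snapshot comparison can be as simple as diffing `name` slugs between two runs. The sketch below is illustrative (the second slug in the example is made up), and loading the two datasets is up to you:

```python
def new_models(previous_run: list, current_run: list) -> list:
    """Return models in current_run whose slug was absent from previous_run."""
    seen = {m["name"] for m in previous_run}
    return [m for m in current_run if m["name"] not in seen]

old = [{"name": "bertlargeuncased"}]
new = [{"name": "bertlargeuncased"}, {"name": "example-new-model"}]
print([m["name"] for m in new_models(old, new)])  # ['example-new-model']
```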

📊 **Size-aware budgeting**: Use `latestVersionSizeMb` to estimate download storage requirements before pulling models. A typical PyTorch model ranges from 50 MB to 10+ GB.

🏷️ **Use labels for discovery**: The `labels` field contains NVIDIA's own taxonomy. Search for `"NSPECT"` IDs to find models that have been inspected by NVIDIA's security team.

⚡ **Fast runs with filters**: Using keyword or category filters reduces both run time and cost, since the Actor stops paginating once it hits your `maxResults` limit.

### Integrations

#### Export to Google Sheets for team collaboration

Run the Actor → click **Export to Google Sheets** in the dataset view → share the sheet with your team. Ideal for ML teams maintaining a shared model registry.

#### Scheduled model monitoring with webhooks

Set up a weekly schedule → configure a webhook to POST results to Slack or email when the run completes. Your team gets notified when new NVIDIA models are available.

#### Feed into a RAG knowledge base

Use the [Apify API](#api-usage) to retrieve the dataset JSON → chunk model descriptions → embed with OpenAI → store in Pinecone or Weaviate. Your AI assistant can now answer "which NVIDIA NeMo models support speech synthesis in French?"

#### CI/CD model validation pipeline

Integrate with GitHub Actions: run the Actor before deployment → verify your selected model ID still exists in the catalog → fail the pipeline if the model was deprecated.
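One way to express that gate in a pipeline step (the helper is illustrative, not part of the Actor; the items would come from the run's dataset):

```python
import sys

def require_model(items: list, model_name: str) -> None:
    """Exit non-zero if model_name is missing from the scraped catalog items,
    which fails the CI job; otherwise return silently."""
    if model_name not in {m["name"] for m in items}:
        sys.exit(f"Model {model_name!r} not found in NGC catalog; failing build")

# Passes silently when the model slug is present in the snapshot
require_model([{"name": "bertlargeuncased"}], "bertlargeuncased")
```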

### API usage

#### Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/nvidia-ngc-scraper').call({
    searchQuery: 'bert',
    framework: 'PyTorch',
    maxResults: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} NVIDIA NGC models`);
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("automation-lab/nvidia-ngc-scraper").call(run_input={
    "searchQuery": "bert",
    "framework": "PyTorch",
    "maxResults": 50,
})

items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
print(f"Found {len(items)} NVIDIA NGC models")
```

#### cURL

```bash
curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~nvidia-ngc-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "searchQuery": "bert",
    "framework": "PyTorch",
    "maxResults": 50
  }'
```

### Using with AI assistants (MCP)

You can connect this Actor to Claude, Cursor, VS Code, and other AI tools via the **Apify MCP server**. This lets your AI assistant query the NVIDIA NGC catalog on your behalf.

#### Claude Code (CLI)

```bash
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/nvidia-ngc-scraper"
```

#### Claude Desktop / Cursor / VS Code

Add to your MCP config file (`claude_desktop_config.json` or equivalent):

```json
{
  "mcpServers": {
    "apify": {
      "type": "http",
      "url": "https://mcp.apify.com?tools=automation-lab/nvidia-ngc-scraper",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN"
      }
    }
  }
}
```

#### Example prompts for your AI assistant

- *"Find all NVIDIA NGC models that use the NeMo framework for speech recognition"*
- *"List all classification models in the NGC catalog updated after 2024"*
- *"What NVIDIA models are available for object detection with FP16 precision?"*
- *"Show me the 10 largest NGC models by file size"*

### Legality

This Actor accesses NVIDIA's publicly available NGC catalog API (`api.ngc.nvidia.com/v2/models`). All data extracted is publicly accessible without authentication. Use of the NGC catalog data is subject to [NVIDIA's Terms of Service](https://www.nvidia.com/en-us/about-nvidia/terms-of-use/). This Actor is not affiliated with or endorsed by NVIDIA Corporation.

Always ensure your use of the extracted data complies with applicable terms of service and data usage policies.

### FAQ

**Q: Does this actor require an NVIDIA API key?**
A: No. The NGC model catalog's list endpoint is publicly accessible without any authentication. The Actor fetches data using NVIDIA's public REST API.

**Q: How many models are available in the NGC catalog?**
A: At the time of writing, there are 926+ models. The catalog grows regularly as NVIDIA and partners publish new models. The Actor fetches a live count from the API and paginates through all results.

**Q: Can I filter by publisher (e.g., only Meta or MONAI models)?**
A: Currently, filtering is available by search keyword, application category, and ML framework. Publisher filtering can be applied by using a keyword that matches the publisher name (e.g., `searchQuery: "meta"` will find models published by Meta).

**Q: The actor returned fewer results than expected. Why?**
A: If you applied filters, the result count reflects how many models matched your filters — not the total catalog size. Try broadening your filters or removing them to retrieve more results. Also check that `maxResults` is set high enough.

**Q: I'm getting errors on some pages. What should I do?**
A: The Actor automatically retries failed requests (default: 3 retries with backoff). If errors persist, try increasing `maxRequestRetries` to 5. Transient errors from the NVIDIA API usually resolve themselves within seconds.
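Retry-with-exponential-backoff generally looks like the sketch below. This is an illustrative pattern, not the Actor's actual implementation:

```python
import time

def fetch_with_retries(fetch, max_retries=3, base_delay_s=1.0):
    """Call fetch(); on failure, sleep base_delay_s * 2**attempt and retry.
    Re-raises the last error once max_retries is exhausted."""
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay_s * (2 ** attempt))
```

With the defaults, a request that keeps failing is attempted four times in total, waiting 1 s, 2 s, then 4 s between attempts.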

### Related scrapers

Explore more AI/ML data scrapers from automation-lab:

- [Hugging Face Model Scraper](https://apify.com/automation-lab/hugging-face-scraper) — scrape models, datasets, and spaces from Hugging Face Hub
- [arXiv Paper Scraper](https://apify.com/automation-lab/arxiv-scraper) — extract research papers and abstracts from arXiv

# Actor input Schema

## `searchQuery` (type: `string`):

Filter models by keyword. Searches across model name, display name, and description. Leave empty to retrieve all models.

## `application` (type: `string`):

Filter models by application category (e.g. 'Classification', 'Speech To Text', 'Object Detection'). Leave empty for all categories.

## `framework` (type: `string`):

Filter by ML framework (e.g. 'PyTorch', 'NeMo', 'TensorRT', 'MONAI'). Leave empty for all frameworks.

## `maxResults` (type: `integer`):

Maximum number of models to return. Use 0 or a large number to retrieve all matching models.

## `maxRequestRetries` (type: `integer`):

Number of retry attempts for failed API requests.

## Actor input object example

```json
{
  "searchQuery": "bert",
  "maxResults": 20,
  "maxRequestRetries": 3
}
```

# Actor output Schema

## `overview` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "searchQuery": "bert",
    "application": "",
    "framework": "",
    "maxResults": 20,
    "maxRequestRetries": 3
};

// Run the Actor and wait for it to finish
const run = await client.actor("automation-lab/nvidia-ngc-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "searchQuery": "bert",
    "application": "",
    "framework": "",
    "maxResults": 20,
    "maxRequestRetries": 3,
}

# Run the Actor and wait for it to finish
run = client.actor("automation-lab/nvidia-ngc-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "searchQuery": "bert",
  "application": "",
  "framework": "",
  "maxResults": 20,
  "maxRequestRetries": 3
}' |
apify call automation-lab/nvidia-ngc-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=automation-lab/nvidia-ngc-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "NVIDIA NGC Model Catalog Scraper",
        "description": "Scrape 900+ GPU-optimized AI/ML models from the NVIDIA NGC catalog. Filter by keyword, application category, or framework. Returns model name, publisher, framework, precision, version, size, labels, and catalog URL.",
        "version": "0.1",
        "x-build-id": "Xyi0rSwbPsTG1YB1m"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/automation-lab~nvidia-ngc-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-automation-lab-nvidia-ngc-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/automation-lab~nvidia-ngc-scraper/runs": {
            "post": {
                "operationId": "runs-sync-automation-lab-nvidia-ngc-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/automation-lab~nvidia-ngc-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-automation-lab-nvidia-ngc-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "searchQuery": {
                        "title": "🔍 Search keyword",
                        "type": "string",
                        "description": "Filter models by keyword. Searches across model name, display name, and description. Leave empty to retrieve all models."
                    },
                    "application": {
                        "title": "📂 Application category",
                        "type": "string",
                        "description": "Filter models by application category (e.g. 'Classification', 'Speech To Text', 'Object Detection'). Leave empty for all categories."
                    },
                    "framework": {
                        "title": "⚙️ ML Framework",
                        "type": "string",
                        "description": "Filter by ML framework (e.g. 'PyTorch', 'NeMo', 'TensorRT', 'MONAI'). Leave empty for all frameworks."
                    },
                    "maxResults": {
                        "title": "📊 Max results",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Maximum number of models to return. Use 0 or a large number to retrieve all matching models.",
                        "default": 100
                    },
                    "maxRequestRetries": {
                        "title": "Max request retries",
                        "minimum": 1,
                        "maximum": 10,
                        "type": "integer",
                        "description": "Number of retry attempts for failed API requests.",
                        "default": 3
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
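The `usageUsd` object above breaks a run's cost down per billing event, and `usageTotalUsd` is the sum of those amounts. A minimal sketch of consuming these fields, using an illustrative run object shaped like the schema (the sample values are hypothetical, not real billing data):

```python
# A run object as returned by the Apify API, trimmed to the usage fields
# described in the schema above. Values here are illustrative only.
run = {
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_READS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
    },
}

# Each key in "usageUsd" is a billing event; each value is its USD cost.
total = sum(run["usageUsd"].values())

# The per-event amounts should add up to the reported total.
assert abs(total - run["usageTotalUsd"]) < 1e-12

for event, usd in run["usageUsd"].items():
    if usd > 0:
        print(f"{event}: ${usd:.5f}")
print(f"total: ${total:.5f}")
```

In practice you would fetch the run object with the official client (e.g. `client.run(run_id).get()` in the Python client) rather than hard-coding it; the arithmetic over `usageUsd` stays the same.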
