# Lever ATS Job Scraper (`vnx0/lever-ats-job-scraper`) Actor

Scrape job listings from any company using Lever ATS. Extract titles, descriptions, departments, locations, salary ranges, workplace types, and application links from 5,000+ Lever-powered career pages. Fast JSON API with filtering, deduplication, and bulk multi-company scraping. Get started free.

- **URL**: https://apify.com/vnx0/lever-ats-job-scraper
- **Developed by:** [Vnx0](https://apify.com/vnx0) (community)
- **Categories:** Integrations, Jobs, Lead generation
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded
- **User rating**: No ratings yet

## Pricing

$1.00 / 1,000 jobs scraped

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
Actors are written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Lever ATS Job Scraper

**Extract job listings from any company using Lever ATS** — titles, descriptions, departments, teams, locations, salary ranges, workplace types, commitment, structured requirements, and application links. Scrape thousands of Lever-powered career pages in minutes with structured, ready-to-use data.

### What is Lever ATS Job Scraper?

Lever ATS Job Scraper is an Apify Actor that extracts structured job listing data from companies that use [Lever](https://www.lever.co/) as their applicant tracking system. Over 5,000 companies — including Spotify, Netflix, Shopify, and Stripe — use Lever to power their career pages.

Simply provide a company's Lever career page URL (like `https://jobs.lever.co/spotify`) or their company slug, and get back every open job listing with full metadata. Scrape multiple companies in a single run, filter by department, team, location, or commitment, and export results to JSON, CSV, or connect to 1,500+ apps via Apify integrations.

### Why scrape Lever job listings?

- **Recruitment research and talent sourcing** — discover open positions across multiple companies, track hiring trends, and identify which teams are actively growing
- **Competitor job posting analysis** — monitor competitor hiring patterns, salary ranges, and required skills to benchmark your own talent strategy
- **Job market intelligence** — build dashboards of job market data by department, location, or commitment type across hundreds of companies
- **Job board aggregation** — pull listings from thousands of Lever-powered career pages to build a job search engine or aggregator
- **Skill demand tracking** — extract requirements and qualifications from job descriptions to analyze which skills are trending in the market
- **Lead generation for recruiters** — identify companies actively hiring in specific roles or technologies and reach out at the right time

### Features

- **25+ data fields per job** — title, description, department, team, location, salary range, workplace type, commitment, structured requirements, application links, and more
- **Flexible filtering** — filter server-side by department, team, location, commitment, or workplace type (remote/hybrid/onsite), plus a client-side keyword search
- **Multi-company scraping** — scrape dozens of companies in a single run by providing multiple URLs or slugs
- **Deduplication** — only fetch new or updated jobs across runs, avoiding duplicate data in your pipeline
- **4 description formats** — choose between plain text, HTML, both, or skip descriptions entirely to save storage
- **Salary range extraction** — capture salary data when companies include it (currency, interval, min, max)
- **Structured lists** — get requirements, responsibilities, and benefits as structured data, not just blobs of text
- **Minimal compute costs** — optimized extraction that keeps your Apify compute charges extremely low
- **Zero configuration** — works out of the box with no special setup required

### What data does this Actor extract?

| Field | Type | Description |
|-------|------|-------------|
| Job ID | String | Unique posting identifier (UUID) |
| Job title | String | Position title (e.g., "Senior Backend Engineer") |
| Department | String | Department name (e.g., "Engineering") |
| Team | String | Team name (e.g., "Platform", "Backend") |
| Location | String | Primary job location |
| All locations | Array | Multiple applicable locations |
| Commitment | String | Employment type (Full-time, Part-time, Intern, Contract) |
| Workplace type | String | Remote, hybrid, onsite, or unspecified |
| Country | String | ISO 3166-1 alpha-2 country code |
| Salary range | Object | Currency, interval, min, and max salary (when available) |
| Description | String | Full job description (plain text or HTML) |
| Structured lists | Array | Requirements, responsibilities, and benefits as structured sections |
| Additional info | String | Benefits, perks, and extra details |
| Job URL | String | Full URL to the job listing page |
| Apply URL | String | Direct link to the application form |
| Posted date | String | When the job was first posted (ISO 8601) |
| Company slug | String | Lever company identifier |

### How to use

1. **Add your URLs** — paste Lever career page URLs (e.g., `https://jobs.lever.co/spotify`) or just provide company slugs (e.g., `spotify`, `shopify`, `coinbase`)
2. **Configure filters** — optionally filter by department, team, location, commitment, or workplace type
3. **Set your preferences** — choose description format, enable deduplication, set max items per company
4. **Run the Actor** — results are saved to the dataset and can be exported as JSON, CSV, or Excel

#### Supported input formats

| Format | Example |
|--------|---------|
| Full career page URL | `https://jobs.lever.co/spotify` |
| Career page URL with trailing slash | `https://jobs.lever.co/spotify/` |
| Company slug | `spotify` |
| Multiple URLs | `["https://jobs.lever.co/spotify", "https://jobs.lever.co/shopify"]` |
| Multiple slugs | `["spotify", "shopify", "coinbase", "stripe"]` |
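
All of these input forms resolve to the same company slug. A minimal sketch of that normalization (illustrative only — the Actor handles this for you, per the input schema below):

```python
from urllib.parse import urlparse

def to_slug(entry: str) -> str:
    """Normalize a Lever career page URL or bare slug to a company slug."""
    if entry.startswith(("http://", "https://")):
        # "https://jobs.lever.co/spotify/" -> "spotify"
        return urlparse(entry).path.strip("/").split("/")[0]
    return entry  # already a bare slug

print(to_slug("https://jobs.lever.co/spotify/"))  # spotify
print(to_slug("coinbase"))                        # coinbase
```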

### Input

See the **Input** tab for full configuration options. Key settings:

| Setting | Description | Default |
|---------|-------------|---------|
| Start URLs | Lever career page URLs to scrape | Required |
| Company slugs | Alternative: just provide slugs | `[]` |
| Department | Filter by department | All |
| Team | Filter by team | All |
| Location | Filter by location | All |
| Commitment | Filter by commitment type | All |
| Workplace type | Remote, hybrid, or onsite | All |
| Keyword | Search in title or description | None |
| Max items per company | Limit results per company (0 = all) | 0 |
| Description format | Plain text, HTML, both, or none | Plain text |
| Deduplicate | Skip already-scraped jobs | Enabled |
| Structured lists | Include requirements/responsibilities lists | Enabled |
| Proxy | Optional connection configuration | Disabled |

### Output

Each job listing is pushed to the dataset as a structured JSON object. Download the dataset in JSON, CSV, or Excel format, or connect to 1,500+ apps via Apify integrations.

#### Sample output

```json
{
  "id": "1ff4a4e3-897c-4eab-9ee2-aa7d1d07a9d6",
  "companySlug": "spotify",
  "title": "Senior Backend Engineer",
  "url": "https://jobs.lever.co/spotify/1ff4a4e3-...",
  "applyUrl": "https://jobs.lever.co/spotify/1ff4a4e3-.../apply",
  "department": "Engineering",
  "team": "Backend",
  "location": "Stockholm",
  "allLocations": ["Stockholm", "London"],
  "commitment": "Full-time",
  "workplaceType": "hybrid",
  "country": "SE",
  "createdAt": "2025-08-04T17:58:37.000Z",
  "description": "We are looking for a Senior Backend Engineer...",
  "lists": [
    {
      "text": "What You'll Do",
      "content": "<li>Design and build scalable services</li><li>Collaborate with cross-functional teams</li>"
    },
    {
      "text": "Who You Are",
      "content": "<li>5+ years of backend experience</li><li>Proficient in Python or Java</li>"
    }
  ],
  "additional": "We offer competitive salary, equity, and benefits...",
  "salaryRange": {
    "currency": "SEK",
    "interval": "yearly",
    "min": 650000,
    "max": 900000
  }
}
```

### How much does it cost?

This Actor uses **Pay Per Event** pricing at **$0.001 per job scraped** ($1 per 1,000 jobs). You only pay for what you extract — Apify compute costs are billed separately.

| Jobs scraped | Cost |
|-------------|------|
| 100 | $0.10 |
| 1,000 | $1.00 |
| 10,000 | $10.00 |
| 100,000 | $100.00 |

> New Apify accounts get **$5 free credit** to start. Compute costs are minimal thanks to optimized extraction — most runs cost under $0.01 in compute.

### Integrations

Connect your scraped job data to 1,500+ apps:

| Integration | Use Case |
|-------------|----------|
| **Make (Integromat)** | Automate job posting workflows and notifications |
| **Zapier** | Trigger actions when new jobs are posted |
| **Slack** | Send new job alerts to recruiting channels |
| **Airbyte** | Sync job data to your data warehouse |
| **Google Sheets** | Build live job tracking spreadsheets |
| **GitHub** | Store job data in repositories |
| **Webhooks** | Push real-time job updates to your API |
| **Email** | Schedule daily or weekly job digest emails |

### Tips for best results

- **Use company slugs instead of full URLs** — faster to set up and less prone to formatting errors
- **Combine filters for precision** — use department + location together to find exactly the roles you need
- **Enable deduplication for scheduled runs** — set up recurring runs and only get new or changed job listings
- **Use "none" description format for metadata-only** — saves bandwidth and storage when you only need job titles, locations, and links
- **Scrape multiple companies at once** — paste up to 50 URLs in a single run for efficient bulk job market research
- **Use keyword filter for targeted search** — search for "engineer", "remote", "python" across all companies simultaneously
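
For example, a run combining several of these tips — multiple companies, the workplace-type filter, and a keyword search — might use an input like this (field names match the input schema below):

```json
{
  "companySlugs": ["spotify", "shopify", "coinbase"],
  "workplaceType": "remote",
  "keyword": "python",
  "maxItems": 0,
  "deduplicate": true
}
```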

### Use cases

#### For recruiters and hiring managers

Track competitor hiring patterns, discover which companies are actively growing specific teams, and identify talent pools by scraping job listings across your industry. Get structured data on open positions — titles, requirements, salary ranges, and application links — delivered directly to your preferred tool.

#### For job board operators

Aggregate job listings from thousands of Lever-powered career pages to build a comprehensive job search engine or niche job board. With multi-company support and deduplication, keeping your board fresh across recurring runs is effortless.

#### For data scientists and researchers

Collect structured job market data across companies, industries, and geographies to analyze hiring trends, salary benchmarks, and skill demand over time. Export to JSON or CSV and plug directly into your analysis pipeline.
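
As a toy illustration of the analysis step (hypothetical mini-dataset; real dataset items carry the full set of fields shown in the sample output above):

```python
from collections import Counter

# Hypothetical scraped items; real items have many more fields.
jobs = [
    {"department": "Engineering", "location": "Stockholm"},
    {"department": "Engineering", "location": "London"},
    {"department": "Design", "location": "Stockholm"},
]

# Count open roles per department across the scraped dataset.
by_department = Counter(job.get("department", "Unknown") for job in jobs)
print(by_department.most_common())  # [('Engineering', 2), ('Design', 1)]
```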

#### For sales and business development

Identify companies actively hiring in roles related to your product or service — a strong signal of budget and need. Monitor target accounts for new openings and time your outreach perfectly.

### FAQ

**What is Lever ATS?**

Lever is an applicant tracking system (ATS) used by over 5,000 companies to manage their hiring process. Companies using Lever host their career pages on `jobs.lever.co`. This Actor extracts job listings from any Lever-powered career page.

**Do I need a Lever account to use this Actor?**

No. You don't need any accounts, credentials, or special access. Just provide a company's career page URL or slug.

**Can I scrape private or internal job postings?**

No. This actor only extracts publicly available job listings — the same jobs visible to anyone visiting the company's career page.

**How many companies can I scrape in one run?**

There is no hard limit. You can provide multiple URLs or company slugs, and each company's jobs are retrieved efficiently. It's designed to handle dozens of companies in a single run.

**How do I find a company's Lever slug?**

Visit their careers page. If the URL is `https://jobs.lever.co/spotify`, the slug is `spotify`. Some companies use custom domains (like `careers.company.com`), but you can find their Lever slug by searching for "lever" in the page source.
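
For custom-domain pages, a quick way to pull the slug out of the page source (the HTML snippet below is hypothetical; the exact markup varies by site):

```python
import re

# Hypothetical fragment of a custom-domain careers page embedding Lever.
html = '<iframe src="https://jobs.lever.co/acme-co?team=Engineering"></iframe>'

# Grab whatever follows "jobs.lever.co/" up to the next URL delimiter.
match = re.search(r"jobs\.lever\.co/([A-Za-z0-9_-]+)", html)
slug = match.group(1) if match else None
print(slug)  # acme-co
```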

**What does deduplication do?**

When enabled, the Actor remembers which job IDs it has already scraped using persistent storage. On subsequent runs, previously scraped jobs are skipped — only new or changed listings are pushed to the dataset. This is useful for scheduled or recurring runs.
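
The pattern is roughly the following (a simplified sketch; the Actor's actual key-value-store layout is internal):

```python
def filter_new_jobs(jobs, seen_ids):
    """Keep only jobs whose id hasn't been seen before; record new ids."""
    new_jobs = []
    for job in jobs:
        if job["id"] not in seen_ids:
            seen_ids.add(job["id"])
            new_jobs.append(job)
    return new_jobs

# seen_ids would be loaded from the key-value store at the start of a run
# and persisted back at the end.
seen_ids = {"1ff4a4e3-897c-4eab-9ee2-aa7d1d07a9d6"}
jobs = [
    {"id": "1ff4a4e3-897c-4eab-9ee2-aa7d1d07a9d6", "title": "Senior Backend Engineer"},
    {"id": "9b2c0d1e-1234-4abc-8def-567890abcdef", "title": "Data Engineer"},
]
print([j["title"] for j in filter_new_jobs(jobs, seen_ids)])  # ['Data Engineer']
```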

**Can I filter for remote-only jobs?**

Yes. Set the workplace type filter to "remote" to get only remote job listings. You can also combine it with location, department, or keyword filters.

**What happens if a company slug doesn't exist?**

The Actor logs a warning and skips to the next company. It never crashes on invalid inputs — partial results are always returned.

### Support

If you encounter any issues or have feature requests, please visit the **Issues** tab and report the problem. We actively monitor and respond to all issues.

# Actor input Schema

## `startUrls` (type: `array`):

URLs of Lever-hosted career pages (e.g., https://jobs.lever.co/spotify). The scraper extracts the company slug from each URL automatically.

## `companySlugs` (type: `array`):

Alternative to URLs — provide company slugs (e.g., 'spotify', 'netflix', 'stripe'). Each slug is converted to an API call automatically.

## `department` (type: `string`):

Filter jobs by department (e.g., 'Engineering', 'Marketing'). Leave empty for all departments.

## `team` (type: `string`):

Filter jobs by team (e.g., 'Backend', 'Design'). Leave empty for all teams.

## `location` (type: `string`):

Filter jobs by location (e.g., 'London', 'New York'). Leave empty for all locations.

## `commitment` (type: `string`):

Filter by commitment type (e.g., 'Full-time', 'Part-time', 'Intern', 'Contract'). Leave empty for all.

## `workplaceType` (type: `string`):

Filter by workplace arrangement.

## `keyword` (type: `string`):

Search jobs by keyword in title or description. Applied client-side after fetching from the API.
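
A sketch of the client-side match described here (a case-insensitive substring check; illustrative, not the Actor's exact implementation):

```python
def keyword_matches(job: dict, keyword: str) -> bool:
    """Case-insensitive substring match against title and description."""
    kw = keyword.lower()
    haystacks = (job.get("title", ""), job.get("description", ""))
    return any(kw in text.lower() for text in haystacks)

job = {"title": "Senior Backend Engineer",
       "description": "Proficient in Python or Java"}
print(keyword_matches(job, "python"))  # True
print(keyword_matches(job, "rust"))    # False
```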

## `maxItems` (type: `integer`):

Maximum number of jobs to scrape per company. Set to 0 to scrape all available jobs.

## `descriptionFormat` (type: `string`):

Choose how to include the job description in the output. 'Plain text' is recommended for most use cases.

## `deduplicate` (type: `boolean`):

Skip jobs that were already scraped in previous runs. Uses the key-value store to track seen job IDs.

## `includeLists` (type: `boolean`):

Include structured requirement/responsibility lists extracted from job postings.

## `proxyConfiguration` (type: `object`):

Lever's API doesn't require a proxy. Only enable this for high-volume scraping or if you encounter rate limiting.

## Actor input object example

```json
{
  "startUrls": [
    {
      "url": "https://jobs.lever.co/spotify"
    }
  ],
  "companySlugs": [],
  "department": "",
  "team": "",
  "location": "",
  "commitment": "",
  "keyword": "",
  "maxItems": 0,
  "descriptionFormat": "plainText",
  "deduplicate": true,
  "includeLists": true,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

# Actor output Schema

## `dataset` (type: `string`):

Dataset containing all scraped job listings

## `runSummary` (type: `string`):

Summary statistics for this scraping run

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and the Apify CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "startUrls": [
        {
            "url": "https://jobs.lever.co/spotify"
        }
    ],
    "companySlugs": [],
    "workplaceType": ""
};

// Run the Actor and wait for it to finish
const run = await client.actor("vnx0/lever-ats-job-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [{ "url": "https://jobs.lever.co/spotify" }],
    "companySlugs": [],
    "workplaceType": "",
}

# Run the Actor and wait for it to finish
run = client.actor("vnx0/lever-ats-job-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "startUrls": [
    {
      "url": "https://jobs.lever.co/spotify"
    }
  ],
  "companySlugs": [],
  "workplaceType": ""
}' |
apify call vnx0/lever-ats-job-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=vnx0/lever-ats-job-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Lever ATS Job Scraper",
        "description": "Scrape job listings from any company using Lever ATS. Extract titles, descriptions, departments, locations, salary ranges, workplace types, and application links from 5,000+ Lever-powered career pages. Fast JSON API with filtering, deduplication, and bulk multi-company scraping. Get started free.",
        "version": "0.1",
        "x-build-id": "dYHsH23u8YDhlJ8Hi"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/vnx0~lever-ats-job-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-vnx0-lever-ats-job-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/vnx0~lever-ats-job-scraper/runs": {
            "post": {
                "operationId": "runs-sync-vnx0-lever-ats-job-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/vnx0~lever-ats-job-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-vnx0-lever-ats-job-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "startUrls"
                ],
                "properties": {
                    "startUrls": {
                        "title": "Lever career page URLs",
                        "type": "array",
                        "description": "URLs of Lever-hosted career pages (e.g., https://jobs.lever.co/spotify). The scraper extracts the company slug from each URL automatically.",
                        "items": {
                            "type": "object",
                            "required": [
                                "url"
                            ],
                            "properties": {
                                "url": {
                                    "type": "string",
                                    "title": "URL of a web page",
                                    "format": "uri"
                                }
                            }
                        }
                    },
                    "companySlugs": {
                        "title": "Company slugs",
                        "type": "array",
                        "description": "Alternative to URLs — provide company slugs (e.g., 'spotify', 'netflix', 'stripe'). Each slug is converted to an API call automatically.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "department": {
                        "title": "Department",
                        "type": "string",
                        "description": "Filter jobs by department (e.g., 'Engineering', 'Marketing'). Leave empty for all departments.",
                        "default": ""
                    },
                    "team": {
                        "title": "Team",
                        "type": "string",
                        "description": "Filter jobs by team (e.g., 'Backend', 'Design'). Leave empty for all teams.",
                        "default": ""
                    },
                    "location": {
                        "title": "Location",
                        "type": "string",
                        "description": "Filter jobs by location (e.g., 'London', 'New York'). Leave empty for all locations.",
                        "default": ""
                    },
                    "commitment": {
                        "title": "Commitment",
                        "type": "string",
                        "description": "Filter by commitment type (e.g., 'Full-time', 'Part-time', 'Intern', 'Contract'). Leave empty for all.",
                        "default": ""
                    },
                    "workplaceType": {
                        "title": "Workplace type",
                        "enum": [
                            "",
                            "remote",
                            "hybrid",
                            "onsite"
                        ],
                        "type": "string",
                        "description": "Filter by workplace arrangement."
                    },
                    "keyword": {
                        "title": "Keyword search",
                        "type": "string",
                        "description": "Search jobs by keyword in title or description. Applied client-side after fetching from the API.",
                        "default": ""
                    },
                    "maxItems": {
                        "title": "Max jobs per company",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Maximum number of jobs to scrape per company. Set to 0 to scrape all available jobs.",
                        "default": 0
                    },
                    "descriptionFormat": {
                        "title": "Description format",
                        "enum": [
                            "plainText",
                            "html",
                            "both",
                            "none"
                        ],
                        "type": "string",
                        "description": "Choose how to include the job description in the output. 'Plain text' is recommended for most use cases.",
                        "default": "plainText"
                    },
                    "deduplicate": {
                        "title": "Deduplicate jobs",
                        "type": "boolean",
                        "description": "Skip jobs that were already scraped in previous runs. Uses the key-value store to track seen job IDs.",
                        "default": true
                    },
                    "includeLists": {
                        "title": "Include structured lists",
                        "type": "boolean",
                        "description": "Include structured requirement/responsibility lists extracted from job postings.",
                        "default": true
                    },
                    "proxyConfiguration": {
                        "title": "Proxy configuration",
                        "type": "object",
                        "description": "Lever's API doesn't require a proxy. Only enable this for high-volume scraping or if you encounter rate limiting.",
                        "default": {
                            "useApifyProxy": false
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
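The `runsResponseSchema` above describes the JSON envelope returned when a run is started or queried via the Apify API. A minimal sketch of reading the key fields from such a response — note the payload below uses made-up illustrative values, not real API output, and field names follow the schema above:

```python
# Illustrative run-response payload shaped like runsResponseSchema above.
# Values are placeholders for demonstration; a real response comes from the
# Apify API when you start or fetch an Actor run.
run_response = {
    "data": {
        "id": "RUN_ID",
        "actId": "ACT_ID",
        "status": "READY",
        "startedAt": "2025-01-08T00:00:00.000Z",
        "defaultDatasetId": "DATASET_ID",
        "defaultKeyValueStoreId": "KV_STORE_ID",
        "usageTotalUsd": 0.00005,
        "stats": {"computeUnits": 0},
    }
}

data = run_response["data"]
print(data["status"])            # lifecycle state, e.g. READY / RUNNING / SUCCEEDED
print(data["defaultDatasetId"])  # dataset where the scraped jobs land
print(f"{data['usageTotalUsd']:.5f}")  # total platform usage cost in USD
```

In practice you would poll `status` until the run finishes, then fetch results from the dataset identified by `defaultDatasetId`.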
