# Google Jobs Scraper (`automation-lab/google-jobs-scraper`) Actor

Search Google Jobs and extract job listings — titles, companies, locations, full descriptions, salaries, apply URLs, qualifications, and more. Export to JSON, CSV, Excel.

- **URL**: https://apify.com/automation-lab/google-jobs-scraper.md
- **Developed by:** [Stas Persiianenko](https://apify.com/automation-lab) (community)
- **Categories:** Jobs
- **Stats:** 20 total users, 9 monthly users, 97.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per event

This Actor is paid per event: you are not charged for Apify platform usage, only a fixed price for specific events.
Because this Actor supports Apify Store discounts, the price gets lower the higher your subscription plan.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Google Jobs Scraper

Search Google Jobs by keyword and extract structured job listings — titles, companies, locations, descriptions, salaries, employment types, apply URLs, and qualification highlights. Supports location filtering, country, and language settings.

### What does Google Jobs Scraper do?

Google Jobs Scraper searches Google's job aggregation engine and extracts detailed, structured data from every listing. Google Jobs pulls listings from thousands of job boards (Indeed, LinkedIn, Glassdoor, ZipRecruiter, company career pages) into one unified interface — this scraper turns those results into clean, exportable data.

Enter your job search keywords, and the scraper opens Google Jobs in a real browser, scrolls through results, clicks into each listing to capture full details, and returns structured data including descriptions, salary ranges, qualifications, responsibilities, benefits, and direct apply URLs.

The scraper uses Playwright with residential proxies to navigate Google Jobs exactly like a real user would, ensuring you get the same comprehensive results Google shows to job seekers.

### Who is Google Jobs Scraper for?

- 🏢 **Recruiters and HR teams** tracking job market trends and competitor postings across industries
- 📊 **Compensation analysts** collecting salary data across roles, locations, and companies
- 🔍 **Job aggregator platforms** building comprehensive job databases from Google's curated listings
- 🧪 **Labor market researchers** studying employment trends, skill demand, and hiring patterns
- 🤖 **Job board startups** sourcing listings from Google's aggregated feed of thousands of boards
- 💼 **Career coaches** analyzing what employers look for in specific roles and industries
- 📈 **Data analysts** building datasets for workforce analytics, salary benchmarking, and market intelligence
- 🏛️ **Policy researchers** studying employment patterns, remote work trends, and regional job availability

### Why use Google Jobs Scraper?

- 🌐 **Aggregated from thousands of sources** — Google Jobs combines listings from Indeed, LinkedIn, Glassdoor, ZipRecruiter, and company career pages in one place
- 📋 **Full job details** — title, company, location, description, salary, employment type, qualifications, responsibilities, and benefits
- 🔗 **Direct apply URLs** — get the original job posting link to apply directly on the source site
- 🌍 **Location + language filtering** — target specific countries, cities, and languages for localized results
- 📦 **Multiple queries per run** — search for "software engineer", "data analyst", and "product manager" in a single run
- 💰 **Pay per result** — only pay for jobs actually extracted, no flat subscription fees
- ⚡ **Batch processing** — extract up to 200 jobs per query across multiple search terms

### How much does it cost to scrape Google Jobs?

Google Jobs Scraper uses pay-per-event pricing. You only pay for what you use:

| Event | Price |
|-------|-------|
| Run started (one-time) | $0.035 |
| Per job scraped (Free tier) | $0.006 |
| Per job scraped (Bronze) | $0.0054 |
| Per job scraped (Silver) | $0.0048 |
| Per job scraped (Gold) | $0.0039 |
| Per job scraped (Platinum) | $0.003 |
| Per job scraped (Diamond) | $0.0024 |

**Example costs (Free tier):**
- 10 jobs for 1 keyword: $0.035 + (10 x $0.006) = **$0.095**
- 50 jobs each for 2 keywords (100 jobs total): $0.035 + (100 x $0.006) = **$0.635**
- 200 jobs for 1 keyword: $0.035 + (200 x $0.006) = **$1.235**

Apify Free plan users get $5/month in free credits — enough for ~833 job listings.
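The arithmetic above can be sketched as a small helper. The prices are copied from the table; the tier labels are just illustrative dictionary keys:

```python
# Pay-per-event cost estimate: one run-start fee plus a per-job charge.
RUN_START_FEE = 0.035
PER_JOB_PRICE = {
    "free": 0.006,
    "bronze": 0.0054,
    "silver": 0.0048,
    "gold": 0.0039,
    "platinum": 0.003,
    "diamond": 0.0024,
}

def estimate_cost(jobs_per_query: int, num_queries: int, tier: str = "free") -> float:
    """Estimate a single run's cost for a given subscription tier."""
    total_jobs = jobs_per_query * num_queries
    return round(RUN_START_FEE + total_jobs * PER_JOB_PRICE[tier], 4)

print(estimate_cost(10, 1))   # 0.095
print(estimate_cost(50, 2))   # 0.635
```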

### Data you can extract from Google Jobs

| Field | Type | Description |
|-------|------|-------------|
| `title` | string | Job title (e.g. "Senior Software Engineer") |
| `company` | string | Employer name |
| `location` | string | Job location (city, state, or "Remote") |
| `description` | string | Full job description text |
| `salary` | string | Salary range if available (e.g. "$120K–$160K a year") |
| `employmentType` | string | Full-time, Part-time, Contract, Internship |
| `datePosted` | string | When the job was posted (e.g. "3 days ago") |
| `sourceUrl` | string | URL of the original job posting |
| `sourceDomain` | string | Domain of the source site (e.g. "linkedin.com") |
| `applyUrl` | string | Direct link to apply for the job |
| `highlights.qualifications` | string[] | Required qualifications and skills |
| `highlights.responsibilities` | string[] | Key job responsibilities |
| `highlights.benefits` | string[] | Listed benefits and perks |
| `query` | string | The search keyword that returned this result |
| `scrapedAt` | string | ISO 8601 timestamp when the data was collected |

### How to scrape Google Jobs step by step

1. Go to [Google Jobs Scraper](https://apify.com/automation-lab/google-jobs-scraper) on Apify Store
2. Click **Try for free**
3. Enter your job search keywords (e.g., "software engineer", "data analyst")
4. Optionally add a location filter (e.g., "New York", "Remote", "London")
5. Set the maximum number of jobs per query (default: 20)
6. Adjust country and language if needed (default: US, English)
7. Click **Start** and wait for results
8. Download your data as JSON, CSV, Excel, or connect it to your workflow

### Input configuration

| Field | Type | Description | Default |
|-------|------|-------------|---------|
| `queries` | string[] | Job search keywords (e.g., "software engineer", "nurse") | *required* |
| `location` | string | Location filter appended to each query (e.g., "New York", "Remote") | none |
| `maxResults` | integer | Maximum jobs to extract per query (1–200) | 20 |
| `country` | string | Country code for localized results (e.g., "us", "uk", "de") | "us" |
| `language` | string | Language code for results (e.g., "en", "de", "fr") | "en" |
| `maxRequestRetries` | integer | Number of retry attempts for failed requests (1–10) | 3 |

#### Example input

```json
{
    "queries": ["software engineer", "data analyst"],
    "location": "New York",
    "maxResults": 20,
    "country": "us",
    "language": "en"
}
```

### Output example

```json
{
    "title": "Senior Software Engineer",
    "company": "Google",
    "location": "New York, NY",
    "description": "We're looking for a Senior Software Engineer to join our Cloud Platform team. You will design and build large-scale distributed systems, mentor junior engineers, and drive technical strategy...",
    "salary": "$160,000–$210,000 a year",
    "employmentType": "Full-time",
    "datePosted": "3 days ago",
    "sourceUrl": "https://careers.google.com/jobs/results/1234567890",
    "sourceDomain": "careers.google.com",
    "applyUrl": "https://careers.google.com/jobs/results/1234567890/apply",
    "highlights": {
        "qualifications": [
            "Bachelor's degree in Computer Science or equivalent",
            "5+ years of experience in software development",
            "Proficiency in Python, Java, or Go",
            "Experience with distributed systems and cloud infrastructure"
        ],
        "responsibilities": [
            "Design and implement scalable backend services",
            "Mentor junior team members and conduct code reviews",
            "Collaborate with product teams to define technical requirements",
            "Drive architectural decisions for new features"
        ],
        "benefits": [
            "Competitive salary and equity",
            "Health, dental, and vision insurance",
            "Flexible remote work policy",
            "Annual education stipend"
        ]
    },
    "query": "software engineer",
    "scrapedAt": "2026-03-27T14:30:00.000Z"
}
```

### Tips for best results

- 🎯 **Use specific job titles** — "frontend react developer" returns more relevant results than just "developer"
- 📍 **Add location for local jobs** — use the location field for city-specific searches like "Austin, TX" or "London"
- 🌐 **Try "Remote" as location** — filter for remote-only positions by setting location to "Remote"
- 📏 **Start small** — test with 10 results first, then scale up once you verify the output matches your needs
- 🔄 **Batch related queries** — add multiple job titles in one run to build a comprehensive dataset efficiently
- 💡 **Use country codes** — set `country` to "uk" for British job listings or "de" for German ones
- 🗣️ **Match language to country** — set `language` to "de" when searching German jobs for localized titles and descriptions
- 📊 **Check salary fields** — not all listings include salary data; Google shows it when the source provides it
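Because `location` is a single string per run, covering several cities means preparing one input per city. A sketch of that pattern (the target list and base input are illustrative):

```python
# Build one input object per target city, pairing each country with a
# matching language per the "match language to country" tip above.
BASE = {"queries": ["software engineer"], "maxResults": 20}
TARGETS = [
    ("Austin, TX", "us", "en"),
    ("London", "uk", "en"),
    ("Berlin", "de", "de"),
]

run_inputs = [
    {**BASE, "location": loc, "country": country, "language": lang}
    for loc, country, lang in TARGETS
]

# Each dict can then be passed to client.actor(...).call(run_input=...).
print(run_inputs[2])
```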

### Integrations

Connect Google Jobs data to your existing tools and workflows:

- 📊 **Google Sheets** — automatically export job listings to a spreadsheet for tracking and analysis
- 🔗 **Zapier / Make** — trigger workflows when new jobs matching your criteria appear
- 💾 **Webhooks** — push results to your own API endpoint in real time
- 📁 **S3 / Google Cloud Storage** — store large job datasets for batch processing
- 📈 **Power BI / Tableau** — import structured job data for salary benchmarking dashboards
- 🤖 **Slack notifications** — get alerts when new jobs matching your keywords are posted
- 📧 **Email alerts** — receive daily digests of new job listings via Apify integrations
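For a DIY webhook-style push, you can also filter the dataset items yourself and POST the matches to your own endpoint after a run finishes. A minimal sketch; the endpoint URL and filter keyword are hypothetical placeholders:

```python
import json
import urllib.request

def push_matching(items, keyword, endpoint="https://example.com/hooks/jobs"):
    """POST jobs whose title contains `keyword` to a webhook endpoint."""
    matches = [job for job in items
               if keyword.lower() in job.get("title", "").lower()]
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(matches).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment to actually send the request
    return matches

jobs = [{"title": "Machine Learning Engineer"}, {"title": "Accountant"}]
print(len(push_matching(jobs, "engineer")))  # 1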

### Using Google Jobs Scraper with the API

#### Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/google-jobs-scraper').call({
    queries: ['software engineer', 'data analyst'],
    location: 'New York',
    maxResults: 50,
    country: 'us',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} job listings`);
items.forEach((job) => {
    console.log(`${job.title} at ${job.company} — ${job.salary || 'Salary not listed'}`);
});
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/google-jobs-scraper').call(run_input={
    'queries': ['software engineer', 'data analyst'],
    'location': 'New York',
    'maxResults': 50,
    'country': 'us',
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Found {len(items)} job listings')
for job in items:
    print(f"{job['title']} at {job['company']} — {job.get('salary') or 'Salary not listed'}")
```

#### cURL

```bash
curl "https://api.apify.com/v2/acts/automation-lab~google-jobs-scraper/runs" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "queries": ["software engineer"],
    "location": "New York",
    "maxResults": 20,
    "country": "us"
  }'
```

### Using with MCP (Model Context Protocol)

#### Claude Code

Add Google Jobs Scraper to your Claude Code setup:

```bash
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/google-jobs-scraper"
```

Then ask Claude: *"Search Google Jobs for 'product manager' in San Francisco and show me the top 20 listings with salaries"*

#### Claude Desktop

Add this to your Claude Desktop `claude_desktop_config.json`:

```json
{
    "mcpServers": {
        "apify": {
            "url": "https://mcp.apify.com?tools=automation-lab/google-jobs-scraper"
        }
    }
}
```

Then ask:

- *"Find remote data engineering jobs and list them with salary ranges"*
- *"Which companies are hiring for 'machine learning engineer' in New York?"*
- *"Search for entry-level marketing jobs in London and show me the qualifications required"*

### Is it legal to scrape Google Jobs?

Google Jobs Scraper accesses only publicly available content — the same job listings any visitor sees when searching on Google. Web scraping of public data is generally considered legal, as established by the U.S. Ninth Circuit's ruling in *hiQ Labs v. LinkedIn* (2022).

This scraper does not:

- Access private or restricted content
- Bypass authentication or paywalls
- Collect personal data beyond what is publicly displayed
- Circumvent any technical protection measures

The data extracted consists of job postings that employers have intentionally made public. Always review and comply with applicable laws in your jurisdiction before scraping.

### FAQ

**How many jobs can I extract per query?**
Up to 200 jobs per query. Google Jobs loads results progressively as you scroll, so larger requests take proportionally longer. For most use cases, 20–50 jobs per query provides a good balance of coverage and speed.

**Why am I getting fewer results than the max I set?**
Google may have fewer matching jobs for your query and location combination. The scraper extracts everything Google shows — if Google only has 15 matching jobs, that's all you'll get.

**Why is the salary field empty for some jobs?**
Google Jobs only displays salary information when the source site provides it. Many employers don't include salary ranges in their listings. This is a limitation of the source data, not the scraper.

**Does it support non-English job searches?**
Yes. Set the `country` and `language` fields to match your target market. For example, use `country: "de"` and `language: "de"` for German job listings.

**The scraper returned 0 results — what happened?**
This usually means Google showed a CAPTCHA or verification page. The scraper automatically retries with a fresh session. If it persists, try running again — Google's bot detection is session-dependent and residential proxies typically resolve this.

**Can I filter by employment type (full-time, part-time, remote)?**
Include these terms in your search query, e.g., "software engineer remote" or "part-time data entry". Google Jobs interprets these naturally, just like searching on Google directly.

**Can I schedule recurring job searches?**
Yes. Use Apify's built-in scheduling to run searches hourly, daily, or weekly. This is ideal for monitoring job markets, tracking new postings, or building time-series datasets of job availability.

**What job boards does Google Jobs pull from?**
Google aggregates listings from thousands of sources including Indeed, LinkedIn, Glassdoor, ZipRecruiter, Monster, company career pages, staffing agencies, and government job sites. You get a unified view without scraping each board individually.

### Related scrapers

- [Indeed Scraper](https://apify.com/automation-lab/indeed-scraper) — scrape job listings directly from Indeed
- [LinkedIn Jobs Scraper](https://apify.com/automation-lab/linkedin-jobs-scraper) — extract job postings from LinkedIn Jobs
- [Glassdoor Jobs Scraper](https://apify.com/automation-lab/glassdoor-jobs-scraper) — scrape jobs and company data from Glassdoor
- [Greenhouse Jobs Scraper](https://apify.com/automation-lab/greenhouse-jobs-scraper) — extract job listings from Greenhouse-powered career pages
- [Naukri Scraper](https://apify.com/automation-lab/naukri-scraper) — scrape job listings from Naukri.com (India)
- [Google Search Scraper](https://apify.com/automation-lab/google-search-scraper) — scrape general Google search results

# Actor input Schema

## `queries` (type: `array`):

Job search queries for Google Jobs. Each query runs a separate search. Use job titles like "software engineer", "data analyst", or "registered nurse".

## `location` (type: `string`):

Optional location filter appended to each query (e.g. "New York", "London", "Remote"). Leave empty for default Google results.

## `maxResults` (type: `integer`):

Maximum number of job listings to extract per search query. Higher values require more scrolling and time.

## `country` (type: `string`):

Country code for localized Google Jobs results (e.g. "us", "uk", "de", "fr"). Affects job availability and language.

## `language` (type: `string`):

Language code for search results (e.g. "en", "de", "fr"). Controls the language of job titles and descriptions.

## `maxRequestRetries` (type: `integer`):

Number of retry attempts for failed requests. Increase if you experience frequent timeouts.

## Actor input object example

```json
{
  "queries": [
    "software engineer"
  ],
  "maxResults": 10,
  "country": "us",
  "language": "en",
  "maxRequestRetries": 3
}
```

# Actor output Schema

## `overview` (type: `string`):

No description

## `details` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "queries": [
        "software engineer"
    ],
    "maxResults": 10,
    "country": "us",
    "language": "en"
};

// Run the Actor and wait for it to finish
const run = await client.actor("automation-lab/google-jobs-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "queries": ["software engineer"],
    "maxResults": 10,
    "country": "us",
    "language": "en",
}

# Run the Actor and wait for it to finish
run = client.actor("automation-lab/google-jobs-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "queries": [
    "software engineer"
  ],
  "maxResults": 10,
  "country": "us",
  "language": "en"
}' |
apify call automation-lab/google-jobs-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=automation-lab/google-jobs-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Google Jobs Scraper",
        "description": "Search Google Jobs and extract job listings — titles, companies, locations, full descriptions, salaries, apply URLs, qualifications, and more. Export to JSON, CSV, Excel.",
        "version": "0.1",
        "x-build-id": "2ftNdX2UpkVFVmyzV"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/automation-lab~google-jobs-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-automation-lab-google-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/automation-lab~google-jobs-scraper/runs": {
            "post": {
                "operationId": "runs-sync-automation-lab-google-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/automation-lab~google-jobs-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-automation-lab-google-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "queries"
                ],
                "properties": {
                    "queries": {
                        "title": "Job Search Queries",
                        "type": "array",
                        "description": "Job search queries for Google Jobs. Each query runs a separate search. Use job titles like \"software engineer\", \"data analyst\", or \"registered nurse\".",
                        "items": {
                            "type": "string"
                        }
                    },
                    "location": {
                        "title": "Location",
                        "type": "string",
                        "description": "Optional location filter appended to each query (e.g. \"New York\", \"London\", \"Remote\"). Leave empty for default Google results."
                    },
                    "maxResults": {
                        "title": "Max Jobs per Query",
                        "minimum": 1,
                        "maximum": 200,
                        "type": "integer",
                        "description": "Maximum number of job listings to extract per search query. Higher values require more scrolling and time.",
                        "default": 20
                    },
                    "country": {
                        "title": "Country",
                        "type": "string",
                        "description": "Country code for localized Google Jobs results (e.g. \"us\", \"uk\", \"de\", \"fr\"). Affects job availability and language.",
                        "default": "us"
                    },
                    "language": {
                        "title": "Language",
                        "type": "string",
                        "description": "Language code for search results (e.g. \"en\", \"de\", \"fr\"). Controls the language of job titles and descriptions.",
                        "default": "en"
                    },
                    "maxRequestRetries": {
                        "title": "Max Retries",
                        "minimum": 1,
                        "maximum": 10,
                        "type": "integer",
                        "description": "Number of retry attempts for failed requests. Increase if you experience frequent timeouts.",
                        "default": 3
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
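Since this Actor is priced per event, the `usageUsd` map on the run object is the natural place to check what a run cost. A minimal sketch of reading those fields, using an illustrative payload built from the schema's example values (a real run object would come from the Apify API or the Python client's `run.get()`):

```python
# Illustrative run payload mirroring the schema's example values above.
# A real run object is returned by the Apify API / apify-client.
sample_run = {
    "status": "READY",
    "stats": {"inputBodyLen": 2000, "rebootCount": 0, "computeUnits": 0},
    "options": {"build": "latest", "timeoutSecs": 300, "memoryMbytes": 1024},
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
    },
}

def total_usage_usd(run: dict) -> float:
    """Sum the per-service USD charges recorded on a run."""
    return sum(run.get("usageUsd", {}).values())

# The per-service charges should add up to the run's usageTotalUsd.
assert abs(total_usage_usd(sample_run) - sample_run["usageTotalUsd"]) < 1e-9
```

The same check works on any run object that follows the schema above, which is useful for logging per-run cost or alerting when a run exceeds a budget.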
