# Workday Jobs Scraper (`automation-lab/workday-jobs-scraper`) Actor

Extract job listings from any company using Workday ATS. Covers 10,000+ enterprise employers including Walmart, Target, and Fortune 500 companies. Get titles, locations, descriptions, compensation, and employment type. Export as JSON, CSV, or Excel.

- **URL**: https://apify.com/automation-lab/workday-jobs-scraper.md
- **Developed by:** [Stas Persiianenko](https://apify.com/automation-lab) (community)
- **Categories:** Jobs
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded
- **User rating**: No ratings yet

## Pricing

Pay per event

This Actor is paid per event: you are not charged for Apify platform usage, only a fixed price for specific events.
Since this Actor supports Apify Store discounts, the price decreases the higher your subscription plan.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### What does Workday Jobs Scraper do?

**Workday Jobs Scraper** extracts job listings from any company that uses [Workday](https://www.workday.com/) as their Applicant Tracking System (ATS). Workday powers career sites for **10,000+ enterprise companies** including Walmart, Target, and thousands more Fortune 500 employers.

Paste a Workday career site URL, and the scraper returns structured job data: titles, locations, descriptions, compensation, employment type, and more. The easiest way to try it is to click **Start** with the prefilled Walmart URL.

This scraper uses Workday's **public JSON API** directly — no browser automation, no proxy needed, no login required. It is significantly **cheaper and faster** than universal ATS scrapers that charge $0.012/job.

### Who is Workday Jobs Scraper for?

**HR and Talent Intelligence Teams**
- Monitor competitor hiring patterns and headcount changes
- Track job posting volumes across departments and locations
- Identify emerging roles and skill requirements in your industry

**Recruiters and Staffing Agencies**
- Aggregate job listings from multiple enterprise clients
- Build candidate matching databases from fresh job data
- Track new openings across target companies in real time

**Data Analysts and Researchers**
- Analyze labor market trends across enterprise employers
- Study compensation data and job distribution patterns
- Build datasets for workforce analytics and academic research

**Job Board Operators and Aggregators**
- Feed Workday listings into your job board or aggregation platform
- Keep listings fresh with scheduled daily scraping runs
- Enrich existing job data with full descriptions and metadata

### Why use Workday Jobs Scraper?

- **Pure HTTP API** — no browser overhead, runs on minimal memory (256 MB)
- **No proxy required** — Workday's public API does not block requests
- **No login or API key needed** — works out of the box
- **75% cheaper** than competing universal ATS scrapers ($0.003/job vs $0.012/job)
- **Fast** — scrapes 100 jobs in under 30 seconds
- **Full job details** — descriptions, compensation, employment type, categories
- **API access** — call via REST API, schedule runs, export to JSON/CSV/Excel
- **Integrations** — connect to Google Sheets, Slack, Zapier, Make, and 5,000+ apps

### What data can you extract?

| Field | Description |
|-------|-------------|
| `title` | Job title |
| `company` | Company tenant name |
| `location` | Primary job location |
| `postedDate` | When the job was posted |
| `jobId` | Workday job path identifier |
| `url` | Direct link to the job posting |
| `description` | Full HTML job description |
| `compensation` | Salary/pay range (when available) |
| `employmentType` | Full-time, part-time, contract, etc. |
| `category` | Job category or family |
| `requisitionId` | Internal requisition ID |
| `timeType` | Time type (full-time/part-time) |
| `locations` | All job locations (array) |
| `remoteType` | Remote, on-site, or hybrid |
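If you post-process the dataset yourself rather than using the built-in export, the fields above map directly onto CSV columns. A minimal sketch (the `items_to_csv` helper is hypothetical, not part of the Actor; it assumes items shaped like the table above):

```python
import csv
import io

# Field names taken from the output table above
FIELDS = ["title", "company", "location", "postedDate", "jobId", "url",
          "description", "compensation", "employmentType", "category",
          "requisitionId", "timeType", "locations", "remoteType"]

def items_to_csv(items):
    """Write scraped job items to a CSV string, keeping only the documented fields."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    for item in items:
        row = dict(item)
        # Flatten the `locations` array into a single delimited cell
        if isinstance(row.get("locations"), list):
            row["locations"] = "; ".join(row["locations"])
        writer.writerow(row)
    return buf.getvalue()
```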

### How much does it cost to scrape Workday jobs?

This Actor uses **pay-per-event** pricing — you pay only for what you scrape.
No monthly subscription. All platform costs are **included**.

| | Free | Starter ($29/mo) | Scale ($199/mo) | Business ($999/mo) |
|---|---|---|---|---|
| **Per job** | $0.0035 | $0.003 | $0.0023 | $0.0018 |
| **1,000 jobs** | $3.50 | $3.00 | $2.30 | $1.80 |

Higher-tier plans get additional volume discounts.

A small start fee of $0.005 applies per run.

**Real-world cost examples:**

| Query | Results | Duration | Cost (Free tier) |
|---|---|---|---|
| Walmart software engineers | 20 jobs | ~5s | ~$0.075 |
| Target all jobs | 100 jobs | ~20s | ~$0.355 |
| Walmart all jobs | 200 jobs | ~40s | ~$0.705 |

On the **free plan** ($5 credit), you can scrape approximately **1,400 job listings**.
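The cost examples above follow directly from the per-job rates and the $0.005 start fee. A quick sketch of that arithmetic (the `estimate_cost` helper is illustrative, not part of the Actor):

```python
# Per-job rates from the pricing table above; START_FEE is the per-run start fee.
RATES = {"free": 0.0035, "starter": 0.003, "scale": 0.0023, "business": 0.0018}
START_FEE = 0.005

def estimate_cost(num_jobs, plan="free"):
    """Estimate one run's cost: per-job rate times job count, plus the start fee."""
    return num_jobs * RATES[plan] + START_FEE

# e.g. estimate_cost(100) on the free tier matches the ~$0.355 example above
```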

### How to scrape Workday job listings

1. Go to [Workday Jobs Scraper](https://apify.com/automation-lab/workday-jobs-scraper) on Apify Store
2. Click **Start** to run with the prefilled Walmart example
3. Or enter your target company's Workday career site URL
4. Optionally add search keywords and location filters
5. Set the maximum number of jobs to scrape
6. Click **Start** and wait for results
7. Download your data as JSON, CSV, or Excel

**Example input for Walmart software engineering jobs:**
```json
{
    "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
    "searchQuery": "software engineer",
    "maxJobs": 50,
    "includeDescription": true
}
```

**Example input for Target analyst positions:**

```json
{
    "companyUrl": "https://target.wd5.myworkdayjobs.com/targetcareers",
    "searchQuery": "analyst",
    "location": "Minneapolis",
    "maxJobs": 100
}
```

### Input parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `companyUrl` | string | *required* | Workday career site URL (e.g., `https://walmart.wd5.myworkdayjobs.com/WalmartExternal`) |
| `searchQuery` | string | `""` | Filter jobs by keywords |
| `location` | string | `""` | Filter by location text |
| `maxJobs` | integer | `50` | Maximum number of jobs to scrape (1-5,000) |
| `includeDescription` | boolean | `true` | Fetch full HTML job descriptions (slower but more data) |

### Output example

```json
{
    "title": "Senior Software Engineer",
    "company": "walmart",
    "location": "IN KA BANGALORE Home Office PW II",
    "postedDate": "Posted 30+ Days Ago",
    "jobId": "/job/IN-KA-BANGALORE-Home-Office-PW-II/Senior-Software-Engineer---Java_R-1642453",
    "url": "https://walmart.wd5.myworkdayjobs.com/job/IN-KA-BANGALORE-Home-Office-PW-II/Senior-Software-Engineer---Java_R-1642453",
    "description": "<h2>Position Summary...</h2><p>Responsible for coding, unit testing...</p>",
    "compensation": null,
    "employmentType": "Full time",
    "category": null,
    "requisitionId": "R-1642453",
    "timeType": "Full time",
    "locations": ["IN KA BANGALORE Home Office PW II"],
    "remoteType": null
}
```

### Tips for best results

- **Start small** — run with 10-20 jobs first to verify the URL works, then scale up
- **Finding the right URL** — visit the company's career page and look for a URL containing `.myworkdayjobs.com`. The URL format is: `https://{company}.wd{N}.myworkdayjobs.com/{board}`
- **Skip descriptions for speed** — set `includeDescription` to `false` to skip detail API calls (much faster for large batches)
- **Schedule regular runs** — set up daily or weekly scraping to track new job postings
- **Not all companies use Workday** — if you get errors, the company may use a different ATS (try [Greenhouse Jobs Scraper](https://apify.com/automation-lab/greenhouse-jobs-scraper) instead)
- **Some companies restrict API access** — a small number of Workday tenants return 401/422 errors due to custom security settings
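To check a URL before spending credits on a run, you can validate it against the `https://{company}.wd{N}.myworkdayjobs.com/{board}` format described above. A minimal sketch (the `parse_workday_url` helper is hypothetical, not provided by the Actor):

```python
import re

# Matches the https://{company}.wd{N}.myworkdayjobs.com/{board} format from the tips above
WORKDAY_URL = re.compile(
    r"^https://(?P<company>[\w-]+)\.wd(?P<n>\d+)\.myworkdayjobs\.com/(?P<board>.+)$"
)

def parse_workday_url(url):
    """Return (company, board) for a valid Workday career site URL, or None."""
    m = WORKDAY_URL.match(url)
    if not m:
        return None
    return m.group("company"), m.group("board")
```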

### Integrations

**Workday Jobs Scraper + Google Sheets**
Export job listings directly to a Google Sheet for team collaboration. Set up a daily schedule to keep your hiring tracker updated automatically.

**Workday Jobs Scraper + Slack/Discord**
Get notified when new jobs matching your criteria are posted. Use webhooks to send alerts to a Slack channel for your recruiting team.

**Workday Jobs Scraper + Zapier/Make**
Build automated workflows that route new job listings to your ATS, CRM, or candidate database. Trigger actions based on job attributes like location or department.

**Scheduled monitoring**
Run the scraper daily to track hiring trends, detect new openings, and monitor competitor headcount changes over time.
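When diffing scheduled runs to detect new openings, `requisitionId` (falling back to `jobId`) works as a stable key. A minimal sketch of that dedup step, assuming you persist the set of seen IDs between runs (the `find_new_jobs` helper is illustrative, not part of the Actor):

```python
def find_new_jobs(items, seen_ids):
    """Return jobs not seen in previous runs, updating seen_ids in place."""
    fresh = []
    for item in items:
        # requisitionId is the stable internal key; jobId is a fallback
        rid = item.get("requisitionId") or item.get("jobId")
        if rid and rid not in seen_ids:
            seen_ids.add(rid)
            fresh.append(item)
    return fresh
```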

### Using the Apify API

#### Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('automation-lab/workday-jobs-scraper').call({
    companyUrl: 'https://walmart.wd5.myworkdayjobs.com/WalmartExternal',
    searchQuery: 'software engineer',
    maxJobs: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("automation-lab/workday-jobs-scraper").call(run_input={
    "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
    "searchQuery": "software engineer",
    "maxJobs": 50,
})

items = client.dataset(run["defaultDatasetId"]).list_items().items
print(items)
```

#### cURL

```bash
curl -X POST "https://api.apify.com/v2/acts/automation-lab~workday-jobs-scraper/runs?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
    "searchQuery": "software engineer",
    "maxJobs": 50
  }'
```

### Use with AI agents via MCP

Workday Jobs Scraper is available as a tool for AI assistants that support the [Model Context Protocol (MCP)](https://docs.apify.com/platform/integrations/mcp).

Add the Apify MCP server to your AI client — this gives you access to all Apify Actors, including this one:

#### Setup for Claude Code

```bash
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/workday-jobs-scraper"
```

#### Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

```json
{
    "mcpServers": {
        "apify": {
            "url": "https://mcp.apify.com?tools=automation-lab/workday-jobs-scraper"
        }
    }
}
```

Your AI assistant will use OAuth to authenticate with your Apify account on first use.

#### Example prompts

Once connected, try asking your AI assistant:

- "Use automation-lab/workday-jobs-scraper to find all software engineering jobs at Walmart"
- "Scrape data analyst positions from Target's Workday career site and export to a spreadsheet"
- "Monitor Walmart's hiring for machine learning roles and notify me weekly about new postings"

Learn more in the [Apify MCP documentation](https://docs.apify.com/platform/integrations/mcp).

### Is it legal to scrape Workday job listings?

This scraper accesses only **publicly available** job listing data through Workday's public career site API. It does not bypass any authentication, access private data, or violate any terms of service.

Web scraping of publicly accessible data is generally considered legal, as confirmed by the US Ninth Circuit Court ruling in *hiQ Labs v. LinkedIn* (2022). The scraper collects the same information any job seeker can view by visiting the company's career page.

Always ensure your use case complies with applicable laws and regulations, including GDPR when processing EU residents' data.

### FAQ

**How fast is the Workday Jobs Scraper?**
Without descriptions, it can scrape 100 jobs in about 5 seconds (single API call per 20 jobs). With full descriptions enabled, expect ~30 seconds for 100 jobs due to individual detail API calls.

**How much does it cost to scrape 1,000 Workday jobs?**
On the free plan: approximately $3.50 (1,000 x $0.0035 per job + $0.005 start fee). Paid plans start at $3.00 per 1,000 jobs.

**How do I find a company's Workday career site URL?**
Search Google for `{company name} workday careers` or look for URLs containing `.myworkdayjobs.com` on the company's careers page. The URL format is `https://{company}.wd{N}.myworkdayjobs.com/{board}`.

**Why am I getting errors for some companies?**
Some Workday tenants restrict API access (HTTP 401 or 422 errors). This is a security setting configured by the company — not all Workday career sites allow public API access. Most large enterprises keep their API public.

**Why are some fields null (compensation, category)?**
Not all companies populate every field in Workday. Compensation data, job categories, and remote type depend on what the employer configures. The scraper returns all available data.

### Other job scrapers

- [Greenhouse Jobs Scraper](https://apify.com/automation-lab/greenhouse-jobs-scraper) — scrape jobs from 220,000+ companies using Greenhouse ATS
- [Google Jobs Scraper](https://apify.com/automation-lab/google-jobs-scraper) — scrape job listings from Google Jobs search
- [Glassdoor Jobs Scraper](https://apify.com/automation-lab/glassdoor-jobs-scraper) — scrape job listings and company data from Glassdoor
- [LinkedIn Jobs Scraper](https://apify.com/automation-lab/linkedin-jobs-scraper) — scrape job postings from LinkedIn

# Actor input Schema

## `companyUrl` (type: `string`):

Enter the Workday career site URL. Examples:

- https://walmart.wd5.myworkdayjobs.com/WalmartExternal
- https://microsoft.wd5.myworkdayjobs.com/Global
- https://apple.wd1.myworkdayjobs.com/en-US/External

## `searchQuery` (type: `string`):

Filter jobs by keywords (e.g. "software engineer", "data analyst"). Leave empty to get all jobs.

## `location` (type: `string`):

Filter by location text (e.g. "San Francisco", "Remote", "London"). Applied as a search facet on the Workday API.

## `maxJobs` (type: `integer`):

Maximum number of job listings to scrape. Set lower for faster, cheaper runs.

## `includeDescription` (type: `boolean`):

Fetch the full HTML job description for each listing (requires one extra API call per job — slower but more data).

## Actor input object example

```json
{
  "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
  "maxJobs": 20,
  "includeDescription": true
}
```

# Actor output Schema

## `overview` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
    "maxJobs": 20
};

// Run the Actor and wait for it to finish
const run = await client.actor("automation-lab/workday-jobs-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
    "maxJobs": 20,
}

# Run the Actor and wait for it to finish
run = client.actor("automation-lab/workday-jobs-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "companyUrl": "https://walmart.wd5.myworkdayjobs.com/WalmartExternal",
  "maxJobs": 20
}' |
apify call automation-lab/workday-jobs-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=automation-lab/workday-jobs-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Workday Jobs Scraper",
        "description": "Extract job listings from any company using Workday ATS. Covers 10,000+ enterprise employers including Walmart, Target, and Fortune 500 companies. Get titles, locations, descriptions, compensation, and employment type. Export as JSON, CSV, or Excel.",
        "version": "0.1",
        "x-build-id": "wPIfAQCH21YFvrjTS"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/automation-lab~workday-jobs-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-automation-lab-workday-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/automation-lab~workday-jobs-scraper/runs": {
            "post": {
                "operationId": "runs-sync-automation-lab-workday-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/automation-lab~workday-jobs-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-automation-lab-workday-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "companyUrl"
                ],
                "properties": {
                    "companyUrl": {
                        "title": "🏢 Workday Career Site URL",
                        "type": "string",
                        "description": "Enter the Workday career site URL. Examples:\n- https://walmart.wd5.myworkdayjobs.com/WalmartExternal\n- https://microsoft.wd5.myworkdayjobs.com/Global\n- https://apple.wd1.myworkdayjobs.com/en-US/External"
                    },
                    "searchQuery": {
                        "title": "🔍 Search Keywords",
                        "type": "string",
                        "description": "Filter jobs by keywords (e.g. \"software engineer\", \"data analyst\"). Leave empty to get all jobs."
                    },
                    "location": {
                        "title": "📍 Location Filter",
                        "type": "string",
                        "description": "Filter by location text (e.g. \"San Francisco\", \"Remote\", \"London\"). Applied as a search facet on the Workday API."
                    },
                    "maxJobs": {
                        "title": "🔢 Max Jobs",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Maximum number of job listings to scrape. Set lower for faster, cheaper runs.",
                        "default": 50
                    },
                    "includeDescription": {
                        "title": "📄 Include Full Description",
                        "type": "boolean",
                        "description": "Fetch the full HTML job description for each listing (requires one extra API call per job — slower but more data).",
                        "default": true
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
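The `usageUsd` object above breaks the run's cost down per platform resource, while `usageTotalUsd` carries the sum. As a rough sanity check (a minimal sketch — the field names come from the schema above, and the `run` dict here is a hypothetical fragment using the schema's sample values, not a real API response):

```python
def total_usage_usd(run: dict) -> float:
    """Sum the per-resource USD costs reported under run["usageUsd"]."""
    return sum(run.get("usageUsd", {}).values())

# Hypothetical run fragment built from the example values in the schema above
run = {
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
        "DATA_TRANSFER_EXTERNAL_GBYTES": 0,
    },
}

# The per-resource breakdown should add up to the reported total
assert abs(total_usage_usd(run) - run["usageTotalUsd"]) < 1e-9
```

In a real integration you would read these fields from the run object returned by the Apify client after the run finishes, rather than constructing the dict by hand.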
