# JobsDB Scraper — Hong Kong & Thailand Job Listings (`blackfalcondata/jobsdb-scraper`) Actor

JobsDB scraper for Hong Kong (hk.jobsdb.com) and Thailand (th.jobsdb.com) job listings. Extract salaries, employer profiles, and full job descriptions with multi-geo filters for keyword and classification.

- **URL**: https://apify.com/blackfalcondata/jobsdb-scraper.md
- **Developed by:** [Black Falcon Data](https://apify.com/blackfalcondata) (community)
- **Categories:** Jobs, Lead generation, Automation
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded
- **User rating**: No ratings yet

## Pricing

from $2.00 / 1,000 results

This Actor uses pay-per-event pricing: you are charged a fixed price for specific events plus standard Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
"Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash

# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### What does JobsDB Scraper do?

JobsDB Scraper extracts structured job data from [jobsdb.com](https://hk.jobsdb.com) across Hong Kong (`hk.jobsdb.com`) and Thailand (`th.jobsdb.com`), including salary data, employer profiles, contact details, work arrangements, and full job descriptions. It supports keyword search, location filters, classification and salary-range filters, and controllable result limits, so you can run the same query consistently over time. The Actor also offers detail enrichment (full descriptions, company metadata, and screening questions) where the source provides them.

### Key features

- **Two-market coverage** — one Actor covers both JobsDB markets: Hong Kong (`hk.jobsdb.com`) and Thailand (`th.jobsdb.com`).
- **Incremental mode** — recurring runs emit and charge only for listings that are new or whose tracked content changed. First run builds the baseline state; subsequent runs emit only new or changed records.
- **Detail enrichment** — full descriptions, company metadata, screening questions, and contact information where the source provides them.
- **Native search filters** — classification, sub-classification, salary range, work type, work arrangement, and date-range filters passed through to the platform's native search.
- **Compact mode** — AI-agent and MCP-friendly payloads with core fields only.

### What data can you extract from jobsdb.com?

Each result includes core listing fields (`jobId`, `seekJobId`, `title`, `canonicalUrl`, `advertiserId`, `location`, `locationCountry`, `locationState`, and more); detail fields when enrichment is enabled (`roleId`, `description`, `descriptionHtml`, `descriptionMarkdown`, and `descriptionLength`); contact and apply information (`phoneNumber`, `applyUrl`, and `extractedEmails`); and company metadata (`company`, `companyUrl`, `companyIndustry`, and `companySize`). In standard mode, all fields are always present — unavailable data points are returned as `null`, never omitted. In compact mode, only core fields are returned.

Enable detail enrichment in the input to get richer fields such as full descriptions, company metadata, and contact information where the source provides them.

### Input

The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.

Key parameters:

- **`query`** — Job search keywords. Use JSON array for multi-query.
- **`country`** — Which JobsDB market to search. (default: `"HK"`)
- **`location`** — City, state, or region. Use JSON array for multi-location.
- **`startUrls`** — Direct search or job detail URLs.
- **`maxResults`** — Maximum total job listings to return (0 = unlimited). (default: `25`)
- **`maxPages`** — Maximum SERP pages to scrape per search source. (default: `5`)
- **`sortMode`** — Sort results by relevance or date.
- **`dateRange`** — Filter jobs posted within a time range (e.g. '1', '3', '7', '14', '31').
- **`workType`** — Filter by work type code (e.g. '242' for Full Time on JobsDB).
- **`workArrangement`** — Filter by work arrangement code (e.g. '2' for Remote).
- **`classification`** — Filter by job classification/category code (e.g. '6281' for IT).
- **`subClassification`** — Filter by job sub-classification code (e.g. '6287' for Developers/Programmers under IT).
- ...and 11 more parameters

### Input examples

**Basic search** — Keyword-driven search with a result cap.

→ Full payload per result — all standard fields populated where the source provides them.

```json
{
  "query": "software engineer",
  "maxResults": 50
}
```
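
**Multi-query search** — Run several keywords or locations in one run by encoding them as a JSON-array string, as the `query` and `location` parameter notes describe.

→ A sketch assuming the Actor parses JSON-array strings for these fields; the district names are illustrative, not a validated list.

```json
{
  "query": "[\"software engineer\", \"data analyst\"]",
  "country": "HK",
  "location": "[\"Central and Western District\", \"Kowloon City District\"]",
  "maxResults": 50
}
```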

**Incremental tracking** — Only emit jobs that changed since the previous run with this `stateKey`.

→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set `emitUnchanged: true` to include unchanged records as well.

```json
{
  "query": "software engineer",
  "maxResults": 200,
  "incrementalMode": true,
  "stateKey": "software-engineer-tracker"
}
```

**Compact output for AI agents** — Return only core fields for AI-agent and MCP workflows.

→ Small payload with the most important fields — ideal for piping into LLMs without token overhead.

```json
{
  "query": "software engineer",
  "maxResults": 50,
  "compact": true
}
```

### Output

Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.
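
Outside the Console, the same items can be fetched from the [dataset items endpoint](https://docs.apify.com/api/v2) of the Apify API by appending a `format` parameter. A minimal Python sketch that builds the export URL (the dataset ID shown is a placeholder — use your run's `defaultDatasetId`):

```python
def dataset_export_url(dataset_id: str, fmt: str = "json") -> str:
    """Build an Apify dataset export URL; supported formats include json, csv, xlsx."""
    return f"https://api.apify.com/v2/datasets/{dataset_id}/items?format={fmt}"

# Placeholder dataset ID for illustration only.
print(dataset_export_url("abc123XYZ", "csv"))
# → https://api.apify.com/v2/datasets/abc123XYZ/items?format=csv
```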

### Example job record

```json
{
  "jobId": "2b7a71af2490afabab6291a716744969d6cd615751992d35500c20c07514c0de",
  "seekJobId": "91615653",
  "title": "Software Engineer",
  "canonicalUrl": "https://hk.jobsdb.com/job/91615653",
  "company": "Process Automation International Ltd",
  "companyUrl": null,
  "advertiserId": "61276721",
  "location": "Tai Po District",
  "locationCountry": "HK",
  "locationState": "Tai Po District",
  "locationSuburb": null,
  "locationPostcode": null,
  "salaryText": null,
  "salaryMin": null,
  "salaryMax": null,
  "salaryCurrency": "HKD",
  "salaryType": null,
  "employmentType": "Full time",
  "workArrangement": "onsite",
  "category": "Information & Communication Technology",
  "subCategory": "Engineering - Software",
  "roleId": "software-engineer",
  "teaser": "Responsibilities\n1. Perform Programming Development Duties\n2. Handle System Design of various application\n3. Perform System Testing, Documentation",
  "bulletPoints": [
    "Higher Diploma or Bachelor’s degree in Computer Science, IT, or related fields",
    "Hands-on program coding in C, VB, Java, Node.js and C#",
    "Knowledge of RDBMS, ideally MS SQL"
  ],
  "description": "Responsibilities\n\nPerform Programming Development Duties\n\nHandle System Design of various application\n\nPerform System Testing, Documentation\n\nRequirements:\n\nHigher Diploma or Bachelor’s degree in Comp..."
}
```

### Incremental fields

When `incrementalMode: true`, each record also carries:

- `changeType` — one of `NEW`, `UPDATED`, `UNCHANGED`, `REAPPEARED`, `EXPIRED`. Default output covers `NEW` / `UPDATED` / `REAPPEARED`; set `emitUnchanged: true` or `emitExpired: true` to opt into the others.
- `firstSeenAt`, `lastSeenAt` — ISO-8601 timestamps tracking the listing across runs.
- `isRepost`, `repostOfId`, `repostDetectedAt` — populated when a new listing matches the tracked content of a previously expired one. Set `skipReposts: true` to drop detected reposts from the output.
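
Downstream consumers often want to route records by `changeType` before processing — for example, alerting on `NEW` while merely logging `UPDATED`. A minimal sketch over already-fetched records (the sample records are invented):

```python
from collections import defaultdict

def group_by_change(records):
    """Bucket incremental records by their changeType field."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec.get("changeType", "UNKNOWN")].append(rec)
    return groups

records = [
    {"jobId": "a1", "changeType": "NEW"},
    {"jobId": "b2", "changeType": "UPDATED"},
    {"jobId": "c3", "changeType": "NEW"},
]
groups = group_by_change(records)
print(len(groups["NEW"]))  # → 2
```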

### How to scrape jobsdb.com

1. Go to [JobsDB Scraper](https://apify.com/blackfalcondata/jobsdb-scraper?fpr=1h3gvi) in Apify Console.
2. Enter a search keyword and optional location filter.
3. Set `maxResults` to control how many results you need.
4. Enable `includeDetails` if you need full descriptions, contact info, or company data.
5. Click **Start** and wait for the run to finish.
6. Export the dataset as JSON, CSV, or Excel.

### Use cases

- Extract job data from jobsdb.com for market research and competitive analysis.
- Track salary trends across regions and categories over time.
- Monitor new and changed listings on scheduled runs without processing the full dataset every time.
- Build outreach lists using contact details and apply URLs from listings.
- Research company hiring patterns, employer profiles, and industry distribution.
- Feed structured data into AI agents, MCP tools, and automated pipelines using compact mode.
- Export clean, structured data to dashboards, spreadsheets, or data warehouses.

### How much does it cost to scrape jobsdb.com?

JobsDB Scraper uses [pay-per-event](https://docs.apify.com/platform/actors/paid-actors/pay-per-event) pricing. You pay a small fee when the run starts and then for each result that is actually produced.

- **Run start:** $0.01 per run
- **Per result:** $0.002 per job record

Example costs:

- 10 results: **$0.03**
- 100 results: **$0.21**
- 500 results: **$1.01**
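
These figures follow directly from the two events (one run-start fee plus a per-result fee); a quick sanity check in Python:

```python
RUN_START_USD = 0.01    # charged once per run
PER_RESULT_USD = 0.002  # charged per job record

def run_cost(results: int) -> float:
    """Total event cost in USD for a single run producing `results` records."""
    return round(RUN_START_USD + PER_RESULT_USD * results, 2)

print(run_cost(10), run_cost(100), run_cost(500))  # → 0.03 0.21 1.01
```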

#### Example: recurring monitoring savings

These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.

Example setup: 100 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.

| Churn rate | Full re-scrape run cost | Incremental run cost | Savings vs full re-scrape | Monthly cost after baseline |
|---|---:|---:|---:|---:|
| 5% — stable niche query | $0.21 | $0.02 | $0.19 (90%) | $0.60 |
| 15% — moderate broad query | $0.21 | $0.04 | $0.17 (81%) | $1.20 |
| 30% — high-volume aggregator | $0.21 | $0.07 | $0.14 (67%) | $2.10 |

Full re-scrape monthly cost at daily polling: $6.30. First month with incremental costs $0.79 / $1.37 / $2.24 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.

Platform usage (compute and proxies) is billed separately by Apify based on actual consumption. Incremental runs consume less on result processing, though fixed per-run overhead stays the same.

### FAQ

#### How many results can I get from jobsdb.com?

The number of results depends on the search query and available listings on jobsdb.com. Use the `maxResults` parameter to control how many results are returned per run.

#### Does JobsDB Scraper support recurring monitoring?

Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.

#### Can I integrate JobsDB Scraper with other apps?

Yes. JobsDB Scraper works with Apify's [integrations](https://apify.com/integrations?fpr=1h3gvi) to connect with tools like Zapier, Make, Google Sheets, Slack, and more. You can also use webhooks to trigger actions when a run completes.

#### Can I use JobsDB Scraper with the Apify API?

Yes. You can start runs, manage inputs, and retrieve results programmatically through the [Apify API](https://docs.apify.com/api/v2). Client libraries are available for JavaScript, Python, and other languages.

#### Can I use JobsDB Scraper through an MCP Server?

Yes. Apify provides an [MCP Server](https://apify.com/apify/actors-mcp-server?fpr=1h3gvi) that lets AI assistants and agents call this actor directly. Use compact mode and `descriptionMaxLength` to keep payloads manageable for LLM context windows.

#### Is it legal to scrape jobsdb.com?

This actor extracts publicly available data from jobsdb.com. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.

#### Your feedback

If you have questions, need a feature, or found a bug, please [open an issue](https://apify.com/blackfalcondata/jobsdb-scraper/issues?fpr=1h3gvi) on the actor's page in Apify Console. Your feedback helps us improve.

### You might also like

- [Actiris Brussels Job Scraper](https://apify.com/blackfalcondata/actiris-scraper?fpr=1h3gvi) — Scrape all active job listings from actiris.brussels — official Brussels public employment service.
- [Adzuna Job Scraper](https://apify.com/blackfalcondata/adzuna-scraper?fpr=1h3gvi) — Scrape adzuna.com - the global job board with 20+ country markets. Structured salary…
- [APEC.fr Scraper - French Executive Jobs](https://apify.com/blackfalcondata/apec-scraper?fpr=1h3gvi) — Scrape apec.fr - French executive job listings with salary ranges, company, location, skills…
- [Arbeitsagentur Scraper - German Jobs](https://apify.com/blackfalcondata/arbeitsagentur-scraper?fpr=1h3gvi) — Scrape arbeitsagentur.de - Germany’s official employment portal with 1M+ listings. Contact data…
- [AutoScout24 Scraper](https://apify.com/blackfalcondata/autoscout24-scraper?fpr=1h3gvi) — Scrape autoscout24.com - Europe's largest used car marketplace with 770K+ listings. Structured…
- [Bayt.com Scraper - Jobs from the Middle East](https://apify.com/blackfalcondata/bayt-scraper?fpr=1h3gvi) — Scrape bayt.com - the leading Middle East job board. Salary data, experience requirements.
- [Bilbasen Scraper - Denmark’s Car Marketplace](https://apify.com/blackfalcondata/bilbasen-scraper?fpr=1h3gvi) — Scrape bilbasen.dk - Denmark’s largest car marketplace. Full vehicle specifications, seller…
- [Bumeran Scraper](https://apify.com/blackfalcondata/bumeran-scraper?fpr=1h3gvi) — Scrape bumeran.com.ar - the largest job board across 8 LATAM countries. Work modality, contract…
- [JobStreet Scraper](https://apify.com/blackfalcondata/jobstreet-scraper?fpr=1h3gvi) — sibling actor covering Malaysia, Singapore, Indonesia, and the Philippines on the same SEEK platform.
- [SEEK Scraper](https://apify.com/blackfalcondata/seek-scraper?fpr=1h3gvi) — sibling actor covering Australia and New Zealand on the SEEK platform.

### Getting started with Apify

New to Apify? [Create a free account with $5 credit](https://console.apify.com/sign-up?fpr=1h3gvi) — no credit card required.

1. Sign up — $5 platform credit included
2. Open this actor and configure your input
3. Click **Start** — export results as JSON, CSV, or Excel

Need more later? [See Apify pricing](https://apify.com/pricing?fpr=1h3gvi).

# Actor input Schema

## `query` (type: `string`):

Job search keywords. Use JSON array for multi-query.

## `country` (type: `string`):

Which JobsDB market to search.

## `location` (type: `string`):

City, state, or region. Use JSON array for multi-location.

## `startUrls` (type: `array`):

Direct search or job detail URLs.

## `maxResults` (type: `integer`):

Maximum total job listings to return (0 = unlimited).

## `maxPages` (type: `integer`):

Maximum SERP pages to scrape per search source.

## `sortMode` (type: `string`):

Sort results by relevance or date.

## `dateRange` (type: `string`):

Filter jobs posted within a time range (e.g. '1', '3', '7', '14', '31').

## `workType` (type: `string`):

Filter by work type code (e.g. '242' for Full Time on JobsDB).

## `workArrangement` (type: `string`):

Filter by work arrangement code (e.g. '2' for Remote).

## `classification` (type: `string`):

Filter by job classification/category code (e.g. '6281' for IT).

## `subClassification` (type: `string`):

Filter by job sub-classification code (e.g. '6287' for Developers/Programmers under IT).

## `salaryMin` (type: `integer`):

Filter jobs with salary at or above this amount (local currency).

## `salaryMax` (type: `integer`):

Filter jobs with salary at or below this amount (local currency).

## `salaryType` (type: `string`):

Filter by salary type (e.g. 'monthly', 'annual').

## `includeDetails` (type: `boolean`):

Fetch each job's detail page for full description, salary data, and company info.

## `descriptionMaxLength` (type: `integer`):

Truncate description to this many characters. 0 = no truncation.

## `compact` (type: `boolean`):

Output only core fields (for AI-agent/MCP workflows).

## `incrementalMode` (type: `boolean`):

Compare against previous run state. Requires stateKey.

## `stateKey` (type: `string`):

Stable identifier for the tracked search universe (e.g. "hk-software-hk").

## `emitUnchanged` (type: `boolean`):

When incremental, also emit records that haven't changed.

## `emitExpired` (type: `boolean`):

When incremental, also emit records no longer found.

## `skipReposts` (type: `boolean`):

When incremental, skip jobs that are reposts of previously seen (now expired) jobs.

## Actor input object example

```json
{
  "query": "software engineer",
  "country": "HK",
  "maxResults": 5,
  "maxPages": 5,
  "includeDetails": true,
  "descriptionMaxLength": 0,
  "compact": false,
  "incrementalMode": false,
  "emitUnchanged": false,
  "emitExpired": false,
  "skipReposts": false
}
```

# Actor output Schema

## `results` (type: `string`):

No description

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "query": "software engineer",
    "maxResults": 5
};

// Run the Actor and wait for it to finish
const run = await client.actor("blackfalcondata/jobsdb-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "query": "software engineer",
    "maxResults": 5,
}

# Run the Actor and wait for it to finish
run = client.actor("blackfalcondata/jobsdb-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "query": "software engineer",
  "maxResults": 5
}' |
apify call blackfalcondata/jobsdb-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=blackfalcondata/jobsdb-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "JobsDB Scraper — Hong Kong & Thailand Job Listings",
        "description": "JobsDB scraper for Hong Kong (hk.jobsdb.com) and Thailand (th.jobsdb.com) job listings. Extract salaries, employer profiles, and full job descriptions with multi-geo filters for keyword and classification.",
        "version": "0.1",
        "x-build-id": "7MvcPgaiZxNeQ8jLu"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/blackfalcondata~jobsdb-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-blackfalcondata-jobsdb-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/blackfalcondata~jobsdb-scraper/runs": {
            "post": {
                "operationId": "runs-sync-blackfalcondata-jobsdb-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/blackfalcondata~jobsdb-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-blackfalcondata-jobsdb-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "query": {
                        "title": "Search Query",
                        "type": "string",
                        "description": "Job search keywords. Use JSON array for multi-query."
                    },
                    "country": {
                        "title": "Country",
                        "enum": [
                            "HK",
                            "TH"
                        ],
                        "type": "string",
                        "description": "Which JobsDB market to search.",
                        "default": "HK"
                    },
                    "location": {
                        "title": "Location",
                        "type": "string",
                        "description": "City, state, or region. Use JSON array for multi-location."
                    },
                    "startUrls": {
                        "title": "Start URLs",
                        "type": "array",
                        "description": "Direct search or job detail URLs.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxResults": {
                        "title": "Max Results",
                        "minimum": 0,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Maximum total job listings to return (0 = unlimited).",
                        "default": 25
                    },
                    "maxPages": {
                        "title": "Max Pages",
                        "minimum": 1,
                        "maximum": 50,
                        "type": "integer",
                        "description": "Maximum SERP pages to scrape per search source.",
                        "default": 5
                    },
                    "sortMode": {
                        "title": "Sort Mode",
                        "enum": [
                            "relevance",
                            "date"
                        ],
                        "type": "string",
                        "description": "Sort results by relevance or date."
                    },
                    "dateRange": {
                        "title": "Date Range",
                        "type": "string",
                        "description": "Filter jobs posted within a time range (e.g. '1', '3', '7', '14', '31')."
                    },
                    "workType": {
                        "title": "Work Type",
                        "type": "string",
                        "description": "Filter by work type code (e.g. '242' for Full Time on JobsDB)."
                    },
                    "workArrangement": {
                        "title": "Work Arrangement",
                        "type": "string",
                        "description": "Filter by work arrangement code (e.g. '2' for Remote)."
                    },
                    "classification": {
                        "title": "Classification",
                        "type": "string",
                        "description": "Filter by job classification/category code (e.g. '6281' for IT)."
                    },
                    "subClassification": {
                        "title": "Sub-Classification",
                        "type": "string",
                        "description": "Filter by job sub-classification code (e.g. '6287' for Developers/Programmers under IT)."
                    },
                    "salaryMin": {
                        "title": "Minimum Salary",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Filter jobs with salary at or above this amount (local currency)."
                    },
                    "salaryMax": {
                        "title": "Maximum Salary",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Filter jobs with salary at or below this amount (local currency)."
                    },
                    "salaryType": {
                        "title": "Salary Type",
                        "type": "string",
                        "description": "Filter by salary type (e.g. 'monthly', 'annual')."
                    },
                    "includeDetails": {
                        "title": "Include Detail Pages",
                        "type": "boolean",
                        "description": "Fetch each job's detail page for full description, salary data, and company info.",
                        "default": true
                    },
                    "descriptionMaxLength": {
                        "title": "Description Max Length",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Truncate description to this many characters. 0 = no truncation.",
                        "default": 0
                    },
                    "compact": {
                        "title": "Compact Output",
                        "type": "boolean",
                        "description": "Output only core fields (for AI-agent/MCP workflows).",
                        "default": false
                    },
                    "incrementalMode": {
                        "title": "Incremental Mode",
                        "type": "boolean",
                        "description": "Compare against previous run state. Requires stateKey.",
                        "default": false
                    },
                    "stateKey": {
                        "title": "State Key",
                        "type": "string",
                        "description": "Stable identifier for the tracked search universe (e.g. \"hk-software-hk\")."
                    },
                    "emitUnchanged": {
                        "title": "Emit Unchanged Records",
                        "type": "boolean",
                        "description": "When incremental, also emit records that haven't changed.",
                        "default": false
                    },
                    "emitExpired": {
                        "title": "Emit Expired Records",
                        "type": "boolean",
                        "description": "When incremental, also emit records no longer found.",
                        "default": false
                    },
                    "skipReposts": {
                        "title": "Skip Reposts",
                        "type": "boolean",
                        "description": "When incremental, skip jobs that are reposts of previously seen (now expired) jobs.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
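The input schema above states a few cross-field constraints only in prose — for example, `incrementalMode` "Requires stateKey", and `emitUnchanged`, `emitExpired`, and `skipReposts` only apply "When incremental". A minimal pre-flight validator for those constraints might look like the sketch below; the field names come from the schema, but the `validate_input` helper itself is illustrative and not part of the Actor:

```python
def validate_input(run_input: dict) -> list[str]:
    """Return a list of problems with a JobsDB scraper input dict.

    Mirrors constraints stated in the input schema above; this helper is
    an illustration, not part of the Actor or the Apify client.
    """
    problems = []
    # "Incremental Mode ... Requires stateKey."
    if run_input.get("incrementalMode") and not run_input.get("stateKey"):
        problems.append("incrementalMode requires a stable stateKey")
    # Integer fields with "minimum": 0 in the schema.
    for field in ("maxSalary", "descriptionMaxLength"):
        value = run_input.get(field)
        if value is not None and (not isinstance(value, int) or value < 0):
            problems.append(f"{field} must be a non-negative integer")
    # Flags documented as "When incremental, ..." have no effect otherwise.
    for flag in ("emitUnchanged", "emitExpired", "skipReposts"):
        if run_input.get(flag) and not run_input.get("incrementalMode"):
            problems.append(f"{flag} only takes effect when incrementalMode is true")
    return problems

# Example: an incremental run keyed to a stable search universe.
run_input = {
    "includeDetails": True,
    "incrementalMode": True,
    "stateKey": "hk-software-hk",  # example value from the schema description
    "skipReposts": True,
}
assert validate_input(run_input) == []
```

Running such a check locally before `client.actor("blackfalcondata/jobsdb-scraper").call(run_input=run_input)` surfaces misconfigured incremental runs without spending a paid run on them.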
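The `runsResponseSchema` above describes the run object returned when you start the Actor via the API, including per-event costs under `usageUsd` and the total under `usageTotalUsd`. A small, hypothetical helper for condensing that object into the fields a cost report typically needs could look like this; the field names follow the schema, while `summarize_run` itself is an assumption for illustration:

```python
def summarize_run(run: dict) -> dict:
    """Condense an Apify run object (per runsResponseSchema) for reporting.

    Illustrative helper, not part of the Apify API or client libraries.
    """
    data = run.get("data", {})
    usage_usd = data.get("usageUsd", {})
    return {
        "id": data.get("id"),
        "status": data.get("status"),
        "usageTotalUsd": data.get("usageTotalUsd", 0.0),
        # Keep only the events that actually incurred a cost.
        "billedEvents": {k: v for k, v in usage_usd.items() if v},
    }

# Sample run object shaped like the schema's examples.
sample = {
    "data": {
        "id": "run-123",
        "status": "SUCCEEDED",
        "usageTotalUsd": 0.00005,
        "usageUsd": {"KEY_VALUE_STORE_WRITES": 0.00005, "DATASET_READS": 0},
    }
}
assert summarize_run(sample)["billedEvents"] == {"KEY_VALUE_STORE_WRITES": 0.00005}
```

Filtering out zero-cost events keeps the summary focused on what actually drove `usageTotalUsd` for a given run.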
