Jobsdb Scraper

Pricing: Pay per usage

A lightweight actor to scrape job listings from Jobsdb, extracting details like title, company, and location. It's fast and easy to use. For the most reliable and consistent results, using residential proxies is strongly recommended to prevent getting blocked and ensure data accuracy.

Developer: Shahid Irfan (Maintained by Community)


Extract structured job listings from JobsDB across Thailand and Hong Kong. Collect rich hiring data including company metadata, classifications, locations, work arrangements, and posting details at scale. Built for recruitment research, market monitoring, and talent intelligence workflows.

Features

  • Keyword Search — Discover jobs by role-specific terms such as software engineer, sales manager, or accountant.
  • Region Targeting — Collect data from Thailand (TH-Main) and Hong Kong (HK-Main) markets.
  • Structured Job Data — Get normalized records ready for analytics, BI tools, and automations.
  • Pagination Support — Crawl multiple pages with controlled result limits.
  • Rich Company Context — Capture employer metadata, classifications, and listing attributes.
  • Export Ready — Use dataset output in JSON, CSV, Excel, XML, and integrations.

Use Cases

Recruitment Intelligence

Track job demand by role, location, and industry. Compare hiring trends across markets and identify where demand is growing fastest.

Salary and Role Benchmarking

Analyze salary labels, work types, and role patterns to benchmark compensation and position strategy for hiring teams.

Competitive Monitoring

Follow competitor hiring behavior by company name, role type, and posting cadence to support workforce and growth planning.

Talent Market Research

Build searchable datasets for labor-market studies, dashboards, and forecasts across specific regions and categories.

Lead Generation Workflows

Extract employer and listing signals that can support B2B outreach, partnerships, and recruiting pipeline prioritization.
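As a minimal sketch of what such analysis might look like in practice — assuming dataset items shaped like this actor's output records (only the `classification` field is used here; the sample rows are illustrative):

```python
from collections import Counter

# Illustrative dataset items; only the fields used here are shown.
items = [
    {"classification": "Information & Communication Technology", "location": "Quarry Bay"},
    {"classification": "Information & Communication Technology", "location": "Central"},
    {"classification": "Sales", "location": "Kowloon"},
]

# Count open roles per classification to spot where demand concentrates.
demand = Counter(item["classification"] for item in items)
print(demand.most_common())
```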


Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| startUrl | String | No | | Direct JobsDB search URL. When provided, it can override the keyword/location setup. |
| keyword | String | No | developer | Search keyword for job discovery. |
| location | String | No | | Region or city filter for matching jobs. |
| country | String | No | th | Country site selector: th (Thailand) or hk (Hong Kong). |
| posted_date | String | No | anytime | Posting time window: anytime, 24h, 7d, 30d. |
| results_wanted | Integer | No | 20 | Preferred maximum number of jobs to collect. |
| maxJobs | Integer | No | 20 | Maximum number of jobs to collect (compatible alias). |
| maxPagesPerList | Integer | No | 50 | Safety cap for the number of pages to process. |
| collectDetails | Boolean | No | true | Compatibility toggle retained for existing workflows. |
| proxyConfiguration | Object | No | | Proxy settings for improved reliability on larger runs. |
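One way to assemble a run input is to merge your overrides onto the documented defaults. A minimal sketch — `build_run_input` is a hypothetical helper for illustration, not part of the actor:

```python
# Defaults taken from the input parameter table above.
DEFAULTS = {
    "keyword": "developer",
    "country": "th",
    "posted_date": "anytime",
    "results_wanted": 20,
    "maxPagesPerList": 50,
    "collectDetails": True,
}

def build_run_input(**overrides):
    """Merge user overrides onto the documented defaults (hypothetical helper)."""
    run_input = dict(DEFAULTS)
    run_input.update(overrides)
    return run_input

run_input = build_run_input(keyword="data analyst", country="hk", results_wanted=40)
```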

Output Data

Each dataset item includes:

| Field | Type | Description |
| --- | --- | --- |
| id | String | JobsDB listing identifier. |
| url | String | Job ad URL. |
| title | String | Job title. |
| company | String | Employer display name. |
| location | String | Primary location label. |
| workType | String | Primary work type (for example Full time). |
| classification | String | Main job classification description. |
| salary | String | Salary label when available. |
| postedAt_relative | String | Relative posting age (for example 3d ago). |
| postedAt_iso | String | ISO posting datetime when available. |
| description_text | String | Listing teaser/summary text. |
| description_html | String | Reserved field for compatibility. |
| source | String | Source identifier (jobsdb). |
| list_url | String | Search URL used for that item batch. |
| scrapedAt | String | ISO extraction timestamp. |
| advertiser | Object | Advertiser metadata (id, description). |
| branding | Object | Branding assets such as logo URL. |
| bulletPoints | Array | Highlight points shown in listing cards. |
| classifications | Array | Classification and subclassification objects. |
| companyProfileStructuredDataId | Number | Company profile structured identifier. |
| displayStyle | Object | Listing style metadata. |
| displayType | String | Listing type (for example standard, promoted). |
| employer | Object | Employer metadata (id, name, company references). |
| isFeatured | Boolean | Featured flag for listing prominence. |
| listingDate | String | Raw listing datetime. |
| listingDateDisplay | String | Human-readable listing age. |
| locations | Array | Full locations array with hierarchy and country code. |
| roleId | String | Role identifier slug. |
| solMetadata | Object | Ranking and placement metadata for the result. |
| tags | Array | Listing tags like urgency or expiring soon. |
| teaser | String | Short listing summary. |
| tracking | String | Tracking token from listing data. |
| workTypes | Array | Work type list from source. |
| workArrangements | Object | Work arrangement data (for example on-site, hybrid). |
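Since several of these fields are nested objects or arrays, a small flattening step helps before pushing records into spreadsheets or BI tools. A sketch, assuming item shapes like the sample output (the `flatten_item` helper and its field selection are illustrative):

```python
def flatten_item(item):
    """Flatten a nested dataset record into a flat dict for CSV/BI export.

    Field names follow the output table; the nested shapes handled here
    are assumptions based on the documented sample output.
    """
    return {
        "id": item.get("id"),
        "title": item.get("title"),
        "company": item.get("company"),
        "location": item.get("location"),
        "workTypes": "; ".join(item.get("workTypes", [])),
        "isFeatured": item.get("isFeatured", False),
    }

record = flatten_item({
    "id": "90538991",
    "title": "Software Engineer",
    "company": "OpenRice",
    "location": "Quarry Bay, Eastern District",
    "workTypes": ["Full time"],
    "isFeatured": False,
})
```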

Usage Examples

Hong Kong Keyword Search

```json
{
  "keyword": "software engineer",
  "country": "hk",
  "results_wanted": 20
}
```

Thailand Search by Keyword and Location

```json
{
  "keyword": "data analyst",
  "location": "Bangkok",
  "country": "th",
  "posted_date": "7d",
  "maxJobs": 40,
  "maxPagesPerList": 5
}
```

Start from a Direct Search URL

```json
{
  "startUrl": "https://hk.jobsdb.com/jobs?siteKey=HK-Main&keywords=project%20manager&page=1&pageSize=20",
  "results_wanted": 30
}
```

Proxy Configuration

```json
{
  "keyword": "marketing",
  "country": "hk",
  "results_wanted": 20,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

Sample Output

```json
{
  "id": "90538991",
  "url": "https://hk.jobsdb.com/job/90538991",
  "title": "Software Engineer",
  "company": "OpenRice",
  "location": "Quarry Bay, Eastern District",
  "workType": "Full time",
  "classification": "Information & Communication Technology",
  "salary": "",
  "postedAt_relative": "3d ago",
  "postedAt_iso": "2026-02-24T11:03:36.000Z",
  "description_text": "OpenRice is seeking an energetic and passionate talent to be part of the dynamic team.",
  "description_html": null,
  "source": "jobsdb",
  "list_url": "https://hk.jobsdb.com/api/jobsearch/v5/search?siteKey=HK-Main&keywords=software+engineer&page=1&pageSize=20",
  "scrapedAt": "2026-02-28T10:20:00.000Z",
  "companyProfileStructuredDataId": 408363,
  "displayType": "standard",
  "isFeatured": false,
  "listingDate": "2026-02-24T11:03:36Z",
  "roleId": "software-engineer",
  "workTypes": ["Full time"],
  "workArrangements": {
    "data": [{ "id": "1", "label": { "text": "On-site" } }]
  }
}
```
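The ISO timestamps in the sample can be turned into a posting age directly. A sketch using only the standard library (note that `datetime.fromisoformat` before Python 3.11 rejects a trailing "Z", so it is normalized to an explicit UTC offset first):

```python
from datetime import datetime

def parse_iso(value):
    # Normalize the trailing "Z" to "+00:00" so older Python versions parse it.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

posted = parse_iso("2026-02-24T11:03:36.000Z")   # postedAt_iso from the sample
scraped = parse_iso("2026-02-28T10:20:00.000Z")  # scrapedAt from the sample
age_days = (scraped - posted).days
print(age_days)  # 3, matching the "3d ago" label in postedAt_relative
```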

Tips for Best Results

Start with Small Limits

  • Begin with results_wanted: 20 to validate your query quickly.
  • Increase gradually for production runs.

Use Targeted Keywords

  • Specific keywords usually produce higher relevance than broad terms.
  • Combine role terms with locations when possible.

Control Pagination

  • Use maxPagesPerList as a safety cap for predictable run times.
  • Keep page caps lower for scheduled jobs.
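To size the page cap for a run, divide the target result count by the page size and round up. The examples in this README use pageSize=20; treating that as an assumption, a minimal sketch:

```python
import math

# pageSize=20 appears in this actor's example URLs; treat it as an assumption.
PAGE_SIZE = 20

def pages_needed(results_wanted, page_size=PAGE_SIZE):
    """Smallest page count that covers the requested number of results."""
    return math.ceil(results_wanted / page_size)

print(pages_needed(40))  # 2 pages cover 40 results at pageSize=20
```

Setting `maxPagesPerList` to this value (or slightly above it) keeps run times predictable.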

Improve Reliability

  • Use proxy configuration for frequent or high-volume extraction.
  • Retry with narrower keywords if the market is very broad.

Integrations

  • Google Sheets — Build live hiring trackers.
  • Airtable — Create searchable recruitment databases.
  • Make — Automate enrichment and downstream actions.
  • Zapier — Trigger alerts and workflows.
  • Webhooks — Send results to your own backend.

Export Formats

  • JSON — Application and API workflows.
  • CSV — Spreadsheet and BI imports.
  • Excel — Business reporting.
  • XML — Legacy/system integrations.
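If you post-process the JSON output yourself rather than using Apify's built-in exports, a CSV can be produced with the standard library. A sketch — the second row is illustrative sample data, and the field selection is an assumption:

```python
import csv
import io

# Items shaped like this actor's output records; the second row is invented
# purely for illustration.
items = [
    {"id": "90538991", "title": "Software Engineer", "company": "OpenRice"},
    {"id": "90539001", "title": "Data Analyst", "company": "Example Co"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "title", "company"])
writer.writeheader()
writer.writerows(items)
csv_text = buffer.getvalue()
```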

Frequently Asked Questions

How many jobs can I collect?

You can collect as many jobs as the available results and your input limits allow. Use results_wanted or maxJobs to control volume.

Does it support both Thailand and Hong Kong?

Yes. Set country to th or hk to target the corresponding market.

Can I start from an existing JobsDB URL?

Yes. Use startUrl to begin from a direct JobsDB search URL.

Why are some fields empty?

Some listings do not provide every field (for example salary), so values may be missing for specific ads.

Is pagination handled automatically?

Yes. The actor iterates pages until it reaches your limit or no more results are returned.


Support

For issues, improvements, or custom requirements, contact support through the Apify Console.

This actor is intended for legitimate data collection use cases. Users are responsible for compliance with applicable laws, platform terms, and internal data governance policies.