Glassdoor Jobs Scraper

Pricing

from $4.99 / 1,000 results

💼 Glassdoor Jobs Scraper extracts Glassdoor job postings at scale: titles, companies, locations, salary ranges, employer ratings, descriptions & URLs. 📊 Export-ready data for HR, recruiters & analysts to build pipelines, track pay trends and monitor competitors. 🚀

Rating: 0.0 (0)

Developer: API Empire (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 2 days ago

Apify Actor that collects Glassdoor job listings from job search (SERP) URLs using the same BFF + employer-overview strategy as the reference implementation. Output rows match the structured schema: job fields, company fields, and the full raw payload.

Features

  • Bulk input: multiple Glassdoor job search URLs in one run.
  • Filters: salary, age, company substring, industry, domain, employer size, job type, seniority, remote, radius, rating (client-side where applicable).
  • Proxy ladder (default direct): direct → Apify DATACENTER → Apify RESIDENTIAL (up to 3 URL retries per step); after a successful residential response, sticky residential is used for the rest of the run.
  • Live dataset push: each job is pushed as it is built.
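
The proxy-ladder behavior can be sketched as follows. This is an illustrative reimplementation, not the actor's actual code: the `fetch_with_ladder` function and the fetch callables are hypothetical stand-ins for direct, datacenter, and residential requests.

```python
from typing import Callable, Optional

MAX_RETRIES_PER_STEP = 3  # "up to 3 URL retries per step"

def fetch_with_ladder(
    url: str,
    fetchers: dict[str, Callable[[str], Optional[str]]],
    state: dict,
) -> Optional[str]:
    """Try direct -> datacenter -> residential, up to 3 attempts per step.

    `fetchers` maps a step name ("direct", "datacenter", "residential") to a
    callable returning the response body, or None on failure. `state` persists
    across calls: once residential succeeds, all later fetches skip straight
    to residential (sticky residential).
    """
    if state.get("sticky_residential"):
        steps = ["residential"]
    else:
        steps = ["direct", "datacenter", "residential"]
    for step in steps:
        for _ in range(MAX_RETRIES_PER_STEP):
            body = fetchers[step](url)
            if body is not None:
                if step == "residential":
                    # Remember the success so the rest of the run stays residential.
                    state["sticky_residential"] = True
                return body
    return None  # every step exhausted its retries
```

The sticky flag is the key design point: once a residential exit is known to work, retrying cheaper-but-blocked routes on every URL would only add latency and failed requests.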

Input

The Console form is grouped into Search settings, Salary settings, Search filters, and Proxy settings (see actor.json).

  • keyword: Job search text (used with country when urls is empty).
  • country: Regional Glassdoor site (us, gb, de, …).
  • urls: Optional advanced bulk Glassdoor job search URLs; if present, keyword/country are ignored.
  • proxyConfiguration: Apify proxy settings (used when falling back from direct; ensure proxy is enabled on the Apify account for datacenter/residential).
  • maxItems: Max jobs per search run (after filters). Default 200.
  • location: Free text resolved via Glassdoor location AJAX.
  • includeNoSalaryJob: If false, drop listings without pay data.
  • companyName: Employer name substring filter.
  • minSalary / maxSalary: Salary filters (site currency).
  • fromAge: ANY or 1 / 3 / 7 / 14 / 30 (days).
  • jobType: all, fulltime, parttime, contract, … or seniority-like internship, entrylevel.
  • radius: km string: 0, 6, 12, 18, …
  • industryType / domainType / employerSizes: Symbolic or numeric IDs (see code maps).
  • remoteWorkType: true = remote-only filter; false/omit = no remote-only filter.
  • seniorityType: e.g. all, internship.
  • minRating: Minimum employer rating 0–5.
  • requestDelayMs: Extra delay between pagination requests.
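
A minimal run input combining these fields might look like the following. Values are illustrative only; check exact field types (e.g. whether fromAge is a string or a number) and the proxyConfiguration shape against .actor/actor.json, since the useApifyProxy key shown here is the common Apify convention, not confirmed from this actor's schema.

```json
{
  "keyword": "data engineer",
  "country": "us",
  "location": "New York",
  "maxItems": 100,
  "includeNoSalaryJob": false,
  "minSalary": 90000,
  "jobType": "fulltime",
  "remoteWorkType": true,
  "minRating": 3.5,
  "requestDelayMs": 500,
  "proxyConfiguration": { "useApifyProxy": true }
}
```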

Output

One dataset item per job: job_title, job_id, job_url, job_location, job_salary, company_*, job_benefits_tags, all (full merged jobview), etc., as produced by build_output().
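
Because each job is a flat record, exported items are easy to post-process for the pay-trend use case mentioned above. A sketch, assuming field names like job_salary and company_name from the schema (the sample records are invented, and real job_salary values may be ranges or strings rather than plain numbers):

```python
from collections import defaultdict

# Invented sample records standing in for an exported dataset.
items = [
    {"job_title": "Data Engineer", "company_name": "Acme", "job_salary": 120000},
    {"job_title": "Analyst", "company_name": "Acme", "job_salary": 90000},
    {"job_title": "Recruiter", "company_name": "Globex", "job_salary": None},
]

def average_salary_by_company(items):
    """Group salaries by company, skipping listings without pay data."""
    by_company = defaultdict(list)
    for item in items:
        salary = item.get("job_salary")
        if salary is not None:
            by_company[item["company_name"]].append(salary)
    return {company: sum(s) / len(s) for company, s in by_company.items()}

print(average_salary_by_company(items))  # companies with no pay data are omitted
```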

Local run

cd Glassdoor-Jobs-Scraper
pip install -r requirements.txt

Create storage/key_value_stores/default/INPUT.json with your run input (fields match .actor/actor.json). Set APIFY_LOCAL_STORAGE_DIR to the absolute path of the storage directory (the folder that contains key_value_stores), not the project root:

$env:APIFY_LOCAL_STORAGE_DIR = "D:\path\to\Glassdoor-Jobs-Scraper\storage"
python -m src

On the Apify platform, input is injected automatically, and APIFY_TOKEN (or the Apify proxy password) is available, so the datacenter/residential fallback works.

Provide input via the Apify Console, CLI, or local KVS INPUT as above.

Use only in compliance with Glassdoor's terms and applicable laws. You are responsible for lawful use of scraped data.