Glassdoor Jobs Scraper
Pricing
from $4.99 / 1,000 results
Glassdoor Jobs Scraper extracts Glassdoor job postings at scale: titles, companies, locations, salary ranges, employer ratings, descriptions & URLs. Export-ready data for HR, recruiters & analysts to build pipelines, track pay trends and monitor competitors.
Developer: API Empire
Last modified: 2 days ago
An Apify Actor that collects Glassdoor job listings from job search (SERP) URLs using the same BFF + employer-overview strategy as the reference implementation. Output rows follow the structured schema: job fields, company fields, and the full raw payload.
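At a high level, the run loop paginates each search URL, builds one structured row per job, and pushes rows as they are built. The sketch below illustrates that flow; `fetch_page`, `build_output`, and `push` are hypothetical stand-ins for the actor's internals, not its actual function names:

```python
import time

def run(search_urls, fetch_page, build_output, push, max_items=200, request_delay_ms=0):
    """Sketch of the main loop: paginate each search URL, build one
    structured row per job, and push rows to the dataset as they are built."""
    pushed = 0
    for url in search_urls:
        page = 1
        while pushed < max_items:
            jobs, has_next = fetch_page(url, page)   # hypothetical pagination helper
            for job in jobs:
                if pushed >= max_items:              # respect the maxItems cap
                    break
                push(build_output(job))              # live dataset push, one item per job
                pushed += 1
            if not has_next:
                break
            page += 1
            time.sleep(request_delay_ms / 1000)      # optional extra delay between pages
    return pushed
```

The same shape works for a single URL or a bulk list; `requestDelayMs` maps onto the per-page sleep.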
Features
- Bulk input: multiple Glassdoor job search URLs in one run.
- Filters: salary, age, company substring, industry, domain, employer size, job type, seniority, remote, radius, rating (client-side where applicable).
- Proxy ladder (default direct): direct → Apify DATACENTER → Apify RESIDENTIAL (up to 3 URL retries per step); after a successful residential response, sticky residential for the rest of the run.
- Live dataset push: each job is pushed as it is built.
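The proxy ladder above can be sketched as a small retry loop. This is an illustration under stated assumptions, not the actor's actual code: `fetch` stands in for the real HTTP call, and the step names mirror Apify's DATACENTER/RESIDENTIAL proxy groups:

```python
def fetch_with_ladder(url, fetch, retries_per_step=3):
    """Try direct first, then Apify DATACENTER, then Apify RESIDENTIAL,
    with up to `retries_per_step` attempts at each rung. Returns
    (response, winning_step) so the caller can stick with residential
    for the rest of the run once it succeeds."""
    last_error = None
    for step in ("direct", "DATACENTER", "RESIDENTIAL"):
        for _ in range(retries_per_step):
            try:
                return fetch(url, proxy_group=step), step
            except Exception as exc:   # blocked or network error: retry, then escalate
                last_error = exc
    raise last_error
```

Once the returned step is `"RESIDENTIAL"`, subsequent requests in the run can skip the cheaper rungs entirely.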
Input
The Console form is grouped into Search settings, Salary settings, Search filters, and Proxy settings (see actor.json).
| Field | Description |
|---|---|
| keyword | Job search text (used with country when urls is empty). |
| country | Regional Glassdoor site (us, gb, de, …). |
| urls | Optional: advanced bulk Glassdoor job search URLs; if present, keyword/country are ignored. |
| proxyConfiguration | Apify proxy settings (used when falling back from direct; ensure proxy is enabled on the Apify account for DC/residential). |
| maxItems | Max jobs per search run (after filters). Default 200. |
| location | Free text resolved via Glassdoor location AJAX. |
| includeNoSalaryJob | If false, drop listings without pay data. |
| companyName | Employer name substring filter. |
| minSalary / maxSalary | Salary filters (site currency). |
| fromAge | ANY or 1 / 3 / 7 / 14 / 30 (days). |
| jobType | all, fulltime, parttime, contract, … or seniority-like internship, entrylevel. |
| radius | km string: 0, 6, 12, 18, … |
| industryType / domainType / employerSizes | Symbolic or numeric IDs (see code maps). |
| remoteWorkType | true = remote-only filter; false/omit = no remote-only filter. |
| seniorityType | e.g. all, internship. |
| minRating | Minimum employer rating 0–5. |
| requestDelayMs | Extra delay between pagination requests (milliseconds). |
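An illustrative run input combining several of these fields. Field names follow the table above; the values (and the exact value types, e.g. fromAge as a string) are examples only, not canonical defaults:

```json
{
  "keyword": "data engineer",
  "country": "us",
  "location": "New York",
  "maxItems": 100,
  "minSalary": 90000,
  "includeNoSalaryJob": false,
  "fromAge": "7",
  "jobType": "fulltime",
  "remoteWorkType": true,
  "minRating": 4.0,
  "requestDelayMs": 500,
  "proxyConfiguration": { "useApifyProxy": true }
}
```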
Output
One dataset item per job: job_title, job_id, job_url, job_location, job_salary, company_*, job_benefits_tags, all (full merged jobview), etc., as produced by build_output().
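The client-side filters (company substring, minimum rating, salary presence) could look roughly like the predicate below. This is a sketch, not the actor's `build_output()` pipeline, and the exact output field names (`company_name`, `company_rating`) are assumed from the company_* prefix above:

```python
def passes_filters(item, company_name=None, min_rating=0.0, include_no_salary=True):
    """Apply the client-side filters described under Features to one built row."""
    # companyName: case-insensitive substring match on the employer name
    if company_name and company_name.lower() not in (item.get("company_name") or "").lower():
        return False
    # minRating: drop employers rated below the threshold (missing rating counts as 0)
    if (item.get("company_rating") or 0.0) < min_rating:
        return False
    # includeNoSalaryJob=false: drop listings without pay data
    if not include_no_salary and not item.get("job_salary"):
        return False
    return True
```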
Local run
cd Glassdoor-Jobs-Scraper
pip install -r requirements.txt
Create storage/key_value_stores/default/INPUT.json with your run input (fields match .actor/actor.json). Set APIFY_LOCAL_STORAGE_DIR to the absolute path of the storage directory (the folder that contains key_value_stores), not the project root:
$env:APIFY_LOCAL_STORAGE_DIR = "D:\path\to\Glassdoor-Jobs-Scraper\storage"
python -m src
On the Apify platform, input is injected automatically, and APIFY_TOKEN (or the proxy password) is available in the environment, so the datacenter/residential fallback works.
Provide input via the Apify Console, CLI, or local KVS INPUT as above.
Legal
Use only in compliance with Glassdoor's terms and applicable laws. You are responsible for lawful use of scraped data.