Google Jobs Scraper Pro
Scrape Google Jobs search results. Get job title, company, location, full description, salary, qualifications, benefits, and apply URL from any search query.
Developer: BotFlowTech
Google Jobs Scraper Pro — Real-Time Job Data
The most reliable Google Jobs scraper on Apify. Extract structured job listings from Google's jobs carousel at scale — including full descriptions, salary ranges, apply links, qualifications, and benefits.
Why this scraper?
Most Google Jobs scrapers on Apify have sub-2/5 ratings because they use brittle CSS selectors, ignore dynamic rendering, and break within days of a Google UI update. Google Jobs Scraper Pro is built differently:
- Multi-selector fallback chains — every data field has 4–6 selector fallbacks so minor Google DOM changes don't break extraction
- Playwright + stealth — full headless Chrome automation with fingerprint masking and realistic browser headers
- Proper detail panel handling — clicks each job card to open the full detail pane before extracting long-form data (description, qualifications, benefits)
- Smart pagination — automatically loads more jobs until your `maxJobsPerQuery` limit is reached
- PAY_PER_EVENT billing — you only pay for jobs actually extracted, never for failed or empty requests
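The fallback-chain idea can be sketched in isolation: try each extractor in order and keep the first non-empty result. The function below is purely illustrative (it is not the actor's actual code, and the getters it receives would wrap real Playwright selector lookups in practice):

```javascript
// Illustrative sketch of a multi-selector fallback chain: each getter wraps
// one selector lookup; the first one that yields non-empty text wins.
function firstNonEmpty(getters) {
  for (const get of getters) {
    let value;
    try {
      value = get(); // a getter may throw if its selector no longer exists
    } catch {
      continue; // fall through to the next selector
    }
    if (typeof value === "string" && value.trim() !== "") {
      return value.trim();
    }
  }
  return null; // every fallback failed — the field stays empty, never crashes
}
```

Because the chain returns `null` rather than throwing when all selectors miss, a minor Google DOM change degrades one field instead of failing the whole run.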
Use cases
| Use case | Description |
|---|---|
| Job aggregation | Build a job board that pulls real-time listings from Google Jobs across any market |
| Salary research | Collect salary ranges from thousands of postings to benchmark compensation |
| HR tech integration | Feed structured job data into ATS platforms, analytics dashboards, or ML pipelines |
| Recruitment automation | Monitor competitors' hiring activity or track open roles at target companies |
| Labour market analytics | Analyse demand for skills, job titles, or locations over time |
| Academic research | Gather large datasets of job postings for NLP or economics research |
Input
```json
{
  "queries": ["software engineer London", "data scientist remote"],
  "country": "gb",
  "language": "en",
  "maxJobsPerQuery": 100,
  "datePosted": "week",
  "jobType": "fulltime",
  "remoteOnly": false,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Input fields
| Field | Type | Default | Description |
|---|---|---|---|
| `queries` | string[] | required | Search queries, e.g. `["nurse London", "python developer remote"]` |
| `country` | string | `"us"` | ISO 3166-1 two-letter country code for localising results |
| `language` | string | `"en"` | ISO 639-1 two-letter language code |
| `maxJobsPerQuery` | integer | `50` | Maximum jobs to collect per query (1–500) |
| `datePosted` | string | `"any"` | Filter by posting date: `any`, `today`, `3days`, `week`, `month` |
| `jobType` | string | (any) | Filter by employment type: `fulltime`, `parttime`, `contractor`, `internship`, `temporary` |
| `remoteOnly` | boolean | `false` | Append "remote" to every query to target remote positions |
| `proxyConfiguration` | object | (none) | Apify Proxy or custom proxy config — residential proxies strongly recommended |
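As a sketch of how these defaults and constraints might be applied, here is a hypothetical input normaliser (not the actor's actual code; field names match the table above):

```javascript
// Hypothetical normaliser mirroring the defaults in the input table above.
function normaliseInput(input) {
  if (!Array.isArray(input.queries) || input.queries.length === 0) {
    throw new Error('"queries" is required and must be a non-empty array');
  }
  return {
    // remoteOnly appends "remote" to every query, per the table
    queries: input.remoteOnly
      ? input.queries.map((q) => `${q} remote`)
      : input.queries,
    country: input.country ?? "us",
    language: input.language ?? "en",
    // clamp to the documented 1–500 range
    maxJobsPerQuery: Math.min(Math.max(input.maxJobsPerQuery ?? 50, 1), 500),
    datePosted: input.datePosted ?? "any",
    jobType: input.jobType ?? null, // null means "any employment type"
  };
}
```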
Output
Each job listing is saved as a dataset item with the following schema:
```json
{
  "title": "Senior Software Engineer",
  "company": "Monzo",
  "location": "London, UK (Hybrid)",
  "description": "We are looking for a Senior Software Engineer to join our Platform Engineering team...",
  "salary": "£90,000 – £130,000 a year",
  "jobType": "Full-time",
  "datePosted": "3 days ago",
  "applyUrl": "https://boards.greenhouse.io/monzo/jobs/5678901",
  "source": "LinkedIn",
  "qualifications": [
    "5+ years of experience with Go or a similar compiled language",
    "Experience with distributed systems and microservices",
    "Strong understanding of system design principles"
  ],
  "benefits": ["Stock options", "Private health insurance", "Flexible working"],
  "jobHighlights": ["Hybrid — 2 days in office", "90k–130k annually", "Full-time"],
  "query": "software engineer London",
  "scrapedAt": "2026-04-01T10:00:00.000Z"
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| `title` | string | Job title |
| `company` | string | Hiring company name |
| `location` | string | Office location or remote designation |
| `description` | string | Full job description (may include HTML) |
| `salary` | string \| null | Salary range if provided, otherwise `null` |
| `jobType` | string \| null | Employment type (Full-time, Part-time, Contract, etc.) |
| `datePosted` | string \| null | Human-readable posting date (e.g. "3 days ago") |
| `applyUrl` | string \| null | Direct application URL (company ATS or job board) |
| `source` | string \| null | Where the listing was posted (LinkedIn, Indeed, company site, etc.) |
| `qualifications` | string[] | Required/preferred qualification bullet points |
| `benefits` | string[] | Benefits and perks |
| `jobHighlights` | string[] | Key job highlights shown at the top of the detail panel |
| `query` | string | The search query that produced this listing |
| `scrapedAt` | string | ISO 8601 timestamp of when the record was extracted |
Pricing
$1.00 per 1,000 jobs (PAY_PER_EVENT)
You are charged only for jobs successfully extracted and pushed to the dataset. Empty results, failed pages, and pagination requests are not charged.
Estimated costs for common workflows:
| Workflow | Jobs | Cost |
|---|---|---|
| Quick search (1 query, 50 jobs) | 50 | ~$0.05 |
| Standard run (5 queries, 100 jobs each) | 500 | ~$0.50 |
| Large-scale collection (20 queries, 200 each) | 4,000 | ~$4.00 |
| Monthly market analysis (100 queries, 100 each) | 10,000 | ~$10.00 |
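The estimates above follow directly from the flat per-event rate. A quick sanity-check helper (the default rate is the listed $1.00 per 1,000 jobs; proxy bandwidth is billed separately and not included):

```javascript
// Estimated actor cost in USD for a run, at $1.00 per 1,000 extracted jobs.
function estimateCostUsd(numQueries, jobsPerQuery, ratePerThousand = 1.0) {
  const jobs = numQueries * jobsPerQuery;
  return (jobs / 1000) * ratePerThousand;
}
```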
Proxy costs are additional when using Apify Proxy. Residential proxy groups cost more than datacenter ones but substantially reduce blocking risk, so they are recommended for reliable runs.
Technical details
How it works
- URL construction — builds a Google Jobs search URL: `https://www.google.com/search?q={query}&ibp=htl;jobs&hl={lang}&gl={country}`
- Page load — navigates with full headless Chromium, applies stealth patches and rotates User-Agents from a pool of 7 real Chrome versions
- Job card iteration — locates all job card `<li>` elements in the Google Jobs panel
- Detail extraction — clicks each card to open the right-side detail panel, then extracts all structured fields using multi-fallback selectors
- Pagination — clicks the "More jobs" button repeatedly until `maxJobsPerQuery` is reached or no more results exist
- PAY_PER_EVENT charging — charges a `job-result` event for each record pushed to the dataset
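The URL construction step above can be sketched as follows (a minimal illustration assuming `encodeURIComponent`-style query encoding; the actor's exact encoding may differ):

```javascript
// Builds the Google Jobs search URL described in the first step.
// The ibp=htl;jobs parameter switches Google Search into the jobs view.
function buildJobsUrl(query, language = "en", country = "us") {
  const q = encodeURIComponent(query);
  return `https://www.google.com/search?q=${q}&ibp=htl;jobs&hl=${language}&gl=${country}`;
}
```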
Anti-detection
- Headless Chrome with `--disable-blink-features=AutomationControlled`
- `navigator.webdriver` masked
- Realistic `User-Agent`, `Accept-Language`, `Sec-Ch-Ua` headers
- Viewport randomisation (1280–1480 × 900–1000 px)
- Crawlee fingerprint generator (desktop Chrome 110+, Windows/macOS)
- Random delays of 800–2200 ms between actions
- Cookie consent banner auto-dismissal
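The randomised delay can be sketched as (a minimal illustration, not the actor's exact implementation):

```javascript
// Returns a random integer delay within the 800–2200 ms window noted above.
function randomDelayMs(min = 800, max = 2200) {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Promise-based sleep so the delay can be awaited between page actions.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
```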
Error handling
- Automatic retry up to 3 times per URL
- CAPTCHA / "unusual traffic" detection with error logging
- Per-field fallback selectors survive minor Google DOM changes
- Gracefully skips cards where extraction returns empty data
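The retry behaviour can be sketched as a generic wrapper (shown synchronously for brevity; the actor's real navigation code is asynchronous, and a production crawler would also back off between attempts):

```javascript
// Runs fn up to `attempts` times, rethrowing the last error if all fail.
function withRetries(fn, attempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return fn(attempt);
    } catch (err) {
      lastError = err; // log and retry
    }
  }
  throw lastError;
}
```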
Requirements
- Node.js >= 18
- Apify SDK v3
- Playwright >= 1.42
- Recommended: Apify Residential proxies for large runs
Local development
```shell
git clone <repo>
cd google-jobs-scraper-pro
npm install
npx playwright install chromium
npm run dev
```
Create a `storage/key_value_stores/default/INPUT.json` file with your input before running locally.
Changelog
v1.0.0
- Initial release
- Full Google Jobs extraction: title, company, location, description, salary, job type, date posted, apply URL, source, qualifications, benefits, highlights
- PAY_PER_EVENT billing at $1.00 / 1,000 jobs
- Stealth mode with fingerprint rotation
- Pagination support via "More jobs" button
- Support for country, language, date posted, job type, and remote-only filters
Related actors
- LinkedIn Jobs Scraper — extract jobs from LinkedIn
- Indeed Scraper — scrape Indeed.com job listings
- Glassdoor Scraper — jobs + company reviews
Built with Apify SDK v3 and Crawlee.