Google Jobs Scraper Pro

Scrape Google Jobs search results. Get job title, company, location, full description, salary, qualifications, benefits, and apply URL from any search query.

Pricing: Pay per usage

Rating: 0.0 (0 reviews)

Developer: BotFlowTech (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified 2 days ago

Google Jobs Scraper Pro — Real-Time Job Data

The most reliable Google Jobs scraper on Apify. Extract structured job listings from Google's jobs carousel at scale — including full descriptions, salary ranges, apply links, qualifications, and benefits.


Why this scraper?

Most Google Jobs scrapers on Apify have sub-2/5 ratings because they use brittle CSS selectors, ignore dynamic rendering, and break within days of a Google UI update. Google Jobs Scraper Pro is built differently:

  • Multi-selector fallback chains — every data field has 4–6 selector fallbacks so minor Google DOM changes don't break extraction
  • Playwright + stealth — full headless Chrome automation with fingerprint masking and realistic browser headers
  • Proper detail panel handling — clicks each job card to open the full detail pane before extracting long-form data (description, qualifications, benefits)
  • Smart pagination — automatically loads more jobs until your maxJobsPerQuery limit is reached
  • PAY_PER_EVENT billing — you only pay for jobs actually extracted, never for failed or empty requests
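
To make the fallback-chain idea concrete, a field extractor can try selectors in order until one yields text. This is a simplified sketch, not the actor's actual code; `extractField`, the selector strings, and the fake DOM lookup are all hypothetical:

```javascript
// Try each selector in order and return the first non-empty match.
// `queryText` stands in for a DOM lookup (e.g. page.textContent(selector));
// here it is any function mapping a selector to a string or null.
function extractField(queryText, selectors) {
  for (const selector of selectors) {
    const value = queryText(selector);
    if (value && value.trim().length > 0) return value.trim();
  }
  return null; // every fallback failed, so the field is reported as null
}

// Example: the first two (made-up) selectors miss after a DOM change,
// the third still matches, so extraction survives.
const fakeDom = { 'div[role="heading"]': '  Senior Software Engineer ' };
const title = extractField(
  (sel) => fakeDom[sel] ?? null,
  ['h2.KLsYvd', '.sH3zFd h2', 'div[role="heading"]'],
);
```

Because only the first matching selector is used, appending new fallbacks to the end of a chain is backwards-compatible when Google ships a DOM change.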

Use cases

| Use case | Description |
| --- | --- |
| Job aggregation | Build a job board that pulls real-time listings from Google Jobs across any market |
| Salary research | Collect salary ranges from thousands of postings to benchmark compensation |
| HR tech integration | Feed structured job data into ATS platforms, analytics dashboards, or ML pipelines |
| Recruitment automation | Monitor competitors' hiring activity or track open roles at target companies |
| Labour market analytics | Analyse demand for skills, job titles, or locations over time |
| Academic research | Gather large datasets of job postings for NLP or economics research |

Input

```json
{
  "queries": ["software engineer London", "data scientist remote"],
  "country": "gb",
  "language": "en",
  "maxJobsPerQuery": 100,
  "datePosted": "week",
  "jobType": "fulltime",
  "remoteOnly": false,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

Input fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `queries` | string[] | required | Search queries, e.g. `["nurse London", "python developer remote"]` |
| `country` | string | `"us"` | ISO 3166-1 two-letter country code for localising results |
| `language` | string | `"en"` | ISO 639-1 two-letter language code |
| `maxJobsPerQuery` | integer | `50` | Max jobs to collect per query (1–500) |
| `datePosted` | string | `"any"` | Filter by posting date: `any`, `today`, `3days`, `week`, `month` |
| `jobType` | string | (any) | Filter by employment type: `fulltime`, `parttime`, `contractor`, `internship`, `temporary` |
| `remoteOnly` | boolean | `false` | Append "remote" to every query to target remote positions |
| `proxyConfiguration` | object | (none) | Apify Proxy or custom proxy config; residential proxies strongly recommended |
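
The defaults and the `remoteOnly` behaviour above can be sketched as a small input normaliser. Illustrative only; `normalizeInput` and `INPUT_DEFAULTS` are hypothetical names, and the values mirror the defaults column:

```javascript
// Documented defaults from the input fields table above.
const INPUT_DEFAULTS = {
  country: 'us',
  language: 'en',
  maxJobsPerQuery: 50,
  datePosted: 'any',
  remoteOnly: false,
};

function normalizeInput(input) {
  if (!Array.isArray(input.queries) || input.queries.length === 0) {
    throw new Error('"queries" is required and must be a non-empty array');
  }
  const merged = { ...INPUT_DEFAULTS, ...input };
  // Clamp maxJobsPerQuery into the documented 1-500 range.
  merged.maxJobsPerQuery = Math.min(500, Math.max(1, merged.maxJobsPerQuery));
  // remoteOnly appends "remote" to every query, per the table above.
  if (merged.remoteOnly) {
    merged.queries = merged.queries.map((q) => `${q} remote`);
  }
  return merged;
}
```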

Output

Each job listing is saved as a dataset item with the following schema:

```json
{
  "title": "Senior Software Engineer",
  "company": "Monzo",
  "location": "London, UK (Hybrid)",
  "description": "We are looking for a Senior Software Engineer to join our Platform Engineering team...",
  "salary": "£90,000 – £130,000 a year",
  "jobType": "Full-time",
  "datePosted": "3 days ago",
  "applyUrl": "https://boards.greenhouse.io/monzo/jobs/5678901",
  "source": "LinkedIn",
  "qualifications": [
    "5+ years of experience with Go or a similar compiled language",
    "Experience with distributed systems and microservices",
    "Strong understanding of system design principles"
  ],
  "benefits": [
    "Stock options",
    "Private health insurance",
    "Flexible working"
  ],
  "jobHighlights": [
    "Hybrid — 2 days in office",
    "90k–130k annually",
    "Full-time"
  ],
  "query": "software engineer London",
  "scrapedAt": "2026-04-01T10:00:00.000Z"
}
```

Output fields

| Field | Type | Description |
| --- | --- | --- |
| `title` | string | Job title |
| `company` | string | Hiring company name |
| `location` | string | Office location or remote designation |
| `description` | string | Full job description (may include HTML) |
| `salary` | string \| null | Salary range if provided, otherwise null |
| `jobType` | string \| null | Employment type (Full-time, Part-time, Contract, etc.) |
| `datePosted` | string \| null | Human-readable posting date (e.g. "3 days ago") |
| `applyUrl` | string \| null | Direct application URL (company ATS or job board) |
| `source` | string \| null | Where the listing was posted (LinkedIn, Indeed, company site, etc.) |
| `qualifications` | string[] | Required/preferred qualifications bullet points |
| `benefits` | string[] | Benefits and perks |
| `jobHighlights` | string[] | Key job highlights shown at the top of the detail panel |
| `query` | string | The search query that produced this listing |
| `scrapedAt` | string | ISO 8601 timestamp of when the record was extracted |

Pricing

$1.00 per 1,000 jobs (PAY_PER_EVENT)

You are charged only for jobs successfully extracted and pushed to the dataset. Empty results, failed pages, and pagination requests are not charged.

Estimated costs for common workflows:

| Workflow | Jobs | Cost |
| --- | --- | --- |
| Quick search (1 query, 50 jobs) | 50 | ~$0.05 |
| Standard run (5 queries, 100 jobs each) | 500 | ~$0.50 |
| Large-scale collection (20 queries, 200 each) | 4,000 | ~$4.00 |
| Monthly market analysis (100 queries, 100 each) | 10,000 | ~$10.00 |
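
At $1.00 per 1,000 jobs, a pre-run cost estimate is a one-line calculation. A sketch, assuming every query fills its maxJobsPerQuery limit (so the result is an upper bound); `estimateCostUsd` is a hypothetical helper, not part of the actor:

```javascript
// PAY_PER_EVENT rate from the pricing section: $1.00 per 1,000 jobs.
const USD_PER_1000_JOBS = 1.0;

// Upper-bound cost for a run of `numQueries` queries, each capped at
// `maxJobsPerQuery` results. Actual cost is lower if Google returns fewer
// jobs, since only extracted records are charged.
function estimateCostUsd(numQueries, maxJobsPerQuery) {
  const jobs = numQueries * maxJobsPerQuery;
  return (jobs / 1000) * USD_PER_1000_JOBS;
}
```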

Proxy costs are additional if using Apify Proxy. Residential proxies reduce blocking risk but cost more. For best reliability, use Residential proxy groups.


Technical details

How it works

  1. URL construction — builds a Google Jobs search URL: https://www.google.com/search?q={query}&ibp=htl;jobs&hl={lang}&gl={country}
  2. Page load — navigates with full headless Chromium, applies stealth patches and rotates User-Agents from a pool of 7 real Chrome versions
  3. Job card iteration — locates all job card <li> elements in the Google Jobs panel
  4. Detail extraction — clicks each card to open the right-side detail panel, then extracts all structured fields using multi-fallback selectors
  5. Pagination — clicks "More jobs" button repeatedly until maxJobsPerQuery is reached or no more results exist
  6. PAY_PER_EVENT charging — charges job-result event for each record pushed to the dataset
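
Step 1 above can be sketched as a small URL builder. Illustrative only; note that `URLSearchParams` would percent-encode the semicolon in `ibp=htl;jobs`, and since it is unclear whether Google accepts the encoded form, that parameter is appended verbatim here:

```javascript
// Build the Google Jobs search URL from step 1: hl sets the UI language,
// gl the country edition.
function buildJobsUrl(query, language = 'en', country = 'us') {
  const params = new URLSearchParams({
    q: query,
    hl: language,
    gl: country,
  });
  // ibp=htl;jobs opens the jobs panel; appended outside URLSearchParams
  // so the semicolon is not percent-encoded.
  return `https://www.google.com/search?${params.toString()}&ibp=htl;jobs`;
}
```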

Anti-detection

  • Headless Chrome with --disable-blink-features=AutomationControlled
  • navigator.webdriver masked
  • Realistic User-Agent, Accept-Language, Sec-Ch-Ua headers
  • Viewport randomisation (1280–1480 × 900–1000px)
  • Crawlee fingerprint generator (desktop Chrome 110+, Windows/macOS)
  • Random delays of 800–2200 ms between actions
  • Cookie consent banner auto-dismissal
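
The delay and viewport randomisation can share one small helper. A sketch only; `randomInt`, `randomViewport`, and `humanDelay` are assumed names, not the actor's API:

```javascript
// Uniform random integer in [min, max], inclusive.
function randomInt(min, max) {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Viewport in the 1280-1480 x 900-1000 px range listed above.
function randomViewport() {
  return { width: randomInt(1280, 1480), height: randomInt(900, 1000) };
}

// Pause 800-2200 ms between actions to look less like a bot.
async function humanDelay() {
  const ms = randomInt(800, 2200);
  await new Promise((resolve) => setTimeout(resolve, ms));
  return ms;
}
```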

Error handling

  • Automatic retry up to 3 times per URL
  • CAPTCHA / "unusual traffic" detection with error logging
  • Per-field fallback selectors survive minor Google DOM changes
  • Gracefully skips cards where extraction returns empty data
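
The retry behaviour can be sketched as a generic async wrapper. A hedged illustration: `withRetries` is a hypothetical name, and a production crawler would add backoff and logging where the comment indicates:

```javascript
// Run an async operation up to `maxRetries` times (3, per the list above),
// rethrowing the last error once attempts are exhausted.
async function withRetries(operation, maxRetries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await operation(attempt);
    } catch (err) {
      lastError = err;
      // A real crawler would back off here and log CAPTCHA /
      // "unusual traffic" signals before the next attempt.
    }
  }
  throw lastError;
}
```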

Requirements

  • Node.js >= 18
  • Apify SDK v3
  • Playwright >= 1.42
  • Recommended: Apify Residential proxies for large runs

Local development

```bash
git clone <repo>
cd google-jobs-scraper-pro
npm install
npx playwright install chromium
npm run dev
```

Create a storage/key_value_stores/default/INPUT.json file with your input before running locally.


Changelog

v1.0.0

  • Initial release
  • Full Google Jobs extraction: title, company, location, description, salary, job type, date posted, apply URL, source, qualifications, benefits, highlights
  • PAY_PER_EVENT billing at $1.00 / 1,000 jobs
  • Stealth mode with fingerprint rotation
  • Pagination support via "More jobs" button
  • Support for country, language, date posted, job type, and remote-only filters

Related actors

  • LinkedIn Jobs Scraper — extract jobs from LinkedIn
  • Indeed Scraper — scrape Indeed.com job listings
  • Glassdoor Scraper — jobs + company reviews

Built with Apify SDK v3 and Crawlee.