Google Jobs Scraper
Pricing: Pay per event
Search Google Jobs and extract job listings — titles, companies, locations, full descriptions, salaries, apply URLs, qualifications, and more. Export to JSON, CSV, Excel.
Developer: Stas Persiianenko
Last modified: 8 days ago
Search Google Jobs by keyword and extract structured job listings — titles, companies, locations, descriptions, salaries, employment types, apply URLs, and qualification highlights. Supports location filtering, country, and language settings.
What does Google Jobs Scraper do?
Google Jobs Scraper searches Google's job aggregation engine and extracts detailed, structured data from every listing. Google Jobs pulls listings from thousands of job boards (Indeed, LinkedIn, Glassdoor, ZipRecruiter, company career pages) into one unified interface — this scraper turns those results into clean, exportable data.
Enter your job search keywords, and the scraper opens Google Jobs in a real browser, scrolls through results, clicks into each listing to capture full details, and returns structured data including descriptions, salary ranges, qualifications, responsibilities, benefits, and direct apply URLs.
The scraper uses Playwright with residential proxies to navigate Google Jobs exactly like a real user would, ensuring you get the same comprehensive results Google shows to job seekers.
Who is Google Jobs Scraper for?
- 🏢 Recruiters and HR teams tracking job market trends and competitor postings across industries
- 📊 Compensation analysts collecting salary data across roles, locations, and companies
- 🔍 Job aggregator platforms building comprehensive job databases from Google's curated listings
- 🧪 Labor market researchers studying employment trends, skill demand, and hiring patterns
- 🤖 Job board startups sourcing listings from Google's aggregated feed of thousands of boards
- 💼 Career coaches analyzing what employers look for in specific roles and industries
- 📈 Data analysts building datasets for workforce analytics, salary benchmarking, and market intelligence
- 🏛️ Policy researchers studying employment patterns, remote work trends, and regional job availability
Why use Google Jobs Scraper?
- 🌐 Aggregated from thousands of sources — Google Jobs combines listings from Indeed, LinkedIn, Glassdoor, ZipRecruiter, and company career pages in one place
- 📋 Full job details — title, company, location, description, salary, employment type, qualifications, responsibilities, and benefits
- 🔗 Direct apply URLs — get the original job posting link to apply directly on the source site
- 🌍 Location + language filtering — target specific countries, cities, and languages for localized results
- 📦 Multiple queries per run — search for "software engineer", "data analyst", and "product manager" in a single run
- 💰 Pay per result — only pay for jobs actually extracted, no flat subscription fees
- ⚡ Batch processing — extract up to 200 jobs per query across multiple search terms
How much does it cost to scrape Google Jobs?
Google Jobs Scraper uses pay-per-event pricing. You only pay for what you use:
| Event | Price |
|---|---|
| Run started (one-time) | $0.035 |
| Per job scraped (Free tier) | $0.006 |
| Per job scraped (Bronze) | $0.0054 |
| Per job scraped (Silver) | $0.0048 |
| Per job scraped (Gold) | $0.0039 |
| Per job scraped (Platinum) | $0.003 |
| Per job scraped (Diamond) | $0.0024 |
Example costs (Free tier):
- 10 jobs for 1 keyword: $0.035 + (10 x $0.006) = $0.095
- 50 jobs each for 2 keywords (100 jobs total): $0.035 + (100 x $0.006) = $0.635
- 200 jobs for 1 keyword: $0.035 + (200 x $0.006) = $1.235
Apify Free plan users get $5/month in free credits — enough for ~833 job listings.
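The totals above follow a simple formula: the one-time run-start fee plus jobs multiplied by the per-job rate for your tier. A quick sketch for estimating costs before a run, using the rates from the pricing table (the function name is illustrative):

```python
def estimate_cost(jobs_per_query: int, num_queries: int,
                  per_job: float = 0.006, run_start: float = 0.035) -> float:
    """Estimate a run's cost under pay-per-event pricing.

    per_job defaults to the Free-tier rate; pass your tier's rate
    instead (e.g. 0.0024 for Diamond).
    """
    return round(run_start + jobs_per_query * num_queries * per_job, 3)

print(estimate_cost(10, 1))   # 0.095
print(estimate_cost(50, 2))   # 0.635
print(estimate_cost(200, 1))  # 1.235
```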
Data you can extract from Google Jobs
| Field | Type | Description |
|---|---|---|
| title | string | Job title (e.g. "Senior Software Engineer") |
| company | string | Employer name |
| location | string | Job location (city, state, or "Remote") |
| description | string | Full job description text |
| salary | string | Salary range if available (e.g. "$120K–$160K a year") |
| employmentType | string | Full-time, Part-time, Contract, Internship |
| datePosted | string | When the job was posted (e.g. "3 days ago") |
| sourceUrl | string | URL of the original job posting |
| sourceDomain | string | Domain of the source site (e.g. "linkedin.com") |
| applyUrl | string | Direct link to apply for the job |
| highlights.qualifications | string[] | Required qualifications and skills |
| highlights.responsibilities | string[] | Key job responsibilities |
| highlights.benefits | string[] | Listed benefits and perks |
| query | string | The search keyword that returned this result |
| scrapedAt | string | ISO 8601 timestamp when the data was collected |
How to scrape Google Jobs step by step
- Go to Google Jobs Scraper on Apify Store
- Click Try for free
- Enter your job search keywords (e.g., "software engineer", "data analyst")
- Optionally add a location filter (e.g., "New York", "Remote", "London")
- Set the maximum number of jobs per query (default: 20)
- Adjust country and language if needed (default: US, English)
- Click Start and wait for results
- Download your data as JSON, CSV, Excel, or connect it to your workflow
Input configuration
| Field | Type | Description | Default |
|---|---|---|---|
| queries | string[] | Job search keywords (e.g., "software engineer", "nurse") | required |
| location | string | Location filter appended to each query (e.g., "New York", "Remote") | none |
| maxResults | integer | Maximum jobs to extract per query (1–200) | 20 |
| country | string | Country code for localized results (e.g., "us", "uk", "de") | "us" |
| language | string | Language code for results (e.g., "en", "de", "fr") | "en" |
| maxRequestRetries | integer | Number of retry attempts for failed requests (1–10) | 3 |
Example input
```json
{
  "queries": ["software engineer", "data analyst"],
  "location": "New York",
  "maxResults": 20,
  "country": "us",
  "language": "en"
}
```
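Before starting a run, the input can be sanity-checked client-side against the constraints in the input-configuration table. A minimal sketch; the field names and ranges mirror that table, and the error messages are illustrative:

```python
def validate_input(run_input: dict) -> list[str]:
    """Return a list of problems with a run input, mirroring the
    constraints from the input-configuration table."""
    errors = []
    queries = run_input.get("queries")
    if not queries or not isinstance(queries, list):
        errors.append("queries is required and must be a non-empty list")
    max_results = run_input.get("maxResults", 20)
    if not 1 <= max_results <= 200:
        errors.append("maxResults must be between 1 and 200")
    retries = run_input.get("maxRequestRetries", 3)
    if not 1 <= retries <= 10:
        errors.append("maxRequestRetries must be between 1 and 10")
    return errors

print(validate_input({"queries": ["software engineer"], "maxResults": 500}))
# ['maxResults must be between 1 and 200']
```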
Output example
```json
{
  "title": "Senior Software Engineer",
  "company": "Google",
  "location": "New York, NY",
  "description": "We're looking for a Senior Software Engineer to join our Cloud Platform team. You will design and build large-scale distributed systems, mentor junior engineers, and drive technical strategy...",
  "salary": "$160,000–$210,000 a year",
  "employmentType": "Full-time",
  "datePosted": "3 days ago",
  "sourceUrl": "https://careers.google.com/jobs/results/1234567890",
  "sourceDomain": "careers.google.com",
  "applyUrl": "https://careers.google.com/jobs/results/1234567890/apply",
  "highlights": {
    "qualifications": [
      "Bachelor's degree in Computer Science or equivalent",
      "5+ years of experience in software development",
      "Proficiency in Python, Java, or Go",
      "Experience with distributed systems and cloud infrastructure"
    ],
    "responsibilities": [
      "Design and implement scalable backend services",
      "Mentor junior team members and conduct code reviews",
      "Collaborate with product teams to define technical requirements",
      "Drive architectural decisions for new features"
    ],
    "benefits": [
      "Competitive salary and equity",
      "Health, dental, and vision insurance",
      "Flexible remote work policy",
      "Annual education stipend"
    ]
  },
  "query": "software engineer",
  "scrapedAt": "2026-03-27T14:30:00.000Z"
}
```
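Because highlights is a nested object of string arrays, flattening it first makes CSV export straightforward. A small sketch that joins each array into a single cell, assuming that layout suits your spreadsheet tooling (the sample job is abridged from the output example):

```python
import csv
import io

def flatten_job(job: dict) -> dict:
    """Flatten the nested highlights arrays into single CSV-friendly cells."""
    flat = {k: v for k, v in job.items() if k != "highlights"}
    for key, values in (job.get("highlights") or {}).items():
        flat[f"highlights.{key}"] = "; ".join(values)
    return flat

jobs = [{"title": "Senior Software Engineer", "company": "Google",
         "highlights": {"qualifications": ["5+ years of experience",
                                           "Proficiency in Python"]}}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=flatten_job(jobs[0]).keys())
writer.writeheader()
writer.writerows(flatten_job(j) for j in jobs)
print(buf.getvalue())
```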
Tips for best results
- 🎯 Use specific job titles — "frontend react developer" returns more relevant results than just "developer"
- 📍 Add location for local jobs — use the location field for city-specific searches like "Austin, TX" or "London"
- 🌐 Try "Remote" as location — filter for remote-only positions by setting location to "Remote"
- 📏 Start small — test with 10 results first, then scale up once you verify the output matches your needs
- 🔄 Batch related queries — add multiple job titles in one run to build a comprehensive dataset efficiently
- 💡 Use country codes — set country to "uk" for British job listings or "de" for German ones
- 🗣️ Match language to country — set language to "de" when searching German jobs for localized titles and descriptions
- 📊 Check salary fields — not all listings include salary data; Google shows it when the source provides it
Integrations
Connect Google Jobs data to your existing tools and workflows:
- 📊 Google Sheets — automatically export job listings to a spreadsheet for tracking and analysis
- 🔗 Zapier / Make — trigger workflows when new jobs matching your criteria appear
- 💾 Webhooks — push results to your own API endpoint in real time
- 📁 S3 / Google Cloud Storage — store large job datasets for batch processing
- 📈 Power BI / Tableau — import structured job data for salary benchmarking dashboards
- 🤖 Slack notifications — get alerts when new jobs matching your keywords are posted
- 📧 Email alerts — receive daily digests of new job listings via Apify integrations
Using Google Jobs Scraper with the API
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/google-jobs-scraper').call({
    queries: ['software engineer', 'data analyst'],
    location: 'New York',
    maxResults: 50,
    country: 'us',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} job listings`);
items.forEach((job) => {
    console.log(`${job.title} at ${job.company} — ${job.salary || 'Salary not listed'}`);
});
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/google-jobs-scraper').call(run_input={
    'queries': ['software engineer', 'data analyst'],
    'location': 'New York',
    'maxResults': 50,
    'country': 'us',
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Found {len(items)} job listings')
for job in items:
    print(f"{job['title']} at {job['company']} — {job.get('salary', 'Salary not listed')}")
```
cURL
```shell
curl "https://api.apify.com/v2/acts/automation-lab~google-jobs-scraper/runs" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{"queries": ["software engineer"], "location": "New York", "maxResults": 20, "country": "us"}'
```
Using with MCP (Model Context Protocol)
Claude Code
Add Google Jobs Scraper to your Claude Code setup:
```shell
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/google-jobs-scraper"
```
Then ask Claude: "Search Google Jobs for 'product manager' in San Francisco and show me the top 20 listings with salaries"
Claude Desktop
Add this to your Claude Desktop claude_desktop_config.json:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com?tools=automation-lab/google-jobs-scraper"
    }
  }
}
```
Then ask:
- "Find remote data engineering jobs and list them with salary ranges"
- "Which companies are hiring for 'machine learning engineer' in New York?"
- "Search for entry-level marketing jobs in London and show me the qualifications required"
Is it legal to scrape Google Jobs?
Google Jobs Scraper accesses only publicly available content — the same job listings any visitor sees when searching on Google. Web scraping of public data is generally considered legal, as established by the U.S. Ninth Circuit's ruling in hiQ Labs v. LinkedIn (2022).
This scraper does not:
- Access private or restricted content
- Bypass authentication or paywalls
- Collect personal data beyond what is publicly displayed
- Circumvent any technical protection measures
The data extracted consists of job postings that employers have intentionally made public. Always review and comply with applicable laws in your jurisdiction before scraping.
FAQ
How many jobs can I extract per query? Up to 200 jobs per query. Google Jobs loads results progressively as you scroll, so larger requests take proportionally longer. For most use cases, 20–50 jobs per query provides a good balance of coverage and speed.
Why am I getting fewer results than the max I set? Google may have fewer matching jobs for your query and location combination. The scraper extracts everything Google shows — if Google only has 15 matching jobs, that's all you'll get.
Why is the salary field empty for some jobs? Google Jobs only displays salary information when the source site provides it. Many employers don't include salary ranges in their listings. This is a limitation of the source data, not the scraper.
Does it support non-English job searches? Yes. Set the country and language fields to match your target market. For example, use country: "de" and language: "de" for German job listings.
The scraper returned 0 results — what happened? This usually means Google showed a CAPTCHA or verification page. The scraper automatically retries with a fresh session. If it persists, try running again — Google's bot detection is session-dependent and residential proxies typically resolve this.
Can I filter by employment type (full-time, part-time, remote)? Include these terms in your search query, e.g., "software engineer remote" or "part-time data entry". Google Jobs interprets these naturally, just like searching on Google directly.
Can I schedule recurring job searches? Yes. Use Apify's built-in scheduling to run searches hourly, daily, or weekly. This is ideal for monitoring job markets, tracking new postings, or building time-series datasets of job availability.
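For recurring scheduled runs, deduplicating against earlier results avoids reprocessing the same postings. A minimal sketch keyed on the sourceUrl field from the output schema, assuming you persist the seen set between runs (the example URLs are made up):

```python
def new_jobs(current: list[dict], seen_urls: set[str]) -> list[dict]:
    """Return only listings not seen in earlier scheduled runs,
    keyed on sourceUrl as a stable identifier for a posting."""
    fresh = [job for job in current if job.get("sourceUrl") not in seen_urls]
    seen_urls.update(job["sourceUrl"] for job in fresh if job.get("sourceUrl"))
    return fresh

seen: set[str] = set()
run1 = [{"title": "Data Analyst", "sourceUrl": "https://example.com/job/1"}]
run2 = [{"title": "Data Analyst", "sourceUrl": "https://example.com/job/1"},
        {"title": "ML Engineer", "sourceUrl": "https://example.com/job/2"}]

print(len(new_jobs(run1, seen)))  # 1 (first sighting)
print(len(new_jobs(run2, seen)))  # 1 (only the new posting)
```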
What job boards does Google Jobs pull from? Google aggregates listings from thousands of sources including Indeed, LinkedIn, Glassdoor, ZipRecruiter, Monster, company career pages, staffing agencies, and government job sites. You get a unified view without scraping each board individually.
Related scrapers
- Indeed Scraper — scrape job listings directly from Indeed
- LinkedIn Jobs Scraper — extract job postings from LinkedIn Jobs
- Glassdoor Jobs Scraper — scrape jobs and company data from Glassdoor
- Greenhouse Jobs Scraper — extract job listings from Greenhouse-powered career pages
- Naukri Scraper — scrape job listings from Naukri.com (India)
- Google Search Scraper — scrape general Google search results