
Linkedin Jobs Scraper

Pricing

$19.99/month + usage


🔎 LinkedIn Jobs Scraper (linkedin-jobs-scraper) extracts job postings at scale — titles, companies, locations, descriptions, skills, seniority, employment type, salary, apply URLs & remote tags. ⚙️ Fast, reliable, anti-blocking. 🚀 Perfect for recruiters, HR, market research & data teams.


Developer

ScrapAPI

Maintained by Community

Actor stats: 0 bookmarked · 4 total users · 1 monthly active user · last modified 5 days ago

Linkedin Jobs Scraper

The Linkedin Jobs Scraper is a fast, reliable LinkedIn jobs scraping tool that extracts public job postings at scale — including titles, companies, locations, descriptions, skills, seniority, employment type, salary, apply URLs, and remote tags. It solves the manual, time‑consuming process of collecting LinkedIn job data by automating discovery and detail extraction with anti‑blocking techniques. Built for marketers, developers, data analysts, recruiters, and researchers, this LinkedIn job postings scraper lets you scrape LinkedIn job listings by company, keywords, and filters — enabling repeatable, large‑scale jobs data pipelines for insight and automation.

What data / output can you get?

Data type | Description | Example value
id | LinkedIn job posting ID | 4304041530
title | Job title | Software Engineer, Full Stack, Google Workspace
companyName | Company name | Google
companyUrl | Company LinkedIn page URL | https://www.linkedin.com/company/google
jobUrl | Direct URL to the job posting | https://www.linkedin.com/jobs/view/software-engineer-full-stack-google-workspace-at-google-4304041530
location | Job location | Seattle, WA
publishedAt | Publication date (YYYY-MM-DD) | 2025-09-23
postedTime | Human-readable posted time | 1 day ago
applicationsCount | Number of applicants (text) | 68 applicants
salary | Salary range if available | $141,000.00/yr - $202,000.00/yr
contractType | Employment type | Full-time
experienceLevel | Required experience level | Not Applicable
workType | Job function/category | Information Technology and Engineering
sector | Industry sector | Information Services and Technology, Information and Internet
applyUrl | Direct application URL | https://careers.google.com/jobs/results/...
applyType | Application type | EXTERNAL
description | Full job description (plain text) | Full job description text...
descriptionHtml | HTML version of description |
companyId | LinkedIn company ID | 1441
benefits | Extracted benefits information | health insurance, medical, dental, vision
posterProfileUrl | Job poster’s profile URL (if available) |
posterFullName | Job poster’s full name (if available) |

Notes:

  • Results are saved to your Apify dataset for easy export to JSON or CSV.
  • Bonus: The scraper also extracts benefits from the description where present and detects EXTERNAL vs LINKEDIN apply types automatically.

Key features

  • 🔎 Broad discovery options: Scrape jobs by company name, LinkedIn URL, or numeric company ID, or run keyword-based job searches to power LinkedIn job board scraping and LinkedIn job search scraping.
  • 🎯 Advanced filtering: Apply filters for location, publication date (publishedAt), workType (on-site/remote/hybrid), contractType (F/P/C/T/I/V), experienceLevel (1–5), and geoId for focused LinkedIn jobs data extraction.
  • 🛡️ Anti-blocking proxy fallback: Automatic progression from no proxy → Apify datacenter proxy (SHADER group) → residential proxy (RESIDENTIAL group) with up to 3 retries, plus intelligent retries and backoff.
  • 📄 Pagination with duplicate detection: Robust pagination via LinkedIn “jobs-guest” endpoints and logic to detect duplicate pages — essential for stable LinkedIn job postings crawling at scale.
  • 🧠 Comprehensive data extraction: Captures IDs, titles, company metadata, location, posted time, salary, apply URLs and types, job functions, sector, benefits, full descriptions (text + HTML), and more.
  • 📈 Bulk processing: Supports multiple companies and keywords in one run — ideal for market research and exporting LinkedIn jobs data to CSV or JSON for analytics.
  • 🧰 Developer-friendly: Built on Apify’s Python stack with a clean dataset output for easy integration into LinkedIn jobs scraper Python pipelines and automation tools.
  • 🚪 No login needed: Uses public “jobs-guest” endpoints, so no cookies or authentication are required; a simpler alternative to Chrome-extension LinkedIn job scrapers.
  • 📊 Production-ready: Structured logging, graceful error recovery, and resilient networking make it a dependable LinkedIn jobs automation tool.
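
The pagination-with-duplicate-detection idea can be sketched roughly as below. This is a simplified illustration, not the actor's actual code; `fetch_page` stands in for a hypothetical jobs-guest request, and the 25-jobs-per-page step is an assumption.

```python
def paginate(fetch_page, max_pages=50):
    """Collect job IDs page by page, stopping on an empty or repeated page.

    fetch_page(offset) -> list of job IDs; a repeated page signals that
    the listing endpoint has started cycling results.
    """
    seen_pages = set()
    seen_ids = set()
    job_ids = []
    for page in range(max_pages):
        ids = fetch_page(page * 25)  # assumed page size of ~25 postings
        fingerprint = tuple(ids)
        if not ids or fingerprint in seen_pages:
            break  # empty or duplicate page: pagination is exhausted
        seen_pages.add(fingerprint)
        for job_id in ids:
            if job_id not in seen_ids:  # also deduplicate individual jobs
                seen_ids.add(job_id)
                job_ids.append(job_id)
    return job_ids
```

Stopping on a repeated page fingerprint is what keeps large runs stable: without it, a cycling endpoint would loop forever or flood the dataset with duplicates.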

How to use Linkedin Jobs Scraper – step by step

  1. Sign in to Apify at https://console.apify.com.
  2. Open Actors and select “linkedin-jobs-scraper”.
  3. Add input data:
    • Enter one or more company names, LinkedIn company URLs, or company IDs into companyInput.
    • Optionally set keywords (e.g., “Software Engineer”) and location (defaults to “United States”).
    • Apply filters like publishedAt, workType, contractType, experienceLevel, and geoId as needed.
    • Control volume with maxJobs.
    • Configure proxyConfiguration (defaults to no proxy; automatic fallback is built in).
  4. Click Start to run the actor. The scraper will discover job IDs and fetch full details.
  5. Monitor real-time logs to see progress, pagination, and any proxy fallback events.
  6. When finished, open the OUTPUT tab to view results.
  7. Export data to JSON or CSV directly from the dataset.

Pro tip: Chain this LinkedIn jobs data export into Make, Zapier, n8n, or your internal ETL to automate enrichment and analysis at scale.
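
The steps above can also be scripted with the Apify API client for Python. The sketch below assembles a run input matching this README; the token and the exact actor ID in your Apify Console are placeholders you would fill in yourself.

```python
def build_run_input(companies, keywords="Software Engineer",
                    location="United States", max_jobs=20):
    """Assemble the actor input fields documented in this README."""
    return {
        "companyInput": list(companies),
        "keywords": keywords,
        "location": location,
        "maxJobs": max_jobs,
        "proxyConfiguration": {"useApifyProxy": False},
    }

def run_scraper(token, run_input, actor_id="linkedin-jobs-scraper"):
    """Start a run and return its dataset items (requires: pip install apify-client)."""
    from apify_client import ApifyClient
    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

A typical call would be `run_scraper("<APIFY_TOKEN>", build_run_input(["Google"], max_jobs=200))`, after which every item is a plain dict ready for export or downstream ETL.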

Use cases

Use case | Description
Job market research | Analyze hiring across companies and regions to quantify role demand, salary mentions, and job functions.
Competitive intelligence | Track competitor hiring patterns and growth signals by scraping LinkedIn job listings over time.
Salary benchmarking | Collect and compare public salary ranges for specific titles, stacks, and locations.
Talent acquisition ops | Monitor newly posted roles matching your pipeline and sync apply URLs into your ATS or CRM.
Academic & labor studies | Build longitudinal datasets on job trends, experience levels, and sectors for research.
API/data pipelines | Feed structured LinkedIn job data into warehouses, dashboards, or LinkedIn jobs scraper Python workflows.

Why choose Linkedin Jobs Scraper?

Built for precision and resilience, this LinkedIn job listings scraper balances scale with accuracy — without login or browser extensions.

  • ✅ Accurate, structured fields: Consistent dataset schema tailored for analysis and exports.
  • 🌍 Flexible filtering: Target location, work type, contract type, experience level, publication date, and geoId.
  • ⚡ Built to scale: Bulk companies and keyword searches with pagination and duplicate detection.
  • 🔌 Developer-ready: Clean JSON/CSV outputs for APIs, Python scripts, and automation platforms.
  • 🛡️ Safer alternative: Public data only; no need for plugins, cookies, or unstable browser automation.
  • 💰 Cost-effective: Stable runs with anti-blocking design for reliable LinkedIn jobs data extraction.
  • 🔗 Workflow-friendly: Easy export to CSV/JSON for downstream tools like BI, n8n, Zapier, or Make.

Bottom line: A production-ready LinkedIn jobs scraping tool that outperforms brittle extensions with robust networking and clean, analytics-ready output.

Is it legal to scrape LinkedIn job data?

Yes — when used responsibly. The actor collects data from publicly available LinkedIn job pages and does not access private or password-protected content. Always:

  • Scrape only public information.
  • Respect LinkedIn’s Terms of Service and platform limits.
  • Comply with applicable laws (e.g., GDPR/CCPA) and internal policies.
  • Use the data responsibly and avoid spam or misuse. Consult your legal team for specific compliance requirements in your jurisdiction.

Input parameters & output format

Example input JSON

{
  "companyInput": ["Google", "Microsoft"],
  "keywords": "Software Engineer",
  "location": "United States",
  "maxJobs": 200,
  "sortOrder": "date",
  "publishedAt": "r604800",
  "workType": "2",
  "contractType": "F",
  "experienceLevel": "4",
  "geoId": "",
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}

Input fields

  • companyInput (array) — List of company names, LinkedIn company URLs, or company IDs. Supports bulk input. Default: none. Required: no.
  • keywords (string) — Job title or keywords to search for. Default: “Software Engineer”. Required: no.
  • location (string) — Job location filter (e.g., “United States”, “New York, NY”). Default: “United States”. Required: no.
  • maxJobs (integer) — Maximum number of jobs to scrape. Min 1, Max 10,000. Default: 20. Required: no.
  • sortOrder (string) — Sort order for results. One of: "", "relevance", "date". Default: "". Required: no.
  • maxComments (integer) — Maximum comments to retrieve (optional, for future use). Minimum 0. Default: 0. Required: no.
  • publishedAt (string) — Publication date filter. One of: "", "r86400", "r604800", "r2592000". Default: "". Required: no.
  • workType (string) — Work type filter. One of: "", "1" (on-site), "2" (remote), "3" (hybrid). Default: "". Required: no.
  • contractType (string) — Contract type filter. One of: "", "F", "P", "C", "T", "I", "V". Default: "". Required: no.
  • experienceLevel (string) — Experience level filter. One of: "", "1", "2", "3", "4", "5". Default: "". Required: no.
  • geoId (string) — Geographic ID for more specific location filtering. Default: "". Required: no.
  • proxyConfiguration (object) — Proxy settings. By default, uses no proxy. If blocked, falls back to datacenter proxy, then residential proxy with up to 3 retries. Required: no.
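
The single-letter and numeric filter codes are easy to mix up, so a small lookup helper can keep run configs readable. The workType and publishedAt values below come straight from the fields above; the contractType labels follow LinkedIn's usual employment-type convention and are an assumption, not part of the actor's documented API.

```python
WORK_TYPES = {"1": "on-site", "2": "remote", "3": "hybrid"}
CONTRACT_TYPES = {"F": "Full-time", "P": "Part-time", "C": "Contract",
                  "T": "Temporary", "I": "Internship", "V": "Volunteer"}
PUBLISHED_AT = {"r86400": "past 24 hours", "r604800": "past week",
                "r2592000": "past month"}

def published_within(days):
    """Build a publishedAt code from a day count (86,400 seconds per day)."""
    return f"r{days * 86400}"
```

For example, `published_within(7)` yields `"r604800"`, the past-week filter used in the example input above.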

Example output JSON

[
  {
    "id": "4304041530",
    "publishedAt": "2025-09-23",
    "salary": "$141,000.00/yr - $202,000.00/yr",
    "title": "Software Engineer, Full Stack, Google Workspace",
    "jobUrl": "https://www.linkedin.com/jobs/view/software-engineer-full-stack-google-workspace-at-google-4304041530",
    "companyName": "Google",
    "companyUrl": "https://www.linkedin.com/company/google",
    "location": "Seattle, WA",
    "postedTime": "1 day ago",
    "applicationsCount": "68 applicants",
    "description": "Full job description text...",
    "contractType": "Full-time",
    "experienceLevel": "Not Applicable",
    "workType": "Information Technology and Engineering",
    "sector": "Information Services and Technology, Information and Internet",
    "applyUrl": "https://careers.google.com/jobs/results/...",
    "applyType": "EXTERNAL",
    "descriptionHtml": "<div>HTML description...</div>",
    "companyId": "1441",
    "benefits": "health insurance, medical, dental, vision",
    "posterProfileUrl": "",
    "posterFullName": ""
  }
]

Fields saved per job record:

  • id, publishedAt, salary, title, jobUrl, companyName, companyUrl, location, postedTime, applicationsCount, description, contractType, experienceLevel, workType, sector, applyUrl, applyType, descriptionHtml, companyId, benefits, posterProfileUrl, posterFullName.

Note: Some fields may be empty when not present on the public job page (e.g., salary, benefits, poster details).
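
Besides the Console's built-in export, dataset items (already plain dicts) can be flattened to CSV locally with the standard library; the field selection below is just one example subset.

```python
import csv

FIELDS = ["id", "title", "companyName", "location", "salary",
          "contractType", "applyUrl", "publishedAt"]

def jobs_to_csv(items, path):
    """Write selected job fields to CSV; fields absent from an item become ''."""
    with open(path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for item in items:
            writer.writerow({k: item.get(k, "") for k in FIELDS})
```

`extrasaction="ignore"` lets you pass the full job records untouched while keeping only the columns you care about.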

FAQ

Do I need to log in to scrape jobs on LinkedIn?

No. The actor uses public “jobs-guest” endpoints and does not require login or cookies to scrape LinkedIn job postings.

Can I scrape jobs without providing a company?

Yes. You can run keyword-based searches using the keywords field and optionally combine with location and other filters.

What’s the maximum number of jobs I can scrape per run?

The maxJobs parameter accepts values from 1 up to 10,000. Adjust based on your use case and resource limits.

Which filters are supported for searches?

You can filter by location, publishedAt (last 24 hours/week/month), workType (on-site/remote/hybrid), contractType (F/P/C/T/I/V), experienceLevel (1–5), and geoId.

How does the proxy fallback work?

By default, the run starts with no proxy. If LinkedIn rejects or blocks requests, the scraper automatically falls back to an Apify datacenter proxy (SHADER), then to a residential proxy (RESIDENTIAL) with up to 3 retries — all handled automatically.
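
That fallback chain can be pictured as a tiered retry loop. This is a simplified model of the behavior described above, not the actor's internals; `fetch` is a hypothetical request function.

```python
PROXY_TIERS = [None, "SHADER", "RESIDENTIAL"]  # no proxy -> datacenter -> residential
MAX_RETRIES = 3

def fetch_with_fallback(fetch, url):
    """Try each proxy tier in order, retrying up to MAX_RETRIES times per tier.

    fetch(url, proxy_group) returns a response or raises when blocked.
    """
    last_error = None
    for proxy_group in PROXY_TIERS:
        for _ in range(MAX_RETRIES):
            try:
                return fetch(url, proxy_group)
            except Exception as err:  # blocked or transient network failure
                last_error = err
    raise RuntimeError(f"all proxy tiers exhausted: {last_error}")
```

Escalating only after a tier is exhausted keeps costs down, since residential proxies are the most expensive tier and are used last.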

What data fields are extracted for each job?

For each posting the actor extracts IDs, titles, company metadata and ID, location, posted time, applicants count, salary if present, contract type, experience level, work type, sector, apply URL and type, benefits (when detectable), and full descriptions in text and HTML.

Can I export results to CSV for analysis?

Yes. Open the dataset in the Apify run and export to JSON or CSV for downstream analytics and CRM/ATS uploads.

Is scraping LinkedIn jobs legal?

Yes, when done responsibly. This tool collects only publicly available data. You’re responsible for complying with LinkedIn’s Terms of Service and applicable data protection laws.

Final thoughts

The Linkedin Jobs Scraper is built to extract structured LinkedIn job postings at scale for research, analytics, and automation. With robust discovery, advanced filtering, anti‑blocking proxy fallback, and rich structured outputs, it’s ideal for recruiters, HR teams, market researchers, and data engineers. Export LinkedIn jobs data to CSV/JSON, integrate with your API or LinkedIn jobs scraper Python pipeline, and automate insights end‑to‑end. Start extracting smarter, cleaner LinkedIn job data today.