
Linkedin Jobs Scraper

Pricing

$19.99/month + usage


Scrape LinkedIn job listings with ease 💼📊 Extract job titles, companies, locations, salaries, descriptions, posted dates, and direct links. Perfect for job tracking, recruitment insights, market research, and lead generation. Automate LinkedIn job data collection at scale 🚀


Rating: 0.0 (0 reviews)

Developer: ScraperForge (Maintained by Community)

Actor stats: 0 bookmarked · 5 total users · 0 monthly active users · last modified 14 days ago

Linkedin Jobs Scraper

The Linkedin Jobs Scraper is an Apify actor that automatically extracts structured LinkedIn job posting data at scale. It solves the pain of manual copy-paste and unreliable tools by providing a robust LinkedIn jobs scraper for keyword searches and company-targeted collection, making it ideal for recruiters, marketers, developers, data analysts, and researchers. With reliable pagination, filtering, and proxy fallback, this LinkedIn job postings scraper enables repeatable, large-scale pipelines for hiring intelligence, market research, and lead generation.

What is Linkedin Jobs Scraper?

Linkedin Jobs Scraper is a purpose-built LinkedIn jobs scraper that collects detailed job data from publicly available postings. It automates keyword and company searches, applies rich filters (location, work type, contract type, experience level, publication date), and handles pagination with duplicate detection. Designed for marketers, developers, data analysts, and researchers, this LinkedIn jobs API alternative helps you build consistent datasets and insights at scale.

What data / output can you get?

Below are the exact fields this LinkedIn job listing scraper exports to the Apify dataset:

| Data field | Description | Example value |
| --- | --- | --- |
| id | LinkedIn job posting ID | 4304041530 |
| title | Job title | Software Engineer, Full Stack |
| companyName | Company name | Google |
| companyUrl | Company LinkedIn page URL | https://www.linkedin.com/company/google |
| jobUrl | Direct URL to the job posting | https://www.linkedin.com/jobs/view/...-4304041530 |
| location | Job location | Seattle, WA |
| publishedAt | Publication date (YYYY-MM-DD) | 2025-09-23 |
| postedTime | Human-readable posted time | 1 day ago |
| salary | Salary range if available | $141,000/yr – $202,000/yr |
| applicationsCount | Number of applicants | 68 applicants |
| contractType | Employment type | Full-time |
| experienceLevel | Required seniority | Not Applicable |
| workType | Job function/category | Information Technology and Engineering |
| sector | Industry sector | Information Services and Technology |
| applyUrl | Direct application URL | https://careers.google.com/jobs/results/... |
| applyType | Application modality | EXTERNAL |
| description | Full job description (plain text) | Full job description text... |
| descriptionHtml | HTML version of description | |
| companyId | LinkedIn company ID | 1441 |
| benefits | Extracted benefits information | health insurance, medical, dental, vision |
| posterProfileUrl | Job poster profile URL (if available) | https://www.linkedin.com/in/... |
| posterFullName | Job poster name (if available) | Jane Doe |

Notes:

  • Results are saved to the Apify dataset and can be exported to JSON or CSV.
  • Some fields may be empty if not publicly present on the job page (e.g., salary, benefits, poster details).

Key features

  • 🔎 Keyword & company targeting: Run keyword-based searches or scrape by company names, LinkedIn URLs, or numeric company IDs, making this a flexible LinkedIn job scraper tool for many workflows.
  • 🧭 Advanced filters: Filter by location, work type (on-site/remote/hybrid), contract type (full-time, part-time, etc.), experience level, and publication date windows.
  • 🔁 Reliable pagination with duplicate detection: The LinkedIn job crawler automatically paginates through search results and prevents duplicates for clean datasets.
  • 🛡️ Intelligent proxy fallback: Starts with no proxy and automatically falls back to datacenter and then residential proxies (with retries) if LinkedIn blocks or rejects requests.
  • 🚫 No login required: Scrapes public LinkedIn job postings via the guest endpoints, with no cookies, extensions, or session setup required.
  • 🧰 Developer-friendly: Built on the Apify platform for easy automation and integration. Trigger runs via Apify Console or API and export structured data to your stack.
  • 📦 Robust output schema: Extracts IDs, titles, company metadata, locations, dates, salary signals, apply links, job descriptions (text + HTML), benefits, and poster info.
  • 📈 Production-ready resilience: Built-in retry logic, rate limiting, and block detection for high-success runs at scale.

How to use Linkedin Jobs Scraper - step by step

  1. Sign in to Apify at https://console.apify.com.
  2. Open the actor named "linkedin-jobs-scraper".
  3. Add your inputs:
    • Enter companies in companyInput as names, LinkedIn company URLs, or numeric company IDs (e.g., "Google", "https://www.linkedin.com/company/google/", or "1441").
    • Set keywords for role-based searches (e.g., "Software Engineer"); this LinkedIn jobs scraping tool supports keyword-only runs.
    • Adjust filters like location, publishedAt, workType, contractType, experienceLevel, or geoId as needed.
    • Control the volume with maxJobs and (optionally) sortOrder.
    • Configure proxyConfiguration (defaults to no proxy with automatic fallback when blocked).
  4. Click Start to launch the run.
  5. Monitor real-time logs for progress, pagination, and any proxy fallback events.
  6. When finished, go to the OUTPUT tab to view results in the dataset.
  7. Export your dataset to JSON or CSV for analysis or ingestion.

Pro tip: Automate your pipeline with the Apify API: schedule runs, pipe JSON to your database, or connect to tools like Make, n8n, or Zapier for end-to-end workflows.
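The Console steps and API pro tip above can be sketched in Python with the official apify-client package (`pip install apify-client`). This is a minimal sketch, not the actor's own code: the actor ID string is a hypothetical placeholder (copy the real one from the actor's page in Apify Console), and `build_run_input` / `fetch_jobs` are illustrative helper names.

```python
def build_run_input(keywords, location, max_jobs=50, published_at="r604800"):
    """Assemble the actor's input object (field names from the input schema below)."""
    return {
        "keywords": keywords,
        "location": location,
        "maxJobs": max_jobs,
        "publishedAt": published_at,
    }

def fetch_jobs(token, actor_id, run_input):
    """Start a run, wait for it to finish, and return all dataset items."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

# Example (requires a valid token and the actor's real ID):
# jobs = fetch_jobs("YOUR_APIFY_TOKEN", "scraperforge/linkedin-jobs-scraper",
#                   build_run_input("Software Engineer", "United States"))
```

Swap the commented call for a scheduled run or a Make/n8n step to get the same pipeline without touching the Console.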

Use cases

| Use case | Description |
| --- | --- |
| Recruitment + role monitoring | Track new postings by title and location to alert recruiters and fill pipelines faster. |
| Competitive hiring intelligence | Monitor competitors' open roles to analyze team growth and strategic priorities. |
| Salary signals & benchmarking | Collect salary ranges where available to inform compensation analysis. |
| Location & remote trends | Measure hybrid/remote vs. on-site roles across regions for workforce planning. |
| Lead generation for HR tech | Build datasets of hiring companies and roles for outreach or enrichment. |
| Academic & market research | Study job trends, skill demand, and sector dynamics with reproducible runs. |
| API-driven data pipelines | Orchestrate scheduled scraping and export structured results to your BI/warehouse. |

Why choose Linkedin Jobs Scraper?

A precise, production-ready LinkedIn job data scraper built for automation and scale.

  • 🎯 Accuracy that matters: Extracts structured fields from public job pages with text and HTML descriptions.
  • 🌐 Flexible filtering: Combine keywords, location, work type, contract type, experience, and publication date for targeted datasets.
  • ⚡ Scales with you: Pagination, duplicate detection, rate limiting, and batch processing for large runs.
  • 🧪 Developer access: Works seamlessly with the Apify Console and API for programmatic control and CI/CD pipelines.
  • 🔒 Safe-by-design: Targets publicly available job listings only; no login, cookies, or private endpoints.
  • 🔍 Superior to extensions: Server-side reliability and intelligent proxy fallback outperform unstable browser-based tools.
  • 🔗 Integration-ready: Export results to JSON/CSV and connect to your analytics or enrichment workflows.

In short, it's LinkedIn jobs scraping software built for reliability, structured outputs, and automation-first teams.

Is it legal to scrape LinkedIn jobs?

Yes, when used responsibly. This LinkedIn jobs extractor collects data from publicly available job listings only and does not access private profiles or gated content. You are responsible for:

  • Respecting LinkedIn's Terms of Service and applicable laws (e.g., GDPR, CCPA).
  • Using the data for lawful, non-abusive purposes (no spam).
  • Applying reasonable rate limits and proxy use as configured. For edge cases, consult your legal team to ensure compliance with your specific use.

Input parameters & output format

Example input JSON

```json
{
  "companyInput": ["Google", "https://www.linkedin.com/company/microsoft/"],
  "keywords": "Software Engineer",
  "location": "United States",
  "maxJobs": 200,
  "sortOrder": "date",
  "publishedAt": "r604800",
  "workType": "2",
  "contractType": "F",
  "experienceLevel": "4",
  "geoId": "",
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```

All input fields

  • companyInput (array)
    • Description: List of company names (e.g., 'Google'), LinkedIn company URLs (e.g., 'https://www.linkedin.com/company/google/'), or company IDs (e.g., '1441'). Supports bulk input.
    • Default: not set (prefill: ["Google"])
    • Required: No
  • keywords (string)
    • Description: Job title or keywords to search for (e.g., 'Software Engineer', 'Developer'). Leave empty to get all jobs.
    • Default: "Software Engineer"
    • Required: No
  • location (string)
    • Description: Job location filter (e.g., 'United States', 'New York, NY').
    • Default: "United States"
    • Required: No
  • maxJobs (integer)
    • Description: Maximum number of jobs to scrape.
    • Default: 20 (min: 1, max: 10000)
    • Required: No
  • sortOrder (string)
    • Description: Sort order for results (optional).
    • Default: "" (allowed: "", "relevance", "date")
    • Required: No
  • maxComments (integer)
    • Description: Maximum comments to retrieve (optional, for future use).
    • Default: 0
    • Required: No
  • publishedAt (string)
    • Description: Filter by publication date. Options: 'r86400' (last 24 hours), 'r604800' (last week), 'r2592000' (last month).
    • Default: "" (allowed: "", "r86400", "r604800", "r2592000")
    • Required: No
  • workType (string)
    • Description: Filter by work type. Options: '1' (on-site), '2' (remote), '3' (hybrid).
    • Default: "" (allowed: "", "1", "2", "3")
    • Required: No
  • contractType (string)
    • Description: Filter by contract type. Options: 'F' (full-time), 'P' (part-time), 'C' (contract), 'T' (temporary), 'I' (internship), 'V' (volunteer).
    • Default: "" (allowed: "", "F", "P", "C", "T", "I", "V")
    • Required: No
  • experienceLevel (string)
    • Description: Filter by experience level. Options: '1' (internship), '2' (entry), '3' (associate), '4' (mid-senior), '5' (director).
    • Default: "" (allowed: "", "1", "2", "3", "4", "5")
    • Required: No
  • geoId (string)
    • Description: Geographic ID for more specific location filtering (optional).
    • Default: ""
    • Required: No
  • proxyConfiguration (object)
    • Description: Choose which proxies to use. By default, uses no proxy. If LinkedIn rejects or blocks the request, falls back to datacenter proxy, then residential proxy with 3 retries.
    • Default: not set (prefill: {"useApifyProxy": false})
    • Required: No
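Because the filter values are single-character codes, they are easy to mix up. A small, hypothetical helper (the mappings and the `filters` function name are illustrative; the codes themselves come from the parameter list above) keeps run inputs readable:

```python
# Mappings from readable filter names to the input schema's codes.
WORK_TYPE = {"on-site": "1", "remote": "2", "hybrid": "3"}
CONTRACT_TYPE = {
    "full-time": "F", "part-time": "P", "contract": "C",
    "temporary": "T", "internship": "I", "volunteer": "V",
}
EXPERIENCE_LEVEL = {
    "internship": "1", "entry": "2", "associate": "3",
    "mid-senior": "4", "director": "5",
}
# publishedAt codes are "r" followed by a window in seconds:
PUBLISHED_AT = {
    "24h": "r86400",      # 24 * 3600
    "week": "r604800",    # 7 * 86400
    "month": "r2592000",  # 30 * 86400
}

def filters(work=None, contract=None, experience=None, published=None):
    """Translate readable names into the actor's filter fields, skipping blanks."""
    out = {}
    if work:
        out["workType"] = WORK_TYPE[work]
    if contract:
        out["contractType"] = CONTRACT_TYPE[contract]
    if experience:
        out["experienceLevel"] = EXPERIENCE_LEVEL[experience]
    if published:
        out["publishedAt"] = PUBLISHED_AT[published]
    return out
```

For example, `filters(work="remote", contract="full-time")` yields `{"workType": "2", "contractType": "F"}`, ready to merge into the actor input.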

Example output JSON

```json
[
  {
    "id": "4304041530",
    "publishedAt": "2025-09-23",
    "salary": "$141,000/yr - $202,000/yr",
    "title": "Software Engineer, Full Stack, Google Workspace",
    "jobUrl": "https://www.linkedin.com/jobs/view/software-engineer-full-stack-google-workspace-at-google-4304041530",
    "companyName": "Google",
    "companyUrl": "https://www.linkedin.com/company/google",
    "location": "Seattle, WA",
    "postedTime": "1 day ago",
    "applicationsCount": "68 applicants",
    "description": "Full job description text...",
    "contractType": "Full-time",
    "experienceLevel": "Not Applicable",
    "workType": "Information Technology and Engineering",
    "sector": "Information Services and Technology, Information and Internet",
    "applyUrl": "https://careers.google.com/jobs/results/...",
    "applyType": "EXTERNAL",
    "descriptionHtml": "<div>HTML description...</div>",
    "companyId": "1441",
    "benefits": "health insurance, medical, dental, vision",
    "posterProfileUrl": "",
    "posterFullName": ""
  }
]
```

Output notes:

  • The actor pushes one object per job with the fields above.
  • Some fields may be empty when not present on the public page (e.g., salary, benefits, poster details).
  • Data is written to the default Apify dataset for easy JSON/CSV export.
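If you export the dataset as JSON and want CSV downstream, the conversion is a one-liner with the standard library. A minimal sketch (field names are from the output schema above; `jobs_to_csv` is an illustrative helper, not part of the actor):

```python
import csv
import io

def jobs_to_csv(jobs, fields=("id", "title", "companyName", "location", "salary")):
    """Write selected fields of each job object to CSV text; missing keys become ""."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields))
    writer.writeheader()
    for job in jobs:
        writer.writerow({f: job.get(f, "") for f in fields})
    return buf.getvalue()

sample = [{"id": "4304041530", "title": "Software Engineer, Full Stack",
           "companyName": "Google", "location": "Seattle, WA"}]
print(jobs_to_csv(sample))
```

Empty optional fields (salary, benefits, poster details) simply come through as blank cells, so the CSV stays rectangular.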

FAQ

Do I need to log in to scrape LinkedIn jobs?

No. The actor uses LinkedIn's guest endpoints and does not require cookies or a login. It targets publicly available job postings only.

Can I scrape jobs without providing a company?

Yes. You can run keyword-only searches using the keywords field, optionally combined with location and other filters.

How many jobs can I scrape per run?

You can set maxJobs from 1 to 10,000 (default 20 in the input schema). Actual throughput depends on filters, availability, and network conditions.

What filters are supported?

You can filter by location, publishedAt (last 24h/week/month), workType (on-site/remote/hybrid), contractType (full-time, part-time, contract, temporary, internship, volunteer), experienceLevel, and geoId.

Does it extract salary and benefits?

Yes. The scraper detects salary ranges where present and extracts benefit keywords or sentences from the job description when available.

How does the proxy fallback work?

By default, it starts with no proxy. If LinkedIn blocks or rejects requests, it automatically falls back to an Apify datacenter proxy, then to residential proxies with retries. These events are logged during the run.
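The actor's internals aren't published, but the escalation just described can be sketched generically. In this sketch the tier order comes from the description above, while `RETRIES_PER_TIER`, the `fetch` callback, and the function name are assumptions for illustration:

```python
# Proxy tiers in escalation order: no proxy first, then datacenter,
# then residential. RETRIES_PER_TIER is an assumed value.
TIERS = [None, "datacenter", "residential"]
RETRIES_PER_TIER = 3

def fetch_with_fallback(fetch):
    """Try each proxy tier in order; `fetch(proxy)` returns a result or raises on a block."""
    last_error = None
    for proxy in TIERS:
        for _attempt in range(RETRIES_PER_TIER):
            try:
                return proxy, fetch(proxy)
            except Exception as err:  # e.g., an HTTP 429/999 block response
                last_error = err
    raise RuntimeError(f"all proxy tiers exhausted: {last_error}")
```

The run log would reflect each escalation, which is why fallback events are visible while monitoring a run.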

Can I integrate this with Python or an API?

Yes. Run the actor via the Apify API from any language (including Python) to automate schedules, retrieve datasets as JSON, and push results into your data pipeline.

What output formats are available?

Results are stored in the Apify dataset. You can export to JSON or CSV directly from the actor run.

Is it compliant to use this LinkedIn jobs scraper?

Yes, when used responsibly. The actor collects public job data and avoids private content. Ensure your usage complies with LinkedIn's Terms of Service and applicable regulations (e.g., GDPR/CCPA).

Final thoughts

Linkedin Jobs Scraper is built to automate the collection of public LinkedIn job postings at scale. With flexible filters, intelligent proxy fallback, and a rich, structured output, itโ€™s ideal for recruiters, marketers, developers, analysts, and researchers who need reliable hiring and market insights. Use the Apify Console for quick runs, or integrate via the Apify API to power Python workflows and automated pipelines. Start extracting smarter, structured LinkedIn job data today.