LinkedIn Jobs Scraper - Professional Job Listings
Pricing
from $3.00 / 1,000 results
Scrapes public job listings from LinkedIn's job board. Filter by location, job type, experience level, and remote options. Extract company info, job descriptions, and application links. No login required.
Developer: Alessandro Santamaria
Last modified: 2 days ago
LinkedIn Job Scraper
Scrapes job listings from LinkedIn Jobs - the world's largest professional network with 1B+ members and millions of job postings worldwide.
Three Scraping Modes
This scraper supports three distinct modes for different use cases:
| Mode | Input | Output | Use Case |
|---|---|---|---|
| SEARCH MODE | Search query + filters | Basic job data from SERP | Fast discovery of new jobs |
| SEARCH + DETAILS MODE | `includeJobDetails: true` | Full job data with descriptions | Complete data collection in one run |
| DIRECT URLS MODE | `directUrls: [...]` | Full job data + `job_status` | Still-alive checks, re-scraping specific jobs |
Features
- Global coverage - Search jobs in any country or location worldwide
- Advanced filters - Filter by date posted, job type, experience level, and remote options
- No login required - Uses LinkedIn's public guest API for job search
- Job status checks - Verify if jobs are still online, expired, or removed
- Rich data extraction - Title, company, location, salary, description, apply URL, and more
- Swiss canton detection - Automatically detects Swiss canton codes from location
- Standardized output - Consistent `JobListing` schema across all job scrapers
- Rate-limited - Respectful delays between requests
- Proxy support - Built-in proxy rotation for reliability
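The Swiss canton detection mentioned above might work along the following lines. This is a simplified sketch for illustration only; the mapping table and matching logic inside the actor may differ, and only a handful of cantons are shown here.

```javascript
// Simplified sketch of Swiss canton detection from a location string.
// The real actor's mapping is likely more complete; this table is
// illustrative only.
const CANTON_NAMES = {
  zurich: 'ZH', zug: 'ZG', bern: 'BE', geneva: 'GE',
  basel: 'BS', lausanne: 'VD', lucerne: 'LU',
};

function detectCanton(location) {
  const normalized = location.toLowerCase();
  for (const [name, code] of Object.entries(CANTON_NAMES)) {
    if (normalized.includes(name)) return code;
  }
  return null; // non-Swiss or unrecognized location
}

console.log(detectCanton('Zurich, Switzerland')); // "ZH"
console.log(detectCanton('Berlin, Germany'));     // null
```

Non-Swiss locations simply yield `null`, which matches the `canton` field being populated only for Swiss jobs.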
Input
| Field | Type | Description | Default |
|---|---|---|---|
| `directUrls` | array | Direct job URLs to scrape (skips search mode) | `[]` |
| `searchQuery` | string | Job title, skills, or keywords to search for | `""` |
| `location` | string | City, country, or region (e.g., "Zurich", "Switzerland", "Germany") | `""` |
| `datePosted` | string | Filter by posting date: `any`, `past-24h`, `past-week`, `past-month` | `any` |
| `jobType` | string | Filter by job type: `full-time`, `part-time`, `contract`, `temporary`, `internship` | `""` |
| `experienceLevel` | string | Filter by experience: `entry`, `associate`, `mid-senior`, `director`, `executive` | `""` |
| `remoteFilter` | string | Filter by workplace: `remote`, `on-site`, `hybrid` | `""` |
| `maxResults` | integer | Maximum number of job listings to scrape | `100` |
| `includeJobDetails` | boolean | Fetch full job descriptions (slower but more data) | `false` |
| `proxyConfiguration` | object | Apify proxy settings | Residential |
Mode 1: SEARCH MODE (Fast)
Search for jobs using keywords and filters:
- Use case: Discover new jobs, market analysis, broad searches
- Speed: Fast - uses LinkedIn's guest API
- Output: Basic job data from search results
```json
{
  "searchQuery": "Software Engineer",
  "location": "Switzerland",
  "datePosted": "past-week",
  "jobType": "full-time",
  "maxResults": 100
}
```
Mode 2: SEARCH + DETAILS MODE (Complete Data)
Search with `includeJobDetails: true` to fetch full descriptions:
- Use case: Complete data collection in one run
- Speed: Moderate - visits detail pages for each job
- Output: Full descriptions, requirements, apply URLs
```json
{
  "searchQuery": "Data Scientist",
  "location": "Zurich",
  "maxResults": 50,
  "includeJobDetails": true
}
```
Mode 3: DIRECT URLS MODE (Still Alive Checks)
When `directUrls` is provided, the scraper operates in direct mode:
- Skips search phase - Goes directly to provided job URLs
- Job status detection - Returns `online`, `offline`, `expired`, or `unknown`
- Full data extraction - Same as detail page scraping
- Use case: Periodic "still alive" checks, re-scraping specific jobs after deduplication
```json
{
  "directUrls": [
    "https://www.linkedin.com/jobs/view/3812345678",
    "https://www.linkedin.com/jobs/view/3823456789",
    "https://www.linkedin.com/jobs/view/senior-software-engineer-at-google-3834567890"
  ]
}
```
Direct URLs mode workflow:
- Provide array of LinkedIn job URLs (must contain job ID)
- Scraper visits each URL directly
- Detects if job is still online or has been removed
- Extracts full job data if available
- Returns `job_status` field indicating availability
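Since direct URLs must contain the numeric job ID, it can be useful to validate them before a run. A simple extractor might look like this (illustrative only; the actor's own URL validation may differ):

```javascript
// Extract the numeric LinkedIn job ID from a /jobs/view/ URL.
// Handles both plain-ID and slug-style URLs; returns null when
// no job ID can be found.
function extractJobId(url) {
  const match = url.match(/\/jobs\/view\/(?:[^/]*-)?(\d+)/);
  return match ? match[1] : null;
}

console.log(extractJobId('https://www.linkedin.com/jobs/view/3812345678')); // "3812345678"
console.log(extractJobId('https://www.linkedin.com/jobs/view/senior-software-engineer-at-google-3834567890')); // "3834567890"
console.log(extractJobId('https://www.linkedin.com/feed/')); // null
```

Filtering out URLs that yield `null` before submitting a Mode 3 run avoids wasted requests.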
Filter Options
Date Posted
| Value | Description |
|---|---|
| `any` | All jobs (default) |
| `past-24h` | Posted in the last 24 hours |
| `past-week` | Posted in the last 7 days |
| `past-month` | Posted in the last 30 days |
Job Type
| Value | Description |
|---|---|
| `full-time` | Full-time positions |
| `part-time` | Part-time positions |
| `contract` | Contract/freelance work |
| `temporary` | Temporary positions |
| `internship` | Internships |
Experience Level
| Value | Description |
|---|---|
| `entry` | Entry level / Junior |
| `associate` | Associate level |
| `mid-senior` | Mid-Senior level |
| `director` | Director level |
| `executive` | Executive / C-level |
Remote/On-site
| Value | Description |
|---|---|
| `remote` | Fully remote positions |
| `on-site` | On-site only |
| `hybrid` | Hybrid work arrangements |
Example Input
Basic Search (IT Jobs in Switzerland)
```json
{
  "searchQuery": "Software Engineer",
  "location": "Switzerland",
  "maxResults": 100
}
```
Recent Remote Jobs
```json
{
  "searchQuery": "Developer",
  "remoteFilter": "remote",
  "datePosted": "past-week",
  "maxResults": 200
}
```
Senior Positions in Zurich
```json
{
  "searchQuery": "Manager",
  "location": "Zurich",
  "experienceLevel": "mid-senior",
  "jobType": "full-time",
  "maxResults": 50
}
```
Full Data Collection
```json
{
  "searchQuery": "Data Engineer",
  "location": "Germany",
  "maxResults": 100,
  "includeJobDetails": true
}
```
Direct URLs - Still Alive Check
```json
{
  "directUrls": [
    "https://www.linkedin.com/jobs/view/3812345678",
    "https://www.linkedin.com/jobs/view/3823456789"
  ]
}
```
Output
Each job listing follows the standardized JobListing schema:
```json
{
  "id": "3812345678",
  "title": "Senior Software Engineer",
  "company": "Google",
  "location": "Zurich, Switzerland",
  "canton": "ZH",
  "job_status": "online",
  "top_listing": false,
  "employment_type": "full-time",
  "workload_min": null,
  "workload_max": null,
  "remote_option": "hybrid",
  "salary_text": "CHF 150'000 - 200'000",
  "description_snippet": "We're looking for a Senior Software Engineer to join our Zurich office...",
  "description_full": "Full job description with all details...",
  "requirements": [],
  "posted_at": "2024-12-10T00:00:00.000Z",
  "expires_at": null,
  "source_url": "https://www.linkedin.com/jobs/view/3812345678",
  "source_platform": "linkedin.com",
  "apply_url": "https://careers.google.com/apply/12345",
  "company_url": "https://www.linkedin.com/company/google",
  "company_industry": "Technology, Information and Internet",
  "company_employee_count": "10,001+ employees",
  "scraped_at": "2024-12-13T10:30:00.000Z"
}
```
Output Fields
| Field | Description |
|---|---|
| `id` | LinkedIn job ID (from URL) |
| `title` | Job title |
| `company` | Company name |
| `location` | City/location as displayed |
| `canton` | Swiss canton code (`ZH`, `BE`, etc.) - only for Swiss jobs |
| `job_status` | Job availability: `online`, `offline`, `expired`, `unknown` (Direct URLs mode only) |
| `top_listing` | Boolean - whether the job is promoted/sponsored (Search mode only) |
| `employment_type` | `full-time`, `part-time`, `contract`, `temporary`, or `internship` |
| `workload_min` | Minimum workload percentage (usually `null` for LinkedIn) |
| `workload_max` | Maximum workload percentage (usually `null` for LinkedIn) |
| `remote_option` | `remote`, `hybrid`, `onsite`, or `null` |
| `salary_text` | Salary as displayed (if available) |
| `description_snippet` | First 500 characters of the description |
| `description_full` | Complete job description (Details mode only) |
| `requirements` | Array of requirements (usually empty for LinkedIn) |
| `posted_at` | Publication date |
| `expires_at` | Expiration date (usually `null`) |
| `source_url` | Link to the job posting on LinkedIn |
| `source_platform` | Always `"linkedin.com"` |
| `apply_url` | External application URL |
| `company_url` | LinkedIn company page URL |
| `company_industry` | Company industry/sector |
| `company_employee_count` | Company size (e.g., "501-1000 employees") |
| `scraped_at` | Timestamp when the job was scraped |
Notes on job_status field:
- Only populated in Direct URLs mode (Mode 3)
- In Search modes (Mode 1 & 2), this field is `null`, since jobs from search results are assumed to be online
- Values:
  - `online` - Job page loaded successfully with content
  - `offline` - Job page returns "not found" or similar
  - `expired` - Job explicitly marked as expired or closed
  - `unknown` - Unable to determine status
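Because every record follows the same `JobListing` schema, downstream analysis is straightforward. A small illustration with inlined sample records (in practice the items come from the run's dataset; the company names here are made up):

```javascript
// Sample JobListing records, inlined for illustration only.
const jobs = [
  { id: '1', company: 'Acme AG', remote_option: 'remote', salary_text: null },
  { id: '2', company: 'Acme AG', remote_option: 'onsite', salary_text: "CHF 100'000" },
  { id: '3', company: 'Globex', remote_option: 'hybrid', salary_text: null },
];

// Count postings per company
const jobsPerCompany = jobs.reduce((acc, job) => {
  acc[job.company] = (acc[job.company] || 0) + 1;
  return acc;
}, {});

// Keep only jobs that disclose a salary
const withSalary = jobs.filter(job => job.salary_text !== null);

console.log(jobsPerCompany);    // { 'Acme AG': 2, Globex: 1 }
console.log(withSalary.length); // 1
```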
Usage
Via Apify Console
- Go to the actor page
- Configure input parameters
- Click "Start"
- Download results from the Dataset tab (JSON, CSV, Excel)
Via API
```bash
curl -X POST "https://api.apify.com/v2/acts/santamaria~linkedin-scraper/runs" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"searchQuery": "Software Engineer", "location": "Switzerland", "maxResults": 100}'
```
Via Apify SDK (Node.js)
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('santamaria/linkedin-scraper').call({
  searchQuery: 'Data Scientist',
  location: 'Zurich',
  datePosted: 'past-week',
  maxResults: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} jobs`);
```
Still Alive Checks via API
```javascript
// Check if previously scraped jobs are still online
const run = await client.actor('santamaria/linkedin-scraper').call({
  directUrls: [
    'https://www.linkedin.com/jobs/view/3812345678',
    'https://www.linkedin.com/jobs/view/3823456789',
  ],
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(job => {
  console.log(`Job ${job.id}: ${job.job_status}`);
  // e.g. "Job 3812345678: online" or "Job 3823456789: offline"
});
```
Performance
- Speed: ~50-100 jobs/minute (limited by respectful rate limiting)
- Cost: ~0.02-0.05 CU per 100 jobs
- Rate limiting: 2 seconds between search pages, 1 second between detail pages
- Reliability: Built-in retry logic and error handling
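The delays above can be implemented with a simple sleep helper between requests. This is a generic sketch of the pattern, not the actor's internal code; `fetchPolitely` and its `handler` callback are hypothetical names.

```javascript
// Generic polite-crawling helper: process URLs sequentially with a
// fixed delay between requests.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function fetchPolitely(urls, handler, delayMs = 1000) {
  const results = [];
  for (const url of urls) {
    results.push(await handler(url)); // the actual page fetch goes here
    await sleep(delayMs);             // e.g. 1000 ms between detail pages
  }
  return results;
}
```

Sequential processing with a delay is what keeps throughput around the 50-100 jobs/minute figure quoted above.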
Proxy Recommendations
LinkedIn has anti-bot protection. For best results:
- Use Residential proxies (required)
- Avoid Datacenter proxies (will be blocked)
- Keep `maxResults` reasonable (<500 per run)
- Add delays between runs if scraping frequently
```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Common Use Cases
- Job market analysis: Track hiring trends across industries and locations
- Competitive intelligence: Monitor which companies are hiring for specific roles
- Job aggregation: Build job search platforms with LinkedIn data
- Still-alive monitoring: Check if previously scraped jobs are still active (Direct URLs mode)
- Post-deduplication enrichment: Scrape basic data first, then fetch details for new jobs only
- Lead generation: Find companies that are actively hiring
- Recruitment analytics: Analyze job market data for specific roles
- Salary research: Analyze compensation ranges (when displayed)
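The post-deduplication enrichment pattern can be sketched as plain set logic: run SEARCH MODE first, drop jobs already seen in earlier runs, then feed only the new URLs into a DIRECT URLS run. The IDs and records below are made-up examples.

```javascript
// IDs collected from previous runs (e.g. loaded from a key-value store)
const seenIds = new Set(['3812345678']);

// Basic records from a SEARCH MODE run (sample data)
const searchResults = [
  { id: '3812345678', source_url: 'https://www.linkedin.com/jobs/view/3812345678' },
  { id: '3823456789', source_url: 'https://www.linkedin.com/jobs/view/3823456789' },
];

// Only unseen jobs need the (slower) detail scrape
const newJobs = searchResults.filter(job => !seenIds.has(job.id));
const directUrls = newJobs.map(job => job.source_url);

console.log(directUrls); // the input array for a Mode 3 run
```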
How It Works
Search Mode (Modes 1 & 2)
- API Phase: Queries LinkedIn's public guest API for job search results
- Parsing: Extracts job data from API response (title, company, location, etc.)
- Detail Phase (optional): Visits each job's detail page with Playwright for full description
- Validation: Each job is validated against the JobListing schema before saving
- Pagination: Automatically follows pagination (25 jobs per page) until `maxResults` is reached
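Since the guest API returns 25 jobs per page, the number of search pages a run fetches follows directly from `maxResults`:

```javascript
// Pages fetched in the search phase at 25 jobs per page.
const pageSize = 25;
const pagesNeeded = maxResults => Math.ceil(maxResults / pageSize);

console.log(pagesNeeded(100)); // 4
console.log(pagesNeeded(30));  // 2 (the last page is only partially used)
```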
Direct URLs Mode (Mode 3)
- Skip search: Goes directly to provided job URLs
- Page load: Uses Playwright to render job detail pages
- Status detection: Checks if job page shows valid content or error
- Extraction: Extracts all available job data
- Status field: Returns `job_status` indicating availability
Notes
- LinkedIn's page structure changes frequently; selectors may need updates
- Some job details (salary, requirements) may not always be available
- Rate limiting applies - don't run too many concurrent requests
- No login required - uses publicly accessible job search
- Job IDs are extracted from URLs (numeric format: `3812345678`)
- Swiss canton codes are automatically detected from location text
Legal Notice
This actor scrapes publicly available job listings from LinkedIn's public job search (no login required). It does not access any data behind LinkedIn's authentication wall.
Users are responsible for ensuring their use complies with LinkedIn's Terms of Service and applicable laws in their jurisdiction.
Part of the Santamaria Job Scrapers Suite - Professional-grade job data for the DACH region and beyond.
Need help with integration, aggregation, or custom scraping solutions? Contact us at contact@alessandrosantamaria.com