LinkedIn Jobs Scraper - Professional Job Listings

Scrapes public job listings from LinkedIn's job board. Filter by location, job type, experience level, and remote options. Extract company info, job descriptions, and application links. No login required.

Pricing: from $3.00 / 1,000 results
Developer: Alessandro Santamaria (Maintained by Community)
Actor stats: 0 bookmarked · 49 total users · 13 monthly active users · last modified 4 days ago

LinkedIn Job Scraper

Scrapes job listings from LinkedIn Jobs - the world's largest professional network with 1B+ members and millions of job postings worldwide.

HTTP-only, ultra-lightweight - No browser needed. Runs on just 128-512 MB of memory for extremely low compute costs.

Three Scraping Modes

| Mode | Input | Output | Use Case |
| --- | --- | --- | --- |
| SEARCH MODE | Search query + filters | Basic job data from SERP | Fast discovery of new jobs |
| SEARCH + DETAILS MODE | `includeJobDetails: true` | Full job data with descriptions | Complete data collection in one run |
| DIRECT URLS MODE | `directUrls: [...]` | Full job data + `job_status` | Still-alive checks, re-scraping specific jobs |

Features

  • Global coverage - Search jobs in any country or location worldwide
  • Advanced filters - Filter by date posted, job type, experience level, and remote options
  • No login required - Uses LinkedIn's public guest API for job search and detail pages
  • Job status checks - Verify if jobs are still online, expired, or removed
  • Rich data extraction - Title, company, location, description, seniority level, industry, applicant count, and more
  • Ultra-low resource usage - HTTP-only, no Chrome browser needed (128-512 MB)
  • Swiss canton detection - Automatically detects Swiss canton codes from location
  • Standardized output - Consistent JobListing schema across all job scrapers
  • Pay-per-result pricing - Only pay for results you get
  • Rate-limited - Respectful delays between requests
  • Proxy support - Built-in proxy rotation for reliability

Pricing

This actor uses pay-per-result pricing. You only pay for the data you receive.

| Event | Price | Description |
| --- | --- | --- |
| SERP result | $0.008 | Each job from search results (search-only mode) |
| Detail result | $0.020 | Each job with full details (detail mode or direct URLs) |

Examples:

  • 100 search results = $0.80
  • 50 jobs with full details = $1.00
  • 200 search results + 50 detailed = $2.60

No monthly fees. No minimum spend. Compute costs are minimal (~$0.002/run).
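The example arithmetic above can be sketched as a small helper. The per-event prices come from the pricing table; the function name and shape are illustrative, not part of the actor:

```javascript
// Illustrative run-cost estimator based on the per-result prices above.
const PRICE_SERP = 0.008;   // $ per search-result job
const PRICE_DETAIL = 0.02;  // $ per fully detailed job

function estimateCost(serpResults, detailResults) {
  return serpResults * PRICE_SERP + detailResults * PRICE_DETAIL;
}

console.log(estimateCost(100, 0));  // 100 search results -> $0.80
console.log(estimateCost(0, 50));   // 50 detailed jobs   -> $1.00
console.log(estimateCost(200, 50)); // mixed run          -> $2.60
```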

Input

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| directUrls | array | Direct job URLs to scrape (skips search mode) | [] |
| searchQuery | string | Job title, skills, or keywords to search for | "" |
| location | string | City, country, or region (e.g., "Zurich", "Switzerland", "Germany") | "" |
| datePosted | string | Filter by posting date: any, past-24h, past-week, past-month | any |
| jobType | string | Filter by job type: full-time, part-time, contract, temporary, internship | "" |
| experienceLevel | string | Filter by experience: entry, associate, mid-senior, director, executive | "" |
| remoteFilter | string | Filter by workplace: remote, on-site, hybrid | "" |
| maxResults | integer | Maximum number of job listings to scrape (1-1000) | 100 |
| includeJobDetails | boolean | Fetch full job descriptions from detail pages (slower but richer data) | false |
| proxyConfiguration | object | Apify proxy settings | Residential |

Mode 1: SEARCH MODE (Fast)

Search for jobs using keywords and filters:

  • Use case: Discover new jobs, market analysis, broad searches
  • Speed: Very fast - pure HTTP requests
  • Output: Basic job data from search results + company URL
  • Cost: $0.008 per result

```json
{
  "searchQuery": "Software Engineer",
  "location": "Switzerland",
  "datePosted": "past-week",
  "jobType": "full-time",
  "maxResults": 100
}
```

Mode 2: SEARCH + DETAILS MODE (Complete Data)

Search with includeJobDetails: true to fetch full descriptions:

  • Use case: Complete data collection in one run
  • Speed: Moderate - fetches detail page for each job via HTTP
  • Output: Full descriptions, seniority level, industry, applicant count, company info
  • Cost: $0.020 per result

```json
{
  "searchQuery": "Data Scientist",
  "location": "Zurich",
  "maxResults": 50,
  "includeJobDetails": true
}
```

Mode 3: DIRECT URLS MODE (Still Alive Checks)

When directUrls is provided, the scraper operates in direct mode:

  • Skips search phase - Goes directly to provided job URLs
  • Job status detection - Returns online, offline, expired, or unknown
  • Full data extraction - Same as detail page scraping
  • Use case: Periodic "still alive" checks, re-scraping specific jobs after deduplication
  • Cost: $0.020 per result

```json
{
  "directUrls": [
    "https://www.linkedin.com/jobs/view/3812345678",
    "https://www.linkedin.com/jobs/view/3823456789",
    "https://www.linkedin.com/jobs/view/senior-software-engineer-at-google-3834567890"
  ]
}
```

Filter Options

Date Posted

| Value | Description |
| --- | --- |
| any | All jobs (default) |
| past-24h | Posted in the last 24 hours |
| past-week | Posted in the last 7 days |
| past-month | Posted in the last 30 days |

Job Type

| Value | Description |
| --- | --- |
| full-time | Full-time positions |
| part-time | Part-time positions |
| contract | Contract/freelance work |
| temporary | Temporary positions |
| internship | Internships |

Experience Level

| Value | Description |
| --- | --- |
| entry | Entry level / Junior |
| associate | Associate level |
| mid-senior | Mid-Senior level |
| director | Director level |
| executive | Executive / C-level |

Remote/On-site

| Value | Description |
| --- | --- |
| remote | Fully remote positions |
| on-site | On-site only |
| hybrid | Hybrid work arrangements |

Example Input

Basic Search (IT Jobs in Switzerland)

```json
{
  "searchQuery": "Software Engineer",
  "location": "Switzerland",
  "maxResults": 100
}
```

Recent Remote Jobs

```json
{
  "searchQuery": "Developer",
  "remoteFilter": "remote",
  "datePosted": "past-week",
  "maxResults": 200
}
```

Senior Positions in Zurich

```json
{
  "searchQuery": "Manager",
  "location": "Zurich",
  "experienceLevel": "mid-senior",
  "jobType": "full-time",
  "maxResults": 50
}
```

Full Data Collection

```json
{
  "searchQuery": "Data Engineer",
  "location": "Germany",
  "maxResults": 100,
  "includeJobDetails": true
}
```

Direct URLs - Still Alive Check

```json
{
  "directUrls": [
    "https://www.linkedin.com/jobs/view/3812345678",
    "https://www.linkedin.com/jobs/view/3823456789"
  ]
}
```

Output

Each job listing follows the standardized JobListing schema:

Search Mode Output

```json
{
  "id": "4371481846",
  "title": "Senior Software Engineer, Checkout",
  "company": "GetYourGuide",
  "location": "Zurich, Zurich, Switzerland",
  "canton": "ZH",
  "country": "CH",
  "job_status": "online",
  "top_listing": false,
  "actively_hiring": true,
  "employment_type": null,
  "salary_text": null,
  "posted_at": "2026-03-05T00:00:00.000Z",
  "source_url": "https://www.linkedin.com/jobs/view/4371481846",
  "source_platform": "linkedin",
  "company_url": "https://de.linkedin.com/company/getyourguide-ag",
  "scraped_at": "2026-03-09T08:30:00.000Z"
}
```

Detail Mode Output (additional fields)

```json
{
  "id": "4371481846",
  "title": "Senior Software Engineer, Checkout (Backend Focused)",
  "company": "GetYourGuide",
  "location": "Zurich, Zurich, Switzerland",
  "canton": "ZH",
  "country": "CH",
  "job_status": "online",
  "employment_type": "full-time",
  "description_snippet": "Get ready for an exciting career with GetYourGuide...",
  "description_full": "Full job description with all details...",
  "company_url": "https://de.linkedin.com/company/getyourguide-ag",
  "company_industry": "Technology, Information and Internet",
  "seniority_level": "Mid-Senior level",
  "job_function": "Engineering and Information Technology",
  "applicants": "<25",
  "posted_at": "2026-03-05T00:00:00.000Z",
  "source_url": "https://www.linkedin.com/jobs/view/4371481846",
  "source_platform": "linkedin",
  "scraped_at": "2026-03-09T08:30:00.000Z"
}
```

Output Fields

Core Fields (always available)

| Field | Description |
| --- | --- |
| id | LinkedIn job ID |
| title | Job title |
| company | Company name |
| location | City/location as displayed |
| country | Country code (AT, CH, DE) |
| canton | Swiss canton code (ZH, BE, etc.) - only for Swiss jobs |
| job_status | Job availability: online, offline, expired, unknown |
| top_listing | Whether the job is promoted/sponsored |
| actively_hiring | Whether the company shows an "Actively Hiring" badge |
| posted_at | Publication date (ISO 8601) |
| source_url | Link to the job posting on LinkedIn |
| source_platform | Always linkedin |
| company_url | LinkedIn company page URL (tracking params stripped) |
| scraped_at | Timestamp when the job was scraped |
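The canton field is derived from the location text. The actor's actual detection logic is not documented here; a minimal sketch, assuming a simple name-to-code lookup over the comma-separated location parts (the mapping below is a small illustrative subset, not the actor's code):

```javascript
// Hypothetical sketch of Swiss canton detection from a location string.
// CANTON_BY_NAME is an illustrative subset of city/canton names to codes.
const CANTON_BY_NAME = {
  zurich: 'ZH', 'zürich': 'ZH',
  bern: 'BE',
  geneva: 'GE', 'genève': 'GE',
  basel: 'BS',
  lausanne: 'VD', vaud: 'VD',
};

function detectCanton(location) {
  // Try each comma-separated part, e.g. "Zurich, Zurich, Switzerland".
  for (const part of location.toLowerCase().split(',')) {
    const code = CANTON_BY_NAME[part.trim()];
    if (code) return code;
  }
  return null; // non-Swiss or unrecognized location text
}

console.log(detectCanton('Zurich, Zurich, Switzerland')); // 'ZH' for this sample
```

This also illustrates the limitation noted later: detection quality depends entirely on how clean the location text is.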

Detail Fields (with includeJobDetails: true or Direct URLs mode)

| Field | Description |
| --- | --- |
| employment_type | full-time, part-time, contract, temporary, internship |
| description_snippet | First 500 characters of the description |
| description_full | Complete job description (cleaned text) |
| company_industry | Company industry/sector (e.g., "IT Services and IT Consulting") |
| seniority_level | Seniority level (e.g., "Entry level", "Mid-Senior level", "Director") |
| job_function | Job function (e.g., "Engineering and Information Technology") |
| applicants | Applicant count (e.g., <25, 95, 200+) |
| salary_text | Salary as displayed (if available) |
| apply_url | External application URL |
| remote_option | remote, hybrid, onsite |

Additional Fields

| Field | Description |
| --- | --- |
| workload_min / workload_max | Workload percentage (usually null for LinkedIn) |
| requirements | Array of requirements (usually empty) |
| company_employee_count | Company size range |
| company_website | External company website URL |
| contact_* | Contact person fields (usually null for LinkedIn) |

Applicants Field Format

The applicants field normalizes LinkedIn's various applicant display formats:

| LinkedIn shows | Output |
| --- | --- |
| "Be among the first 25 applicants" | <25 |
| "95 applicants" | 95 |
| "Over 200 applicants" | 200+ |
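The normalization above can be sketched as follows; the regexes mirror LinkedIn's three display variants, though the function itself is illustrative rather than the actor's internal code:

```javascript
// Sketch of the applicants-text normalization shown in the table above.
function normalizeApplicants(text) {
  const first = text.match(/first (\d+) applicants/i);
  if (first) return `<${first[1]}`;       // "Be among the first 25 applicants"
  const over = text.match(/over (\d+) applicants/i);
  if (over) return `${over[1]}+`;         // "Over 200 applicants"
  const exact = text.match(/(\d+) applicants/i);
  if (exact) return exact[1];             // "95 applicants"
  return null;                            // unrecognized format
}
```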

Usage

Via Apify Console

  1. Go to the actor page
  2. Configure input parameters
  3. Click "Start"
  4. Download results from the Dataset tab (JSON, CSV, Excel)

Via API

```bash
curl -X POST "https://api.apify.com/v2/acts/santamaria~linkedin-scraper/runs" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "searchQuery": "Software Engineer",
    "location": "Switzerland",
    "maxResults": 100
  }'
```

Via Apify SDK (Node.js)

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('santamaria-automations/linkedin-scraper').call({
  searchQuery: 'Data Scientist',
  location: 'Zurich',
  datePosted: 'past-week',
  maxResults: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} jobs`);
```

Still Alive Checks via API

```javascript
// Check if previously scraped jobs are still online
const run = await client.actor('santamaria-automations/linkedin-scraper').call({
  directUrls: [
    'https://www.linkedin.com/jobs/view/3812345678',
    'https://www.linkedin.com/jobs/view/3823456789',
  ],
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(job => {
  console.log(`Job ${job.id}: ${job.job_status}`);
  // e.g. "Job 3812345678: online" or "Job 3823456789: expired"
});
```

Performance

| Metric | Search Mode | Detail Mode |
| --- | --- | --- |
| Speed | ~50-100 jobs/min | ~30-40 jobs/min |
| Memory | 128-256 MB | 256-512 MB |
| Compute cost | ~$0.002/run | ~$0.003/run |

  • Rate limiting: 2 seconds between search pages, 1 second between detail pages
  • Retry logic: Automatic retries with exponential backoff
  • Zero browser overhead: Pure HTTP requests with TLS fingerprinting
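The retry behavior described above can be sketched as follows. The base delay, cap, and retry count are illustrative values we chose for the example, not the actor's actual settings:

```javascript
// Sketch of retries with exponential backoff, as described above.
// baseMs/capMs/maxRetries are assumed values for illustration.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 30000) {
  // attempt 0 -> 1s, attempt 1 -> 2s, attempt 2 -> 4s, ... capped at 30s
  return Math.min(capMs, baseMs * 2 ** attempt);
}

async function withRetries(fn, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn(); // e.g. one HTTP request to a search or detail page
    } catch (err) {
      if (attempt >= maxRetries) throw err; // give up after the last retry
      await new Promise(r => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```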

Proxy Recommendations

LinkedIn has anti-bot protection. For best results:

  • Use Residential proxies for large-scale scraping (500+ results)
  • No proxy needed for small runs (<100 results) in many regions
  • Avoid Datacenter proxies (may be blocked)
  • Keep maxResults reasonable (<500 per run)

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

Common Use Cases

  1. Job market analysis - Track hiring trends across industries and locations
  2. Competitive intelligence - Monitor which companies are hiring for specific roles
  3. Job aggregation - Build job search platforms with LinkedIn data
  4. Still-alive monitoring - Check if previously scraped jobs are still active
  5. Lead generation - Find companies that are actively hiring (use actively_hiring field)
  6. Recruitment analytics - Analyze seniority levels, industries, and applicant competition
  7. Salary research - Analyze compensation ranges when displayed

How It Works

Technical Architecture

This actor is 100% HTTP-only - no browser automation needed. It uses:

  1. LinkedIn Guest API - Public API endpoints that don't require authentication
  2. got-scraping - HTTP client with TLS fingerprinting for browser-like requests
  3. Cheerio - Fast HTML parser for extracting structured data

Search Mode (Modes 1 & 2)

  1. Queries LinkedIn's guest search API with your filters
  2. Parses job cards from HTML response (using data-entity-urn for reliable ID extraction)
  3. Optionally fetches each job's detail page via the guest detail API
  4. Validates each job against the schema before saving
  5. Paginates automatically until maxResults is reached
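The pagination in step 5 can be sketched with a stubbed page fetcher. `fetchPage` and `PAGE_SIZE` are assumptions for illustration; the actor's real guest-API calls and page size are internal details:

```javascript
// Sketch of the "paginate until maxResults" loop in the steps above.
// fetchPage(offset) is a stand-in for one guest-API search request that
// resolves to an array of parsed job cards.
const PAGE_SIZE = 25; // assumed results-per-page for this sketch

async function collectJobs(fetchPage, maxResults) {
  const jobs = [];
  for (let offset = 0; jobs.length < maxResults; offset += PAGE_SIZE) {
    const page = await fetchPage(offset);
    if (page.length === 0) break; // no more results available
    jobs.push(...page);
  }
  return jobs.slice(0, maxResults); // trim to the requested cap
}
```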

Direct URLs Mode (Mode 3)

  1. Extracts job ID from each provided URL
  2. Fetches job data via the guest detail API
  3. Detects job status (online, expired, offline)
  4. Returns full job data with status field

Limitations

  • Some job details (salary, requirements) may not always be available on LinkedIn
  • LinkedIn's page structure changes occasionally; we update selectors regularly
  • Rate limiting applies - very large runs (1000+) may take several minutes
  • No login required - only publicly accessible job data is scraped
  • Swiss canton codes are automatically detected but depend on location text quality

This actor scrapes publicly available job listings from LinkedIn's public job search (no login required). It does not access any data behind LinkedIn's authentication wall.

Users are responsible for ensuring their use complies with LinkedIn's Terms of Service and applicable laws in their jurisdiction.

Feedback & Support

Have a feature request, found a bug, or need help? Open an issue — we actively monitor and respond.


Part of the Santamaria Job Scrapers Suite - Professional-grade job data for the DACH region and beyond.

Need help with integration, aggregation, or custom scraping solutions? Contact us at contact@alessandrosantamaria.com