LinkedIn Jobs Scraper
Scrape LinkedIn job listings using the public guest API. Search by keyword, location, job type, experience level, and workplace type. Returns salary, company info, full descriptions, and more. No login or cookies required. 256MB, HTTP-only, fast and cheap.
Developer: Stas Persiianenko
What does LinkedIn Jobs Scraper do?
LinkedIn Jobs Scraper extracts job posting data from LinkedIn using the public guest API. It searches for jobs by keyword and location, then extracts detailed information including salary ranges, company profiles, job descriptions, seniority levels, and applicant counts. No LinkedIn account or cookies needed.
What data can you extract?
| Field | Description |
|---|---|
| Job title | Full position title |
| Company | Company name, LinkedIn URL, and logo |
| Location | Job location (city, state, country) |
| Salary | Salary range when available |
| Workplace type | Remote, On-site, or Hybrid |
| Description | Full job description (HTML and plain text) |
| Seniority level | Entry, Mid-Senior, Director, Executive, etc. |
| Employment type | Full-time, Part-time, Contract, etc. |
| Job function | Engineering, Marketing, Sales, etc. |
| Industries | Industry sectors |
| Applicants count | Number of applicants (as integer) |
| Posted date | When the job was listed |
| Apply URL | Direct application link |
| Benefits | Listed benefits |
| Job URL | Direct LinkedIn job posting link |
Why scrape LinkedIn jobs?
- Job market research -- Analyze hiring trends, in-demand skills, and salary benchmarks across industries and locations
- Recruitment intelligence -- Monitor competitor hiring activity and identify talent market dynamics
- Salary benchmarking -- Compare compensation packages across companies, roles, and regions
- Lead generation -- Identify companies that are actively hiring (growing companies = potential customers)
- Academic research -- Study labor market patterns, job description language, and employment trends
- Career planning -- Track job availability and requirements for specific roles
How to scrape LinkedIn jobs
- Go to the LinkedIn Jobs Scraper page on Apify Store.
- Click Try for free to open the actor configuration.
- Enter a search query (e.g., "software engineer", "data analyst", "marketing manager").
- Enter a location (e.g., "New York", "London", "Remote").
- Optionally filter by job type, experience level, workplace type, or date posted.
- Set the maximum number of jobs you want (up to 1,000).
- Click Start and wait for your data.
- Download results as JSON, CSV, or Excel, or connect via the Apify API.
Input example
```json
{
  "searchQuery": "software engineer",
  "location": "United States",
  "maxJobs": 50,
  "jobType": "F",
  "workplaceType": "2",
  "scrapeJobDetails": true
}
```
Input parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| searchQuery | string | required | Job title, keyword, or company name |
| location | string | "" | City, state, country, or "Remote" |
| maxJobs | integer | 50 | Max job listings to scrape (up to 1,000) |
| jobType | string | "all" | F=Full-time, P=Part-time, C=Contract, T=Temporary, I=Internship |
| experienceLevel | string | "all" | 1=Internship, 2=Entry, 3=Associate, 4=Mid-Senior, 5=Director, 6=Executive |
| workplaceType | string | "all" | 1=On-site, 2=Remote, 3=Hybrid |
| datePosted | string | "all" | r86400=Past 24h, r604800=Past week, r2592000=Past month |
| sortBy | string | "R" | R=Most relevant, DD=Most recent |
| scrapeJobDetails | boolean | true | Fetch full details (description, salary, criteria) |
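The single-letter and numeric codes above are easy to mistype, so it can help to map readable filter names to them before calling the actor. A minimal sketch (the `build_input` helper and its dictionaries are hypothetical, not part of the actor):

```python
# Hypothetical helper: translate friendly filter names into the
# parameter codes listed in the table above.
JOB_TYPES = {"full-time": "F", "part-time": "P", "contract": "C",
             "temporary": "T", "internship": "I"}
WORKPLACE = {"on-site": "1", "remote": "2", "hybrid": "3"}

def build_input(query, location="", max_jobs=50,
                job_type=None, workplace=None, details=True):
    """Build the actor's run input from readable filter names."""
    run_input = {
        "searchQuery": query,
        "location": location,
        "maxJobs": max_jobs,
        "scrapeJobDetails": details,
    }
    if job_type:
        run_input["jobType"] = JOB_TYPES[job_type.lower()]
    if workplace:
        run_input["workplaceType"] = WORKPLACE[workplace.lower()]
    return run_input

print(build_input("software engineer", "United States",
                  job_type="Full-time", workplace="Remote"))
```

The resulting dictionary can be passed directly as the actor's run input.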
Output example
```json
{
  "id": "4370317193",
  "title": "Software Engineer (New Grads) - San Francisco",
  "url": "https://www.linkedin.com/jobs/view/software-engineer-new-grads-san-francisco-at-giga-4370317193",
  "companyName": "Giga",
  "companyLinkedinUrl": "https://www.linkedin.com/company/gigaml",
  "companyLogo": "https://media.licdn.com/dms/image/v2/...",
  "location": "San Francisco, CA",
  "postedAt": "2 weeks ago",
  "salary": "$160,000.00/yr - $250,000.00/yr",
  "applicantsCount": 200,
  "workplaceType": null,
  "descriptionHtml": "<strong>About Giga</strong>...",
  "descriptionText": "About Giga...",
  "seniorityLevel": "Not Applicable",
  "employmentType": "Full-time",
  "jobFunction": "Engineering and Information Technology",
  "industries": "Software Development",
  "applyUrl": "https://www.linkedin.com/jobs/view/...",
  "benefits": null,
  "scrapedAt": "2026-03-17T17:38:24.331Z"
}
```
Filters and search options
Job type filter
Filter results by employment type:
- Full-time (F) -- Standard full-time positions
- Part-time (P) -- Part-time roles
- Contract (C) -- Contract/freelance work
- Temporary (T) -- Temporary positions
- Internship (I) -- Internship opportunities
Experience level filter
Narrow by seniority:
- Internship (1), Entry level (2), Associate (3), Mid-Senior level (4), Director (5), Executive (6)
Workplace type filter
Find jobs matching your work preferences:
- On-site (1) -- In-office positions
- Remote (2) -- Fully remote roles
- Hybrid (3) -- Mix of remote and on-site
Pricing
This actor uses pay-per-event pricing -- 50% cheaper than alternatives:
| Event | Price | Description |
|---|---|---|
| Run started | $0.005 | One-time charge per run |
| Job scraped | $0.0005 | Per job listing extracted |
Cost estimate: Scraping 1,000 job listings costs approximately $0.505 ($0.005 start + $0.50 for listings). Compare to competitors charging $1.00+ per 1,000 results.
Pure HTTP architecture (no browser overhead) means lower platform costs too. Runs on 256MB memory with datacenter proxy.
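The pricing model above is simple enough to estimate ahead of time: one flat charge per run plus a per-listing charge. A minimal sketch (the helper is illustrative; the prices are taken from the table above):

```python
RUN_START = 0.005   # one-time charge per run (USD)
PER_JOB = 0.0005    # per job listing extracted (USD)

def estimate_cost(jobs, runs=1):
    """Estimated USD cost for scraping `jobs` listings across `runs` runs."""
    return runs * RUN_START + jobs * PER_JOB

print(f"${estimate_cost(1000):.3f}")  # 1,000 jobs in one run -> $0.505
```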
Tips for best results
- Use specific keywords -- "react developer" returns more relevant results than "developer"
- Combine filters -- Use job type + experience level + workplace type to narrow results
- Set `scrapeJobDetails: false` for faster runs when you only need listing summaries (title, company, location)
- Sort by date (`sortBy: "DD"`) to find the newest postings first
- LinkedIn caps results at ~1,000 per search -- use more specific queries to get targeted results
Integrations
Connect LinkedIn Jobs Scraper with your tools and workflows:
- Google Sheets -- Export job listings directly to a spreadsheet for tracking and analysis
- Slack -- Get notified when new jobs matching your criteria are found
- Zapier -- Trigger workflows when new job data is available (e.g., alert hiring managers, update CRM)
- Make -- Build automated pipelines: scrape jobs, enrich data, then push to your ATS
- Webhooks -- Send results to your own API endpoint
- Schedule -- Run daily or weekly to monitor new job postings automatically
Programmatic access via API
Use the Apify API to run LinkedIn Jobs Scraper from your code.
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("automation-lab/linkedin-jobs-scraper").call(run_input={
    "searchQuery": "data analyst",
    "location": "New York",
    "maxJobs": 50,
    "scrapeJobDetails": True,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['title']} @ {item['companyName']} -- {item.get('salary', 'N/A')}")
```
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/linkedin-jobs-scraper').call({
    searchQuery: 'data analyst',
    location: 'New York',
    maxJobs: 50,
    scrapeJobDetails: true,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => console.log(`${item.title} @ ${item.companyName} -- ${item.salary}`));
```
cURL
```shell
curl -X POST "https://api.apify.com/v2/acts/automation-lab~linkedin-jobs-scraper/runs?token=YOUR_API_TOKEN&waitForFinish=120" \
  -H "Content-Type: application/json" \
  -d '{"searchQuery": "data analyst", "location": "New York", "maxJobs": 50}'
```
Use with AI agents via MCP
LinkedIn Jobs Scraper is available as a tool for AI assistants that support the Model Context Protocol (MCP).
Setup for Claude Code
```shell
claude mcp add --transport http apify "https://mcp.apify.com"
```
Setup for Claude Desktop, Cursor, or VS Code
Add this to your MCP config file:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}
```
Example prompts
Once connected, try asking your AI assistant:
- "Search LinkedIn for 'software engineer' jobs in San Francisco"
- "Find all remote data scientist jobs posted this week"
- "Get marketing manager positions in London"
Learn more in the Apify MCP documentation.
Legality
Scraping publicly available data is generally legal in the US, per the Ninth Circuit Court of Appeals ruling in hiQ Labs v. LinkedIn. This actor only accesses publicly available information and does not require authentication. Always review and comply with the target website's Terms of Service before scraping. For personal data, ensure compliance with GDPR, CCPA, and other applicable privacy regulations.
FAQ
How much does it cost to scrape LinkedIn jobs?
At $0.0005 per listing, scraping 1,000 jobs costs about $0.51. Apify's free plan includes $5/month of platform credits, so you can scrape approximately 10,000 job listings per month for free.
How fast is the scraper?
Very fast -- it uses pure HTTP requests (no browser), so it can extract 100 jobs with full details in under 60 seconds. Without details (`scrapeJobDetails: false`), 100 listings complete in about 5 seconds.
Does it require a LinkedIn account?
No. This scraper uses LinkedIn's public guest API, which does not require login, cookies, or any LinkedIn account. It accesses the same data visible to anyone browsing LinkedIn jobs without signing in.
Does it extract full job descriptions?
Yes. With `scrapeJobDetails: true` (the default), the scraper fetches the complete job detail page for each listing. This includes the full HTML and plain text description, salary, seniority level, employment type, job function, industries, and applicant count.
What is the maximum number of results?
LinkedIn's guest API caps search results at approximately 1,000 per query. For larger datasets, use multiple searches with different keyword/location/filter combinations.
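One way to work around the ~1,000-result cap is to split a broad search into many narrower keyword/location combinations, as suggested above. A minimal sketch (the role and location lists are illustrative):

```python
from itertools import product

# Split one broad search into many narrower ones to stay under the
# ~1,000-results-per-query cap. Each dict is a separate actor run input.
roles = ["data engineer", "data scientist", "machine learning engineer"]
locations = ["New York", "London", "Berlin"]

search_inputs = [
    {"searchQuery": role, "location": loc, "maxJobs": 1000}
    for role, loc in product(roles, locations)
]
print(len(search_inputs))  # 9 separate runs, up to 9,000 listings total
```

Deduplicate the combined results by job `id` afterwards, since the same posting can appear in more than one search.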
What data formats can I export?
You can download results as JSON, CSV, Excel, XML, or HTML table. You can also access the data via the Apify API or connect it to Google Sheets, Zapier, Make, and other integrations.
Is it legal to scrape LinkedIn?
This actor is provided for educational and research purposes. Users are responsible for ensuring their use complies with LinkedIn's Terms of Service and applicable laws. Always use scraped data responsibly.
Why are some fields null?
Not all LinkedIn listings include salary, workplace type, or benefits information. When employers don't provide this data, the corresponding fields will be null. Use filters in your downstream processing to handle optional fields.
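A small normalization pass in your downstream code makes the optional fields safe to work with. A sketch (the `normalize` helper and its fallback values are illustrative, not part of the actor's output):

```python
# Normalize optional fields before downstream processing, since
# salary, workplaceType, and benefits may be null (None in Python).
def normalize(item):
    return {
        "title": item["title"],
        "salary": item.get("salary") or "not disclosed",
        "workplace": item.get("workplaceType") or "unspecified",
        "benefits": item.get("benefits") or [],
    }

job = {"title": "Data Analyst", "salary": None,
       "workplaceType": None, "benefits": None}
print(normalize(job))
```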
Can I scrape LinkedIn jobs on a schedule?
Yes. Use Apify Schedules to run the scraper daily or weekly. Combine with Google Sheets to build a live job tracking dashboard, or use Slack notifications to get alerted when new jobs match your criteria.
How does this compare to other LinkedIn scrapers?
This scraper is 50% cheaper ($0.0005 vs $0.001/job), uses structured input (keyword + location + filters instead of raw URLs), extracts workplace type (Remote/On-site/Hybrid), and runs on 256MB with datacenter proxy (competitors need 4GB + residential proxy).
Technical details
- Architecture: Pure HTTP requests with cheerio HTML parsing -- no browser, no Playwright, no Puppeteer
- Memory: 256MB (minimum Apify allocation)
- Proxy: Works with datacenter proxy (no residential proxy needed)
- Rate limiting: Built-in delays between requests to avoid LinkedIn rate limits
- Retry logic: Configurable retry attempts with exponential backoff
- Pagination: Automatically paginates through search results (25 per page)
- Deduplication: Tracks seen job IDs to avoid duplicate results
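The retry behavior listed above follows the standard exponential-backoff pattern. A generic sketch of that pattern (the delay constants are illustrative, not the actor's actual settings):

```python
import random
import time

# Retry with exponential backoff and jitter: wait roughly
# base_delay, 2*base_delay, 4*base_delay, ... between attempts.
def fetch_with_retries(fetch, max_attempts=4, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            delay = base_delay * 2 ** attempt + random.uniform(0, base_delay)
            time.sleep(delay)
```

The jitter term spreads retries out so many parallel workers don't hammer the server in lockstep.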
Limitations
- LinkedIn caps guest API search results at approximately 1,000 per query
- Some job listings may not include salary, workplace type, or benefits data
- LinkedIn occasionally rotates HTML structure -- if extraction breaks, please report an issue
- The guest API does not support scraping company detail pages (employee count, revenue, etc.)
- Very high-volume scraping may trigger temporary rate limits from LinkedIn
How do I scrape LinkedIn job postings without a LinkedIn account?
LinkedIn Jobs Scraper uses LinkedIn's public guest API — the same data served to unauthenticated users who browse LinkedIn jobs without signing in. No LinkedIn account, cookies, or session tokens are required. You simply specify a search query and location, and the scraper returns structured job data including titles, companies, locations, salaries, descriptions, and applicant counts.
This approach is not only more convenient but also more stable than session-based scrapers, which break whenever LinkedIn rotates cookies or changes its authentication flow.
How can I use LinkedIn job data for salary benchmarking?
LinkedIn is one of the richest public sources of salary data because many job postings now include explicit salary ranges (driven by pay transparency laws in states like California, New York, and Colorado). To build a salary benchmark:
- Scrape job listings for a target role (e.g., "data scientist") across multiple locations.
- Filter results where `salary` is not null.
- Parse the salary range strings (e.g., `"$120,000/yr - $160,000/yr"`) into min/max figures.
- Group by `location` or `industries` to build a regional or sector-level salary distribution.
This gives you a data-driven benchmark updated in near real-time — far more current than annual surveys from Glassdoor or Levels.fyi. Recruiters use this to set competitive offer ranges; candidates use it to evaluate offers and negotiate.
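The parsing step above can be sketched in a few lines. This assumes the USD yearly format shown in the output example (`"$160,000.00/yr - $250,000.00/yr"`); other currencies or hourly rates would need extra handling:

```python
import re
from statistics import median

# Parse a salary range string into (min, max) dollar figures.
def parse_salary(s):
    nums = [float(n.replace(",", ""))
            for n in re.findall(r"\$([\d,]+(?:\.\d+)?)", s)]
    return (min(nums), max(nums)) if nums else None

salaries = ["$120,000/yr - $160,000/yr", "$160,000.00/yr - $250,000.00/yr"]
midpoints = [(lo + hi) / 2 for lo, hi in map(parse_salary, salaries)]
print(median(midpoints))  # median of the range midpoints
```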
How do I monitor competitor hiring activity on LinkedIn?
A competitor posting 20 software engineer roles in a single month is a strong signal they are expanding their engineering team — and likely their product. Monitoring competitor hiring is a lightweight competitive intelligence technique that requires no insider information.
To track competitor hiring:
- Run LinkedIn Jobs Scraper with the competitor's company name as the `searchQuery`.
- Filter results by `companyName` to exclude unrelated listings.
- Schedule daily or weekly runs and compare job counts over time.
- Look for patterns: are they hiring primarily in sales (revenue push), engineering (product build), or customer success (retention focus)?
By tracking which roles a competitor is filling — and at what seniority level — you can infer their strategic priorities months before any public announcement.
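Comparing two scheduled runs can be as simple as diffing per-function job counts. A sketch (the `hiring_delta` helper and sample data are illustrative; it consumes the `jobFunction` field from the actor's output):

```python
from collections import Counter

# Given dataset items from two scheduled runs, compare per-function
# job counts to spot where a competitor is expanding.
def hiring_delta(last_week, this_week):
    before = Counter(job.get("jobFunction") for job in last_week)
    after = Counter(job.get("jobFunction") for job in this_week)
    return {fn: after[fn] - before[fn] for fn in set(before) | set(after)}

last_week = [{"jobFunction": "Engineering"}] * 5 + [{"jobFunction": "Sales"}] * 2
this_week = [{"jobFunction": "Engineering"}] * 12 + [{"jobFunction": "Sales"}] * 2
print(hiring_delta(last_week, this_week))  # Engineering up by 7
```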
How do I find remote job opportunities in a specific field on LinkedIn?
To search exclusively for remote jobs in a given field, set `workplaceType: "2"` (Remote) alongside your keyword. For example, to find remote software engineering roles:
```json
{
  "searchQuery": "software engineer",
  "workplaceType": "2",
  "jobType": "F",
  "maxJobs": 100,
  "sortBy": "DD"
}
```
Setting `sortBy: "DD"` (most recent) ensures you see freshly posted openings first. You can further narrow results with `experienceLevel` (e.g., `"2"` for entry-level, `"4"` for mid-senior) and `datePosted: "r604800"` to see only jobs posted in the past week. Export to CSV for easy filtering in a spreadsheet, or use the Google Sheets integration to build a live job tracker that refreshes automatically.
How can businesses use LinkedIn job scraping for lead generation?
Companies that are actively hiring in a specific function are often high-value sales prospects. A company posting 10 "data engineer" roles is likely investing heavily in their data infrastructure — which means they may need data tools, cloud services, or consulting. This "hiring signals" approach to lead generation is used by B2B sales teams to find companies at the right stage of growth.
To build a leads list from hiring signals:
- Scrape jobs for a role that signals your target buyer (e.g., "Head of Data", "VP Marketing", "Salesforce Administrator").
- Extract `companyName` and `companyLinkedinUrl` from results.
- Deduplicate by company and sort by `applicantsCount` (higher applicants = more competitive hiring = faster-growing company).
- Enrich with Email Finder or Website Contact Finder to get contact details.
This produces a warm, intent-based prospect list of companies actively investing in the area your product serves.
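The deduplicate-and-sort step above can be sketched directly against the actor's output fields (`companyName`, `companyLinkedinUrl`, `applicantsCount`). The `build_leads` helper and sample data are illustrative:

```python
# Turn scraped job items into a deduplicated leads list, sorted by
# total applicant count as a rough proxy for hiring momentum.
def build_leads(items):
    leads = {}
    for job in items:
        name = job.get("companyName")
        if not name:
            continue
        lead = leads.setdefault(name, {
            "company": name,
            "linkedinUrl": job.get("companyLinkedinUrl"),
            "openRoles": 0,
            "applicants": 0,
        })
        lead["openRoles"] += 1
        lead["applicants"] += job.get("applicantsCount") or 0
    return sorted(leads.values(), key=lambda l: l["applicants"], reverse=True)

jobs = [
    {"companyName": "Giga", "companyLinkedinUrl": "https://www.linkedin.com/company/gigaml", "applicantsCount": 200},
    {"companyName": "Acme", "companyLinkedinUrl": None, "applicantsCount": 40},
    {"companyName": "Giga", "companyLinkedinUrl": "https://www.linkedin.com/company/gigaml", "applicantsCount": 150},
]
print(build_leads(jobs)[0]["company"])  # Giga: 2 roles, 350 applicants
```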
Other job scrapers and lead generation tools
- Indeed Scraper -- Scrape job listings from Indeed with salary, company info, and descriptions
- Google Maps Lead Finder -- Find businesses and leads on Google Maps
- Email Finder -- Find email addresses for any company or domain
- Email Enrichment -- Enrich and verify email addresses in bulk
- Website Contact Finder -- Extract contact details from company websites