
Lever ATS Job Scraper

Pricing

$1.00 / 1,000 jobs scraped

Scrape job listings from any company using Lever ATS. Extract titles, descriptions, departments, locations, salary ranges, workplace types, and application links from 5,000+ Lever-powered career pages. Fast JSON API with filtering, deduplication, and bulk multi-company scraping. Get started free.


Developer

Vnx0

Maintained by Community


Extract job listings from any company using Lever ATS — titles, descriptions, departments, teams, locations, salary ranges, workplace types, commitment, structured requirements, and application links. Scrape thousands of Lever-powered career pages in minutes with structured, ready-to-use data.

What is Lever ATS Job Scraper?

Lever ATS Job Scraper is an Apify actor that extracts structured job listing data from companies that use Lever as their applicant tracking system. Over 5,000 companies — including Spotify, Netflix, Shopify, and Stripe — use Lever to power their career pages.

Simply provide a company's Lever career page URL (like https://jobs.lever.co/spotify) or their company slug, and get back every open job listing with full metadata. Scrape multiple companies in a single run, filter by department, team, location, or commitment, and export results to JSON, CSV, or connect to 1,500+ apps via Apify integrations.
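For example, a single run covering several companies with filters applied could use an input along these lines (the key names here are illustrative — check the actor's Input tab for the exact schema):

```json
{
  "startUrls": [{ "url": "https://jobs.lever.co/spotify" }],
  "companySlugs": ["shopify", "coinbase"],
  "department": "Engineering",
  "workplaceType": "remote",
  "maxItemsPerCompany": 0,
  "deduplicate": true
}
```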

Why scrape Lever job listings?

  • Recruitment research and talent sourcing — discover open positions across multiple companies, track hiring trends, and identify which teams are actively growing
  • Competitor job posting analysis — monitor competitor hiring patterns, salary ranges, and required skills to benchmark your own talent strategy
  • Job market intelligence — build dashboards of job market data by department, location, or commitment type across hundreds of companies
  • Job board aggregation — pull listings from thousands of Lever-powered career pages to build a job search engine or aggregator
  • Skill demand tracking — extract requirements and qualifications from job descriptions to analyze which skills are trending in the market
  • Lead generation for recruiters — identify companies actively hiring in specific roles or technologies and reach out at the right time

Features

  • 25+ data fields per job — title, description, department, team, location, salary range, workplace type, commitment, structured requirements, application links, and more
  • Server-side filters — filter by department, team, location, commitment, workplace type (remote/hybrid/onsite), or keyword search
  • Multi-company scraping — scrape dozens of companies in a single run by providing multiple URLs or slugs
  • Deduplication — only fetch new or updated jobs across runs, avoiding duplicate data in your pipeline
  • 4 description formats — choose between plain text, HTML, both, or skip descriptions entirely to save storage
  • Salary range extraction — capture salary data when companies include it (currency, interval, min, max)
  • Structured lists — get requirements, responsibilities, and benefits as structured data, not just blobs of text
  • Minimal compute costs — optimized extraction that keeps your Apify compute charges extremely low
  • Zero configuration — works out of the box with no special setup required
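The deduplication feature boils down to keeping a persistent set of already-seen job IDs between runs. Here is a minimal sketch of the technique in Python — an illustration only, not the actor's actual implementation (the actor persists state on the Apify platform; the local JSON file below is a stand-in):

```python
import json
from pathlib import Path

STATE_FILE = Path("seen_job_ids.json")  # stand-in for persistent key-value storage

def load_seen() -> set:
    """Load the set of job IDs remembered from previous runs."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def dedupe(jobs: list, seen: set) -> list:
    """Return only jobs whose IDs have not been scraped before, updating `seen`."""
    fresh = [job for job in jobs if job["id"] not in seen]
    seen.update(job["id"] for job in fresh)
    return fresh

def save_seen(seen: set) -> None:
    """Persist the updated ID set for the next run."""
    STATE_FILE.write_text(json.dumps(sorted(seen)))

# First run pushes both jobs; a repeat run with the same data pushes none.
jobs = [{"id": "a1", "title": "Backend Engineer"}, {"id": "b2", "title": "Designer"}]
seen = set()
print(len(dedupe(jobs, seen)))  # 2
print(len(dedupe(jobs, seen)))  # 0
```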

What data does this actor extract?

| Field | Type | Description |
|---|---|---|
| Job ID | String | Unique posting identifier (UUID) |
| Job title | String | Position title (e.g., "Senior Backend Engineer") |
| Department | String | Department name (e.g., "Engineering") |
| Team | String | Team name (e.g., "Platform", "Backend") |
| Location | String | Primary job location |
| All locations | Array | Multiple applicable locations |
| Commitment | String | Employment type (Full-time, Part-time, Intern, Contract) |
| Workplace type | String | Remote, hybrid, onsite, or unspecified |
| Country | String | ISO 3166-1 alpha-2 country code |
| Salary range | Object | Currency, interval, min, and max salary (when available) |
| Description | String | Full job description (plain text or HTML) |
| Structured lists | Array | Requirements, responsibilities, and benefits as structured sections |
| Additional info | String | Benefits, perks, and extra details |
| Job URL | String | Full URL to the job listing page |
| Apply URL | String | Direct link to the application form |
| Posted date | String | When the job was first posted (ISO 8601) |
| Company slug | String | Lever company identifier |

How to use

  1. Add your URLs — paste Lever career page URLs (e.g., https://jobs.lever.co/spotify) or just provide company slugs (e.g., spotify, shopify, coinbase)
  2. Configure filters — optionally filter by department, team, location, commitment, or workplace type
  3. Set your preferences — choose description format, enable deduplication, set max items per company
  4. Run the actor — results are saved to the dataset and can be exported as JSON, CSV, or Excel
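Because the dataset export is ordinary JSON, post-processing is straightforward. A small sketch that filters an export for remote engineering roles, using the field names from the sample output in the Output section (load your real export with `json.loads` in place of the inline sample):

```python
def remote_engineering(jobs):
    """Keep only remote roles in the Engineering department."""
    return [
        job for job in jobs
        if job.get("workplaceType") == "remote"
        and job.get("department") == "Engineering"
    ]

# Inline sample standing in for a downloaded dataset export
jobs = [
    {"title": "Senior Backend Engineer", "department": "Engineering", "workplaceType": "remote"},
    {"title": "Recruiter", "department": "People", "workplaceType": "remote"},
    {"title": "Platform Engineer", "department": "Engineering", "workplaceType": "hybrid"},
]
for job in remote_engineering(jobs):
    print(job["title"])  # Senior Backend Engineer
```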

Supported input formats

| Format | Example |
|---|---|
| Full career page URL | https://jobs.lever.co/spotify |
| Career page URL with trailing slash | https://jobs.lever.co/spotify/ |
| Company slug | spotify |
| Multiple URLs | ["https://jobs.lever.co/spotify", "https://jobs.lever.co/shopify"] |
| Multiple slugs | ["spotify", "shopify", "coinbase", "stripe"] |

Input

See the Input tab for full configuration options. Key settings:

| Setting | Description | Default |
|---|---|---|
| Start URLs | Lever career page URLs to scrape | Required |
| Company slugs | Alternative: just provide slugs | [] |
| Department | Filter by department | All |
| Team | Filter by team | All |
| Location | Filter by location | All |
| Commitment | Filter by commitment type | All |
| Workplace type | Remote, hybrid, or onsite | All |
| Keyword | Search in title or description | None |
| Max items per company | Limit results per company (0 = all) | 0 |
| Description format | Plain text, HTML, both, or none | Plain text |
| Deduplicate | Skip already-scraped jobs | Enabled |
| Structured lists | Include requirements/responsibilities lists | Enabled |
| Proxy | Optional connection configuration | Disabled |

Output

Each job listing is pushed to the dataset as a structured JSON object. Download the dataset in JSON, CSV, or Excel format, or connect to 1,500+ apps via Apify integrations.

Sample output

```json
{
  "id": "1ff4a4e3-897c-4eab-9ee2-aa7d1d07a9d6",
  "companySlug": "spotify",
  "title": "Senior Backend Engineer",
  "url": "https://jobs.lever.co/spotify/1ff4a4e3-...",
  "applyUrl": "https://jobs.lever.co/spotify/1ff4a4e3-.../apply",
  "department": "Engineering",
  "team": "Backend",
  "location": "Stockholm",
  "allLocations": ["Stockholm", "London"],
  "commitment": "Full-time",
  "workplaceType": "hybrid",
  "country": "SE",
  "createdAt": "2025-08-04T17:58:37.000Z",
  "description": "We are looking for a Senior Backend Engineer...",
  "lists": [
    {
      "text": "What You'll Do",
      "content": "<li>Design and build scalable services</li><li>Collaborate with cross-functional teams</li>"
    },
    {
      "text": "Who You Are",
      "content": "<li>5+ years of backend experience</li><li>Proficient in Python or Java</li>"
    }
  ],
  "additional": "We offer competitive salary, equity, and benefits...",
  "salaryRange": {
    "currency": "SEK",
    "interval": "yearly",
    "min": 650000,
    "max": 900000
  }
}
```
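Note that `salaryRange` is only present when the company publishes it, so downstream code should treat it as optional. A small helper that formats the range defensively, following the sample schema above (illustrative code, not part of the actor):

```python
def format_salary(job: dict) -> str:
    """Render a human-readable salary line, tolerating a missing salaryRange."""
    sr = job.get("salaryRange")
    if not sr:
        return "Salary not disclosed"
    return f"{sr['currency']} {sr['min']:,}-{sr['max']:,} ({sr['interval']})"

job = {"salaryRange": {"currency": "SEK", "interval": "yearly", "min": 650000, "max": 900000}}
print(format_salary(job))  # SEK 650,000-900,000 (yearly)
print(format_salary({}))   # Salary not disclosed
```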

How much does it cost?

This actor uses Pay Per Event pricing at $0.001 per job scraped ($1 per 1,000 jobs). You only pay for what you extract — Apify compute costs are billed separately.

| Jobs scraped | Cost |
|---|---|
| 100 | $0.10 |
| 1,000 | $1.00 |
| 10,000 | $10.00 |
| 100,000 | $100.00 |

New Apify accounts get $5 free credit to start. Compute costs are minimal thanks to optimized extraction — most runs cost under $0.01 in compute.

Integrations

Connect your scraped job data to 1,500+ apps:

| Integration | Use Case |
|---|---|
| Make (Integromat) | Automate job posting workflows and notifications |
| Zapier | Trigger actions when new jobs are posted |
| Slack | Send new job alerts to recruiting channels |
| Airbyte | Sync job data to your data warehouse |
| Google Sheets | Build live job tracking spreadsheets |
| GitHub | Store job data in repositories |
| Webhooks | Push real-time job updates to your API |
| Email | Schedule daily or weekly job digest emails |

Tips for best results

  • Use company slugs instead of full URLs — faster to set up and less prone to formatting errors
  • Combine filters for precision — use department + location together to find exactly the roles you need
  • Enable deduplication for scheduled runs — set up recurring runs and only get new or changed job listings
  • Use "none" description format for metadata-only — saves bandwidth and storage when you only need job titles, locations, and links
  • Scrape multiple companies at once — paste up to 50 URLs in a single run for efficient bulk job market research
  • Use keyword filter for targeted search — search for "engineer", "remote", "python" across all companies simultaneously

Use cases

For recruiters and hiring managers

Track competitor hiring patterns, discover which companies are actively growing specific teams, and identify talent pools by scraping job listings across your industry. Get structured data on open positions — titles, requirements, salary ranges, and application links — delivered directly to your preferred tool.

For job board operators

Aggregate job listings from thousands of Lever-powered career pages to build a comprehensive job search engine or niche job board. With multi-company support and deduplication, keeping your board fresh across recurring runs is effortless.

For data scientists and researchers

Collect structured job market data across companies, industries, and geographies to analyze hiring trends, salary benchmarks, and skill demand over time. Export to JSON or CSV and plug directly into your analysis pipeline.

For sales and business development

Identify companies actively hiring in roles related to your product or service — a strong signal of budget and need. Monitor target accounts for new openings and time your outreach perfectly.

FAQ

What is Lever ATS?

Lever is an applicant tracking system (ATS) used by over 5,000 companies to manage their hiring process. Companies using Lever host their career pages on jobs.lever.co. This actor extracts job listings from any Lever-powered career page.

Do I need a Lever account to use this actor?

No. You don't need any accounts, credentials, or special access. Just provide a company's career page URL or slug.

Can I scrape private or internal job postings?

No. This actor only extracts publicly available job listings — the same jobs visible to anyone visiting the company's career page.

How many companies can I scrape in one run?

There is no hard limit. You can provide multiple URLs or company slugs, and each company's jobs are retrieved efficiently. It's designed to handle dozens of companies in a single run.

How do I find a company's Lever slug?

Visit their careers page. If the URL is https://jobs.lever.co/spotify, the slug is spotify. Some companies use custom domains (like careers.company.com), but you can find their Lever slug by searching for "lever" in the page source.
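That page-source search can be scripted. A rough sketch that scans HTML for a jobs.lever.co link and pulls out the slug (the regex is a heuristic, not a guarantee — some pages embed Lever references in other ways):

```python
import re

def find_lever_slug(html: str):
    """Return the first Lever company slug referenced in a page's HTML, or None."""
    match = re.search(r"jobs\.lever\.co/([A-Za-z0-9_-]+)", html)
    return match.group(1) if match else None

html = '<a href="https://jobs.lever.co/spotify/1ff4a4e3">Open roles</a>'
print(find_lever_slug(html))  # spotify
```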

What does deduplication do?

When enabled, the actor remembers which job IDs it has already scraped using persistent storage. On subsequent runs, previously scraped jobs are skipped — only new or changed listings are pushed to the dataset. This is useful for scheduled/recurring runs.

Can I filter for remote-only jobs?

Yes. Set the workplace type filter to "remote" to get only remote job listings. You can also combine it with location, department, or keyword filters.

What happens if a company slug doesn't exist?

The actor logs a warning and skips to the next company. It never crashes on invalid inputs — partial results are always returned.

Support

If you encounter any issues or have feature requests, please visit the Issues tab and report the problem. We actively monitor and respond to all issues.