Lever ATS Job Scraper
Pricing
$1.00 / 1,000 jobs scraped
Scrape job listings from any company using Lever ATS. Extract titles, descriptions, departments, locations, salary ranges, workplace types, and application links from 5,000+ Lever-powered career pages. Fast JSON API with filtering, deduplication, and bulk multi-company scraping. Get started free.
Developer
Vnx0
Extract job listings from any company using Lever ATS — titles, descriptions, departments, teams, locations, salary ranges, workplace types, commitment, structured requirements, and application links. Scrape thousands of Lever-powered career pages in minutes with structured, ready-to-use data.
What is Lever ATS Job Scraper?
Lever ATS Job Scraper is an Apify actor that extracts structured job listing data from companies that use Lever as their applicant tracking system. Over 5,000 companies — including Spotify, Netflix, Shopify, and Stripe — use Lever to power their career pages.
Simply provide a company's Lever career page URL (like https://jobs.lever.co/spotify) or their company slug, and get back every open job listing with full metadata. Scrape multiple companies in a single run, filter by department, team, location, or commitment, and export results to JSON, CSV, or connect to 1,500+ apps via Apify integrations.
Why scrape Lever job listings?
- Recruitment research and talent sourcing — discover open positions across multiple companies, track hiring trends, and identify which teams are actively growing
- Competitor job posting analysis — monitor competitor hiring patterns, salary ranges, and required skills to benchmark your own talent strategy
- Job market intelligence — build dashboards of job market data by department, location, or commitment type across hundreds of companies
- Job board aggregation — pull listings from thousands of Lever-powered career pages to build a job search engine or aggregator
- Skill demand tracking — extract requirements and qualifications from job descriptions to analyze which skills are trending in the market
- Lead generation for recruiters — identify companies actively hiring in specific roles or technologies and reach out at the right time
Features
- 25+ data fields per job — title, description, department, team, location, salary range, workplace type, commitment, structured requirements, application links, and more
- Server-side filters — narrow results by department, team, location, commitment, workplace type (remote/hybrid/onsite), or keyword search before they reach your dataset
- Multi-company scraping — scrape dozens of companies in a single run by providing multiple URLs or slugs
- Deduplication — only fetch new or updated jobs across runs, avoiding duplicate data in your pipeline
- 4 description formats — choose between plain text, HTML, both, or skip descriptions entirely to save storage
- Salary range extraction — capture salary data when companies include it (currency, interval, min, max)
- Structured lists — get requirements, responsibilities, and benefits as structured data, not just blobs of text
- Minimal compute costs — optimized extraction that keeps your Apify compute charges extremely low
- Zero configuration — works out of the box with no special setup required
What data does this actor extract?
| Field | Type | Description |
|---|---|---|
| Job ID | String | Unique posting identifier (UUID) |
| Job title | String | Position title (e.g., "Senior Backend Engineer") |
| Department | String | Department name (e.g., "Engineering") |
| Team | String | Team name (e.g., "Platform", "Backend") |
| Location | String | Primary job location |
| All locations | Array | Multiple applicable locations |
| Commitment | String | Employment type (Full-time, Part-time, Intern, Contract) |
| Workplace type | String | Remote, hybrid, onsite, or unspecified |
| Country | String | ISO 3166-1 alpha-2 country code |
| Salary range | Object | Currency, interval, min, and max salary (when available) |
| Description | String | Full job description (plain text or HTML) |
| Structured lists | Array | Requirements, responsibilities, and benefits as structured sections |
| Additional info | String | Benefits, perks, and extra details |
| Job URL | String | Full URL to the job listing page |
| Apply URL | String | Direct link to the application form |
| Posted date | String | When the job was first posted (ISO 8601) |
| Company slug | String | Lever company identifier |
How to use
- Add your URLs — paste Lever career page URLs (e.g., https://jobs.lever.co/spotify) or just provide company slugs (e.g., spotify, shopify, coinbase)
- Configure filters — optionally filter by department, team, location, commitment, or workplace type
- Set your preferences — choose description format, enable deduplication, set max items per company
- Run the actor — results are saved to the dataset and can be exported as JSON, CSV, or Excel
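The steps above come together in a run input like the following sketch. Field names here are inferred from the Input table in this README — check the actor's Input tab for the exact schema before relying on them:

```json
{
  "startUrls": [{ "url": "https://jobs.lever.co/spotify" }],
  "companySlugs": ["shopify", "coinbase"],
  "workplaceType": "remote",
  "keyword": "engineer",
  "maxItemsPerCompany": 0,
  "descriptionFormat": "text",
  "deduplicate": true
}
```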
Supported input formats
| Format | Example |
|---|---|
| Full career page URL | https://jobs.lever.co/spotify |
| Career page URL with trailing slash | https://jobs.lever.co/spotify/ |
| Company slug | spotify |
| Multiple URLs | ["https://jobs.lever.co/spotify", "https://jobs.lever.co/shopify"] |
| Multiple slugs | ["spotify", "shopify", "coinbase", "stripe"] |
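All of the accepted formats above reduce to a company slug. If you pre-process inputs in your own pipeline before passing them to the actor, a small normalizer can accept any of them — this is an illustrative helper, not part of the actor itself:

```python
from urllib.parse import urlparse

def to_slug(entry: str) -> str:
    """Normalize a Lever career page URL or bare slug to a company slug."""
    entry = entry.strip()
    if entry.startswith(("http://", "https://")):
        # URL path looks like /spotify or /spotify/ — keep the first segment.
        return urlparse(entry).path.strip("/").split("/")[0]
    return entry

print([to_slug(e) for e in [
    "https://jobs.lever.co/spotify",
    "https://jobs.lever.co/spotify/",
    "shopify",
]])  # → ['spotify', 'spotify', 'shopify']
```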
Input
See the Input tab for full configuration options. Key settings:
| Setting | Description | Default |
|---|---|---|
| Start URLs | Lever career page URLs to scrape | Required |
| Company slugs | Alternative: just provide slugs | [] |
| Department | Filter by department | All |
| Team | Filter by team | All |
| Location | Filter by location | All |
| Commitment | Filter by commitment type | All |
| Workplace type | Remote, hybrid, or onsite | All |
| Keyword | Search in title or description | None |
| Max items per company | Limit results per company (0 = all) | 0 |
| Description format | Plain text, HTML, both, or none | Plain text |
| Deduplicate | Skip already-scraped jobs | Enabled |
| Structured lists | Include requirements/responsibilities lists | Enabled |
| Proxy | Optional connection configuration | Disabled |
Output
Each job listing is pushed to the dataset as a structured JSON object. Download the dataset in JSON, CSV, or Excel format, or connect to 1,500+ apps via Apify integrations.
Sample output
```json
{
  "id": "1ff4a4e3-897c-4eab-9ee2-aa7d1d07a9d6",
  "companySlug": "spotify",
  "title": "Senior Backend Engineer",
  "url": "https://jobs.lever.co/spotify/1ff4a4e3-...",
  "applyUrl": "https://jobs.lever.co/spotify/1ff4a4e3-.../apply",
  "department": "Engineering",
  "team": "Backend",
  "location": "Stockholm",
  "allLocations": ["Stockholm", "London"],
  "commitment": "Full-time",
  "workplaceType": "hybrid",
  "country": "SE",
  "createdAt": "2025-08-04T17:58:37.000Z",
  "description": "We are looking for a Senior Backend Engineer...",
  "lists": [
    {
      "text": "What You'll Do",
      "content": "<li>Design and build scalable services</li><li>Collaborate with cross-functional teams</li>"
    },
    {
      "text": "Who You Are",
      "content": "<li>5+ years of backend experience</li><li>Proficient in Python or Java</li>"
    }
  ],
  "additional": "We offer competitive salary, equity, and benefits...",
  "salaryRange": {
    "currency": "SEK",
    "interval": "yearly",
    "min": 650000,
    "max": 900000
  }
}
```
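Once downloaded, dataset items are plain JSON objects, so fields read directly. A minimal sketch summarizing the salary range from a record shaped like the sample above (note that salaryRange is only present when the company publishes it):

```python
import json

record = json.loads("""{"title": "Senior Backend Engineer", "workplaceType": "hybrid",
  "salaryRange": {"currency": "SEK", "interval": "yearly", "min": 650000, "max": 900000}}""")

salary = record.get("salaryRange")
if salary:
    summary = f"{salary['min']:,}-{salary['max']:,} {salary['currency']} ({salary['interval']})"
else:
    summary = "not disclosed"
print(f"{record['title']}: {summary}")  # → Senior Backend Engineer: 650,000-900,000 SEK (yearly)
```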
How much does it cost?
This actor uses Pay Per Event pricing at $0.001 per job scraped ($1 per 1,000 jobs). You only pay for what you extract — Apify compute costs are billed separately.
| Jobs scraped | Cost |
|---|---|
| 100 | $0.10 |
| 1,000 | $1.00 |
| 10,000 | $10.00 |
| 100,000 | $100.00 |
New Apify accounts get $5 free credit to start. Compute costs are minimal thanks to optimized extraction — most runs cost under $0.01 in compute.
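Since pricing is linear, estimating the event cost of a planned run is simple arithmetic (compute charges, billed separately, are excluded here):

```python
PRICE_PER_JOB_USD = 0.001  # $1.00 per 1,000 jobs scraped

def event_cost_usd(jobs_scraped: int) -> float:
    """Pay-per-event cost for a run, excluding Apify compute charges."""
    return round(jobs_scraped * PRICE_PER_JOB_USD, 2)

print(event_cost_usd(100), event_cost_usd(10_000))  # matches the table above
```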
Integrations
Connect your scraped job data to 1,500+ apps:
| Integration | Use Case |
|---|---|
| Make (Integromat) | Automate job posting workflows and notifications |
| Zapier | Trigger actions when new jobs are posted |
| Slack | Send new job alerts to recruiting channels |
| Airbyte | Sync job data to your data warehouse |
| Google Sheets | Build live job tracking spreadsheets |
| GitHub | Store job data in repositories |
| Webhooks | Push real-time job updates to your API |
| Email | Schedule daily or weekly job digest emails |
Tips for best results
- Use company slugs instead of full URLs — faster to set up and less prone to formatting errors
- Combine filters for precision — use department + location together to find exactly the roles you need
- Enable deduplication for scheduled runs — set up recurring runs and only get new or changed job listings
- Use "none" description format for metadata-only — saves bandwidth and storage when you only need job titles, locations, and links
- Scrape multiple companies at once — paste up to 50 URLs in a single run for efficient bulk job market research
- Use keyword filter for targeted search — search for "engineer", "remote", "python" across all companies simultaneously
Use cases
For recruiters and hiring managers
Track competitor hiring patterns, discover which companies are actively growing specific teams, and identify talent pools by scraping job listings across your industry. Get structured data on open positions — titles, requirements, salary ranges, and application links — delivered directly to your preferred tool.
For job board operators
Aggregate job listings from thousands of Lever-powered career pages to build a comprehensive job search engine or niche job board. With multi-company support and deduplication, keeping your board fresh across recurring runs is effortless.
For data scientists and researchers
Collect structured job market data across companies, industries, and geographies to analyze hiring trends, salary benchmarks, and skill demand over time. Export to JSON or CSV and plug directly into your analysis pipeline.
For sales and business development
Identify companies actively hiring in roles related to your product or service — a strong signal of budget and need. Monitor target accounts for new openings and time your outreach perfectly.
FAQ
What is Lever ATS?
Lever is an applicant tracking system (ATS) used by over 5,000 companies to manage their hiring process. Companies using Lever host their career pages on jobs.lever.co. This actor extracts job listings from any Lever-powered career page.
Do I need a Lever account to use this actor?
No. You don't need any accounts, credentials, or special access. Just provide a company's career page URL or slug.
Can I scrape private or internal job postings?
No. This actor only extracts publicly available job listings — the same jobs visible to anyone visiting the company's career page.
How many companies can I scrape in one run?
There is no hard limit. You can provide multiple URLs or company slugs, and each company's jobs are retrieved efficiently. It's designed to handle dozens of companies in a single run.
How do I find a company's Lever slug?
Visit their careers page. If the URL is https://jobs.lever.co/spotify, the slug is spotify. Some companies use custom domains (like careers.company.com), but you can find their Lever slug by searching for "lever" in the page source.
What does deduplication do?
When enabled, the actor remembers which job IDs it has already scraped using persistent storage. On subsequent runs, previously scraped jobs are skipped — only new or changed listings are pushed to the dataset. This is useful for scheduled/recurring runs.
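The idea can be sketched as a set of seen job IDs carried between runs. This is an illustration of the concept only — the actor's actual persistence mechanism is internal:

```python
def dedupe(jobs: list[dict], seen: set[str]) -> list[dict]:
    """Keep only jobs whose IDs haven't been seen; update the seen set in place."""
    fresh = [job for job in jobs if job["id"] not in seen]
    seen.update(job["id"] for job in fresh)
    return fresh

# The actor persists the seen set between runs; here two
# simulated runs share one in-memory set.
seen: set[str] = set()
first = dedupe([{"id": "a1"}, {"id": "b2"}], seen)
second = dedupe([{"id": "b2"}, {"id": "c3"}], seen)
print(len(first), len(second))  # → 2 1
```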
Can I filter for remote-only jobs?
Yes. Set the workplace type filter to "remote" to get only remote job listings. You can also combine it with location, department, or keyword filters.
What happens if a company slug doesn't exist?
The actor logs a warning and skips to the next company. It never crashes on invalid inputs — partial results are always returned.
Support
If you encounter any issues or have feature requests, please visit the Issues tab and report the problem. We actively monitor and respond to all issues.