Lever Jobs Scraper
Pricing
from $1.50 / 1,000 job records
Scrape every open job from any Lever board. Returns title, team, location, commitment, country, parsed salary range, seniority, qualifications, and per-company hiring snapshots. Watchlist mode emits only new jobs since last run. Export, run via API, schedule, or integrate with other tools.
Developer: Skootle

TL;DR
BD reps, recruiters, and AI sourcing agents track Lever job boards at dozens of companies and waste 30 minutes a day clicking through each one. One run of this actor pulls every open job from your list of companies, with team, location, commitment, country, parsed salary range, and seniority already typed. Each record ships an `agentMarkdown` field so you can drop a job straight into Claude, Codex, Slack, or a CRM as a single card. Watchlist mode emits only new postings since your last run, perfect for daily monitoring.
Try it on a small batch of slugs first, then let us know what you think in a review.
What does Lever Jobs do?
You give it a list of Lever company slugs. Each slug is the path after jobs.lever.co/ in a public Lever board URL. For each slug, the actor pulls the company's full open-roles list, parses every field you would otherwise click through to see, and returns a normalized record per job.
What you get for every job:
- Identity: Lever posting ID (idempotent, same across runs), job title, hosted URL, apply URL, company slug, company name.
- Categorization: team, department, commitment (Full-time, Part-time, Contract, Internship, Fixed-Term, Scholarship), location, all-locations array.
- Geo: country code (ISO 2-letter), location string as displayed, normalized location, `isRemote` boolean derived from workplaceType + location text.
- Pay: parsed `compRange` with min, max, currency, and period (year / hour / month). Lever does not return salary as a structured field. We parse it from the salary section text using a tested heuristic. About 30 to 50 percent of jobs surface a usable range, depending on the company and US disclosure laws.
- Content: plain-text description (capped at 12,000 chars), HTML description (capped at 30,000 chars), Lever's `lists` array with qualifications and responsibilities preserved as separate sections.
- Time: posted-at as ISO 8601, raw epoch milliseconds, scraped-at ISO 8601.
- Seniority: heuristic enum (`intern`, `entry`, `mid`, `senior`, `staff`, `principal`, `lead`, `director`, `vp`, `executive`, or null) derived from the job title.
- Quality: `fieldCompletenessScore`, 0 to 100, so you can self-filter sparse rows downstream.
- Agent-ready: `agentMarkdown` field, a 300 to 500 char pre-formatted markdown card per job.
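The seniority enum is derived from the job title by pattern matching. A minimal Python sketch of how such a title heuristic can work (an illustrative reimplementation, not the actor's exact rules):

```python
import re

# Ordered most-specific first so "Senior Staff Engineer" resolves to "staff",
# not "senior". "mid" is omitted: titles rarely state it explicitly.
SENIORITY_PATTERNS = [
    ("intern", r"\bintern(ship)?\b"),
    ("principal", r"\bprincipal\b"),
    ("staff", r"\bstaff\b"),
    ("director", r"\bdirector\b"),
    ("vp", r"\b(vp|vice president)\b"),
    ("executive", r"\b(chief|ceo|cto|coo)\b"),
    ("lead", r"\blead\b"),
    ("senior", r"\b(senior|sr\.?)\b"),
    ("entry", r"\b(junior|jr\.?|entry[- ]level|graduate)\b"),
]

def infer_seniority(title: str):
    """Return a seniority enum value, or None when the title gives no signal."""
    lowered = title.lower()
    for label, pattern in SENIORITY_PATTERNS:
        if re.search(pattern, lowered):
            return label
    return None
```

A title like "Administrative Business Partner" matches nothing and yields null, which is why the sample record below carries `seniority: null`.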
Plus a `company_summary` record per company you scrape: `totalCount`, `openedInLast7Days`, `hiringVelocityScore` (jobs per day over a rolling 90-day window), and `jobsByTeam` / `jobsByLocation` / `jobsByCommitment` distribution maps.
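Given that rolling-window definition, the hiringVelocityScore can be reproduced from the per-job postedAt timestamps. A sketch under that assumption, not the actor's exact implementation:

```python
from datetime import datetime, timedelta, timezone

def hiring_velocity(posted_at_iso, now=None, window_days=90):
    """Jobs per day over a rolling window, from ISO 8601 postedAt strings."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    recent = [
        ts for ts in posted_at_iso
        # Python's fromisoformat does not accept a trailing "Z" before 3.11.
        if datetime.fromisoformat(ts.replace("Z", "+00:00")) >= cutoff
    ]
    return round(len(recent) / window_days, 2)
```

At the sample score of 1.47 shown further down, that works out to roughly 132 postings created in the trailing 90 days.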
Why scrape Lever?
Lever hosts the public job boards for hundreds of growth-stage and enterprise companies. There is no single search interface across all of them. If you care about hiring signal at 30 named companies, you click into 30 different boards, every day, and re-read every list to find what changed. The data is public, the structure is consistent, the work is mechanical. That is exactly the kind of work that should be one API call away.
Specific workflows this collapses:
- Daily hiring monitor: check 50 named companies for new roles every morning. Watchlist mode emits only jobs whose IDs have never been seen, so you get a diff, not the full list.
- Cross-company comp benchmarking: you want to know what staff product designers make at portfolio companies. Pull a basket of slugs, filter by `seniority=staff` and `team ~ design`, read the `compRange`.
- BD on hiring teams: a company opens 12 engineering roles in a month; that is a buying signal. The `company_summary.hiringVelocityScore` gives you that as a sortable number.
- Recruiter sourcing list-building: generate a daily CSV of new senior engineering roles across 80 Lever boards. Filter by `seniority=senior` + `team=Engineering`, push to your CRM.
- AI agent integrations: the `agentMarkdown` field is built for LLM context windows. Drop any record into Claude or GPT-4 as a single message, ask "is this role a fit for my candidate profile X", get an answer.
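As an illustration of the recruiter workflow above, here is a sketch that turns downloaded records into a sourcing CSV. The field names come from the output schema documented on this listing; the helper itself is hypothetical:

```python
import csv
import io

def sourcing_queue(jobs, seniority="senior", team="Engineering"):
    """Filter typed job records down to a sourcing list and render CSV."""
    rows = [
        j for j in jobs
        if j.get("seniority") == seniority and j.get("team") == team
    ]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["title", "company", "hostedUrl"])
    writer.writeheader()
    for j in rows:
        writer.writerow({k: j.get(k) for k in ("title", "company", "hostedUrl")})
    return buf.getvalue()
```

The resulting string is ready to push to a CRM import or write to disk on a schedule.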
Who needs this?
- Business development reps mapping which target accounts are scaling specific functions (sales, customer success, engineering). Use `hiringVelocityScore` and `jobsByTeam` to rank accounts for outreach.
- Recruiters and sourcers building daily lists of new senior roles across a tracked set of companies. Watchlist mode + commitment + seniority filters get you to a clean sourcing queue in one call.
- Comp intelligence teams benchmarking salary bands across a portfolio. The parsed `compRange` field gives you a numeric min and max with currency and period, ready to feed a notebook.
- VC and PE talent partners tracking hiring at portfolio companies. Combine `company_summary` records with the per-job feed to build a hiring scorecard.
- AI auto-apply agents and sourcing platforms that need a clean, typed jobs feed across Lever boards without standing up their own scraper.
- Sales-intelligence platforms layering hiring-signal data on top of firmographic profiles.
- Recruitment-marketing teams producing weekly "new roles" newsletters from a curated company set. The `agentMarkdown` card is ready to drop into your CMS.
How to use Lever Jobs
- Open the Actor in the Apify Console and click Try for free.
- In the `companies` field, paste the Lever slugs you want. The slug is whatever follows `jobs.lever.co/` in a public board URL. Example: for `https://jobs.lever.co/palantir`, the slug is `palantir`.
- Optionally narrow the results with `keywords` (title), `locations` (substring), `teams`, `commitments`, or the `remoteOnly` flag.
- Set `maxItems` to the cap you want. Default is 10 (so the daily auto-test fits); raise to 1000 or more for production runs.
- For daily monitoring, flip `watchlistMode: true`. The actor will read previously-seen IDs from the key-value store and only emit new ones.
- Click Start. Results stream into the default dataset as they are written.
- Download as JSON / CSV / Excel, or pull via the API for downstream pipelines. The `AGENT_BRIEFING` markdown digest is in the key-value store under that key.
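For scheduled or programmatic runs, the same input shape can be passed through the Apify Python client. A sketch, assuming the actor ID is `skootle/lever-jobs` (check the API tab on this listing for the exact ID) and an `APIFY_TOKEN` environment variable:

```python
import os

# Hypothetical actor ID; substitute the exact one from the listing's API tab.
ACTOR_ID = "skootle/lever-jobs"

def build_watchlist_input(slugs, max_items=1000):
    """Assemble the same JSON input shape the Console UI uses."""
    return {
        "companies": slugs,
        "watchlistMode": True,
        "maxItems": max_items,
    }

def run_monitor(slugs):
    """Start a run and pull the records it wrote. Needs `pip install apify-client`."""
    from apify_client import ApifyClient
    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor(ACTOR_ID).call(run_input=build_watchlist_input(slugs))
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

if __name__ == "__main__" and os.environ.get("APIFY_TOKEN"):
    for job in run_monitor(["palantir", "veo"]):
        print(job.get("agentMarkdown", job.get("title")))
```

Schedule this with cron or an Apify Schedule and watchlist mode turns it into a daily diff feed.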
How much will scraping Lever cost?
Pay-per-event pricing, charged per record written. Two events:
| Event | Description | FREE | BRONZE | SILVER | GOLD | PLATINUM | DIAMOND |
|---|---|---|---|---|---|---|---|
| Actor start | One-time per run | $0.001 | $0.001 | $0.001 | $0.001 | $0.001 | $0.001 |
| Job record | Per dataset record (primary) | $0.003 | $0.0025 | $0.002 | $0.0015 | $0.0015 | $0.0015 |
A typical daily monitor run across 20 Lever boards with 30 active jobs each returns ~600 records and costs roughly $1.80 at FREE tier, $0.90 at GOLD. Compute time is sub-minute per company (HTTP-only, no browser), so Apify platform usage is negligible.
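The arithmetic behind that estimate, as a small helper you can use to sanity-check your own volumes (prices copied from the table above):

```python
RECORD_PRICE = {  # $ per job record, from the pricing table above
    "FREE": 0.003, "BRONZE": 0.0025, "SILVER": 0.002,
    "GOLD": 0.0015, "PLATINUM": 0.0015, "DIAMOND": 0.0015,
}
ACTOR_START = 0.001  # one-time charge per run, all tiers

def run_cost(records, tier="FREE"):
    """Estimated $ cost of one run that writes `records` dataset records."""
    return round(ACTOR_START + records * RECORD_PRICE[tier], 4)
```

`run_cost(600)` gives 1.801 and `run_cost(600, "GOLD")` gives 0.901, matching the ~$1.80 / $0.90 figures quoted above.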
Is it legal to scrape Lever?
The Lever job-board API (api.lever.co/v0/postings/<slug>?mode=json) is the same public endpoint that powers every embed on company career pages. It returns no authenticated data, no PII beyond what the company has chosen to publish, and is delivered without any anti-bot challenge. We honor rate limits, identify ourselves with a standard browser User-Agent, and do not attempt to access private boards or applicant data.
You are responsible for your own use of the resulting data. If you plan to republish or resell the records, talk to your legal counsel about the specific company's terms of service. Most growth-stage companies publish these boards because they want maximum distribution; aggregating that data for buyer-facing workflows is generally well within the norms of the public-jobs ecosystem.
Examples
1. Single company, full board
`{"companies": ["palantir"], "maxItems": 500}`
2. Engineering jobs across a portfolio
`{"companies": ["palantir", "veo"], "teams": ["Engineering", "Data"], "maxItems": 200}`
3. Remote senior roles
`{"companies": ["palantir"], "remoteOnly": true, "keywords": ["senior", "staff", "principal"], "maxItems": 100}`
4. New York jobs, full-time only
`{"companies": ["palantir"], "locations": ["New York"], "commitments": ["Full-time"], "maxItems": 100}`
5. Internship hunt across multiple companies
`{"companies": ["palantir", "veo"], "commitments": ["Internship"], "maxItems": 200}`
6. Daily watchlist monitor (only new jobs)
`{"companies": ["palantir", "veo"], "watchlistMode": true, "maxItems": 1000}`
Input parameters
| Field | Type | Required | Description |
|---|---|---|---|
| companies | string[] | yes | Lever board slugs. The path after jobs.lever.co/. |
| keywords | string[] | no | Title keywords. Job kept if ANY keyword appears in the title (case-insensitive). |
| locations | string[] | no | Substring match against Lever's location field. |
| teams | string[] | no | Substring match against the Lever team name. |
| commitments | string[] | no | Filter by commitment (Full-time, Part-time, Contract, Internship, Fixed-Term, Scholarship, Temporary). |
| remoteOnly | boolean | no | Only jobs flagged remote by the company. Default false. |
| maxItems | integer | no | Cap on records returned. Default 10, max 5000. |
| watchlistMode | boolean | no | When true, only emits jobs whose IDs have never been seen by a prior run. Default false. |
| proxyConfiguration | object | no | Optional Apify proxy. Not required; Lever's public API is unrestricted. |
Lever job output format
lever_job
Per-job record. Idempotent primary key is jobId.
| Field | Type | Notes |
|---|---|---|
| outputSchemaVersion | string literal '2026-05-11' | Bump on breaking change. |
| recordType | 'lever_job' | Discriminator. |
| jobId | string | Lever posting UUID. Same across runs. |
| companySlug | string | The slug you passed in. |
| company | string | Best-effort display name. |
| title | string | Job title. |
| hostedUrl | string | Public board URL. |
| applyUrl | string \| null | Apply page URL. |
| location | object | { name, normalized, isRemote }. |
| team | string \| null | Lever team. |
| commitment | string \| null | Full-time / Internship / Contract / etc. |
| department | string \| null | Lever department (rarely populated). |
| country | string \| null | ISO 2-letter country code. |
| descriptionPlain | string | Plain text, capped at 12,000 chars. |
| descriptionHtml | string | HTML, capped at 30,000 chars. |
| lists | array | Qualifications / responsibilities split into {text, content} entries. |
| additional | string \| null | Salary section + extras (plain text). |
| seniority | enum \| null | Parsed from title. |
| compRange | object | { min, max, currency, period }; currency as ISO 3-letter, period as year/hour/month. |
| categories | object | Raw Lever categories. |
| createdAtMs | number \| null | Epoch milliseconds. |
| postedAt | string | ISO 8601. |
| scrapedAt | string | ISO 8601. |
| fieldCompletenessScore | integer | 0 to 100. |
| agentMarkdown | string | Drop-into-LLM card. |
Sample:
```json
{
  "outputSchemaVersion": "2026-05-11",
  "recordType": "lever_job",
  "jobId": "a543d82a-a089-4b1c-afd1-4f30d3d8ee23",
  "companySlug": "palantir",
  "company": "Palantir",
  "title": "Administrative Business Partner",
  "hostedUrl": "https://jobs.lever.co/palantir/a543d82a-a089-4b1c-afd1-4f30d3d8ee23",
  "applyUrl": "https://jobs.lever.co/palantir/a543d82a-a089-4b1c-afd1-4f30d3d8ee23/apply",
  "location": { "name": "New York, NY", "normalized": "New York, NY", "isRemote": false },
  "team": "Administrative",
  "commitment": "Full-time",
  "department": null,
  "country": "US",
  "compRange": { "min": 60000, "max": 120000, "currency": "USD", "period": "year" },
  "categories": {
    "team": "Administrative",
    "location": "New York, NY",
    "commitment": "Full-time",
    "department": null,
    "allLocations": ["New York, NY"]
  },
  "createdAtMs": 1679955575647,
  "postedAt": "2023-03-27T22:19:35.647Z",
  "scrapedAt": "2026-05-11T18:00:00.000Z",
  "seniority": null,
  "fieldCompletenessScore": 90,
  "agentMarkdown": "**Administrative Business Partner** - Palantir\n- Administrative · Full-time\n- New York, NY\n- $60K-$120K/year\n- Posted 2023-03-27\n- https://jobs.lever.co/palantir/a543d82a-a089-4b1c-afd1-4f30d3d8ee23"
}
```
company_summary
One per company in the run.
| Field | Type | Notes |
|---|---|---|
| recordType | 'company_summary' | Discriminator. |
| companySlug | string | |
| company | string | |
| totalCount | integer | All open jobs found, before filters. |
| openedInLast7Days | integer | Roles created in the last 7 days. |
| hiringVelocityScore | number | Jobs per day over a rolling 90-day window. |
| jobsByTeam | object | { teamName: count }. |
| jobsByLocation | object | { locationName: count }. |
| jobsByCommitment | object | { commitment: count }. |
Sample:
```json
{
  "outputSchemaVersion": "2026-05-11",
  "recordType": "company_summary",
  "companySlug": "palantir",
  "company": "Palantir",
  "totalCount": 226,
  "openedInLast7Days": 14,
  "hiringVelocityScore": 1.47,
  "jobsByTeam": { "Engineering": 82, "Operations": 24, "Sales": 19 },
  "jobsByLocation": { "New York, NY": 41, "Washington, DC": 32, "London": 21 },
  "jobsByCommitment": { "Full-time": 204, "Internship": 18, "Fixed-Term": 4 },
  "scrapedAt": "2026-05-11T18:00:00.000Z"
}
```
During the Actor run
The actor calls api.lever.co/v0/postings/<slug>?mode=json once per company, parses every record, and writes results as they go. Sub-minute per company. Two artifacts land in the default key-value store: AGENT_BRIEFING (markdown digest) and WATCHLIST_STATE (rolling ID list when watchlist mode is on).
FAQ
How is this different from Lever's public RSS feeds?
Lever's RSS feed gives you a list of titles and links. This actor returns the parsed body of every posting, plus normalized location and country, parsed comp range, seniority, and per-company hiring snapshots. RSS is fine for "did anything change" notifications; the structured feed is what you need for analytics, sourcing, or AI agent context.
Can I monitor only new jobs?
Yes. Set watchlistMode: true. The actor reads a rolling list of up to 50,000 previously-seen job IDs from the key-value store and only emits jobs whose IDs are new. Schedule it daily and you have a diff feed.
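The diff itself is simple set arithmetic over the idempotent jobId. If you ever need to reproduce it outside the actor, for example against your own archive, a sketch with a hypothetical helper:

```python
def diff_new_jobs(jobs, seen_ids, cap=50_000):
    """Emit only unseen jobs and return the updated rolling ID list."""
    seen = set(seen_ids)
    new = [j for j in jobs if j["jobId"] not in seen]
    # Trim oldest entries first, mirroring the 50,000-entry rolling window.
    updated = (list(seen_ids) + [j["jobId"] for j in new])[-cap:]
    return new, updated
```

Persist `updated` between runs and each invocation yields only postings you have never seen.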
Will this pair with Greenhouse coverage?
Yes. We ship a sister actor, skootle/greenhouse-jobs, with a near-identical output schema (matching recordType, jobId, team, location, compRange, agentMarkdown). Run both, union the datasets, and you cover both ATS platforms in one pipeline.
Why does this cost more than free Lever scrapers?
Free actors typically return Lever's raw JSON with no normalization. We parse comp ranges out of unstructured text, infer seniority from titles, strip HTML safely to plain text, compute a completeness score so you can self-filter, and ship an agentMarkdown field designed for LLM context. We also keep a versioned schema (outputSchemaVersion) so when Lever changes a field we bump the version and you know to update your downstream code. If you are feeding this into a customer-facing product or a daily AI-agent run, the reliability is the value.
Can I use it with Python / n8n / Make / Zapier?
Yes. Use the Apify API to start runs, poll status, and pull the dataset. The Apify client libraries exist for Python and JavaScript. n8n, Make, and Zapier all have native Apify nodes. Pass the same JSON input shape as the Console UI.
Why don't I see a salary on every job?
Lever does not return salary as a structured field. Some companies publish a range in the salary section (Palantir, for example), some publish it only in US states that require it by law, some never publish at all. When we find a range we parse it cleanly. When we cannot, compRange.min and compRange.max are null. Filter on compRange.min !== null to get only the rows where we resolved one.
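That filter is one list comprehension once the dataset is in Python; a hypothetical helper equivalent to the compRange.min !== null check:

```python
def with_resolved_comp(jobs):
    """Keep only rows where a salary range was actually parsed."""
    return [
        j for j in jobs
        if j.get("compRange") and j["compRange"].get("min") is not None
    ]
```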
How fresh is the data?
Real-time, every run. Lever's public API serves the same data their embed widget uses, with no caching layer in front of it on our side.
What happens if a company switches off Lever?
The fetch returns HTTP 404. We record an error entry in the OUTPUT summary with the offending slug and skip to the next company. Other companies in your list are unaffected.
Can I get historical postings?
Lever's public API only exposes currently-open roles. If a role closes, it disappears from the feed. If you want a historical archive, run the actor on a schedule and persist the dataset yourself.
Do I need to provide a proxy?
No. Lever's public API does not block by IP and has no bot challenge. The actor runs HTTP-only without a proxy. If you want to route through Apify Proxy for compliance reasons, you can pass proxyConfiguration and it will be honored.
Why choose Lever Jobs
- Hiring-velocity signal: `company_summary.hiringVelocityScore` and `openedInLast7Days` give you a sortable BD trigger that no other Lever scraper computes.
- Cross-ATS coverage: pair with `skootle/greenhouse-jobs` (matching schema) for unified hiring data across both platforms.
- Watchlist diff mode: dedicated dedupe state across runs so monitoring schedules emit only new postings.
- Parsed comp ranges: numeric min, max, currency, and period, pulled out of unstructured salary text with a tested heuristic.
- Seniority enum: typed seniority instead of the free-text title parsing every downstream system has to do.
- Idempotent job IDs: Lever's posting UUID is preserved so your downstream cache and dedupe keep working across runs.
- Versioned output schema: every record carries `outputSchemaVersion: '2026-05-11'`. We bump on breaking change.
- Agent-grade extras: `agentMarkdown` per record + `AGENT_BRIEFING.md` per run; drop straight into Claude / Codex / Slack / CRM.
Your feedback
Hit a bug or want a feature? Open an issue on the Issues tab rather than the reviews page, and we'll fix it fast (typically within 48 hours).
Other Skootle actors you might want to check
- Greenhouse Jobs Scraper - the matching Greenhouse-side actor with the same schema shape.
- Wellfound Jobs Scraper - startup-focused board with founder profiles and equity data.
- SAM.gov Federal Contracts - U.S. federal contract opportunities joined with USAspending award history.
- SEC EDGAR Filings - public company filings with normalized form-type taxonomy.
Support and contact
Issues and feature requests via the Issues tab on this listing. We respond within 48 hours.