🔍 Greenhouse Jobs Scraper — Recruiter & Sales Intel
Scrape live Greenhouse-hosted career boards in bulk. Feed in a list of company slugs, get back every public posting with title, location, departments, offices, posted-date, full description HTML, and application URL — straight from Greenhouse's official open boards API.
Built for recruiters and sourcing tools (find candidates by tracking who's hiring), ATS competitive analysts (track Greenhouse adoption + posting velocity at target accounts), sales and BD teams (prospect into companies that just posted senior roles), and journalists / analysts (which startups are growing, hiring freezes, layoffs).
This is a thin wrapper over Greenhouse's public boards API, so the data is fresh, deduplicated, and includes the same fields candidates see on the official board.
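The same endpoint is easy to query directly if you want to sanity-check a slug before a full run. A minimal sketch with `requests`; the `boards-api.greenhouse.io` URL and the `jobs` / `meta.total` response shape follow Greenhouse's published Job Board API, so verify against their docs if anything drifts:

```python
import requests

# Greenhouse's public Job Board API, the same source this actor wraps.
# "stripe" is the board token (slug); content=true adds the HTML description.
resp = requests.get(
    "https://boards-api.greenhouse.io/v1/boards/stripe/jobs",
    params={"content": "true"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(data["meta"]["total"], "open roles")
for job in data["jobs"][:5]:
    print(job["id"], job["title"], "-", job["location"]["name"])
```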
What you get per job
- `id`, `internal_job_id`, `requisition_id` — IDs across Greenhouse's systems
- `company`, `company_slug` — display name and the Greenhouse slug you fed in
- `title` — job title (string)
- `location` — primary location string
- `departments`, `department_ids` — array of department names + IDs
- `offices`, `office_locations` — array of office labels + locations
- `absolute_url` — canonical apply URL on the company's Greenhouse board
- `first_published`, `updated_at` — ISO-8601 timestamps
- `language` — usually `"en"`
- `application_deadline` — when present
- `metadata` — any custom fields the company exposes (e.g. seniority, comp band)
- `content_html` — full HTML description (when `includeContent: true`)
- `source` — `"greenhouse.io"`
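The timestamps parse as standard ISO-8601, so computing posting age downstream is one line of `datetime`; a minimal sketch (on Python < 3.11, `fromisoformat` rejects the trailing `Z`, hence the normalization):

```python
from datetime import datetime, timezone

row = {"title": "Software Engineer, Issuing Platform",
       "first_published": "2024-09-12T18:32:14Z"}

# fromisoformat() on Python < 3.11 rejects the trailing "Z", so normalize it.
published = datetime.fromisoformat(row["first_published"].replace("Z", "+00:00"))
age_days = (datetime.now(timezone.utc) - published).days
print(f"{row['title']}: posted {age_days} days ago")
```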
Use cases
- Recruiter sourcing — pull every senior IC role at 50 high-growth Series-B startups, filter for keywords in the description (see the sketch after this list), and get a daily diff via Apify Schedules.
- ATS competitive intel — count Greenhouse postings per company per week to track hiring velocity vs. Lever or Workday.
- Sales prospecting — companies that just opened a "Head of Revenue Operations" role are buying CRM/CDP tooling now.
- Talent platform aggregator — feed your job board with tens of thousands of postings from the companies you cover.
- Comp-band benchmarking — many Greenhouse boards publish salary ranges in the description; extract them downstream (also sketched below).
- Investor due-diligence — track headcount growth at portfolio companies via job-posting cadence.
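A minimal downstream sketch for the sourcing and comp-band items above; the keyword list and the salary regex are illustrative assumptions, not output of the actor:

```python
import re

SENIOR_KEYWORDS = ("staff", "principal", "senior")  # illustrative, tune to taste
SALARY_RE = re.compile(r"\$([\d,]+)\s*[-–]\s*\$([\d,]+)")  # e.g. "$150,000 - $210,000"

def is_senior(item: dict) -> bool:
    """Keyword filter over the job title."""
    return any(kw in item["title"].lower() for kw in SENIOR_KEYWORDS)

def salary_band(item: dict):
    """Pull the first '$low - $high' range out of the description, if present."""
    match = SALARY_RE.search(item.get("content_html") or "")
    if not match:
        return None
    return tuple(int(v.replace(",", "")) for v in match.groups())

# `items` would come from client.dataset(...).iterate_items() as in Quick start.
items = [{"title": "Staff Engineer, Payments", "content_html": "<p>$180,000 - $250,000</p>"}]
for item in items:
    if is_senior(item):
        print(item["title"], salary_band(item))
```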
Quick start
Input:
{"companies": ["stripe", "airbnb", "notion"],"maxJobs": 100,"includeContent": true}
Sample output row:
{"id": 7532733,"internal_job_id": 3336216,"requisition_id": null,"company": "Stripe","company_slug": "stripe","title": "Software Engineer, Issuing Platform","location": "San Francisco, CA","departments": ["Engineering"],"offices": ["South San Francisco HQ", "Remote North America"],"absolute_url": "https://stripe.com/jobs/search?gh_jid=7532733","first_published": "2024-09-12T18:32:14Z","updated_at": "2025-04-22T14:20:01Z","content_html": "<p>Stripe is the financial infrastructure...</p>","source": "greenhouse.io"}
Run via Python (apify-client)
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Start the actor and wait for the run to finish.
run = client.actor("nexgendata/greenhouse-jobs-scraper").call(
    run_input={
        "companies": ["stripe", "airbnb", "notion"],
        "maxJobs": 500,
        "includeContent": True,
    }
)

# Stream the scraped postings from the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], "-", item["company"], "-", item["location"])
```
Run via cURL
```bash
curl -X POST "https://api.apify.com/v2/acts/nexgendata~greenhouse-jobs-scraper/run-sync-get-dataset-items?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"companies":["stripe","notion"],"maxJobs":50,"includeContent":false}'
```
Integrations
- Zapier — trigger on each new posting and create a row in Airtable / Notion / a Slack alert.
- Make.com — chain into HubSpot or Salesforce as new prospect signals.
- n8n — schedule + dedupe + push into Postgres for trend analysis.
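For the Postgres route in that last item, upserting on the Greenhouse job `id` handles dedupe and keeps the latest snapshot per posting. A minimal sketch with `psycopg2`; the database name and table schema are assumptions:

```python
import psycopg2

conn = psycopg2.connect("dbname=jobs_intel")  # assumed local database
cur = conn.cursor()

# Assumed schema: one row per Greenhouse job id, latest snapshot wins.
cur.execute("""
    CREATE TABLE IF NOT EXISTS jobs (
        id BIGINT PRIMARY KEY,
        company TEXT,
        title TEXT,
        location TEXT,
        first_published TIMESTAMPTZ,
        updated_at TIMESTAMPTZ
    )
""")

def upsert(item: dict) -> None:
    cur.execute(
        """
        INSERT INTO jobs (id, company, title, location, first_published, updated_at)
        VALUES (%(id)s, %(company)s, %(title)s, %(location)s,
                %(first_published)s, %(updated_at)s)
        ON CONFLICT (id) DO UPDATE
        SET title = EXCLUDED.title,
            location = EXCLUDED.location,
            updated_at = EXCLUDED.updated_at
        """,
        item,  # extra keys in the dataset item are ignored by psycopg2
    )

# for item in items: upsert(item)   # items from the dataset iteration above
conn.commit()
```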
Pricing
Pay-per-event (PPE):
- Actor start: $0.00005 (negligible)
- Per job posting: $0.05
Cost calculator:
| Jobs returned | Cost |
|---|---|
| 10 (smoke test) | $0.50 |
| 100 | $5.00 |
| 1,000 | $50.00 |
| 5,000 (multi-company sweep) | $250.00 |
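Put differently, a run costs the start fee plus $0.05 per returned job:

```python
def run_cost(jobs_returned: int, start_fee: float = 0.00005, per_job: float = 0.05) -> float:
    """Estimated charge for one run under the PPE pricing above."""
    return start_fee + per_job * jobs_returned

for n in (10, 100, 1_000, 5_000):
    print(f"{n:>5} jobs -> ${run_cost(n):,.2f}")
```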
That's an order of magnitude cheaper than LinkedIn Recruiter seats for the same hiring-signal data, and you only pay for jobs you actually receive.
FAQ
Q: Where do I find the Greenhouse slug?
A: Go to the company's careers page. If the URL contains boards.greenhouse.io/{slug} (or the page redirects there), {slug} is what you want.
Q: Does this work for any company?
A: Only companies hosted on Greenhouse Boards. Lever, Ashby, Workday, and SmartRecruiters need different actors — see Related Actors below.
Q: How fresh is the data?
A: Greenhouse's API is the source of truth — postings appear here within minutes of the company adding them.
Q: Do I need a Greenhouse API key?
A: No. The Boards API is open. There's no rate limit you'll realistically hit at scraper-scale usage.
Q: Will I get duplicate rows when scraping the same company twice?
A: Each call is a fresh snapshot. Use id as the deduplication key on your side.
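A minimal sketch of that dedupe between runs, persisting seen ids to a local file (the filename is an assumption):

```python
import json
from pathlib import Path

SEEN_FILE = Path("seen_job_ids.json")  # assumed local state between runs

def new_postings(items: list[dict]) -> list[dict]:
    """Return only the jobs whose id wasn't seen in a previous run."""
    seen = set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()
    fresh = [item for item in items if item["id"] not in seen]
    SEEN_FILE.write_text(json.dumps(sorted(seen | {item["id"] for item in items})))
    return fresh
```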
Q: Can I get just the metadata without the full description?
A: Yes — set `includeContent: false`. That cuts row size by 80–95%.
Related actors
- Lever Jobs Scraper — same shape, for Lever-hosted boards.
- WeWorkRemotely Jobs Scraper — remote-first job aggregator.
- HN Who's Hiring Scraper — monthly Hacker News hiring threads, perfect complement for senior-eng sourcing.
Built and maintained by NexGenData — affordable, focused web scrapers for B2B data teams.