Greenhouse Jobs Scraper
Pricing: Pay per event
Developer: Stas Persiianenko

Scrape job listings from 220,000+ companies using Greenhouse ATS. Get titles, locations, departments, descriptions, and application questions from Airbnb, Stripe, Figma, and more. Fast HTTP-only API, no browser needed.
Extract job listings from any company using Greenhouse ATS — the dominant applicant tracking system used by 220,000+ companies including Airbnb, Stripe, Figma, Anthropic, Databricks, Coinbase, Cloudflare, and Lyft. Get 17+ structured fields per job listing in seconds, with no browser, no proxy, and no API key required.
What does it do?
This actor connects directly to Greenhouse's public Boards API (boards-api.greenhouse.io) and extracts structured job data using plain HTTP requests. No headless browser, no residential proxy, no authentication — just fast, reliable data from a fully public REST API.
Two modes are available:
- Company Jobs — Scrape all open job listings for one or more companies by their Greenhouse slug (e.g. `airbnb`, `stripe`, `figma`). Supports multi-company batch input, department filter, and location filter.
- Job Details — Fetch full details for specific job IDs, including application form questions.
Who is it for?
Recruiters and talent sourcers
Monitor competitor hiring activity across dozens of companies in one run. Track which roles open up at target companies, filter by department (e.g. "Engineering") or location ("Remote"), and get alerts when new roles appear.
Sales intelligence teams
Identify hiring signals as buying intent. When a company posts 10 new DevOps roles, they're likely evaluating new infrastructure tools. Scrape company job boards to build enriched prospect lists.
HR tech and job aggregators
Build or power a job board that aggregates Greenhouse-hosted listings. With 220,000+ companies using Greenhouse, this is one of the largest single-ATS data sources available anywhere.
Investors and market analysts
Track startup hiring trends at portfolio companies or competitors. A sudden spike in engineering hires at a Series B company is a reliable growth signal. Automate this monitoring with a scheduled daily run.
Researchers and data scientists
Collect labeled job description text for NLP models (job title classification, skill extraction, salary prediction). The content field contains full HTML job descriptions ready for parsing.
HR professionals benchmarking roles
Compare how peer companies describe similar roles — titles, requirements, team structure — to benchmark your own job descriptions and compensation language.
Why use this scraper?
- 220,000+ companies covered — Greenhouse is the #1 ATS for Series B+ tech startups; in a manual sample of 19 major tech companies, roughly 68% used it
- HTTP-only, no browser — runs in 256 MB RAM, starts in under 2 seconds, uses minimal compute
- No proxy needed — Greenhouse's Boards API is fully public with no IP restrictions
- 17+ fields per job — including departments, offices, metadata (work type, remote policy), HTML description, and application questions
- Multi-company batch input — pass 50 company slugs and get all their jobs in one run for one $0.01 start fee
- Department and location filters — reduce output to exactly the jobs you care about before data leaves the actor
- Application question extraction — scrape the full application form structure for each job (name, type, required fields)
- $0.002/job — 6x cheaper than the `fantastic-jobs` multi-ATS scraper ($0.012/job)
- URL extraction support — pass full `boards.greenhouse.io` URLs and the actor extracts the slug automatically
How much does it cost?
This actor uses pay-per-event pricing — you only pay for jobs actually scraped.
| Event | FREE tier | BRONZE | SILVER | GOLD+ |
|---|---|---|---|---|
| Run started | $0.01 (one-time) | $0.01 | $0.01 | $0.01 |
| Job scraped | $0.002 | $0.0015 | $0.001 | $0.00075 |
Example costs:
| Task | Jobs | Cost (FREE tier) |
|---|---|---|
| All open roles at one company (100 jobs) | 100 | $0.01 + $0.20 = $0.21 |
| 5 companies, 200 jobs each | 1,000 | $0.01 + $2.00 = $2.01 |
| Large batch: 50 companies, ~500 jobs avg | 25,000 | $0.01 + $50.00 = $50.01 |
| Single job detail with questions | 1 | $0.01 + $0.002 = $0.012 |
Tiered volume discounts apply automatically as your Apify account usage grows. At the SILVER tier, $0.001/job is 12x cheaper than fantastic-jobs ($0.012/job on its free tier).
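The cost arithmetic in the table above can be sketched as a small helper. The function name is ours, and the defaults are the FREE-tier rates; substitute your tier's per-job price as needed:

```python
def estimate_cost(jobs_scraped: int, per_job: float = 0.002, start_fee: float = 0.01) -> float:
    """Estimate a run's total cost: one start fee plus a per-job event fee."""
    return round(start_fee + jobs_scraped * per_job, 4)

print(estimate_cost(100))    # one company, 100 jobs  -> 0.21
print(estimate_cost(1000))   # 5 companies x 200 jobs -> 2.01
print(estimate_cost(25000))  # 50-company batch       -> 50.01
```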
How to use it
Step 1 — Find your company slug
The Greenhouse company slug is the URL path segment on the company's job board. For example:
- https://boards.greenhouse.io/airbnb → slug is `airbnb`
- https://boards.greenhouse.io/stripe → slug is `stripe`
- https://boards.greenhouse.io/figma → slug is `figma`
You can also paste the full URL directly — the actor extracts the slug automatically.
To find a company's slug: visit https://boards.greenhouse.io/{guess} (most companies use their company name in lowercase, e.g. databricks, anthropic, coinbase). If the board exists, you'll see their job listings.
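Since the actor accepts either a bare slug or a full board URL, the slug extraction amounts to taking the first path segment. A minimal sketch of that logic (the helper name is ours, not the actor's internal code):

```python
from urllib.parse import urlparse

def to_slug(value: str) -> str:
    """Accept a bare slug ('airbnb') or a full board URL and return the slug."""
    if "greenhouse.io" in value:
        # First non-empty path segment of e.g. https://boards.greenhouse.io/airbnb
        return urlparse(value).path.strip("/").split("/")[0]
    return value.strip()

print(to_slug("https://boards.greenhouse.io/airbnb"))  # airbnb
print(to_slug("stripe"))                               # stripe
```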
Step 2 — Choose your mode
- Company Jobs (default) — enter one or more company slugs to scrape all open roles
- Job Details — enter specific job IDs to get full detail on individual postings
Step 3 — Configure filters (optional)
- filterDepartment — e.g. `Engineering` to only see engineering roles
- filterLocation — e.g. `Remote` or `New York` to filter by location text
- maxJobsPerCompany — cap results per company (default: 500)
- includeContent — include full HTML job description (default: true)
- includeQuestions — include application form questions (default: false, slower)
Step 4 — Run and export
Click Start. Results appear in the dataset in real time. Export as JSON, CSV, Excel, or XML from Apify Console or via API.
Input parameters
| Parameter | Type | Default | Mode |
|---|---|---|---|
| mode | select | company_jobs | All |
| companySlugs | string list | ["airbnb"] | company_jobs |
| jobIds | string list | [] | job_details |
| includeContent | boolean | true | All |
| includeQuestions | boolean | false | All |
| filterDepartment | string | — | company_jobs |
| filterLocation | string | — | company_jobs |
| maxJobsPerCompany | integer | 500 | company_jobs |
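Putting the parameters together, a typical Company Jobs input might look like this (the values are illustrative):

```json
{
  "mode": "company_jobs",
  "companySlugs": ["stripe", "figma"],
  "filterDepartment": "Engineering",
  "filterLocation": "Remote",
  "maxJobsPerCompany": 200,
  "includeContent": true,
  "includeQuestions": false
}
```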
Job ID formats for job_details mode:
- Numeric ID only: `7649441` — works if the job is globally accessible
- Slug + ID: `airbnb/7649441` — recommended format for reliability
Output fields
Each job listing returns 17+ structured fields:
| Field | Description | Type |
|---|---|---|
| jobId | Greenhouse numeric job ID | number |
| title | Job title | string |
| companyName | Company name as returned by API | string |
| companySlug | Greenhouse board slug (input value) | string |
| location | Job location text | string |
| absoluteUrl | Company's own careers page URL for the job | string |
| url | Greenhouse board URL for the job | string |
| departments | Array of {id, name} department objects | array |
| offices | Array of {id, name, location} office objects | array |
| content | Full HTML job description (when includeContent: true) | string |
| metadata | Array of {name, value} custom fields (e.g. work type, remote policy) | array |
| updatedAt | ISO 8601 timestamp of last update | string |
| firstPublished | ISO 8601 timestamp when first posted | string |
| language | Job posting language code (e.g. en) | string |
| requisitionId | HR system requisition ID | string |
| internalJobId | Internal Greenhouse job ID | number |
| questions | Application form questions (when includeQuestions: true) | array |
Example output:
```json
{
  "jobId": 7649441,
  "title": "Account Executive",
  "companyName": "Airbnb",
  "companySlug": "airbnb",
  "location": "Paris, France",
  "absoluteUrl": "https://careers.airbnb.com/positions/7649441",
  "url": "https://boards.greenhouse.io/airbnb/jobs/7649441",
  "departments": [{ "id": 4105217002, "name": "Sales" }],
  "offices": [{ "id": 4029277002, "name": "Paris, France", "location": "Paris, France" }],
  "content": "<div><h2>About the Role</h2><p>...</p></div>",
  "metadata": [
    { "name": "Workplace Type", "value": "Hybrid" },
    { "name": "Remote Eligible", "value": false }
  ],
  "updatedAt": "2026-02-24T09:25:19-05:00",
  "firstPublished": "2026-02-24T09:04:33-05:00",
  "language": "en",
  "requisitionId": "AE-PARIS-2026",
  "internalJobId": 3369660,
  "questions": null
}
```
Tips for best results
- Batch multiple companies in one run — add 10, 20, or 50 slugs to `companySlugs`. All are processed in one run for one $0.01 start fee.
- Use department filter to narrow scope — if you only want engineering jobs, set `filterDepartment: "Engineering"`. The filter matches any department name containing the text (case-insensitive), so `"Eng"` will catch `"Engineering"`, `"Platform Engineering"`, etc.
- Remote jobs filter — set `filterLocation: "Remote"` to get only remote-eligible positions across all companies in the batch.
- Set `maxJobsPerCompany` low for quick scans — use `50` for exploratory runs to check if a company has relevant roles before pulling everything.
- Use `includeQuestions: false` for large batches — questions require an extra HTTP request per job, which is slow for companies with 500+ open roles. Enable it only when you specifically need the application form structure.
- Content is included by default — if you don't need the HTML job description (e.g. you only want the title, location, and URL), set `includeContent: false` for faster runs on large batches.
- Popular tech company slugs: `stripe`, `figma`, `airbnb`, `databricks`, `anthropic`, `coinbase`, `cloudflare`, `robinhood`, `lyft`, `brex`, `gusto`, `lattice`, `twilio`
Integrations
Google Sheets — weekly hiring tracker
Connect the actor to a Google Sheets integration via Apify webhooks. Schedule a weekly run for 20 target companies and automatically append new job listings to a spreadsheet for your team to review.
Slack alerts — new role notifications
Set up a scheduled daily run with a narrow filter (e.g. filterDepartment: "AI" across 30 AI companies). Post new jobs to a Slack channel via webhook — instant signal when target companies post AI roles.
Make / Zapier — CRM enrichment
Trigger the actor from a Make scenario when a new account enters your CRM. Automatically scrape that company's Greenhouse board, count open engineering roles as a buying signal, and attach the count to the CRM deal.
Vector database for job matching
Export job listings as JSON, chunk the content field by paragraph, and load into a vector database (Pinecone, Qdrant, Weaviate). Build a job matching or recommendation system with semantic search over the full job description text.
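A minimal sketch of the chunking step, splitting the HTML `content` field on block-level closing tags before embedding. The regex-based tag stripping here is a deliberate simplification; a production pipeline would likely use a real HTML parser:

```python
import re

def chunk_content(html: str) -> list[str]:
    """Split an HTML job description into plain-text paragraph chunks."""
    # Split on closing block tags (paragraphs, headings, list items),
    # then strip the remaining markup and collapse whitespace.
    parts = re.split(r"</(?:p|h[1-6]|li)>", html)
    chunks = []
    for part in parts:
        text = re.sub(r"<[^>]+>", " ", part)
        text = re.sub(r"\s+", " ", text).strip()
        if text:
            chunks.append(text)
    return chunks

html = "<div><h2>About the Role</h2><p>Build APIs.</p><p>Ship fast.</p></div>"
print(chunk_content(html))  # ['About the Role', 'Build APIs.', 'Ship fast.']
```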
API usage
Run this actor programmatically from any language using the Apify API.
Node.js (Apify client)
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('automation-lab/greenhouse-jobs-scraper').call({
  mode: 'company_jobs',
  companySlugs: ['stripe', 'figma', 'airbnb'],
  filterDepartment: 'Engineering',
  maxJobsPerCompany: 200,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} engineering jobs`);
```
Python (Apify client)
```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_APIFY_TOKEN")

run = client.actor("automation-lab/greenhouse-jobs-scraper").call(run_input={
    "mode": "company_jobs",
    "companySlugs": ["anthropic", "databricks", "coinbase"],
    "filterLocation": "Remote",
    "includeContent": True,
    "maxJobsPerCompany": 500,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], item["location"], item["companyName"])
```
cURL
```bash
curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~greenhouse-jobs-scraper/runs?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "company_jobs",
    "companySlugs": ["lyft", "brex", "gusto"],
    "filterDepartment": "Sales",
    "maxJobsPerCompany": 100
  }'
```
Fetch results once the run completes:
```bash
curl "https://api.apify.com/v2/datasets/{DATASET_ID}/items?token=YOUR_APIFY_TOKEN&format=json"
```
Using with Claude via MCP
This actor works with the Apify MCP server, letting you run it directly from Claude without writing code.
Setup
Add the Apify MCP server to your Claude Code config:
```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": ["-y", "@apify/actors-mcp-server"],
      "env": { "APIFY_TOKEN": "YOUR_APIFY_TOKEN" }
    }
  }
}
```
Example prompts
"Run the automation-lab/greenhouse-jobs-scraper to get all open engineering jobs at Stripe, Figma, and Airbnb, then show me a table sorted by company."
"Use the Greenhouse scraper to find all remote product management roles at these 10 companies and export them to CSV."
"Scrape all AI/ML jobs posted at Anthropic, Databricks, and Cohere in the last 30 days and summarize the most common required skills."
Is it legal to scrape Greenhouse job boards?
Greenhouse job boards are intentionally public — companies publish them so job seekers can find and apply for roles without authentication. The Greenhouse Boards API (boards-api.greenhouse.io) is the same API used by every job board widget embedded on company career pages.
Key points:
- No login required — the actor does not bypass any authentication
- Public data only — all scraped content is visible to any anonymous visitor
- Polite scraping — 100ms delay between per-job requests, no concurrent flooding
- Greenhouse's own API — the actor uses the official public Boards API, not HTML scraping
You are responsible for ensuring your use of the data complies with applicable laws in your jurisdiction (GDPR, CCPA, etc.) and Greenhouse's Terms of Service. This actor is intended for legitimate business intelligence, recruitment research, and market analysis. Do not use the data to spam applicants or contact individuals without consent.
FAQ
Q: How do I find a company's Greenhouse slug?
Most companies use their company name in lowercase: airbnb, stripe, figma, anthropic. Check https://boards.greenhouse.io/{slug} — if it returns a job board page, the slug is valid. Some companies use abbreviations or branded names (e.g. a company called "Acme Corp" might use acmecorp or acme).
Q: What if a company doesn't use Greenhouse?
The actor will return a 404 warning and skip that company, continuing with the rest of the batch. No error is thrown. You can verify a company uses Greenhouse by visiting https://boards.greenhouse.io/{slug} before running.
Q: Can I scrape all jobs across all Greenhouse companies?
Greenhouse does not provide a global listing of all companies. You need to know the company slugs in advance. This actor is designed for targeted scraping of known companies, not discovery.
Q: How many companies use Greenhouse?
Greenhouse reports 220,000+ customers. In a manual sample of 19 major tech companies, approximately 68% used Greenhouse as their primary ATS.
Q: The run returned 0 jobs for a company. What happened?
Either: (1) the company has no open positions right now, (2) the slug is incorrect (verify at boards.greenhouse.io/{slug}), or (3) your department/location filter excluded all results. Check the actor log for details.
Q: Can I get jobs from a specific department?
Yes — set filterDepartment to a partial department name. The filter is case-insensitive and matches any department whose name contains the text. For example, "Eng" matches "Engineering", "Platform Engineering", and "Data Engineering".
Q: Does includeQuestions slow things down significantly?
Yes — it adds one HTTP request per job. For a company with 500 open roles, that's 500 extra requests (~50 seconds at 100ms delay). Only enable it when you specifically need the application form structure (e.g. for HR tech automation or application scraping research).
Q: Is the content field HTML or plain text?
HTML. You'll need to strip tags to get plain text. In JavaScript: content.replace(/<[^>]+>/g, ' ').trim(). In Python: use BeautifulSoup or html.parser.
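In Python, a dependency-free version of that tag stripping can be built on the standard library's `html.parser`, which also decodes entities like `&amp;` along the way:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text nodes from an HTML document, ignoring all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(html: str) -> str:
    """Return the plain text of an HTML fragment with whitespace normalized."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(" ".join(parser.parts).split())

print(strip_html("<div><h2>About the Role</h2><p>Build &amp; ship.</p></div>"))
# About the Role Build & ship.
```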
Related scrapers
These actors from the same author cover adjacent data sources:
- Indeed Jobs Scraper — job listings from Indeed's global job board
- LinkedIn Jobs Scraper — job postings from LinkedIn
- Glassdoor Jobs Scraper — jobs and company reviews from Glassdoor
Built and maintained by automation-lab on the Apify platform.