Greenhouse Jobs Scraper
Pricing
from $9.00 / 1,000 results
Rating: 0.0 (0 reviews)
Developer: ParseForge
Actor stats: 0 bookmarked · 12 total users · 5 monthly active users · last modified 5 days ago
# Greenhouse Jobs Scraper

Last updated: 2026-05-05
Whether you're a recruiter tracking competitors, a job seeker monitoring openings, or a data analyst studying hiring trends, this actor makes it easy to collect structured job data from any company using Greenhouse.
The Greenhouse Jobs Scraper collects complete job listings from 10,000+ companies, with 25 data fields per job, plus filtering by keyword, department, and office.
## What Does It Do?

- **Complete job data**: titles, descriptions, locations, departments, offices, and more from any Greenhouse-powered careers page
- **Smart filtering**: narrow results by keyword, department, or office without wasting compute
- **Department & office hierarchy**: get the full organizational structure with parent IDs for mapping
- **Custom metadata**: company-specific fields like workplace type, quota coverage, and more
- **GDPR compliance info**: data-compliance details for each listing
## Demo Video
Coming soon
## Input

- `boardToken` (required): the company's Greenhouse identifier. Find it in the careers URL: `boards.greenhouse.io/TOKEN`. Common tokens: `gitlab`, `discord`, `airbnb`, `figma`, `notion`, `stripe`.
- `maxItems`: cap the number of jobs returned. Leave at 0 to get all available listings.
- `includeContent`: toggle full job-description text. Turn off for faster metadata-only collection.
- `searchQuery`: filter by keyword, matched against job title and location. Try `engineer`, `remote`, `marketing`.
- `departmentFilter`: show only jobs in a specific department. Partial match, so `engineering` catches "Software Engineering" too.
- `officeFilter`: filter by office or geographic region. Works with `US`, `EMEA`, `Remote`, city names, etc.
Example input:

```json
{
  "boardToken": "gitlab",
  "maxItems": 50,
  "includeContent": true,
  "searchQuery": "engineer",
  "departmentFilter": "",
  "officeFilter": "Remote"
}
```
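For intuition, the documented filter semantics can be sketched client-side in a few lines. This is an illustration of the described behavior (case-insensitive keyword matching on title and location, partial matching on department and office names), not the Actor's actual implementation:

```python
def matches(job, search_query="", department_filter="", office_filter=""):
    """Return True if a job record passes the documented filters.

    Illustrative only: assumes case-insensitive substring matching,
    mirroring how searchQuery and the partial-match department/office
    filters are described above.
    """
    q = search_query.lower()
    # searchQuery is matched against job title and location
    if q and q not in job["title"].lower() and q not in job["location"].lower():
        return False
    # departmentFilter: partial match against any department name
    d = department_filter.lower()
    if d and not any(d in dep.lower() for dep in job["departments"]):
        return False
    # officeFilter: partial match against any office name
    o = office_filter.lower()
    if o and not any(o in off.lower() for off in job["offices"]):
        return False
    return True

job = {
    "title": "Backend Engineer, Analytics Instrumentation (Golang)",
    "location": "Remote, India",
    "departments": ["Data Engineering"],
    "offices": ["India"],
}
print(matches(job, search_query="engineer"))          # True
print(matches(job, department_filter="engineering"))  # True: partial match
print(matches(job, office_filter="EMEA"))             # False
```

Running the filters server-side (as the Actor does) avoids paying for results you would discard anyway.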
## Output

| Field | Description |
|---|---|
| `url` | Direct link to the job posting |
| `applyUrl` | Direct link to the application form |
| `jobId` | Greenhouse public job ID |
| `internalJobId` | Internal Greenhouse job ID |
| `title` | Job title |
| `company` | Company name |
| `boardToken` | Board token used |
| `location` | Job location |
| `locations` | Parsed location list |
| `isRemote` | Whether the role is remote |
| `isHybrid` | Whether the role is hybrid |
| `employmentType` | Employment type from metadata |
| `departments` | Department names |
| `departmentDetails` | Full department info (id, name, parentId) |
| `offices` | Office names |
| `officeDetails` | Full office info (id, name, location, parentId) |
| `metadata` | Company-specific custom fields |
| `salary` | Parsed salary range, if found |
| `requisitionId` | Requisition/job-req number |
| `language` | Job-listing language |
| `description` | Full job description |
| `dataCompliance` | GDPR/compliance details |
| `firstPublished` | First publication date |
| `updatedAt` | Last update date |
| `scrapedAt` | Timestamp when the data was collected |
Example output record:

```json
{
  "url": "https://job-boards.greenhouse.io/gitlab/jobs/8481922002",
  "applyUrl": "https://job-boards.greenhouse.io/gitlab/jobs/8481922002#app",
  "jobId": 8481922002,
  "internalJobId": 6387960002,
  "title": "Backend Engineer, Analytics Instrumentation (Golang)",
  "company": "GitLab",
  "boardToken": "gitlab",
  "location": "Remote, India",
  "locations": ["Remote", "India"],
  "isRemote": true,
  "isHybrid": false,
  "employmentType": null,
  "departments": ["Data Engineering"],
  "departmentDetails": [{"id": 4115239002, "name": "Data Engineering", "parentId": 4011044002}],
  "offices": ["India"],
  "officeDetails": [{"id": 4112140002, "name": "India", "location": null, "parentId": 4019590002}],
  "metadata": {"Quota Coverage Type": "n/a"},
  "salary": null,
  "requisitionId": "6173",
  "language": "en",
  "description": "GitLab is the intelligent orchestration platform for DevSecOps...",
  "dataCompliance": [{"type": "gdpr", "requiresConsent": false, "requiresProcessingConsent": false, "requiresRetentionConsent": false, "retentionPeriod": null}],
  "firstPublished": "2026-03-27T17:37:17-04:00",
  "updatedAt": "2026-03-27T17:37:17-04:00",
  "scrapedAt": "2026-03-31T21:01:00.887Z"
}
```
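Records in this shape aggregate easily once the dataset is exported. A small, self-contained example using abbreviated records with the field names from the table above:

```python
from collections import Counter

# Abbreviated dataset items; real records carry all 25 fields
jobs = [
    {"title": "Backend Engineer", "departments": ["Data Engineering"], "isRemote": True},
    {"title": "Product Designer", "departments": ["Design"], "isRemote": False},
    {"title": "Data Analyst", "departments": ["Data Engineering"], "isRemote": True},
]

# Count open roles per department (a job may list several departments)
per_department = Counter(dep for job in jobs for dep in job["departments"])
remote_share = sum(job["isRemote"] for job in jobs) / len(jobs)

print(per_department.most_common())  # [('Data Engineering', 2), ('Design', 1)]
print(f"{remote_share:.0%} remote")  # 67% remote
```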
## Why Choose the Greenhouse Jobs Scraper?
| Feature | Our Actor | Manual Collection |
|---|---|---|
| Setup time | 30 seconds | Hours |
| Fields collected | 25 per job | Varies |
| Filtering | Keyword + department + office | None |
| Output format | Structured JSON | Unstructured data |
| Works across companies | ✅ Any Greenhouse board | One at a time |
## How to Use

1. **Sign Up**: create a free account with $5 credit on Apify
2. **Configure**: enter the company's `boardToken` and optional filters
3. **Run It**: hit Start and get structured job data in seconds
That's it. No coding, no setup, no maintenance.
## Business Use Cases

- **Recruiters**: monitor competitor job openings to understand their hiring strategy and team growth
- **HR analytics teams**: track hiring trends across industries, departments, and regions over time
- **Job seekers**: set up alerts for specific roles at target companies by running periodic collections
- **Market researchers**: analyze which skills and roles are in demand across thousands of companies
- **Investors**: track headcount growth at portfolio companies through job-posting volume
## Integrate with any app
Greenhouse Jobs Scraper connects to any cloud service via Apify integrations:
- Make - Automate multi-step workflows
- Zapier - Connect with 5,000+ apps
- Slack - Get run notifications in your channels
- Airbyte - Pipe results into your warehouse
- GitHub - Trigger runs from commits and releases
- Google Drive - Export datasets straight to Sheets
You can also use webhooks to trigger downstream actions when a run finishes. Push fresh data into your product backend, or alert your team in Slack.
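As a sketch of the webhook flow, the handler below pulls the dataset ID out of an incoming payload. It assumes Apify's default webhook payload template (an event type plus the run object under `resource`); the template is customizable, so verify the shape against your own webhook settings:

```python
def handle_webhook(payload):
    """Extract the dataset ID from an Apify webhook payload.

    Assumption: the default payload template, where the finished
    run object arrives under "resource". Adjust if your webhook
    uses a custom payload template.
    """
    if payload.get("eventType") != "ACTOR.RUN.SUCCEEDED":
        return None  # ignore failed, aborted, or timed-out runs
    # defaultDatasetId points at the dataset holding the scraped jobs
    return payload["resource"]["defaultDatasetId"]

example = {
    "eventType": "ACTOR.RUN.SUCCEEDED",
    "resource": {"id": "run123", "defaultDatasetId": "ds456", "status": "SUCCEEDED"},
}
print(handle_webhook(example))  # ds456
```

With the dataset ID in hand, your backend can fetch the items via the Apify API and push them wherever they need to go.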
## Ask an AI assistant about this scraper
Open a ready-to-send prompt about this ParseForge actor in the AI of your choice:
- ChatGPT
- Claude
- Perplexity
- Copilot
## Frequently Asked Questions
**How many companies use Greenhouse?**
Over 10,000 companies use Greenhouse for hiring, including GitLab, Discord, Airbnb, Figma, Notion, and Stripe.
**How do I find a company's board token?**
Go to their careers page and look for a URL containing `greenhouse.io`. The token is the path segment after `/boards/` or after `greenhouse.io/`. You can also try the company name in lowercase.
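That lookup is easy to automate when checking many companies. A small helper, assuming the hosted URL patterns shown in this document (`boards.greenhouse.io/TOKEN`, `job-boards.greenhouse.io/TOKEN`, and the embed variant); custom career sites may still need a manual look at the page source:

```python
import re

def extract_board_token(url):
    """Guess the Greenhouse board token from a careers-page URL."""
    # Match an optional embed prefix, then capture the token segment
    m = re.search(r"greenhouse\.io/(?:embed/job_board\?for=)?([A-Za-z0-9_-]+)", url)
    return m.group(1) if m else None

print(extract_board_token("https://boards.greenhouse.io/gitlab"))                     # gitlab
print(extract_board_token("https://job-boards.greenhouse.io/discord/jobs/123"))       # discord
print(extract_board_token("https://boards.greenhouse.io/embed/job_board?for=figma"))  # figma
```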
**How much does it cost to run?**
Minimal. Collecting all jobs from a large company (200+ jobs) costs less than $0.01 in platform compute.
**Does it work with non-English job boards?**
Yes. The `language` field indicates the listing language, and the actor handles any language Greenhouse supports.
**What's in the `metadata` field?**
Company-specific custom fields. For example, GitLab includes "Quota Coverage Type" and Airbnb includes "Workplace Type". Every company configures these differently.
## Integrate Greenhouse Jobs Scraper with any app
- Apify API - access results programmatically
- Webhooks - trigger actions when a run finishes
- Google Sheets - export job listings directly to spreadsheets
- Zapier - connect to 5,000+ apps
- Make - build automated workflows
## More ParseForge Actors
- Glassdoor Scraper - Collect company reviews, salaries, and interview data
- Indeed Scraper - Collect job listings from Indeed
- LinkedIn Jobs Scraper - Collect job postings from LinkedIn
Browse our complete collection at ParseForge on Apify Store.
## Ready to Start?
Create a free account with $5 credit and start collecting Greenhouse job data in seconds.
## Need Help?

- Check the FAQ section above
- Apify Documentation
- Contact us
## Disclaimer
This Actor is an independent tool and is not affiliated with, endorsed by, or connected to Greenhouse Software, Inc. It accesses publicly available job board data.
## Why choose this Actor

| Capability | Details |
|---|---|
| **Built for the job** | Scoped specifically to this data source, so you skip the parser engineering entirely. |
| **Structured output** | Clean, typed fields ready for analysis, dashboards, or downstream pipelines. |
| **Fast** | Optimized request patterns return results in seconds, not minutes. |
| **Always fresh** | Every run pulls live data, so the dataset reflects the source as of run time. |
| **No infra to manage** | Apify handles proxies, retries, scaling, scheduling, and storage. |
| **Reliable** | Battle-tested across many runs and edge cases, with graceful error handling. |
| **No code required** | Configure in the UI, run from the CLI, schedule via cron, or call from any language with the Apify SDK. |
Production-grade structured data without the engineering overhead of building and maintaining your own scraper.
## How it compares to alternatives
| Approach | Cost | Coverage | Refresh | Filters | Setup |
|---|---|---|---|---|---|
| **Greenhouse Jobs Scraper (this Actor)** | $5 free credit, then pay-per-use | Full source coverage | Live per run | Source-native filters supported | ~2 min |
| Build your own scraper | Engineering hours | Full once built | Whenever you maintain it | Custom code | Days to weeks |
| Paid managed APIs | $$$ monthly | Vendor-defined | Live | Vendor-defined | Hours |
| Third-party data dumps | Varies | Subset, often stale | Periodic | None | Variable |
Pick this Actor when you want broad coverage, server-side filtering, and no pipeline maintenance.
## How to use

1. **Sign up.** Create a free account with $5 credit (takes 2 minutes).
2. **Open the Actor.** Go to the Greenhouse Jobs Scraper page on the Apify Store.
3. **Set input.** Configure the input fields in the form (or paste a JSON), then set `maxItems`.
4. **Run it.** Click Start and let the Actor collect your data.
5. **Download.** Grab your results in the Dataset tab as CSV, Excel, JSON, or XML.
Total time from signup to downloaded dataset: 3-5 minutes. No coding required.
## Beyond business use cases
Data like this powers more than commercial workflows. The same structured records support research, education, civic projects, and personal initiatives.
## Recommended Actors

- Google Search Scraper - Multi-engine SERP results with country and language targeting
- Nominatim OSM Scraper - Geocode addresses via OpenStreetMap
- Indexmundi Scraper - Global demographic and economic indicators
- RAG Web Browser - Crawl and extract clean text from any URL for AI retrieval
- Website Content Crawler - Crawl entire sites and export structured content
**Pro Tip:** browse the complete ParseForge collection for more reference-data scrapers.
**Need Help?** Open our contact form to request a new scraper, propose a custom data project, or report an issue.