Hiring.Cafe Scraper — 2.8M AI-Enriched Jobs from 46 ATS
Pricing
from $1.80 / 1,000 results
Scrape Hiring.Cafe (hiring.cafe) — AI-enriched job aggregator with 2.8M+ listings from 46 ATS platforms. Structured salary, company, and remote-work data with incremental tracking for recurring job monitoring.
Developer
Black Falcon Data
What does Hiring.Cafe Scraper do?
Hiring.Cafe Scraper extracts structured job data from hiring.cafe — including salary data, apply URLs, company metadata, full descriptions, and location data. It supports keyword search, location filters, and controllable result limits, so you can run the same query consistently over time. The actor also offers detail enrichment (full descriptions and company metadata) where the source provides them.
New to Apify? Sign up free and use the included $5 monthly platform credit to test this actor.
Key features
- ♻️ Incremental mode — recurring runs emit only NEW / UPDATED / REAPPEARED records — UNCHANGED and EXPIRED are opt-in. First run builds the baseline; subsequent runs emit and charge only for the diff. Pair with notifications for daily "new jobs" alerts to your hiring team. Saves 80–95% on daily monitoring.
- 🔔 Notifications — Telegram, Slack, Discord, WhatsApp Cloud API, generic webhook — out of the box. Pair with incremental + `notifyOnlyChanges` for daily "new Hiring.Cafe jobs" pings to your hiring channel.
- 📋 Detail enrichment — two-stage mode: list, then enrich each job with the full description + detail-page fields (apply counts, education, etc.). One toggle, no extra orchestration.
- 🔀 Source-board provenance — every listing carries the original source-board URL — full audit trail across hiring.cafe's aggregated sources, so you can de-duplicate against direct-source feeds you already run.
- 📦 Compact mode — AI-agent and MCP-friendly compact payloads with core fields only — pipe straight into your ATS, salary-benchmarking tool, or LLM context without parsing extras.
- ✂️ Description truncation — cap description length with `descriptionMaxLength` to control LLM prompt cost and dataset size — set 0 for full descriptions, or any char limit to trim.
- 📌 Change classification — each record carries a `changeType` of NEW / UPDATED / UNCHANGED / REAPPEARED / EXPIRED. Default emits NEW + UPDATED + REAPPEARED; opt into the others with `emitUnchanged` / `emitExpired`. Repost detection flags previously expired listings that come back.
- 📤 Export anywhere — Download the dataset as JSON, CSV, or Excel from the Apify Console, or stream live via the Apify API and integrations (Make, Zapier, Google Sheets, n8n, …).
What data can you extract from hiring.cafe?
Each result includes core listing fields (jobId, title, location, workplaceType, commitment, seniorityLevel, jobCategory, salaryMin, and more), detail fields when enrichment is enabled (roleType, roleActivities, and description), apply information (applyUrl), and company metadata (company, positionEmployerType, companyName, and companyWebsite). In standard mode, all fields are always present — unavailable data points are returned as null, never omitted. In compact mode, only core fields are returned.
Enable detail enrichment in the input to get richer fields such as full descriptions and company metadata where the source provides them.
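Because unavailable fields come back as null rather than being omitted, downstream code can rely on every key being present but must still handle None values. A minimal post-processing sketch (the sample records below are illustrative, not real actor output):

```python
def salary_midpoint(record):
    """Midpoint of a job's reported salary range, or None if either
    bound is unavailable (the actor returns null, never omits keys)."""
    lo, hi = record["salaryMin"], record["salaryMax"]
    if lo is None or hi is None:
        return None
    return (lo + hi) / 2

# Illustrative records mimicking the actor's output shape.
jobs = [
    {"title": "Software Engineer", "salaryMin": 200012.8, "salaryMax": 225014.4},
    {"title": "Analyst", "salaryMin": None, "salaryMax": None},
]

midpoints = [salary_midpoint(j) for j in jobs]
```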
Input
The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.
Key parameters:
- `query` — Job search keywords. Leave blank to browse all jobs.
- `startUrls` — Optional Hiring.Cafe search result URLs. Paste URLs such as https://hiring.cafe/jobs/software-engineer/locations/united-states to run the same search/location directly.
- `country` — Country market to search. Hiring.Cafe currently exposes a verified country-level search market for the United States; more countries will be added here after their location slugs are verified. (default: `"US"`)
- `location` — City, state, or region.
- `workplaceTypes` — Filter results by workplace type.
- `seniorityLevels` — Filter results by seniority level.
- `commitmentTypes` — Filter results by employment commitment.
- `onlyTransparentSalaries` — Only include jobs where Hiring.Cafe reports transparent compensation. (default: `false`)
- `minSalary` — Only include jobs whose reported salary range reaches at least this amount.
- `maxSalary` — Only include jobs whose reported salary range starts at or below this amount.
- `postedWithinDays` — Only include jobs with an estimated posted date within the last N days. 0 = no date filter. (default: `0`)
- `maxResults` — Maximum total results (0 = unlimited). Memory scales automatically: 256 MB up to 1000, 512 MB up to 2000, 1024 MB above. (default: `25`)
- ...and 20 more parameters
Input examples
Basic search — Keyword-driven search with a result cap.
→ Full payload per result — all standard fields populated where the source provides them.
```json
{
  "query": "software engineer",
  "maxResults": 50
}
```
Incremental tracking — Only emit jobs that changed since the previous run with this stateKey.
→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set emitUnchanged: true to include unchanged records as well.
```json
{
  "query": "software engineer",
  "maxResults": 200,
  "incrementalMode": true,
  "stateKey": "software-engineer-tracker"
}
```
Compact output for AI agents — Return only core fields for AI-agent and MCP workflows.
→ Small payload with the most important fields — ideal for piping into LLMs without token overhead.
```json
{
  "query": "software engineer",
  "maxResults": 50,
  "compact": true
}
```
Output
Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.
Example job record
```json
{
  "jobId": "992a3ac51a843a7a66d4d8f6e4cb8da26765de425ac135779a382eeb489e5541",
  "title": "Software Engineer",
  "company": "EverWatch",
  "location": "Annapolis Junction or Aurora",
  "workplaceType": "Onsite",
  "commitment": "Full Time",
  "seniorityLevel": "Senior Level",
  "roleType": "Individual Contributor",
  "roleActivities": ["develop software", "design system", "coordinate installation"],
  "jobCategory": "Software Development",
  "description": "Job Title:\nSoftware Engineer\n\nOverview:\n<p style=\"margin: 0px;\"><span style=\"font-family: arial, helvetica, sans-serif; font-size: 10pt;\">EverWatch is a government solutions company providing advanced...",
  "salaryMin": 200012.8,
  "salaryMax": 225014.4,
  "salaryCurrency": "USD",
  "salaryFrequency": "Hourly",
  "isCompensationTransparent": true,
  "requirementsSummary": "Senior software engineer with extensive experience in full stack development and system integration.",
  "technicalTools": ["SDLC", "Software Design", "Java"],
  "minYearsExperience": 14,
  "minManagementYears": null,
  "degreeRequirement": "Bachelors",
  "degreeFieldsOfStudy": ["Computer Science", "Software Engineering"],
  "licensesOrCertifications": null,
  "languageRequirements": ["English"],
  "securityClearance": "Top Secret/SCI",
  "driverLicenseRequired": false,
  "retirement401kMatching": false,
  "retirementPlan": false,
  "tuitionReimbursement": false,
  "generousParentalLeave": false,
  "generousPaidTimeOff": false,
  "fourDayWorkWeek": false,
  "visaSponsorship": false,
  "relocationAssistance": false,
  "fairChance": false,
  "militaryVeterans": false,
  "physicalLaborIntensity": "Low",
  "physicalPosition": "Sitting",
  "workplaceEnvironment": "Office",
  "computerUsage": "High",
  "cognitiveDemand": "High",
  "oralCommunicationLevel": "Medium",
  "overtimeRequired": false,
  "onCallRequirement": null,
  "airTravelRequirement": null,
  "landTravelRequirement": null,
  "morningShiftWork": null,
  "eveningShiftWork": null,
  "overnightWork": null,
  "weekendAvailabilityRequired": false,
  "holidayAvailabilityRequired": false,
  "positionEmployerType": "External Position",
  "workplaceCountries": ["US"],
  "workplaceContinents": ["North America"],
  "workplaceStates": ["Maryland, US", "Colorado, US"],
  "workplaceCities": ["Annapolis Junction, Maryland, US", "Aurora, Colorado, US"],
  "workplaceCounties": ["Anne Arundel County, Maryland, US", "Arapahoe County, Colorado, US"],
  "isWorkplaceWorldwideOk": false,
  "latitude": 38.9709,
  "longitude": -76.5177,
  "companyName": "Everwatch",
  "companyWebsite": "everwatchsolutions.com",
  "companySector": "Information Technology",
  "companyIndustries": ["Defense & Space", "Information Technology"],
  "companyActivities": ["government solutions", "defense"],
  "companyTagline": "Government solutions provider delivering defense and intelligence services.",
  "companyEmployeeCount": 500,
  "companyHqCountry": "US",
  "companyYearFounded": 2018,
  "companyOrganizationType": "Private",
  "companyParent": "Booz Allen Hamilton",
  "companySubsidiaries": ["Ian, Evan & Alexander", "BrainTrust", "Northwood Global Solutions"],
  "companyStockExchange": null,
  "companyStockSymbol": null,
  "companyFundingType": "Private Equity",
  "companyFundingYear": 2022,
  "companyFundingAmount": 440000000,
  "companyFundingInvestors": ["Booz Allen Hamilton"],
  "applyUrl": "https://everwatch-everwatchsolutions.icims.com/jobs/3435/job?utm_source=hiringcafe_integration&iis=Job%20Board&iisn=HiringCafe",
  "portalUrl": "https://hiring.cafe/viewjob/pgbhzvswan64s2pl",
  "sourceAts": "icims",
  "postedDate": "2026-02-13T17:20:00.000Z",
  "scrapedAt": "2026-04-05T13:50:10.906Z",
  "source": "hiring.cafe",
  "changeType": null
}
```
Incremental fields
When `incrementalMode: true`, each record also carries:
- `changeType` — one of `NEW`, `UPDATED`, `UNCHANGED`, `REAPPEARED`, `EXPIRED`. Default output covers NEW / UPDATED / REAPPEARED; set `emitUnchanged: true` or `emitExpired: true` to opt into the others.
- `firstSeenAt`, `lastSeenAt` — ISO-8601 timestamps tracking the listing across runs.
- `isRepost`, `repostOfId`, `repostDetectedAt` — populated when a new listing matches the tracked content of a previously expired one. Set `skipReposts: true` to drop detected reposts from the output.
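In an alerting pipeline you usually keep only records whose `changeType` marks a change, and optionally drop reposts, mirroring client-side what `skipReposts: true` does in the actor. A minimal sketch (sample records are illustrative):

```python
ALERT_TYPES = {"NEW", "UPDATED", "REAPPEARED"}

def jobs_to_alert(records, skip_reposts=False):
    """Keep only records worth alerting on; optionally drop reposts,
    similar in spirit to the actor's skipReposts option."""
    out = []
    for r in records:
        if r.get("changeType") not in ALERT_TYPES:
            continue
        if skip_reposts and r.get("isRepost"):
            continue
        out.append(r)
    return out

# Illustrative records in the actor's incremental output shape.
records = [
    {"jobId": "a", "changeType": "NEW", "isRepost": False},
    {"jobId": "b", "changeType": "UNCHANGED", "isRepost": False},
    {"jobId": "c", "changeType": "NEW", "isRepost": True},
]
```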
How to scrape hiring.cafe
- Go to Hiring.Cafe Scraper in Apify Console.
- Enter a search keyword and optional location filter.
- Set `maxResults` to control how many results you need.
- Enable `includeDetails` if you need full descriptions and company data.
- Click Start and wait for the run to finish.
- Export the dataset as JSON, CSV, or Excel.
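The same steps can be scripted with the apify-client Python package. A sketch under stated assumptions: the actor ID below is a placeholder (copy the real one from the actor's API tab in Apify Console), and an `APIFY_TOKEN` environment variable is assumed.

```python
def build_input(query, max_results=50, include_details=False):
    """Assemble a run input using the parameters described above."""
    return {
        "query": query,
        "maxResults": max_results,
        "includeDetails": include_details,
    }

if __name__ == "__main__":
    # Requires `pip install apify-client` and an APIFY_TOKEN env var.
    import os
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    # Placeholder actor ID -- replace with the real one from Apify Console.
    run = client.actor("black-falcon-data/hiring-cafe-scraper").call(
        run_input=build_input("software engineer", max_results=50)
    )
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["title"], item.get("applyUrl"))
```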
Use cases
- Extract job data from hiring.cafe for market research and competitive analysis.
- Track salary trends across regions and categories over time.
- Monitor new and changed listings on scheduled runs without processing the full dataset every time.
- Auto-apply or feed apply URLs into your ATS / hiring pipeline.
- Research company hiring patterns, employer profiles, and industry distribution.
- Use structured location data for regional analysis, mapping, and geo-targeting.
- Feed structured data into AI agents, MCP tools, and automated pipelines using compact mode.
- Export clean, structured data to dashboards, spreadsheets, or data warehouses.
How much does it cost to scrape hiring.cafe?
Hiring.Cafe Scraper uses pay-per-event pricing. You pay a small fee when the run starts and then for each result that is actually produced.
- Run start: $0.005 per run
- Per result: $0.0018 per job record
Example costs:
- 10 results: $0.02
- 100 results: $0.18
- 500 results: $0.91
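These figures follow directly from the two events; a quick sanity-check helper with the prices hard-coded from the list above (the listing rounds the displayed totals to whole cents):

```python
RUN_START_USD = 0.005    # charged once per run
PER_RESULT_USD = 0.0018  # charged per emitted job record

def run_cost(results):
    """Cost in USD of a single run producing `results` job records."""
    return RUN_START_USD + PER_RESULT_USD * results
```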
Example: recurring monitoring savings
These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.
Example setup: 100 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.
| Churn rate | Full re-scrape run cost | Incremental run cost | Savings vs full re-scrape | Monthly cost after baseline |
|---|---|---|---|---|
| 5% — stable niche query | $0.18 | $0.01 | $0.17 (92%) | $0.42 |
| 15% — moderate broad query | $0.18 | $0.03 | $0.15 (83%) | $0.96 |
| 30% — high-volume aggregator | $0.18 | $0.06 | $0.13 (68%) | $1.77 |
Full re-scrape monthly cost at daily polling: $5.55. First month with incremental costs $0.59 / $1.11 / $1.90 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.
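The table's arithmetic can be reproduced from the event prices: an incremental run charges the run start plus only the churned share of results, so the steady-state monthly cost is that per-run cost times the number of runs.

```python
RUN_START_USD = 0.005    # charged once per run
PER_RESULT_USD = 0.0018  # charged per emitted job record

def incremental_run_cost(results, churn):
    """One incremental run: run start + only the changed records."""
    return RUN_START_USD + PER_RESULT_USD * results * churn

def monthly_cost(results, churn, runs_per_month=30):
    """Steady-state monthly cost once the baseline run is done."""
    return runs_per_month * incremental_run_cost(results, churn)
```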
FAQ
How many results can I get from hiring.cafe?
The number of results depends on the search query and available listings on hiring.cafe. Use the maxResults parameter to control how many results are returned per run.
Does Hiring.Cafe Scraper support recurring monitoring?
Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.
Can I integrate Hiring.Cafe Scraper with other apps?
Yes. Hiring.Cafe Scraper works with Apify's integrations to connect with tools like Zapier, Make, Google Sheets, Slack, and more. You can also use webhooks to trigger actions when a run completes.
Can I use Hiring.Cafe Scraper with the Apify API?
Yes. You can start runs, manage inputs, and retrieve results programmatically through the Apify API. Client libraries are available for JavaScript, Python, and other languages.
Can I use Hiring.Cafe Scraper through an MCP Server?
Yes. Apify provides an MCP Server that lets AI assistants and agents call this actor directly. Use compact mode and descriptionMaxLength to keep payloads manageable for LLM context windows.
Is it legal to scrape hiring.cafe?
This actor extracts publicly available data from hiring.cafe. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.
Your feedback
If you have questions, need a feature, or found a bug, please open an issue on the actor's page in Apify Console. Your feedback helps us improve.
You might also like
- Actiris Brussels Job Scraper — Scrape all active job listings from actiris.brussels — the official Brussels public employment service.
- Adzuna Job Scraper — Global Jobs with Salary & Coordinates — Scrape adzuna.com job listings across 19 country markets with structured salary data.
- APEC.fr Scraper - French Executive Jobs — Scrape apec.fr - French executive job listings with salary ranges, company, location, and skills.
- Arbeitsagentur Jobs Feed — German Federal Employment Agency — Extract job listings from arbeitsagentur.de — Germany's official public employment portal with 1M+ listings.
- Arbeitsagentur Scraper - German Jobs — Scrape arbeitsagentur.de - Germany's official employment portal with 1M+ listings.
- Arbetsformedlingen Job Scraper — Scrape arbetsformedlingen.se (Platsbanken) — Sweden's official employment portal.
- AutoScout24 Scraper — Scrape autoscout24.com - Europe's largest used car marketplace with 770K+ listings.
- Bayt.com Scraper - Jobs from the Middle East — Scrape bayt.com - the leading Middle East job board, with salary data and experience requirements.
Getting started with Apify
New to Apify? Create a free account with $5 credit — no credit card required.
- Sign up — $5 platform credit included
- Open this actor and configure your input
- Click Start — export results as JSON, CSV, or Excel
Need more later? See Apify pricing.