ATS Job Scraper
Pricing
from $3.00 / 1,000 jobs extracted
Extract job postings from Greenhouse, Lever, Ashby, Workday, and Rippling with automatic ATS detection and standardized JSON output.
Developer
Enos Melo
Extract job postings from company career pages across 5 major ATS platforms — Greenhouse, Lever, Ashby, Workday, and Rippling — with automatic ATS detection and standardized JSON output. Simply provide a list of company domains and get back structured data ready for Clay, Apollo, Instantly, or any CRM.
What does ATS Job Scraper do?
ATS Job Scraper is a powerful data extraction tool that scrapes job listings from multiple Applicant Tracking Systems (ATS) and normalizes them into a single, consistent JSON format. It automatically detects which ATS a company uses and fetches all available job data.
Key features:
- Auto-detects ATS — No need to know which system a company uses. The actor figures it out automatically.
- 5 ATS platforms supported — Greenhouse, Lever, Ashby, Workday, and Rippling
- Standardized output — Same schema regardless of ATS source
- Built for B2B workflows — Clean JSON ready for enrichment pipelines
- No browser overhead — Uses direct API calls for speed and reliability
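The auto-detection feature above can be pictured as probing each platform's public job-board endpoint for a company until one responds. The endpoint patterns below are the publicly documented ones for Greenhouse, Lever, and Ashby; how the actor actually derives a board slug from a domain is an assumption for illustration.

```python
# Sketch of ATS auto-detection: build candidate public job-board URLs
# for a company and probe each until one answers. The slug derivation
# (take the part of the domain before the first dot) is a simplifying
# assumption, not the actor's real logic.
ATS_ENDPOINTS = {
    "greenhouse": "https://boards-api.greenhouse.io/v1/boards/{slug}/jobs",
    "lever": "https://api.lever.co/v0/postings/{slug}?mode=json",
    "ashby": "https://api.ashbyhq.com/posting-api/job-board/{slug}",
}

def candidate_endpoints(domain: str) -> dict[str, str]:
    """Derive a company slug from the domain and fill in each ATS URL."""
    slug = domain.split(".")[0]  # e.g. "stripe.com" -> "stripe"
    return {ats: url.format(slug=slug) for ats, url in ATS_ENDPOINTS.items()}

# In a real run you would GET each URL and keep the first 200 response:
# for ats, url in candidate_endpoints("stripe.com").items(): ...
```

Because these are plain JSON endpoints, no headless browser is needed, which is why the actor can rely on direct API calls.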
Why use ATS Job Scraper?
- Save hours of research — Manually finding and scraping each company's career page takes time. This actor does it in seconds.
- Consistent data — Every job comes with the same fields: title, department, location, salary, description, and more.
- Scale your outreach — Process hundreds of companies in a single run for recruitment, sales intelligence, or market research.
- Works with your stack — Output integrates seamlessly with Clay, Apollo, Instantly, Zapier, Make, or any tool that consumes JSON.
How to use
- Go to the Input tab
- Add companies to the `companies` array with their domain (e.g., `{ "domain": "stripe.com" }`)
- Optionally add `careers_url` if you know the career page URL for more accurate detection
- Configure filters (keywords, location, department) if needed
- Click Start and download your dataset
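The Console steps above can also be driven programmatically. The helper below assembles the JSON input described in the Input section; pairing it with the Apify API (for example via the `apify-client` package) is sketched in the trailing comment, where the actor ID is a placeholder you would take from this store page.

```python
# Build the run input for the actor from a plain list of company domains.
# Field names match the Input table; defaults here mirror the documented
# default of 100 jobs per company.
def build_run_input(domains: list[str], keywords: str = "",
                    max_jobs: int = 100) -> dict:
    """Assemble the actor's JSON input from a list of domains."""
    run_input: dict = {
        "companies": [{"domain": d} for d in domains],
        "max_jobs_per_company": max_jobs,
    }
    if keywords:
        run_input["filters"] = {"keywords": keywords}
    return run_input

# With apify-client (actor ID is a placeholder, not the real ID):
#   client = ApifyClient(token)
#   run = client.actor("<actor-id>").call(
#       run_input=build_run_input(["stripe.com"], keywords="engineer"))
```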
Input
| Field | Required | Description |
|---|---|---|
| `companies` | Yes | Array of `{ name, domain, careers_url }` objects |
| `filters.keywords` | No | Comma-separated keywords to filter jobs |
| `filters.location` | No | Location filter (e.g., "remote", "San Francisco") |
| `filters.department` | No | Department name to filter by |
| `filters.posted_within_days` | No | Only jobs posted within N days (0 = no limit) |
| `ats_override` | No | Force a specific ATS (greenhouse, lever, ashby, workday, rippling) |
| `include_description` | No | Include full job descriptions (default: true) |
| `include_salary` | No | Include salary data where available (default: false) |
| `max_jobs_per_company` | No | Maximum jobs per company (default: 100) |
Example Input
```json
{
  "companies": [
    { "name": "Stripe", "domain": "stripe.com" },
    { "name": "Linear", "domain": "linear.app", "careers_url": "https://jobs.ashbyhq.com/linear" }
  ],
  "filters": {
    "keywords": "engineer,product",
    "location": "remote"
  },
  "max_jobs_per_company": 50
}
```
Output
Each job record includes the following fields:
| Field | Type | Description |
|---|---|---|
| `job_id` | string | Unique job ID from the ATS |
| `company_name` | string | Company name |
| `company_domain` | string | Company domain |
| `ats_platform` | string | Source ATS (greenhouse, lever, ashby, workday, rippling) |
| `job_title` | string | Job title |
| `department` | string/null | Department |
| `team` | string/null | Team |
| `location` | string | Job location |
| `remote` | boolean/string | true, false, or "hybrid" |
| `employment_type` | string/null | full_time, part_time, contract, intern |
| `seniority` | string/null | junior, mid, senior, staff, director, vp |
| `job_url` | string | URL to the job posting |
| `apply_url` | string | URL to apply |
| `description_html` | string | Full job description (HTML) |
| `description_text` | string | Full job description (plain text) |
| `salary_min` | number/null | Minimum salary |
| `salary_max` | number/null | Maximum salary |
| `salary_currency` | string/null | Currency code (USD, EUR, etc.) |
| `posted_at` | string/null | Posted date (ISO 8601) |
| `updated_at` | string/null | Last updated (ISO 8601) |
| `scraped_at` | string | Timestamp when the job was scraped |
You can download the dataset in various formats: JSON, CSV, Excel, XML, or RSS.
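Because every record follows the schema above regardless of which ATS it came from, downstream filtering is a plain dictionary operation. The records below are illustrative samples, not real scraped data; the filter function is a hypothetical consumer, not part of the actor.

```python
# Sample records shaped like the output schema above (illustrative only).
jobs = [
    {"job_title": "Senior Backend Engineer", "remote": True,
     "seniority": "senior", "ats_platform": "greenhouse"},
    {"job_title": "Office Manager", "remote": False,
     "seniority": "mid", "ats_platform": "lever"},
]

def remote_senior(records: list[dict]) -> list[str]:
    """Titles of fully remote jobs at senior level or above.

    `remote` may be True, False, or "hybrid", so we match on `is True`.
    """
    senior_plus = {"senior", "staff", "director", "vp"}
    return [r["job_title"] for r in records
            if r["remote"] is True and r.get("seniority") in senior_plus]

print(remote_senior(jobs))  # -> ['Senior Backend Engineer']
```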
Supported ATS Platforms
| ATS | API Type | Salary Data | Posted Date |
|---|---|---|---|
| Greenhouse | REST API | Yes (per-job, with include_salary) | updated_at only |
| Lever | REST API | Yes (optional) | Not available |
| Ashby | REST API | Yes (with include_description) | Yes |
| Workday | CXS API | Not available | Yes |
| Rippling | REST API | Not available | Yes |
Pricing
Pay-per-event: $0.003 per job returned
This actor uses the Apify pay-per-event pricing model. You're charged for each job successfully scraped and added to the dataset. No jobs = no charge.
Example: Scraping 50 companies with ~500 total jobs costs approximately $1.50 in actor fees.
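The pricing example works out as a straight multiplication, which a small helper makes explicit:

```python
# Pay-per-event cost estimate: $0.003 per job returned.
# 500 jobs * $0.003 = $1.50, matching the example above.
PRICE_PER_JOB = 0.003

def estimate_cost(total_jobs: int) -> float:
    """Actor fee in dollars for a given number of returned jobs."""
    return round(total_jobs * PRICE_PER_JOB, 2)

print(estimate_cost(500))  # 1.5
```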
Tips
- Provide `careers_url` when possible for more accurate ATS detection
- Disable `include_description` for faster runs when you only need metadata
- Use `ats_override` if you know the ATS to skip detection entirely
- Use `filters.keywords` to reduce the number of results and save on costs
FAQ
Q: Do I need a proxy?
A: No. Greenhouse, Lever, and Ashby expose public APIs that don't require proxies. Workday and Rippling may benefit from proxies for large batches, but the actor works without them.
Q: What if ATS detection fails?
A: The actor will attempt to fetch using all available connectors as fallback. You can also use ats_override to force a specific ATS.
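The fallback described in that answer can be sketched as trying each connector in turn and keeping the first non-empty result. The connector functions below are stand-ins; the real actor's connector interface is an assumption for illustration.

```python
# Fallback sketch: each connector either returns a list of jobs or raises
# (e.g. on a 404), and the first non-empty result decides the ATS.
def detect_with_fallback(domain: str, connectors: dict) -> tuple[str, list]:
    for ats, fetch in connectors.items():
        try:
            jobs = fetch(domain)
        except Exception:
            continue  # connector failed; try the next ATS
        if jobs:
            return ats, jobs
    return "unknown", []

# Stand-in connectors for illustration:
def greenhouse(domain):
    raise RuntimeError("board not found")  # simulates a 404

def lever(domain):
    return [{"job_title": "Engineer"}]

print(detect_with_fallback("acme.com", {"greenhouse": greenhouse, "lever": lever}))
# -> ('lever', [{'job_title': 'Engineer'}])
```

Setting `ats_override` is equivalent to passing a single-entry connector map, which is why it skips detection entirely.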
Q: Can I scrape unlimited jobs?
A: Yes, but use max_jobs_per_company to control costs and run time. The default is 100 jobs per company.
Q: Why is Lever returning 0 jobs?
A: Some companies disable public job APIs. Use careers_url or try another company as a test.
Q: Is this legal?
A: This actor uses publicly available APIs provided by each ATS for job listings. It respects the terms of service of each platform. For Workday and other enterprise systems, ensure you have permission before scraping.
Need help?
- Report issues on GitHub
- Check the Apify documentation
- Contact support through the Apify Console