ATS Job Scraper

Extract job postings from Greenhouse, Lever, Ashby, Workday, and Rippling with automatic ATS detection and standardized JSON output.

Pricing

from $3.00 / 1,000 jobs extracted

Rating

0.0 (0 reviews)

Developer

Enos Melo (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 4 days ago

Extract job postings from company career pages across 5 major ATS platforms — Greenhouse, Lever, Ashby, Workday, and Rippling — with automatic ATS detection and standardized JSON output. Simply provide a list of company domains and get back structured data ready for Clay, Apollo, Instantly, or any CRM.

What does ATS Job Scraper do?

ATS Job Scraper is a powerful data extraction tool that scrapes job listings from multiple Applicant Tracking Systems (ATS) and normalizes them into a single, consistent JSON format. It automatically detects which ATS a company uses and fetches all available job data.

Key features:

  • Auto-detects ATS — No need to know which system a company uses. The actor figures it out automatically.
  • 5 ATS platforms supported — Greenhouse, Lever, Ashby, Workday, and Rippling
  • Standardized output — Same schema regardless of ATS source
  • Built for B2B workflows — Clean JSON ready for enrichment pipelines
  • No browser overhead — Uses direct API calls for speed and reliability

Why use ATS Job Scraper?

  • Save hours of research — Manually finding and scraping each company's career page takes time. This actor does it in seconds.
  • Consistent data — Every job comes with the same fields: title, department, location, salary, description, and more.
  • Scale your outreach — Process hundreds of companies in a single run for recruitment, sales intelligence, or market research.
  • Works with your stack — Output integrates seamlessly with Clay, Apollo, Instantly, Zapier, Make, or any tool that consumes JSON.

How to use

  1. Go to the Input tab
  2. Add companies to the companies array with their domain (e.g., { "domain": "stripe.com" })
  3. Optionally add careers_url if you know the career page URL for more accurate detection
  4. Configure filters (keywords, location, department) if needed
  5. Click Start and download your dataset
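
The same steps can be run programmatically via Apify's run-sync HTTP API. A minimal sketch using only the standard library — the actor ID and token below are placeholders you'd substitute with your own values from the store page and your Apify account:

```python
import json
import urllib.request

API_TOKEN = "<YOUR_APIFY_TOKEN>"  # placeholder: your Apify API token
ACTOR_ID = "<actor-id>"           # placeholder: copy from the store page

run_input = {
    "companies": [{"name": "Stripe", "domain": "stripe.com"}],
    "filters": {"keywords": "engineer", "location": "remote"},
    "max_jobs_per_company": 50,
}

# Apify's run-sync endpoint starts the actor and returns dataset items
url = (
    f"https://api.apify.com/v2/acts/{ACTOR_ID}"
    f"/run-sync-get-dataset-items?token={API_TOKEN}"
)

if API_TOKEN != "<YOUR_APIFY_TOKEN>":  # only call once configured
    req = urllib.request.Request(
        url,
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        jobs = json.load(resp)
        print(f"Scraped {len(jobs)} jobs")
```

For long runs you'd normally use the async run endpoint (or the `apify-client` package) instead, since run-sync requests can time out.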

Input

| Field | Required | Description |
|---|---|---|
| companies | Yes | Array of { name, domain, careers_url } objects |
| filters.keywords | No | Comma-separated keywords to filter jobs |
| filters.location | No | Location filter (e.g., "remote", "San Francisco") |
| filters.department | No | Department name to filter |
| filters.posted_within_days | No | Only jobs posted within N days (0 = no limit) |
| ats_override | No | Force a specific ATS (greenhouse, lever, ashby, workday, rippling) |
| include_description | No | Include full job descriptions (default: true) |
| include_salary | No | Include salary data where available (default: false) |
| max_jobs_per_company | No | Max jobs per company (default: 100) |

Example Input

{
  "companies": [
    { "name": "Stripe", "domain": "stripe.com" },
    { "name": "Linear", "domain": "linear.app", "careers_url": "https://jobs.ashbyhq.com/linear" }
  ],
  "filters": {
    "keywords": "engineer,product",
    "location": "remote"
  },
  "max_jobs_per_company": 50
}

Output

Each job record includes the following fields:

| Field | Type | Description |
|---|---|---|
| job_id | string | Unique job ID from the ATS |
| company_name | string | Company name |
| company_domain | string | Company domain |
| ats_platform | string | Source ATS (greenhouse, lever, ashby, workday, rippling) |
| job_title | string | Job title |
| department | string/null | Department |
| team | string/null | Team |
| location | string | Job location |
| remote | boolean/string | true, false, or "hybrid" |
| employment_type | string/null | full_time, part_time, contract, intern |
| seniority | string/null | junior, mid, senior, staff, director, vp |
| job_url | string | URL to the job posting |
| apply_url | string | URL to apply |
| description_html | string | Full job description (HTML) |
| description_text | string | Full job description (plain text) |
| salary_min | number/null | Minimum salary |
| salary_max | number/null | Maximum salary |
| salary_currency | string/null | Currency code (USD, EUR, etc.) |
| posted_at | string/null | Posted date (ISO 8601) |
| updated_at | string/null | Last updated (ISO 8601) |
| scraped_at | string | Timestamp when scraped |
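
To illustrate how a per-ATS payload maps onto this schema, here's a sketch of normalizing one Greenhouse board-API job (that API returns `id`, `title`, `location.name`, `absolute_url`, and `updated_at` per job). The sample payload is illustrative, not real data, and the actor's actual mapping may differ:

```python
from datetime import datetime, timezone


def normalize_greenhouse(raw: dict, company: dict) -> dict:
    """Map one Greenhouse board-API job onto the standardized schema."""
    return {
        "job_id": str(raw["id"]),
        "company_name": company["name"],
        "company_domain": company["domain"],
        "ats_platform": "greenhouse",
        "job_title": raw["title"],
        "location": raw.get("location", {}).get("name", ""),
        "job_url": raw["absolute_url"],
        "apply_url": raw["absolute_url"],
        "posted_at": None,  # Greenhouse exposes updated_at only
        "updated_at": raw.get("updated_at"),
        "scraped_at": datetime.now(timezone.utc).isoformat(),
    }


# Illustrative payload shaped like a Greenhouse response (not real data)
raw_job = {
    "id": 123,
    "title": "Backend Engineer",
    "location": {"name": "Remote"},
    "absolute_url": "https://boards.greenhouse.io/example/jobs/123",
    "updated_at": "2025-01-01T00:00:00Z",
}
record = normalize_greenhouse(raw_job, {"name": "Example", "domain": "example.com"})
```

A connector per ATS producing this same dict shape is what makes the output schema identical regardless of source.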

You can download the dataset in various formats: JSON, CSV, Excel, XML, or RSS.

Supported ATS Platforms

| ATS | API Type | Salary Data | Posted Date |
|---|---|---|---|
| Greenhouse | REST API | Yes (per-job, with include_salary) | updated_at only |
| Lever | REST API | Yes (optional) | Not available |
| Ashby | REST API | Yes (with include_description) | Yes |
| Workday | CXS API | Not available | Yes |
| Rippling | REST API | Not available | Yes |

Pricing

Pay-per-event: $0.003 per job returned

This actor uses the Apify pay-per-event pricing model. You're charged for each job successfully scraped and added to the dataset. No jobs = no charge.

Example: Scraping 50 companies with ~500 total jobs costs approximately $1.50 in actor fees.
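
The per-event math is simple enough to check yourself, using the $0.003-per-job rate above:

```python
RATE_PER_JOB = 0.003  # $ per job returned (pay-per-event)


def estimated_cost(total_jobs: int) -> float:
    """Estimated actor fee in dollars for a run returning total_jobs."""
    return round(total_jobs * RATE_PER_JOB, 2)


cost = estimated_cost(500)  # ~500 jobs across 50 companies -> 1.5
```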

Tips

  • Provide careers_url when possible for more accurate ATS detection
  • Disable include_description for faster runs when you only need metadata
  • Use ats_override if you know the ATS to skip detection entirely
  • Use filters.keywords to reduce the number of results and save on costs
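
The keywords filter is comma-separated, which suggests a case-insensitive any-match against job fields. A sketch of that presumed behavior — this is an assumption about the filter semantics, not the actor's verified implementation:

```python
def matches_filters(job: dict, keywords: str = "", location: str = "") -> bool:
    """Keep a job if any comma-separated keyword appears in its title
    and the location filter (when set) appears in its location field."""
    title = job.get("job_title", "").lower()
    loc = job.get("location", "").lower()
    if keywords:
        wanted = [k.strip().lower() for k in keywords.split(",") if k.strip()]
        if not any(k in title for k in wanted):
            return False
    if location and location.lower() not in loc:
        return False
    return True


job = {"job_title": "Senior Product Engineer", "location": "Remote - US"}
matches_filters(job, keywords="engineer,product", location="remote")  # True
```

Because filtering happens before jobs are written to the dataset, narrowing keywords directly reduces the per-job charges.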

FAQ

Q: Do I need a proxy? A: No. Greenhouse, Lever, and Ashby expose public APIs that don't require proxies. Workday and Rippling may benefit from proxies for large batches, but the actor works without them.

Q: What if ATS detection fails? A: The actor will attempt to fetch using all available connectors as fallback. You can also use ats_override to force a specific ATS.

Q: Can I scrape unlimited jobs? A: Yes, but use max_jobs_per_company to control costs and run time. The default is 100 jobs per company.

Q: Why is Lever returning 0 jobs? A: Some companies disable public job APIs. Use careers_url or try another company as a test.

Q: Is this legal? A: This actor uses publicly available APIs provided by each ATS for job listings. It respects the terms of service of each platform. For Workday and other enterprise systems, ensure you have permission before scraping.

Need help?