Ashby Jobs Scraper - All Companies, All Jobs

Scrape 16,000+ jobs from every company using Ashby as their ATS. Auto-discovers companies — no manual URL input needed. Filter by location, department, remote, and keywords.

Pricing: from $1.80 / 1,000 results
Developer: Doug Silkstone (Maintained by Community)

Ashby Jobs Scraper — Every Job From 2,800+ Companies

Apify Actor · MIT License · TypeScript · Node.js

Scrape 16,000+ jobs from every company using Ashby as their ATS. OpenAI, Notion, Ramp, Deel, Linear, Cursor, Snowflake, Vanta, Deliveroo, and 2,800+ more.

The only Ashby scraper that auto-discovers companies. No manual URL input needed. One click, every job.

What does Ashby Jobs Scraper do?

This Actor discovers every company hosting a job board on jobs.ashbyhq.com, scrapes their public posting API, and returns structured data for every open role.

Other Ashby scrapers require you to already know the company URL. This one finds them for you — auto-discovering 1,350+ company slugs and validating each against Ashby's API.

What you get per job:

| Field | Example |
| --- | --- |
| company | Linear |
| title | Senior Software Engineer |
| department | Engineering |
| team | Backend |
| location | Remote - Europe |
| isRemote | true |
| workplaceType | Remote |
| employmentType | FullTime |
| compensationTierSummary | EUR 120,000 - 160,000 |
| publishedAt | 2026-03-01T00:00:00.000Z |
| jobUrl | https://jobs.ashbyhq.com/linear/abc123 |
| applyUrl | https://jobs.ashbyhq.com/linear/abc123/application |
| country | Germany |
| city | Berlin |

Plus companySlug, jobId, secondaryLocations, region, scrapedAt, and optionally description.

Latest scrape stats

| Metric | Count |
| --- | --- |
| Companies with active jobs | 830+ |
| Total jobs scraped | 16,699 |
| Remote jobs | 7,726 (46%) |
| Europe-related jobs | 3,379 (20%) |
| Full scrape time | ~2 minutes |

How to use Ashby Jobs Scraper

Mode 1: Scrape Everything (Zero Config)

Set mode to all and click Start. That's it.

The Actor will:

  1. Auto-discover every company using Ashby (~90 seconds)
  2. Hit each company's public API endpoint concurrently (~60 seconds)
  3. Push every listed job to the dataset

Input:

```json
{
  "mode": "all"
}
```

Mode 2: Scrape Specific Companies

Set mode to companies and provide an array of Ashby slugs.

The slug is the path in jobs.ashbyhq.com/{slug}. For example, Linear's job board is at jobs.ashbyhq.com/linear, so the slug is linear.

Input:

```json
{
  "mode": "companies",
  "companies": ["linear", "notion", "posthog", "openai", "vercel"]
}
```
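If you already have a careers-page URL, the slug can be pulled out programmatically. A minimal sketch (`ashbySlug` is a hypothetical helper, not part of the Actor):

```typescript
// Extract the Ashby board slug from a careers-page URL.
// Returns null when the URL is not an Ashby-hosted board.
function ashbySlug(url: string): string | null {
  const u = new URL(url);
  if (u.hostname !== "jobs.ashbyhq.com") return null;
  const [slug] = u.pathname.split("/").filter(Boolean);
  return slug ?? null;
}

console.log(ashbySlug("https://jobs.ashbyhq.com/linear"));        // "linear"
console.log(ashbySlug("https://jobs.ashbyhq.com/linear/abc123")); // "linear"
console.log(ashbySlug("https://example.com/careers"));            // null
```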

Mode 3: Search Specific Companies

Set mode to search to scrape specific companies and immediately filter by keyword, location, or department. Faster than a full scrape when you know what you're looking for.

Input:

```json
{
  "mode": "search",
  "companies": ["openai", "notion", "linear", "cursor", "ramp"],
  "keywordFilter": "senior|staff|lead",
  "locationFilter": "remote|europe",
  "remoteOnly": true
}
```

Filtering

All filters use regex patterns (case-insensitive) and can be combined:

Find remote engineering jobs in Europe

```json
{
  "mode": "all",
  "remoteOnly": true,
  "locationFilter": "europe|germany|uk|france|netherlands|spain|portugal|czech|austria|switzerland|sweden|denmark|finland|norway|ireland|poland",
  "departmentFilter": "engineering|product|technology"
}
```

Find senior leadership roles

```json
{
  "mode": "all",
  "keywordFilter": "lead|staff|principal|head.of|director|vp|founding",
  "departmentFilter": "engineering"
}
```

Find React/TypeScript jobs

```json
{
  "mode": "all",
  "keywordFilter": "react|typescript|full.?stack|frontend"
}
```
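Conceptually, each filter string compiles to a case-insensitive RegExp that is tested against the relevant job field. A minimal sketch of that idea (illustrative, not the Actor's actual source):

```typescript
// A keywordFilter value such as "senior|staff|lead" becomes one
// case-insensitive regular expression tested against each job title.
const keywordFilter = new RegExp("senior|staff|lead", "i");

const titles = [
  "Senior Software Engineer, Backend",
  "Staff Product Designer",
  "Account Executive",
];

// Jobs whose title matches the pattern pass the filter.
const matches = titles.filter((t) => keywordFilter.test(t));
console.log(matches); // first two titles match; "Account Executive" does not
```

When several filters are supplied (keyword, location, department), a job must match all of them to be kept.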

Filter cheat sheet

  • Remote European engineering: `remoteOnly: true`, `locationFilter: "europe|uk|germany"`, `departmentFilter: "engineering"` (~500 jobs)
  • Leadership: `keywordFilter: "lead|staff|head.of|director|vp"` (~800 jobs)
  • Full-stack: `keywordFilter: "full.?stack|fullstack"` (~400 jobs)
  • Entry-level: `keywordFilter: "intern|junior|entry|graduate"` (~300 jobs)
  • US tech hubs: `locationFilter: "san francisco|new york|austin|seattle"` (~2,000 jobs)

Output example

Each row in the dataset is one job posting:

```json
{
  "company": "Linear",
  "companySlug": "linear",
  "jobId": "453f1ba0-a35e-4ed2-8215-1514e0a30b92",
  "title": "Senior Software Engineer, Backend",
  "department": "Engineering",
  "team": "Backend",
  "employmentType": "FullTime",
  "location": "Remote - Europe",
  "secondaryLocations": ["Remote - US", "Remote - Canada"],
  "isRemote": true,
  "workplaceType": "Remote",
  "publishedAt": "2026-02-15T00:00:00.000Z",
  "jobUrl": "https://jobs.ashbyhq.com/linear/453f1ba0",
  "applyUrl": "https://jobs.ashbyhq.com/linear/453f1ba0/application",
  "compensationTierSummary": "USD 150,000 - 210,000",
  "country": "United States",
  "region": "California",
  "city": "San Francisco",
  "scrapedAt": "2026-03-06T09:33:00.000Z"
}
```

Export as JSON, CSV, Excel, XML, or RSS from the Dataset tab.

How much does it cost?

This Actor uses minimal compute — no browser, just HTTP API calls. A full scrape of all 1,350+ companies typically uses:

| Run Type | Time | Memory | Approximate Cost |
| --- | --- | --- | --- |
| Full scrape (all) | ~2 min | 512 MB | ~$0.05 |
| 50 companies | ~15 sec | 256 MB | ~$0.01 |
| Single company | ~2 sec | 256 MB | < $0.01 |

Pricing is based on Apify compute units. The Actor is extremely efficient because it uses Ashby's native API (no browser rendering, no proxy needed).

Free tier: Apify's free plan includes $5/month in platform credits — enough for ~100 full scrapes.

Integrate via API

JavaScript / TypeScript

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

// Run the scraper
const run = await client.actor('deadlyaccurate/ashby-jobs-scraper').call({
  mode: 'all',
  remoteOnly: true,
  departmentFilter: 'engineering',
});

// Get results
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} jobs`);
items.forEach((job) => {
  console.log(`${job.company}: ${job.title} (${job.location})`);
});
```
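Once the items are in memory, downstream shaping is plain array work. For example, counting open roles per company (a hypothetical post-processing helper using the `company` field from the output example; the sample data below is illustrative):

```typescript
// Minimal shape of a dataset item for this example.
interface Job {
  company: string;
  title: string;
}

// Tally how many open roles each company has in the dataset.
function jobsPerCompany(jobs: Job[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const job of jobs) {
    counts.set(job.company, (counts.get(job.company) ?? 0) + 1);
  }
  return counts;
}

// Illustrative items, standing in for client.dataset(...).listItems() results.
const items: Job[] = [
  { company: "Linear", title: "Backend Engineer" },
  { company: "Linear", title: "Product Designer" },
  { company: "Notion", title: "iOS Engineer" },
];
console.log(jobsPerCompany(items)); // Map { "Linear" => 2, "Notion" => 1 }
```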

Python

```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('deadlyaccurate/ashby-jobs-scraper').call(run_input={
    'mode': 'all',
    'remoteOnly': True,
    'locationFilter': 'europe|uk|germany',
})

dataset = client.dataset(run['defaultDatasetId'])
for item in dataset.iterate_items():
    print(f"{item['company']}: {item['title']} ({item['location']})")
```

cURL

```bash
# Start a run
curl -X POST "https://api.apify.com/v2/acts/deadlyaccurate~ashby-jobs-scraper/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"mode": "companies", "companies": ["linear", "notion"]}'

# Get results (replace DATASET_ID with the run's defaultDatasetId)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_TOKEN&format=json"
```

Webhooks

Set up a webhook to get notified when the scrape completes. POST the dataset URL to your server, Slack, or any HTTP endpoint.

Scheduled runs

Use Apify Schedules to run daily or weekly. Great for:

  • Job alert systems
  • Market intelligence dashboards
  • Competitor hiring tracking

Ashby's API

The Actor uses Ashby's public job posting API:

GET https://api.ashbyhq.com/posting-api/job-board/{slug}?includeCompensation=true

This is a documented, public API intended for companies to build custom career pages. No authentication required. No rate limits observed at 10 concurrent requests.

The API returns all listed job postings with full metadata. The Actor filters out unlisted postings (draft/internal roles).
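That filtering step amounts to a single predicate over the response. A minimal sketch, assuming a `jobs` array whose entries carry an `isListed` flag as described above (the type names and sample payload are illustrative):

```typescript
// Minimal shape of Ashby's posting API response (subset of fields).
interface AshbyPosting {
  title: string;
  isListed: boolean;
}
interface JobBoardResponse {
  jobs: AshbyPosting[];
}

// Keep only publicly listed roles, dropping draft/internal postings.
function listedJobs(res: JobBoardResponse): AshbyPosting[] {
  return res.jobs.filter((j) => j.isListed);
}

// Illustrative payload, not real data.
const sample: JobBoardResponse = {
  jobs: [
    { title: "Senior Software Engineer", isListed: true },
    { title: "Internal Ops Role", isListed: false },
  ],
};
console.log(listedJobs(sample).length); // 1
```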

Use cases

  • Job seekers: Find every open role at companies you care about, filtered by your criteria
  • Recruiters: Monitor hiring activity across the Ashby ecosystem
  • Market researchers: Track which departments are growing, which roles are in demand
  • Job board operators: Feed structured job data into your platform
  • Lead generation: Find companies actively hiring (and therefore spending) in specific domains
  • Competitive intelligence: Track competitor hiring patterns and team growth

FAQ

How often should I run this? Job postings change daily. A weekly full scrape catches most changes. For specific companies you're tracking closely, daily runs in companies mode are cheap.

Why do some companies show 0 jobs? Their Ashby board may be empty, they may have switched ATS providers, or all their roles are unlisted (internal). The Actor only returns publicly listed positions.

Can I scrape a company not in the discovery list? Yes — use companies mode and provide the slug directly. Auto-discovery finds ~1,350 companies, but there are ~2,800+ total. If you know a slug, just add it.

Is this legal? This Actor uses Ashby's public, documented API endpoint. The data is publicly accessible on jobs.ashbyhq.com. No authentication is bypassed. See Ashby's API docs.

Why is this Actor so fast? No browser. No proxy. Just native HTTP fetch calls to Ashby's API with 10 concurrent workers. Each API response is pure JSON — no HTML parsing needed.
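The worker-pool pattern behind those 10 concurrent requests can be sketched generically (this shows the general technique, not the Actor's source; the demo tasks stand in for real HTTP fetches):

```typescript
// Run async tasks with at most `limit` in flight at once.
async function pool<T>(tasks: (() => Promise<T>)[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Each worker repeatedly claims the next unstarted task index.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  // Spawn up to `limit` workers and wait for all of them to drain the queue.
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

// Demo: trivial tasks in place of per-company API calls.
const demo = [1, 2, 3, 4, 5].map((n) => async () => n * 2);
pool(demo, 2).then((r) => console.log(r)); // [2, 4, 6, 8, 10]
```

Results are written by index, so output order matches input order regardless of which worker finishes first.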

How do I find a company's slug? Visit their career page. If it's hosted on Ashby, the URL will be jobs.ashbyhq.com/{slug}. The slug is that path segment. You can also run in all mode first — the output includes every discovered slug.

Top companies on Ashby (by job count)

| Company | Jobs | Company | Jobs |
| --- | --- | --- | --- |
| OpenAI | 614 | Dandy | 142 |
| Airwallex | 604 | Kraken | 138 |
| Snowflake | 420 | Ramp | 132 |
| Crusoe | 310 | UiPath | 131 |
| Roadsurfer | 297 | Cohere | 125 |
| Deel | 276 | Zip | 113 |
| EverAI | 265 | Hopper | 108 |
| Deliveroo | 241 | ElevenLabs | 105 |
| Harvey | 213 | DeepL | 100 |
| Vanta | 185 | Alan | 99 |

Open source

Built on ashby-jobs — an open-source TypeScript library and CLI tool. Star it on GitHub.

Also available as:

  • CLI tool: npx ashby-jobs scrape
  • Claude Code plugin: Auto-discovers and searches Ashby jobs from within Claude Code
  • React Ink TUI: Interactive terminal browser for job results

Other scrapers by this author

Coming soon: Greenhouse, Lever, and Workable scrapers using the same auto-discovery technique.


Author

Doug Silkstone — Lead Full Stack Software Engineer. 15+ years, 3x exits. TypeScript, React, Node, scraping & automation.

Email LinkedIn GitHub Book a Call Apify

Need a custom scraper, data pipeline, or automation? Book a call or email doug@withseismic.com.