Workable + SmartRecruiters Jobs Scraper
Pricing
from $50.00 / 1,000 job postings
Scrape live jobs across Workable AND SmartRecruiters in one run: id, title, location, department, function, employment type, apply URL, full description (HTML + plain), created/updated timestamps, source ATS, company slug. Companion to Greenhouse + Lever for total ATS coverage.
Developer
Stephan Corbeil
Maintained by Community
💼 Workable + SmartRecruiters Jobs Scraper: two of the most-used mid-market ATSes in a single multi-ATS scraper
The Workable + SmartRecruiters Jobs Scraper extracts structured job postings from both Workable (apply.workable.com/{company} and {company}.workable.com) and SmartRecruiters (jobs.smartrecruiters.com/{company}) in a single run. Two of the most-used mid-market ATS platforms in one unified output schema, perfect for hiring-velocity signals, recruiter sourcing, ATS competitive intel, and powering aggregator job boards.
Why Workable + SmartRecruiters Jobs Scraper Beats LinkedIn Talent Insights, Indeed Analytics, and Built In
| Source | Price | What you get |
|---|---|---|
| LinkedIn Talent Insights | $5K-30K/yr | LinkedIn-derived, not native ATS |
| Indeed Analytics | Self-serve / enterprise | Aggregator, not Workable/SmartRecruiters native |
| Built In | Custom pricing | Curated, narrow coverage |
| AngelList Talent | Free + paid | Startup focus only |
| NexGenData Workable+SmartRecruiters | $0.05/posting PPE | Both ATS in one actor; title, dept, location, description, posted date |
What You Get
- Job ID + canonical ATS URL
- Source ATS (workable / smartrecruiters)
- Company slug + company display name
- Job title
- Department / job function
- Employment type (Full-time / Part-time / Contract / Temp)
- Workplace type (On-site / Remote / Hybrid)
- Location (city, state, country)
- Posted date / created-at timestamp
- Full job description (HTML and plain-text)
- Requirements + benefits sections (when ATS-published)
- Application URL
- Internal ATS posting ID for stable deduping
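Putting these fields together, a single dataset row might look like the sketch below. This is illustrative only: the exact key names and casing are assumptions and should be checked against the actor's live output.

```python
# Illustrative example of one dataset row (field names assumed from the list above).
sample_row = {
    "id": "1234567",                 # internal ATS posting ID (stable dedupe key)
    "sourceAts": "workable",         # "workable" or "smartrecruiters"
    "companySlug": "nexar",
    "title": "Senior Backend Engineer",
    "department": "Engineering",
    "employmentType": "Full-time",
    "workplaceType": "Remote",
    "location": {"city": "Tel Aviv", "state": None, "country": "IL"},
    "createdAt": "2024-05-01T09:30:00Z",
    "descriptionText": "We are hiring a senior backend engineer...",
    "applyUrl": "https://apply.workable.com/nexar/j/ABCDEF1234/",
}

# A stable cross-ATS dedupe key combines the source ATS and the posting ID:
dedupe_key = f"{sample_row['sourceAts']}:{sample_row['id']}"
print(dedupe_key)
```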
Use Cases
- Hiring-velocity tracker: count open roles weekly across portfolio companies that use Workable or SmartRecruiters.
- Recruiter sourcing: pull every Senior PM role at target companies on either ATS in one job, with no need to maintain two scrapers.
- ATS market-share analytics โ measure how many target companies use Workable vs SmartRecruiters vs Greenhouse vs Lever over time.
- Sales-prospecting โ when a company opens its first VP of Sales / Head of Marketing role, fire a CRM trigger.
- Niche job board โ power an aggregator covering both ATSes for your specific vertical (climate / fintech / dev-tools).
- HR analytics โ track team-mix shift at competitor companies across quarters.
- Compensation benchmarking โ combine with US-pay-transparency-required city filters to build a comp dataset.
Quick Start
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run_input = {
    "workableBoards": ["nexar"],
    "smartrecruitersBoards": ["bosch"],
    "departments": ["Engineering"],
}

run = client.actor("nexgendata/workable-smartrecruiters-jobs-scraper").call(run_input=run_input)

# Iterate results
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# Or fetch all in one go
items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
print(f"Got {len(items)} rows")
```
You can also run from the Apify CLI:
```shell
apify call nexgendata/workable-smartrecruiters-jobs-scraper --input='{"workableBoards": ["nexar"], "smartrecruitersBoards": ["bosch"], "departments": ["Engineering"]}'
```
Or from the web console: open the actor page on Apify, click Try for free, paste the input JSON, hit Run. Results stream into the dataset which you can export as JSON / JSONL / CSV / Excel / HTML.
Scheduling
This actor pairs cleanly with Apify Scheduler (built into the platform): schedule it hourly, daily, or cron-style and dedupe results into your warehouse on the stable primary-key fields documented above. Webhook outputs are supported, so you can fire a Slack / Zapier / Make / n8n / your-own-API call the moment new rows materialize.
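A scheduled-run dedupe on those stable primary-key fields can be sketched as follows (field names like `sourceAts` and `id` are assumptions based on the output schema described above; check them against a live run):

```python
# Sketch: cross-run dedupe on the stable (sourceAts, id) primary key.
def fresh_rows(rows: list[dict], seen: set[str]) -> list[dict]:
    """Return only rows not seen before; mutates `seen` so state carries across runs."""
    out = []
    for row in rows:
        key = f"{row['sourceAts']}:{row['id']}"  # global key: ATS name + ATS-internal posting ID
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

seen: set[str] = set()  # persist this set in your warehouse between scheduled runs
run1 = [{"sourceAts": "workable", "id": "1", "title": "SRE"},
        {"sourceAts": "smartrecruiters", "id": "1", "title": "PM"}]  # same id, different ATS
run2 = [{"sourceAts": "workable", "id": "1", "title": "SRE"},        # already seen
        {"sourceAts": "workable", "id": "2", "title": "Data Engineer"}]
print(len(fresh_rows(run1, seen)), len(fresh_rows(run2, seen)))  # 2 1
```

Prefixing the key with the source ATS matters: the two platforms can reuse the same numeric posting ID, as `run1` above illustrates.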
Integration patterns
- CRM enrichment: pipe rows directly into HubSpot / Salesforce / Pipedrive via Zapier or Make
- Warehouse: append to BigQuery / Snowflake / Postgres on a daily schedule via Apify → S3 → warehouse ingest
- LLM-ready RAG: each row is already JSON-flat; embed the plain-text body field and store in pgvector / Pinecone / Weaviate
- Slack alerts: filter by your trigger keyword and fire a Slack webhook for matches in real-time
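The Slack-alert pattern above can be sketched with nothing but the standard library; the webhook URL is a placeholder, and the row field names (`title`, `descriptionText`, `applyUrl`) are assumptions to verify against the live schema:

```python
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder URL

def matches(row: dict, keyword: str) -> bool:
    """Case-insensitive keyword match over title + description."""
    haystack = f"{row.get('title', '')} {row.get('descriptionText', '')}".lower()
    return keyword.lower() in haystack

def alert(row: dict) -> None:
    """POST a simple text payload to a Slack incoming webhook."""
    payload = {"text": f"New role: {row['title']} -> {row.get('applyUrl', '')}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add retries/backoff in production

rows = [{"title": "VP of Sales", "descriptionText": "Own the GTM motion."},
        {"title": "Junior Designer", "descriptionText": "Figma, design systems."}]
hits = [r for r in rows if matches(r, "vp of sales")]  # call alert(r) for each hit
```

In a real pipeline you would iterate the run's dataset (as in the Quick Start) instead of the hard-coded `rows` list.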
Pricing
This actor runs on Apify's pay-per-event (PPE) model, so you pay only for results, not run-time:
- $0.05 per job posting: the primary event (one charge per row pushed to the dataset)
- $0.00005 per actor-start GB-event: a one-time per-run start cost (sub-cent at typical memory)
No subscriptions, no minimums, no per-CPU-second charges. Apify's $5/month free tier covers most experiments. Browse 200+ buyer-intent actors at https://apify.com/nexgendata?fpr=2ayu9b
Cost worked example
A daily scheduled run pulling 500 fresh rows costs roughly:
- 500 rows × primary-event price (~$0.04-0.05) = $20-25
- 1 actor start × ~$0.00005 = negligible
So ~$20-25 per 500-row daily run, or ~$0.04-0.05 per row all-in. There are no surprise compute, storage, or proxy add-ons; proxy rotation is bundled into the per-row price.
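The arithmetic above as a tiny helper you can fold into a budget dashboard (prices taken from the Pricing section; treat them as current-at-time-of-writing defaults):

```python
def run_cost(rows: int, price_per_row: float = 0.05, start_fee: float = 0.00005) -> float:
    """USD cost of a single run: per-row primary events plus the one-time actor-start event."""
    return rows * price_per_row + start_fee

daily = run_cost(500)            # ~ $25 for a 500-row run
monthly = round(daily * 30, 2)   # ~ $750/month at a daily cadence
print(daily, monthly)
```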
Why pay-per-event beats time-based pricing
- Predictable: you know your cost from row count before the run starts
- Failure-safe: if a target site changes its HTML and the actor returns 0 rows, you pay 0 (vs paying for the CPU-seconds anyway under time-based pricing)
- Easy to attribute: 1 row = 1 unit cost, so per-customer / per-pipeline cost accounting is trivial
Sister Actors in the NexGenData Fleet
| Use case | Actor |
|---|---|
| Greenhouse ATS postings scraper | greenhouse-jobs-scraper |
| Lever ATS postings scraper | lever-jobs-scraper |
| Remote jobs aggregator | weworkremotely-jobs-scraper |
| Convert postings to hiring-velocity signal | hiring-signal-detector |
| LinkedIn jobs scraping | linkedin-jobs-scraper |
| Find recruiter / hiring-manager emails | company-email-finder |
| B2B leads for recruiting-tool sellers | b2b-leads-finder |
| Enrich company list with firmographics | lead-list-enricher |
(All sister actors share the same PPE billing and Apify-standard JSON output, so you can compose multi-step pipelines without rewriting input/output adapters.)
FAQ
Q: Which company boards can I scrape?
A: Any public Workable board (apply.workable.com/{slug} or {slug}.workable.com) and any public SmartRecruiters board (jobs.smartrecruiters.com/{slug}). Pass slugs or full URLs.
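If you want to accept either form programmatically, a small normalizer could look like this. It is a sketch that assumes only the three URL shapes quoted above; anything else falls through to the first path segment.

```python
from urllib.parse import urlparse

def to_slug(board: str) -> str:
    """Normalize a pasted board URL (or bare slug) to the company slug."""
    if "://" not in board:
        return board.strip("/")                       # already a bare slug
    parsed = urlparse(board)
    host = parsed.hostname or ""
    if host.endswith(".workable.com") and host != "apply.workable.com":
        return host.split(".")[0]                     # {slug}.workable.com
    return parsed.path.strip("/").split("/")[0]       # apply.workable.com/{slug}, jobs.smartrecruiters.com/{slug}

print(to_slug("https://apply.workable.com/nexar/"))   # nexar
print(to_slug("https://jobs.smartrecruiters.com/bosch"))  # bosch
```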
Q: How fresh is the data?
A: Each run pulls live data. Both ATSes reflect employer posts in near-real-time.
Q: Why combine the two ATSes?
A: Mid-market companies (50-500 employees) commonly use Workable; growth companies (500-5K) commonly use SmartRecruiters. One job to monitor both is much faster than running two separate scrapers.
Q: How does this compare to scraping LinkedIn jobs?
A: ATS boards are the source of truth; employers post there first. This actor catches roles hours sooner, with cleaner team metadata.
Q: Can I deduplicate across runs?
A: Yes. The ATS-internal posting ID is stable per ATS, and the output includes a source-ATS field so you can compose a global key.
Q: Output format?
A: JSON, JSONL, CSV, Excel via Apify dataset export. Schema is stable.
Schema Stability & Versioning
This actor follows NexGenData's additive-only schema contract:
- New fields may be added at any time; they will simply appear as new keys in the JSON output, defaulting to `null` for older runs.
- Existing fields are never renamed or removed without a major-version bump and an advance changelog notice.
- Field semantics (units, timezones, value sets) are never silently changed; if we need to change semantics, we add a new field with the new name and deprecate (but keep) the old one for at least 90 days.
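Under an additive-only contract, downstream code should read newer optional fields defensively, since they backfill as null on older runs. A minimal sketch (field names assumed):

```python
def normalized(row: dict) -> dict:
    """Project a raw dataset row onto the fields this pipeline cares about.

    dict.get() tolerates keys added to the schema after this code was written:
    they simply come back as None for older rows, per the contract above.
    """
    return {
        "title": row.get("title"),
        "department": row.get("department"),
        "workplaceType": row.get("workplaceType"),  # may be absent in older rows
    }

old_row = {"title": "Data Engineer"}  # predates the hypothetical workplaceType field
print(normalized(old_row))
```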
This means you can build production pipelines on this actor without worrying that a Tuesday schema change will break Friday's ETL job. If you spot an unexpected change, reach out via the actor's Apify Issues tab and we'll look at it the same day.
Compliance & Legal
- The actor reads public, unauthenticated pages the same way a logged-out browser does.
- All requests route through Apify's compliant residential-proxy infrastructure with polite rate limiting.
- You are responsible for ensuring your downstream use complies with the target site's Terms of Service, your jurisdiction's data-protection laws (GDPR, CCPA, UK DPA, etc.), and any sector-specific rules (HIPAA, PCI, etc.).
- We do not collect, store, or transmit credentials for the target site.
- Most read-only competitive-intelligence and lead-generation use is widely accepted. Consult counsel before bulk redistribution.
Support
Open an issue on the actor's Apify Issues tab; the NexGenData team responds within one business day. For feature requests (new fields, new input filters), include the use case so we can prioritize it.
About NexGenData
NexGenData publishes 200+ buyer-intent actors covering SEC filings, YC alumni, Delaware DOC, lead generation, competitive intelligence, stock fundamentals across 30+ exchanges, ATS job boards, real-estate marketplaces, and more. All actors are pay-per-result and share a stable, additive-only JSON schema. Browse the full catalog at https://apify.com/nexgendata?fpr=2ayu9b
SEO: 💼 Workable + SmartRecruiters Multi-ATS Jobs Scraper API