Workable Job Scraper 🔥
Pricing: Pay per usage
Developer: Shahid Irfan
Effortlessly scrape job listings from Workable. This actor is designed for simplicity, providing a minimal, clean dataset with just the core job details. Perfect for quick data collection and lead generation without the extra noise.
Workable Jobs Scraper
Extract structured job listings from Workable career pages with strong field coverage for recruiting, market intelligence, and job aggregation workflows. Collect clean records including company details, location breakdowns, employment type, posting metadata, and full job content.
Features
- Rich job records — Collect 25+ useful fields per listing for analysis-ready datasets.
- Flexible discovery — Start from search URLs, list URLs, or direct job detail URLs.
- Smart pagination — Continues through available pages until your target count is reached.
- Location clarity — Returns both full location text and city/subregion/country components.
- Business-ready output — Includes company profile fields and job-level metadata for reporting.
Use Cases
Recruitment Intelligence
Track active hiring by role, geography, and work style to prioritize outreach and pipeline strategy.
Job Board Operations
Populate internal or external job catalogs with consistently structured records.
Market Research
Monitor hiring demand patterns across industries and regions.
Competitive Monitoring
Analyze competitors’ role mix, growth signals, and hiring focus over time.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrls | Array | No | [] | Workable search/list/detail URLs used as starting points. |
| keyword | String | No | "software" | Keyword filter for role matching. |
| location | String | No | "United States" | Geographic filter used in job discovery. |
| posted_date | String | No | "anytime" | Date filter: anytime, 24h, 7d, 30d. |
| results_wanted | Integer | No | 20 | Maximum number of jobs to return. |
| proxyConfiguration | Object | No | { "useApifyProxy": false } | Proxy settings (disabled by default). |
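As a minimal sketch of how these parameters fit together, the snippet below merges user overrides with the documented defaults and validates the posted_date filter locally. The defaults mirror the table above; the actor applies them server-side anyway, so this helper is only illustrative.

```python
# Sketch: compose a run input from the documented defaults.
# DEFAULT_INPUT mirrors the parameter table; build_run_input is a
# hypothetical helper, not part of the actor itself.

DEFAULT_INPUT = {
    "startUrls": [],
    "keyword": "software",
    "location": "United States",
    "posted_date": "anytime",
    "results_wanted": 20,
    "proxyConfiguration": {"useApifyProxy": False},
}

VALID_POSTED_DATES = {"anytime", "24h", "7d", "30d"}


def build_run_input(**overrides):
    """Return a run input dict, rejecting unsupported posted_date values."""
    run_input = {**DEFAULT_INPUT, **overrides}
    if run_input["posted_date"] not in VALID_POSTED_DATES:
        raise ValueError(f"posted_date must be one of {sorted(VALID_POSTED_DATES)}")
    return run_input


print(build_run_input(keyword="data analyst", results_wanted=40))
```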
Output Data
Each dataset item can include:
| Field | Type | Description |
|---|---|---|
| id | String | Job identifier. |
| title | String | Job title. |
| state | String | Job state/status. |
| language | String | Job language code. |
| date_posted | String | Job posting date. |
| date_updated | String | Last update date. |
| url | String | Job page URL. |
| linkout_url | String | External apply URL when available. |
| company_id | String | Company identifier. |
| company_title | String | Company name. |
| company_website | String | Company website. |
| company_url | String | Company profile URL. |
| company_image | String | Company image/logo URL. |
| location | String | Full location text. |
| location_city | String | City value. |
| location_subregion | String | Region/state value. |
| location_country | String | Country value. |
| locations | Array | Multi-location variants when present. |
| employment_type | String/Array | Employment type value(s). |
| job_types | Array | Normalized job type list. |
| workplace_type | String | Workplace mode (for example remote/hybrid/on-site). |
| is_featured | Boolean | Featured listing indicator. |
| social_sharing_description | String | Social summary text. |
| description_text | String | Main job description text. |
| requirements | String | Requirements section text. |
| benefits | String | Benefits section text. |
| source_list_url | String | Source listing URL for traceability. |
| source_query_keyword | String | Keyword used for this run. |
| source_query_location | String | Location used for this run. |
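Once a run finishes, records shaped like this lend themselves to simple local aggregation. A sketch that counts jobs by country and workplace mode, using hypothetical sample records (only the fields needed here are shown):

```python
from collections import Counter

# Hypothetical records shaped like the output fields documented above.
jobs = [
    {"title": "Data Analyst", "location_country": "United States", "workplace_type": "remote"},
    {"title": "Backend Engineer", "location_country": "Germany", "workplace_type": "hybrid"},
    {"title": "Product Analyst", "location_country": "United States", "workplace_type": "hybrid"},
]

# Empty values are expected for some listings (see FAQ), so fall back to "unknown".
by_country = Counter(job.get("location_country") or "unknown" for job in jobs)
by_mode = Counter(job.get("workplace_type") or "unknown" for job in jobs)

print(by_country.most_common())
print(by_mode.most_common())
```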
Usage Examples
Basic Search
{"keyword": "data analyst","location": "United States","results_wanted": 20}
Recent Remote Roles
{"keyword": "software engineer","location": "remote","posted_date": "7d","results_wanted": 40}
Start from Known URL
{"startUrls": ["https://jobs.workable.com/search?location=remote"],"results_wanted": 30}
Sample Output
{"id": "3c9c3f80-35d5-4542-a844-b2909e2293a3","title": "Senior Product Analyst","state": "published","language": "en","date_posted": "2026-02-24T14:10:11.000Z","date_updated": "2026-02-27T09:21:06.000Z","url": "https://jobs.workable.com/view/8u6AStSBRuP274KPyJMkiV/...","linkout_url": "https://company.example/careers/12345","company_id": "3fe7a205-8b5f-45e0-a2ef-7d15f2f3e0d1","company_title": "Example Labs","company_website": "https://examplelabs.com","company_url": "https://jobs.workable.com/company/example-labs","company_image": "https://images.workable.com/company/example.png","location": "Boston, MA, United States","location_city": "Boston","location_subregion": "MA","location_country": "United States","locations": ["Boston, MA","Remote"],"employment_type": "FULL_TIME","job_types": ["FULL_TIME"],"workplace_type": "hybrid","is_featured": false,"social_sharing_description": "Join our analytics team...","description_text": "We are looking for a Senior Product Analyst...","requirements": "3+ years of analytics experience...","benefits": "Health insurance, 401(k), learning budget...","source_list_url": "https://jobs.workable.com/api/v1/jobs?q=data+analyst&location=United+States&limit=20","source_query_keyword": "data analyst","source_query_location": "United States"}
Tips for Best Results
Start with Focused Queries
- Use role-specific keywords instead of broad terms.
- Pair keyword + location for cleaner datasets.
Keep Test Runs Small
- Use results_wanted: 20 for quick validation.
- Increase gradually for production runs.
Use Proxies for Heavy Workloads
- Recommended for high-volume or frequent execution.
Integrations
- Google Sheets — Share and analyze job data with teams.
- Airtable — Build searchable hiring intelligence tables.
- Make — Automate enrichment and routing workflows.
- Zapier — Trigger downstream actions from fresh records.
- Webhooks — Send output directly to your services.
Export Formats
- JSON — Backend and data pipelines.
- CSV — Spreadsheet workflows.
- Excel — Business reporting.
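Apify handles these exports natively, but a CSV can also be produced locally from downloaded JSON records. A minimal sketch using Python's standard library; the records are hypothetical and only a subset of the documented fields is kept:

```python
import csv
import io

# Hypothetical records shaped like the actor's output fields.
jobs = [
    {"id": "1", "title": "Data Analyst", "company_title": "Example Labs", "location": "Boston, MA, United States"},
    {"id": "2", "title": "ML Engineer", "company_title": "Acme", "location": "Remote"},
]

# Keep a stable column order for spreadsheet workflows.
fieldnames = ["id", "title", "company_title", "location"]
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(jobs)
print(buffer.getvalue())
```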
Frequently Asked Questions
How many jobs can I collect?
Set results_wanted to your target size. Start small for testing and scale up as needed.
Can I scrape from specific company pages?
Yes. Add company-specific Workable URLs to startUrls.
What if some fields are empty?
Some companies do not publish every field for every role. Empty values are expected in those cases.
Does it support remote job filtering?
Yes. Use location: "remote" or a specific geography.
Can I schedule recurring runs?
Yes. Use Apify schedules to refresh your dataset automatically.
Support
For support or feature requests, use the actor page on Apify.
Legal Notice
This actor is intended for legitimate data collection. You are responsible for complying with website terms and local regulations when using extracted data.