FlexJobs Scraper
Developer: Shahid Irfan
Pricing: Pay per usage

Scrape FlexJobs remote job listings instantly. Extract job titles, salaries, descriptions, requirements, and company info for job board aggregation, career research, and workforce analytics. Get flexible work opportunities data at scale.
# FlexJobs Remote Jobs Scraper

Extract comprehensive remote and flexible job data from FlexJobs public listings in a clean, analysis-ready dataset. Collect role, company, location, schedule, compensation, and posting signals at scale for research, monitoring, and automation workflows.
## Features

- Rich job records — Capture detailed fields including company info, locations, remote options, categories, schedule, and salary details.
- Clean output quality — Remove empty fields and keep records normalized for dashboards, alerts, and exports.
- Duplicate-safe collection — Prevent repeated jobs across overlapping pages and categories.
- Flexible crawl control — Set how many jobs to collect and how deep to go per starting URL.
- Description enrichment — Attempt to improve `description_text` from job detail pages when fuller text is available.
- Paywall-aware output — Many jobs are paywalled, and those records may contain reduced fields or summary-only descriptions.
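The "clean output quality" behavior can also be reproduced downstream if you post-process raw exports yourself. A minimal sketch — the `clean_record` helper is hypothetical, not part of the actor:

```python
def clean_record(record: dict) -> dict:
    """Drop empty fields (None, "", [], {}) from a scraped job record.

    Hypothetical downstream helper; the actor applies similar
    normalization before writing to the dataset.
    """
    return {k: v for k, v in record.items() if v not in (None, "", [], {})}

raw = {"title": "Data Analyst", "company": "Acme",
       "salary": "", "states": [], "job_id": "abc123"}
print(clean_record(raw))
# → {'title': 'Data Analyst', 'company': 'Acme', 'job_id': 'abc123'}
```

Boolean flags such as `featured: false` survive this filter, since only genuinely empty values are dropped.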
## Use Cases

### Remote Talent Market Tracking
Monitor new remote roles by function, seniority, and location patterns. Use the dataset to detect hiring surges and market shifts.

### Competitive Hiring Intelligence
Compare companies, job categories, and compensation signals over time. Build repeatable market snapshots for planning and strategy.

### Job Feed Automation
Power newsletters, internal opportunity feeds, and job alert pipelines with structured and deduplicated records.

### Compensation Benchmarking
Analyze salary ranges and compensation formats by role type, geography, and remote level.

### Geographic Opportunity Mapping
Track candidate location eligibility and region coverage for distributed teams and mobility research.
## Input Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `startUrls` | Array | No | 2 starter listing URLs | Listing URLs to begin extraction from |
| `results_wanted` | Integer | No | 20 | Maximum number of jobs to save |
| `maxPagesPerList` | Integer | No | 25 | Maximum page depth per start URL |
| `proxyConfiguration` | Object | No | `{"useApifyProxy": false}` | Proxy settings for reliability |
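If you assemble the run input programmatically, the defaults above can be merged with your overrides before calling the actor. A sketch under those documented defaults — the `build_input` helper is an assumption, not part of the actor:

```python
# Defaults as documented in the Input Parameters table.
DEFAULTS = {
    "results_wanted": 20,
    "maxPagesPerList": 25,
    "proxyConfiguration": {"useApifyProxy": False},
}

def build_input(start_urls, **overrides):
    """Merge user overrides onto documented defaults (hypothetical helper)."""
    return {"startUrls": list(start_urls), **DEFAULTS, **overrides}

run_input = build_input(["https://www.flexjobs.com/remote-jobs"], results_wanted=50)
print(run_input["results_wanted"])  # → 50
```

Later keyword arguments win in the dict merge, so explicit overrides always replace the defaults.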
## Output Data

Each dataset item can include:

| Field | Type | Description |
|---|---|---|
| `source` | String | Source identifier |
| `source_type` | String | Source payload type |
| `url` | String | Public job URL |
| `title` | String | Job title |
| `company` | String | Company name |
| `company_id` | String | Company identifier when available |
| `company_slug` | String | Company slug when available |
| `company_logo` | String | Company logo URL when available |
| `location` | String | Primary location text |
| `job_locations` | Array | Job location list |
| `allowed_candidate_locations` | Array | Candidate-eligible locations |
| `states` | Array | State list when provided |
| `countries` | Array | Country list when provided |
| `cities` | Array | City list when provided |
| `remote_options` | Array | Remote options list |
| `remote_level` | String | Primary remote classification |
| `job_types` | Array | Job type list |
| `job_type` | String | Primary job type |
| `job_schedules` | Array | Schedule list |
| `schedule` | String | Primary schedule |
| `salary` | String | Salary/compensation text |
| `salary_min` | Number | Minimum salary when available |
| `salary_max` | Number | Maximum salary when available |
| `salary_unit` | String | Salary period unit |
| `salary_currency` | String | Salary currency code/text |
| `career_level` | String or Array | Career level data |
| `category` | String | Primary category |
| `categories` | Array | Category labels |
| `job_categories` | Array | Job category labels |
| `education_levels` | Array | Education level requirements |
| `description_text` | String | Best available job description text (often summary-only for paywalled jobs) |
| `description_source` | String | Description source indicator |
| `job_summary` | String | Job summary text |
| `date_posted` | String | Posted timestamp |
| `created_on` | String | Creation timestamp |
| `valid_through` | String | Expiration timestamp when available |
| `job_id` | String | Canonical job identifier |
| `slug` | String | Job slug |
| `apply_url` | String | Apply URL when available |
| `travel_required` | String | Travel requirement text |
| `coordinates` | Object | Latitude/longitude when available |
| `is_flexible_schedule` | Boolean | Flexible schedule flag |
| `is_telecommute` | Boolean | Telecommute flag |
| `is_freelancing_contract` | Boolean | Freelance/contract flag |
| `featured` | Boolean | Featured listing flag |
| `is_free_job` | Boolean | Free job flag |
| `hosted` | Boolean | Hosted listing flag |
| `track_properties` | Object | Additional tracking metadata |
| `scraped_at` | String | Extraction timestamp |
## Usage Examples

### Basic Remote Jobs Extraction

```json
{
  "startUrls": ["https://www.flexjobs.com/remote-jobs"],
  "results_wanted": 20
}
```

### Multi-Category Collection

```json
{
  "startUrls": [
    "https://www.flexjobs.com/remote-jobs/computer-it",
    "https://www.flexjobs.com/remote-jobs/customer-service-call-center"
  ],
  "results_wanted": 80,
  "maxPagesPerList": 20
}
```

### Reliability-Focused Run

```json
{
  "startUrls": ["https://www.flexjobs.com/remote-jobs"],
  "results_wanted": 50,
  "maxPagesPerList": 25,
  "proxyConfiguration": {"useApifyProxy": true}
}
```

## Sample Output

```json
{
  "source": "flexjobs",
  "source_type": "_next_data",
  "url": "https://www.flexjobs.com/publicjobs/director-marketing-ai-transformation-3420c1ce-1f38-4f77-acd5-9a8a47a469be",
  "title": "Director, Marketing AI Transformation",
  "company": "Dynatrace",
  "company_id": "1343",
  "location": "US National",
  "remote_level": "100% Remote Work",
  "job_type": "Employee",
  "schedule": "Full-Time",
  "salary": "166,000.00 - 210,000.00 USD Annually",
  "category": "Computer & IT",
  "description_text": "Lead the Marketing AI roadmap and investment priorities, including quarterly planning and use case prioritization...",
  "description_source": "listing_summary",
  "date_posted": "2026-03-26T05:12:35.000Z",
  "job_id": "3420c1ce-1f38-4f77-acd5-9a8a47a469be",
  "apply_url": "https://www.dynatrace.com/careers/jobs/1375016100/",
  "scraped_at": "2026-03-26T08:41:49.447Z"
}
```
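When `salary_min` and `salary_max` are absent, the free-text `salary` field (as in the sample above) can be parsed downstream. A rough sketch — the regex and helper are assumptions, not the actor's internals:

```python
import re

def parse_salary(text: str):
    """Extract (min, max) from text like '166,000.00 - 210,000.00 USD Annually'.

    Hypothetical downstream helper; real-world salary strings vary,
    so treat this as a starting point, not a complete parser.
    """
    nums = re.findall(r"\d[\d,]*(?:\.\d+)?", text)
    values = [float(n.replace(",", "")) for n in nums]
    if len(values) >= 2:
        return values[0], values[1]
    return (values[0], values[0]) if values else (None, None)

print(parse_salary("166,000.00 - 210,000.00 USD Annually"))
# → (166000.0, 210000.0)
```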
## Tips for Best Results

### Start With Broad Listings
- Use broad listing pages first to capture a wider spread of roles.
- Add niche category URLs when you need targeted subsets.

### Keep Test Runs Small
- Start with `results_wanted: 20` for quick validation.
- Increase limits after confirming output quality and speed.

### Use Proxy for Higher Reliability
- Enable proxy in production workloads to reduce blocking risk.
- Keep retries and page limits practical for faster completion.

### Validate Description Expectations
- Most jobs are guarded by a paywall, so many records include shorter summary descriptions.
- Paywalled jobs can also have less complete field coverage than fully visible listings.
- Use `description_source` to track where description text came from.
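A quick way to audit how much full text a run produced is to tally `description_source` across the dataset. A sketch with illustrative records — the `"detail_page"` value is an assumption; check your own output for the actual labels:

```python
from collections import Counter

# Illustrative records; only "listing_summary" is confirmed by the
# sample output, other source labels may differ in practice.
items = [
    {"job_id": "a", "description_source": "listing_summary"},
    {"job_id": "b", "description_source": "detail_page"},
    {"job_id": "c", "description_source": "listing_summary"},
]
counts = Counter(item.get("description_source", "unknown") for item in items)
print(counts.most_common())
# → [('listing_summary', 2), ('detail_page', 1)]
```

A run dominated by summary-only sources is expected given the paywall, but a sudden shift in this ratio is a useful quality signal.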
## Integrations

Connect your dataset with:
- Google Sheets — Build live job tracking sheets
- Airtable — Create searchable role databases
- Slack — Trigger alerts for new matching jobs
- Make — Automate collection and downstream actions
- Zapier — Route records into CRM and reporting tools
- Webhooks — Send records to custom pipelines
## Export Formats
- JSON — API and engineering workflows
- CSV — Spreadsheet analysis
- Excel — Business reporting
- XML — Structured integrations
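For the CSV path, exported JSON records can be flattened with the standard library. A minimal sketch — the sample records and chosen columns are illustrative:

```python
import csv
import io

# Illustrative records shaped like the actor's dataset items.
records = [
    {"title": "Director, Marketing AI Transformation",
     "company": "Dynatrace",
     "salary": "166,000.00 - 210,000.00 USD Annually"},
    {"title": "Support Specialist", "company": "ExampleCo", "salary": ""},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "company", "salary"])
writer.writeheader()
writer.writerows(records)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # → title,company,salary
```

`DictWriter` quotes values containing commas (like the first title) automatically, so the rows stay spreadsheet-safe.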
## Frequently Asked Questions

### How many jobs can I collect in one run?
You can collect as many as are available within your `results_wanted` and page-depth limits.

### Why are some descriptions shorter than expected?
Most listings are paywalled. Paywalled jobs often expose only summary descriptions and sometimes fewer overall fields. The actor stores the best available text and marks the source.

### Can I run multiple categories at once?
Yes. Provide multiple URLs in `startUrls` to combine categories in one run.

### How does duplicate handling work?
Records are deduplicated using stable job identifiers and canonical job URLs.
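The same identifier-then-URL strategy can be reproduced downstream when merging multiple runs. A sketch — the `dedupe` helper is hypothetical, not the actor's code:

```python
def dedupe(jobs):
    """Keep the first record per stable key: job_id, falling back to url.

    Hypothetical helper mirroring the described dedup strategy.
    """
    seen, unique = set(), []
    for job in jobs:
        key = job.get("job_id") or job.get("url")
        if key and key not in seen:
            seen.add(key)
            unique.append(job)
    return unique

jobs = [
    {"job_id": "x1", "url": "https://example.com/a"},
    {"job_id": "x1", "url": "https://example.com/a-copy"},  # duplicate id
    {"job_id": None, "url": "https://example.com/b"},       # falls back to url
]
print(len(dedupe(jobs)))  # → 2
```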
### Can I use this data for trend dashboards?
Yes. The normalized fields are suitable for analytics, alerts, and market monitoring pipelines.
## Support

For issues or feature requests, use the Apify Console issue/support channels.
## Legal Notice

This actor is intended for legitimate data collection and research use cases. You are responsible for complying with applicable laws, platform terms, and responsible usage practices.