FlexJobs Scraper

Pricing

Pay per usage

Scrape FlexJobs remote job listings instantly. Extract job titles, salaries, descriptions, requirements, and company info for job board aggregation, career research, and workforce analytics. Get flexible-work opportunity data at scale.

Rating: 0.0 (0)

Developer: Shahid Irfan

Maintained by Community

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 2 days ago

FlexJobs Remote Jobs Scraper

Extract comprehensive remote and flexible job data from FlexJobs public listings in a clean, analysis-ready dataset. Collect role, company, location, schedule, compensation, and posting signals at scale for research, monitoring, and automation workflows.

Features

  • Rich job records — Capture detailed fields including company info, locations, remote options, categories, schedule, and salary details.
  • Clean output quality — Remove empty fields and keep records normalized for dashboards, alerts, and exports.
  • Duplicate-safe collection — Prevent repeated jobs across overlapping pages and categories.
  • Flexible crawl control — Set how many jobs to collect and how deep to go per starting URL.
  • Description enrichment — Attempt to improve description_text from job detail pages when fuller text is available.
  • Paywall-aware output — Many jobs are paywalled, and those records may contain reduced fields or summary-only descriptions.

Use Cases

Remote Talent Market Tracking

Monitor new remote roles by function, seniority, and location patterns. Use the dataset to detect hiring surges and market shifts.

Competitive Hiring Intelligence

Compare companies, job categories, and compensation signals over time. Build repeatable market snapshots for planning and strategy.

Job Feed Automation

Power newsletters, internal opportunity feeds, and job alert pipelines with structured and deduplicated records.

Compensation Benchmarking

Analyze salary ranges and compensation formats by role type, geography, and remote level.

Geographic Opportunity Mapping

Track candidate location eligibility and region coverage for distributed teams and mobility research.


Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| startUrls | Array | No | 2 starter listing URLs | Listing URLs to begin extraction from |
| results_wanted | Integer | No | 20 | Maximum number of jobs to save |
| maxPagesPerList | Integer | No | 25 | Maximum page depth per start URL |
| proxyConfiguration | Object | No | {"useApifyProxy": false} | Proxy settings for reliability |

Output Data

Each dataset item can include:

| Field | Type | Description |
| --- | --- | --- |
| source | String | Source identifier |
| source_type | String | Source payload type |
| url | String | Public job URL |
| title | String | Job title |
| company | String | Company name |
| company_id | String | Company identifier when available |
| company_slug | String | Company slug when available |
| company_logo | String | Company logo URL when available |
| location | String | Primary location text |
| job_locations | Array | Job location list |
| allowed_candidate_locations | Array | Candidate-eligible locations |
| states | Array | State list when provided |
| countries | Array | Country list when provided |
| cities | Array | City list when provided |
| remote_options | Array | Remote options list |
| remote_level | String | Primary remote classification |
| job_types | Array | Job type list |
| job_type | String | Primary job type |
| job_schedules | Array | Schedule list |
| schedule | String | Primary schedule |
| salary | String | Salary/compensation text |
| salary_min | Number | Minimum salary when available |
| salary_max | Number | Maximum salary when available |
| salary_unit | String | Salary period unit |
| salary_currency | String | Salary currency code/text |
| career_level | String or Array | Career level data |
| category | String | Primary category |
| categories | Array | Category labels |
| job_categories | Array | Job category labels |
| education_levels | Array | Education level requirements |
| description_text | String | Best available job description text (often summary-only for paywalled jobs) |
| description_source | String | Description source indicator |
| job_summary | String | Job summary text |
| date_posted | String | Posted timestamp |
| created_on | String | Creation timestamp |
| valid_through | String | Expiration timestamp when available |
| job_id | String | Canonical job identifier |
| slug | String | Job slug |
| apply_url | String | Apply URL when available |
| travel_required | String | Travel requirement text |
| coordinates | Object | Latitude/longitude when available |
| is_flexible_schedule | Boolean | Flexible schedule flag |
| is_telecommute | Boolean | Telecommute flag |
| is_freelancing_contract | Boolean | Freelance/contract flag |
| featured | Boolean | Featured listing flag |
| is_free_job | Boolean | Free job flag |
| hosted | Boolean | Hosted listing flag |
| track_properties | Object | Additional tracking metadata |
| scraped_at | String | Extraction timestamp |
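Note that `salary` is free text (e.g. `"166,000.00 - 210,000.00 USD Annually"`, as in the sample output) while `salary_min`/`salary_max` appear only when available. A minimal fallback parser for that range format could look like the sketch below; the regex and helper name are illustrative, not part of the actor:

```python
import re

# Hypothetical helper: parse FlexJobs-style salary text such as
# "166,000.00 - 210,000.00 USD Annually" into structured parts.
SALARY_RE = re.compile(
    r"(?P<min>[\d,]+(?:\.\d+)?)\s*-\s*(?P<max>[\d,]+(?:\.\d+)?)"
    r"\s*(?P<currency>[A-Z]{3})\s*(?P<unit>\w+)"
)

def parse_salary(text):
    """Return (min, max, currency, unit), or None when the text doesn't match."""
    m = SALARY_RE.search(text or "")
    if not m:
        return None
    to_num = lambda s: float(s.replace(",", ""))
    return to_num(m["min"]), to_num(m["max"]), m["currency"], m["unit"]
```

Listings with non-range salary text (e.g. a single figure or "Competitive") simply return `None`, so keep the raw `salary` string alongside any parsed values.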

Usage Examples

Basic Remote Jobs Extraction

```json
{
  "startUrls": [
    "https://www.flexjobs.com/remote-jobs"
  ],
  "results_wanted": 20
}
```

Multi-Category Collection

```json
{
  "startUrls": [
    "https://www.flexjobs.com/remote-jobs/computer-it",
    "https://www.flexjobs.com/remote-jobs/customer-service-call-center"
  ],
  "results_wanted": 80,
  "maxPagesPerList": 20
}
```

Reliability-Focused Run

```json
{
  "startUrls": [
    "https://www.flexjobs.com/remote-jobs"
  ],
  "results_wanted": 50,
  "maxPagesPerList": 25,
  "proxyConfiguration": {
    "useApifyProxy": true
  }
}
```

Sample Output

```json
{
  "source": "flexjobs",
  "source_type": "_next_data",
  "url": "https://www.flexjobs.com/publicjobs/director-marketing-ai-transformation-3420c1ce-1f38-4f77-acd5-9a8a47a469be",
  "title": "Director, Marketing AI Transformation",
  "company": "Dynatrace",
  "company_id": "1343",
  "location": "US National",
  "remote_level": "100% Remote Work",
  "job_type": "Employee",
  "schedule": "Full-Time",
  "salary": "166,000.00 - 210,000.00 USD Annually",
  "category": "Computer & IT",
  "description_text": "Lead the Marketing AI roadmap and investment priorities, including quarterly planning and use case prioritization...",
  "description_source": "listing_summary",
  "date_posted": "2026-03-26T05:12:35.000Z",
  "job_id": "3420c1ce-1f38-4f77-acd5-9a8a47a469be",
  "apply_url": "https://www.dynatrace.com/careers/jobs/1375016100/",
  "scraped_at": "2026-03-26T08:41:49.447Z"
}
```

Tips for Best Results

Start With Broad Listings

  • Use broad listing pages first to capture a wider spread of roles.
  • Add niche category URLs when you need targeted subsets.

Keep Test Runs Small

  • Start with results_wanted: 20 for quick validation.
  • Increase limits after confirming output quality and speed.

Use Proxy for Higher Reliability

  • Enable proxy in production workloads to reduce blocking risk.
  • Keep retries and page limits practical for faster completion.

Validate Description Expectations

  • Most jobs are guarded by a paywall, so many records include shorter summary descriptions.
  • Paywalled jobs can also have less complete field coverage than fully visible listings.
  • Use description_source to track where description text came from.
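Since paywalled jobs often carry summary-only text, it can help to bucket records by `description_source` before downstream use. A small illustrative sketch (only the `listing_summary` value is confirmed by the sample output; other source values are assumptions):

```python
from collections import defaultdict

def group_by_description_source(items):
    """Bucket dataset items by their description_source field.

    Items missing the field land under the "unknown" key so that
    nothing silently disappears from the partition.
    """
    groups = defaultdict(list)
    for item in items:
        groups[item.get("description_source", "unknown")].append(item)
    return dict(groups)
```

You can then, for example, route `listing_summary` records to a lighter-weight pipeline than records with fuller description text.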

Integrations

Connect your dataset with:

  • Google Sheets — Build live job tracking sheets
  • Airtable — Create searchable role databases
  • Slack — Trigger alerts for new matching jobs
  • Make — Automate collection and downstream actions
  • Zapier — Route records into CRM and reporting tools
  • Webhooks — Send records to custom pipelines

Export Formats

  • JSON — API and engineering workflows
  • CSV — Spreadsheet analysis
  • Excel — Business reporting
  • XML — Structured integrations
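If you export JSON and need a spreadsheet-ready file locally, the records flatten to CSV with the standard library. A minimal sketch; the default column list below is just a selection from the output fields, not fixed by the actor:

```python
import csv
import io

def items_to_csv(items, fields=("title", "company", "location", "salary", "url")):
    """Render selected fields of dataset items as CSV text.

    Fields absent from an item are written as empty cells.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields))
    writer.writeheader()
    for item in items:
        writer.writerow({f: item.get(f, "") for f in fields})
    return buf.getvalue()
```

Array-valued fields such as `job_locations` would need joining (e.g. `"; ".join(...)`) before they read well in a spreadsheet.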

Frequently Asked Questions

How many jobs can I collect in one run?

You can collect as many jobs as are available within your results_wanted and maxPagesPerList limits.

Why are some descriptions shorter than expected?

Most listings are paywalled. Paywalled jobs often expose only summary descriptions and sometimes fewer overall fields. The actor stores the best available text and marks the source.

Can I run multiple categories at once?

Yes. Provide multiple URLs in startUrls to combine categories in one run.

How does duplicate handling work?

Records are deduplicated using stable job identifiers and canonical job URLs.
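The same strategy can be mirrored client-side when merging datasets from multiple runs. A minimal sketch of the idea, keyed on `job_id` with the canonical `url` as fallback (this reimplements the concept, not the actor's internal code):

```python
def dedupe(items):
    """Keep the first occurrence of each job, keyed by job_id or canonical URL."""
    seen = set()
    unique = []
    for item in items:
        key = item.get("job_id") or item.get("url")
        if key in seen:
            continue
        if key is not None:
            seen.add(key)
        # Items with neither identifier are kept rather than dropped.
        unique.append(item)
    return unique
```

Keeping the first occurrence preserves the earliest `scraped_at` for each job, which matters if you track when a posting was first seen.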

Can I use this data for trend dashboards?

Yes. The normalized fields are suitable for analytics, alerts, and market monitoring pipelines.


Support

For issues or feature requests, use the Apify Console issue/support channels.

This actor is intended for legitimate data collection and research use cases. You are responsible for complying with applicable laws, platform terms, and responsible usage practices.