Developer: Shahid Irfan · Pricing: Pay per usage · Last modified: 17 days ago
WeWorkRemotely Jobs Scraper
Extract remote job listings from WeWorkRemotely quickly and reliably. Collect role details, company information, compensation signals, categories, and application links in structured output ready for analysis, monitoring, and automation. Ideal for hiring teams, market researchers, and job intelligence workflows.
Features
- High-volume collection — Collect well beyond 100 jobs in a single run.
- Search URL filtering — Use one or many search URLs to narrow results by keyword intent.
- Rich job records — Capture title, company, location, compensation, posting date, and apply URL.
- Smart deduplication — Prevent duplicate records across category sources.
- Clean structured output — Exportable dataset ready for BI, alerts, and downstream pipelines.
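The deduplication behavior listed above can be illustrated with a small sketch. `dedupe_by_url` is a hypothetical helper showing the idea (the first record seen for each listing URL wins), not the actor's actual implementation:

```python
def dedupe_by_url(records):
    """Keep the first record seen for each listing URL."""
    seen = set()
    unique = []
    for rec in records:
        url = rec.get("url")
        if url in seen:
            continue  # same listing pulled in via another category feed
        seen.add(url)
        unique.append(rec)
    return unique

jobs = [
    {"url": "https://weworkremotely.com/remote-jobs/a", "category": "Back-End Programming"},
    {"url": "https://weworkremotely.com/remote-jobs/a", "category": "Full-Stack Programming"},
    {"url": "https://weworkremotely.com/remote-jobs/b", "category": "Design"},
]
print(len(dedupe_by_url(jobs)))  # 2
```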
Use Cases
Job Market Intelligence
Track demand across role types, categories, and locations. Monitor how hiring trends shift over time with repeatable snapshots.
Compensation Benchmarking
Analyze available salary signals across listings and compare compensation patterns by category and company segment.
Candidate Outreach Research
Identify hiring companies, map active roles, and build focused outreach lists using job and company-level fields.
Remote Hiring Operations
Support talent teams with fresh datasets for sourcing, reporting, and recurring hiring dashboards.
Niche Opportunity Discovery
Filter by search intent (for example specific skills) to discover less obvious openings and sub-markets.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| search_urls | Array | No | [] | Multiple search URLs. Terms are combined with OR logic. |
| category | String | No | "all" | Category scope to collect from. |
| results_wanted | Integer | No | 20 | Maximum number of records to return. |
Category values
- all
- remote-full-stack-programming-jobs
- remote-front-end-programming-jobs
- remote-back-end-programming-jobs
- remote-design-jobs
- remote-devops-sysadmin-jobs
- remote-management-and-finance-jobs
- remote-product-jobs
- remote-customer-support-jobs
- remote-sales-and-marketing-jobs
- all-other-remote-jobs
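As a sketch of how a caller might assemble a valid run input from the parameters and category values above (the `build_run_input` helper and `ALLOWED_CATEGORIES` set are illustrative, not part of the actor):

```python
ALLOWED_CATEGORIES = {
    "all",
    "remote-full-stack-programming-jobs",
    "remote-front-end-programming-jobs",
    "remote-back-end-programming-jobs",
    "remote-design-jobs",
    "remote-devops-sysadmin-jobs",
    "remote-management-and-finance-jobs",
    "remote-product-jobs",
    "remote-customer-support-jobs",
    "remote-sales-and-marketing-jobs",
    "all-other-remote-jobs",
}

def build_run_input(search_urls=None, category="all", results_wanted=20):
    """Assemble actor input using the documented defaults."""
    if category not in ALLOWED_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return {
        "search_urls": search_urls or [],
        "category": category,
        "results_wanted": results_wanted,
    }

print(build_run_input())
# {'search_urls': [], 'category': 'all', 'results_wanted': 20}
```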
Output Data
Each item in the dataset contains:
| Field | Type | Description |
|---|---|---|
| title | String | Job title. |
| company | String | Company name. |
| category | String | Primary category label. |
| categories | Array | All category labels found for the listing. |
| location | String | Location or region details from listing content. |
| salary | String \| null | Raw compensation text when available. |
| min_salary | Number \| null | Parsed minimum salary when detectable. |
| max_salary | Number \| null | Parsed maximum salary when detectable. |
| currency | String \| null | Parsed currency code when detectable. |
| salary_interval | String \| null | Parsed compensation interval (hour, year, etc.) when detectable. |
| job_type | String \| null | Employment type signals (for example Contract, Full-Time) when detectable. |
| date_posted | String \| null | Listing publish date string. |
| description_html | String \| null | Rich listing description (HTML). |
| description_text | String \| null | Plain-text description. |
| url | String | Listing URL. |
| guid | String \| null | Listing identifier value. |
| apply_url | String \| null | Extracted apply URL. |
| company_website | String \| null | Company website URL when available. |
| rss_creator | String \| null | Creator metadata when available. |
| _source | String | Source marker for traceability. |
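The parsed salary fields (`min_salary`, `max_salary`, `currency`, `salary_interval`) are derived from raw compensation text. A minimal illustrative parser, assuming simple patterns like "$120,000 - $160,000 per year" (not the actor's actual logic):

```python
import re

def parse_salary(raw):
    """Extract min/max salary, currency, and interval from simple salary strings."""
    if not raw:
        return {"min_salary": None, "max_salary": None,
                "currency": None, "salary_interval": None}
    currency = "USD" if "$" in raw else None
    lowered = raw.lower()
    interval = "year" if "year" in lowered else ("hour" if "hour" in lowered else None)
    # Pull out numbers such as "120,000"; strip thousands separators before converting.
    amounts = [int(a.replace(",", "")) for a in re.findall(r"\$?([\d,]{4,})", raw)]
    return {
        "min_salary": min(amounts) if amounts else None,
        "max_salary": max(amounts) if amounts else None,
        "currency": currency,
        "salary_interval": interval,
    }

print(parse_salary("$120,000 - $160,000 per year"))
# {'min_salary': 120000, 'max_salary': 160000, 'currency': 'USD', 'salary_interval': 'year'}
```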
Usage Examples
Basic Collection
```json
{
  "category": "all",
  "results_wanted": 20
}
```
Category-Focused Collection
```json
{
  "category": "remote-back-end-programming-jobs",
  "results_wanted": 20
}
```
Search URL Filtering (Multiple)
```json
{
  "search_urls": [
    "https://weworkremotely.com/remote-jobs/search?term=python",
    "https://weworkremotely.com/remote-jobs/search?term=golang"
  ],
  "category": "all",
  "results_wanted": 20
}
```
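The OR semantics of multiple search URLs can be sketched as follows. `extract_terms` and `matches_any` are hypothetical helpers showing how terms from each URL's `term` query parameter might combine, not the actor's own code:

```python
from urllib.parse import urlparse, parse_qs

def extract_terms(search_urls):
    """Pull the `term` query parameter from each search URL."""
    terms = []
    for url in search_urls:
        qs = parse_qs(urlparse(url).query)
        terms.extend(t.lower() for t in qs.get("term", []))
    return terms

def matches_any(job_text, terms):
    """OR logic: a listing matches if any extracted term appears in its text."""
    text = job_text.lower()
    return any(term in text for term in terms)

urls = [
    "https://weworkremotely.com/remote-jobs/search?term=python",
    "https://weworkremotely.com/remote-jobs/search?term=golang",
]
terms = extract_terms(urls)
print(matches_any("Senior Python Engineer", terms))  # True
```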
Sample Output
```json
{
  "title": "Senior Backend Engineer",
  "company": "Example Labs",
  "category": "Back-End Programming",
  "categories": ["Back-End Programming"],
  "location": "United States",
  "salary": "$120,000 - $160,000 per year",
  "min_salary": 120000,
  "max_salary": 160000,
  "currency": "USD",
  "salary_interval": "year",
  "job_type": "Full-Time",
  "date_posted": "Fri, 14 Feb 2026 12:31:45 +0000",
  "description_html": "<p>...</p>",
  "description_text": "...",
  "url": "https://weworkremotely.com/remote-jobs/example-labs-senior-backend-engineer",
  "guid": "https://weworkremotely.com/remote-jobs/example-labs-senior-backend-engineer",
  "apply_url": "https://weworkremotely.com/remote-jobs/example-labs-senior-backend-engineer",
  "company_website": "https://examplelabs.com",
  "rss_creator": null,
  "_source": "weworkremotely-rss-api"
}
```
Tips for Best Results
Start Broad, Then Narrow
- Run with `category: "all"` to build a large baseline dataset.
- Add `search_urls` to focus on specific skills and roles.
Request Sufficient Volume
- Use `results_wanted` above 100 for broader market snapshots.
- Increase limits for periodic reporting and trend tracking.
Expect Natural Nulls
- Some listings do not publish compensation or employment type.
- Null values indicate source-side absence rather than extraction failure.
Use Repeated Runs for Monitoring
- Schedule recurring runs to detect newly posted jobs and changes over time.
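One way to detect newly posted jobs between scheduled runs is to diff `guid` values across two dataset snapshots. `new_listings` below is an illustrative helper, not a built-in feature of the actor:

```python
def new_listings(previous_items, current_items):
    """Return current items whose guid was not present in the previous run."""
    seen = {item["guid"] for item in previous_items}
    return [item for item in current_items if item["guid"] not in seen]

yesterday = [{"guid": "job-1"}, {"guid": "job-2"}]
today = [{"guid": "job-2"}, {"guid": "job-3"}]
print([i["guid"] for i in new_listings(yesterday, today)])  # ['job-3']
```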
Integrations
- Google Sheets — Analyze listings with pivot tables and filters.
- Airtable — Build searchable hiring databases.
- Slack — Push fresh-job alerts to channels.
- Webhooks — Send results into custom systems.
- Make — Build no-code automation workflows.
- Zapier — Trigger downstream business actions.
Export Formats
- JSON — Developer-friendly structured data.
- CSV — Spreadsheet-ready rows.
- Excel — Business reporting format.
- XML — Structured interchange format.
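Exports in all of these formats are available directly from the Apify dataset. If you prefer to build a CSV locally from JSON items, a minimal sketch using Python's standard library (field names follow the output schema above):

```python
import csv
import io

def to_csv(items, fields):
    """Write selected fields from dataset items into CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

items = [{
    "title": "Senior Backend Engineer",
    "company": "Example Labs",
    "url": "https://weworkremotely.com/remote-jobs/example",
}]
print(to_csv(items, ["title", "company", "url"]))
```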
Frequently Asked Questions
Why are some salary fields null?
Not all listings include compensation details. When compensation is present and parseable, salary fields are populated.
Why is job_type null on some records?
Employment type is only filled when the listing includes clear type signals.
Can I collect more than 100 jobs?
Yes. Set results_wanted above 100 to return larger datasets.
Can I pass multiple search URLs?
Yes. Use search_urls; matching uses OR logic across extracted terms.
Are duplicate jobs removed?
Yes. Records are deduplicated by listing URL.
Support
For issues or feature requests, open a request through the Apify Console.
Legal Notice
This actor is intended for legitimate data collection. You are responsible for complying with applicable laws and platform terms, and for responsible use of collected data.