WeWorkRemotely Jobs Scraper
Pricing
from $40.00 / 1,000 job postings
Developer: Stephan Corbeil
Last modified: a day ago
🌍 WeWorkRemotely Jobs Scraper — Remote Job Aggregator
Scrape WeWorkRemotely (the longest-running remote-first job board, ~3M monthly visits) by category. Each call pulls the most-recent ~50–100 postings per category — programming, design, customer support, marketing, product, sales/business, all-other — with title, company, full HTML description, category, region, and posted date.
Built for remote-first recruiters filling distributed teams, indie founders and boutique agencies seeking pre-vetted remote candidates, job aggregators and talent platforms that need clean, structured remote-job rows, distributed-work researchers tracking remote-job velocity, and sales teams prospecting into companies that just opened key remote roles.
What you get per posting
- `id` — stable WWR job URL (also the `guid`)
- `url` — link to the WWR job page
- `title` — clean job title (parsed from "Company: Job Title")
- `company` — company name (parsed from the title)
- `raw_title` — original "Company: Job Title" string (for diff tracking)
- `region` — geographic restriction (e.g. "Anywhere in the World", "USA Only", "Europe")
- `category_feed` — the RSS feed slug the row came from
- `categories` — additional WWR sub-categories (e.g. "Full-Stack Programming")
- `posted_at` — RFC-822 timestamp from `pubDate`
- `description_html` — full HTML body of the posting
- `source` — `"weworkremotely.com"`
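The row shape above can be modeled downstream as a small dataclass. This is just a sketch for your own pipeline code (the `WWRPosting` name and defaults are ours, not part of the actor's output):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WWRPosting:
    """One WeWorkRemotely job row, mirroring the field list above."""
    id: str                         # stable WWR job URL (also the RSS guid)
    url: str                        # link to the WWR job page
    raw_title: str                  # original "Company: Job Title" string
    title: Optional[str] = None     # parsed job title
    company: Optional[str] = None   # parsed company; may be None (see FAQ)
    region: str = ""                # e.g. "Anywhere in the World"
    category_feed: str = ""         # RSS feed slug the row came from
    categories: list = field(default_factory=list)
    posted_at: str = ""             # RFC-822 timestamp from pubDate
    description_html: str = ""
    source: str = "weworkremotely.com"

row = WWRPosting(
    id="https://weworkremotely.com/remote-jobs/example",
    url="https://weworkremotely.com/remote-jobs/example",
    raw_title="Acme: Senior Backend Engineer",
    title="Senior Backend Engineer",
    company="Acme",
    region="Anywhere in the World",
)
```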
Use cases
- Remote recruiter sourcing — pull every remote programming job posted this week, filter by stack, route to internal candidate pool.
- Job aggregator / talent-platform backfill — feed your remote-jobs vertical with clean structured WWR data refreshed every 6 hours.
- Sales prospecting — companies that just posted "Head of Marketing (Remote)" are buying martech now.
- Distributed-work research — track posting velocity by category and region over time.
- Comp benchmarking — many WWR postings publish salary ranges / comp bands; extract them downstream.
- Founder/agency lead gen — companies actively hiring senior remote roles often need fractional / agency support too.
Quick start
Input:
{"categories": ["remote-programming-jobs","remote-design-jobs","remote-product-jobs"],"maxJobsPerCategory": 25}
Sample output row:
{"id": "https://weworkremotely.com/remote-jobs/rubrik-sales-engineering-manager-cloud-product-line-specialists","url": "https://weworkremotely.com/remote-jobs/rubrik-sales-engineering-manager-cloud-product-line-specialists","title": "Sales Engineering Manager, Cloud Product Line Specialists","company": "Rubrik","raw_title": "Rubrik: Sales Engineering Manager, Cloud Product Line Specialists","region": "Anywhere in the World","category_feed": "remote-programming-jobs","categories": ["Full-Stack Programming"],"posted_at": "Thu, 09 Apr 2026 20:35:22 +0000","source": "weworkremotely.com"}
Run via Python (apify-client)
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("nexgendata/weworkremotely-jobs-scraper").call(
    run_input={
        "categories": [
            "remote-programming-jobs",
            "remote-design-jobs",
            "remote-marketing-jobs",
        ],
        "maxJobsPerCategory": 100,
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], "-", item["company"], "-", item["region"])
```
Run via cURL
curl -X POST "https://api.apify.com/v2/acts/nexgendata~weworkremotely-jobs-scraper/run-sync-get-dataset-items?token=YOUR_TOKEN" \-H "Content-Type: application/json" \-d '{"categories":["remote-programming-jobs"],"maxJobsPerCategory":50}'
Integrations
- Zapier — trigger on new postings, push to Slack / Discord / Telegram.
- Make.com — chain into Notion / Airtable as a candidate-sourcing pipeline.
- n8n — schedule every few hours, dedupe by `id`, push into Postgres for a candidate-friendly remote-jobs UI.
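The dedupe-by-`id` step in a pipeline like the n8n one can be sketched with a plain SQLite upsert keyed on the stable job URL (hypothetical table; a Postgres `ON CONFLICT` clause works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS postings (
           id TEXT PRIMARY KEY,
           title TEXT,
           company TEXT,
           posted_at TEXT
       )"""
)

def upsert_posting(conn, item: dict) -> None:
    """Insert or refresh a row keyed on the stable WWR job URL."""
    conn.execute(
        """INSERT INTO postings (id, title, company, posted_at)
           VALUES (:id, :title, :company, :posted_at)
           ON CONFLICT(id) DO UPDATE SET
               title = excluded.title,
               company = excluded.company,
               posted_at = excluded.posted_at""",
        item,
    )

# A featured posting repeated across category feeds collapses into one row.
row = {
    "id": "https://weworkremotely.com/remote-jobs/example",
    "title": "Senior Backend Engineer",
    "company": "Acme",
    "posted_at": "Thu, 09 Apr 2026 20:35:22 +0000",
}
upsert_posting(conn, row)
upsert_posting(conn, row)  # second run: same id, still one row
```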
Pricing
Pay-per-event (PPE):
- Actor start: $0.00005 (negligible)
- Per posting: $0.04
Cost calculator:
| Postings returned | Cost |
|---|---|
| 25 (smoke test) | $1.00 |
| 175 (7 categories × 25) | $7.00 |
| 500 | $20.00 |
| 1,000 | $40.00 |
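The PPE math behind the table is simply start fee plus per-posting fee. A throwaway calculator, using the prices listed above (which may of course change):

```python
START_FEE = 0.00005   # per actor start, effectively negligible
PER_POSTING = 0.04    # per job posting returned

def run_cost(postings: int, starts: int = 1) -> float:
    """Estimated cost in USD for one or more runs, rounded to cents."""
    return round(starts * START_FEE + postings * PER_POSTING, 2)
```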
That's a fraction of WWR's per-post pricing for employers and trivially cheap vs. Greenhouse/Lever multi-tenant aggregators.
FAQ
Q: Do I need a WWR account or API key?
A: No. The category RSS feeds are public and require no auth.
Q: How fresh is the data?
A: Near real-time: WWR's RSS feeds update as soon as a posting goes live.
Q: Why is `company` sometimes null?
A: A handful of WWR titles don't follow the `Company: Title` pattern. The `raw_title` field is always populated; downstream you can apply your own parser if needed.
Q: Why are some descriptions HTML and others plain?
A: WWR stores all descriptions as HTML; some posters use plain text inside it. Use `description_html` directly or strip tags downstream as you prefer.
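If you want plain text downstream, the stdlib `html.parser` is enough for a minimal tag stripper. A sketch only; a real pipeline might prefer a dedicated HTML library:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect only text nodes, dropping all tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def strip_tags(description_html: str) -> str:
    """Flatten a description_html value to whitespace-normalized text."""
    parser = _TextExtractor()
    parser.feed(description_html)
    return " ".join(" ".join(parser.chunks).split())
```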
Q: Can I filter by region (e.g. USA only)?
A: Not at the feed level — WWR's RSS includes all regions. Filter on the `region` field downstream.
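Region filtering therefore happens on your side. A sketch against the dataset rows: "Anywhere in the World" and "USA Only" appear in the field list above, while any other labels in the allow-set are assumptions you should adjust to the values you actually see:

```python
# Region labels (WWR's own strings) that permit US-based candidates.
US_OK = {"Anywhere in the World", "USA Only"}

def filter_us_friendly(items):
    """Yield dataset rows whose region permits US candidates."""
    for item in items:
        if item.get("region") in US_OK:
            yield item

rows = [
    {"title": "Backend Engineer", "region": "Anywhere in the World"},
    {"title": "Designer", "region": "Europe"},
]
us_rows = list(filter_us_friendly(rows))
```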
Q: Are featured / sticky postings included?
A: Yes — the RSS feed includes them in the <item> list with the rest. They're typically duplicated across categories.
Related actors
- Greenhouse Jobs Scraper — same shape, for ATS-hosted boards.
- Lever Jobs Scraper — same shape, for Lever-hosted boards.
- HN Who's Hiring Scraper — monthly Hacker News hiring threads, complements WWR with founder-driven postings.
Built and maintained by NexGenData — affordable, focused web scrapers for B2B data teams.