Wellfound Jobs Scraper
Unlock startup job data with the Wellfound Jobs Scraper, built to gather fresh listings from the Wellfound platform efficiently. Requirement: you must use USA residential proxies for this actor to extract data successfully. Get the startup insights you need today!
Extract startup and tech job listings from Wellfound in a structured, reusable format. Collect job titles, companies, compensation, remote details, and complete job descriptions for research, hiring intelligence, and analysis workflows. Built for fast, repeatable job data collection at scale.
Features
- Comprehensive job extraction — Collect title, company, salary, location, and apply links in one run.
- Description-ready output — Get both `description_text` and `description_html` for publishing and NLP workflows.
- Company context included — Capture company profile URL, logo, size, and badges when available.
- Flexible volume control — Set exactly how many jobs you want with `results_wanted`.
- Remote and experience metadata — Includes remote configuration, accepted remote locations, and experience range fields.
- Automation-friendly exports — Use JSON, CSV, Excel, XML, RSS, or HTML from Apify dataset exports.
Use Cases
Job Market Research
Track hiring demand across locations and role types. Compare salary bands, remote trends, and experience requirements over time.
Recruiting Intelligence
Build targeted lead lists of actively hiring startups. Use company and role metadata to prioritize outreach and sourcing.
Job Aggregation
Feed structured Wellfound listings into your own job board, newsletter, or internal opportunities feed.
Data Analysis and AI Workflows
Use normalized job description text and HTML in classification, summarization, matching, and trend analysis pipelines.
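As a minimal sketch of the analysis workflows above, the snippet below loads an exported JSON dataset with pandas (the file name is an assumption) and computes the share of remote listings and the median minimum experience, using field names from the Output Data table further down.

```python
import json

import pandas as pd

# Load a dataset exported from Apify as JSON (file name is an assumption).
with open("wellfound_jobs.json", encoding="utf-8") as f:
    jobs = json.load(f)

df = pd.DataFrame(jobs)

# Share of listings marked remote ("Yes"/"No" per the output schema).
remote_share = (df["remote"] == "Yes").mean()

# Median minimum years of experience, ignoring listings without the field.
median_min_experience = pd.to_numeric(df["yearsExperienceMin"], errors="coerce").median()

print(f"Remote share: {remote_share:.1%}")
print(f"Median minimum experience: {median_min_experience} years")
```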
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrl | String | No | — | Optional Wellfound URL to start from. If set, it overrides location-based start. |
| keyword | String | No | — | Optional keyword for your run metadata and filters. |
| location | String | No | "united-states" | Location slug such as "remote" or "san-francisco". |
| results_wanted | Integer | No | 20 | Number of jobs to collect. 20 is the QA default only; set any number you need. Use 0 for all available jobs. |
| max_pages | Integer | No | 20 | Safety cap on the number of paginated pages to visit. |
| proxyConfiguration | Object | No | {"useApifyProxy": false} | Proxy configuration for reliability and anti-blocking. |
| dedupe | Boolean | No | true | Remove duplicate jobs within the same run. |
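For programmatic runs, a minimal sketch using the Python apify-client package is shown below. The actor ID and token are placeholders (assumptions), and the input keys mirror the table above, including a USA residential proxy configuration per the requirement noted earlier.

```python
from apify_client import ApifyClient

# Placeholders: supply your own Apify API token and the actor's ID.
client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "location": "remote",
    "results_wanted": 100,
    "max_pages": 20,
    "dedupe": True,
    # USA residential proxies, as required by the actor description.
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
        "apifyProxyCountry": "US",
    },
}

# Start the run and wait for it to finish.
run = client.actor("<ACTOR_ID>").call(run_input=run_input)

# Print a quick summary of the collected jobs.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("title"), "|", item.get("company"))
```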
Output Data
Each dataset item contains:
| Field | Type | Description |
|---|---|---|
| id | String | Wellfound job identifier. |
| title | String | Job title. |
| company | String | Company name. |
| description_text | String | Plain-text job description. |
| description_html | String | HTML job description. |
| companyDescription | String | Company blurb or high-level company description. |
| companySize | String | Company size bucket when provided. |
| companyLogoUrl | String | Company logo URL. |
| companyBadges | Array | Company badges shown on Wellfound. |
| location | String | Job location. |
| salary | String | Salary or compensation string from the listing. |
| jobType | String | Job type (for example full-time). |
| primaryRoleTitle | String | Primary role category/title. |
| atsSource | String | Applicant tracking source label, if available. |
| autoPosted | Boolean | Indicates whether the listing is marked auto-posted. |
| remote | String | Yes or No remote indicator. |
| remoteConfig | Object | Structured remote configuration object. |
| acceptedRemoteLocationNames | Array | Accepted remote location names. |
| yearsExperienceMin | Number or Null | Minimum years of experience. |
| yearsExperienceMax | Number or Null | Maximum years of experience. |
| postedDate | String or Number | Posting timestamp/date from source. |
| applyUrl | String | Direct job application URL. |
| companyUrl | String | Wellfound company profile URL. |
| source | String | Source marker for extraction path. |
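As a small downstream example, the helper below (hypothetical, not part of the actor) filters dataset records using the fields above, keeping remote roles that list a salary and accept candidates in the United States.

```python
def us_remote_with_salary(items):
    """Filter dataset records for US-friendly remote roles with a listed salary.

    `items` is assumed to be a list of dicts shaped like the table above,
    for example loaded from a JSON export of the run's dataset.
    """
    matches = []
    for job in items:
        accepts_us = "United States" in (job.get("acceptedRemoteLocationNames") or [])
        if job.get("remote") == "Yes" and job.get("salary") and accepts_us:
            matches.append(
                {
                    "title": job.get("title"),
                    "company": job.get("company"),
                    "salary": job.get("salary"),
                    "applyUrl": job.get("applyUrl"),
                }
            )
    return matches
```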
Usage Examples
Basic Run
Collect up to 20 jobs from the default location:
{"location": "united-states","results_wanted": 20}
Larger Collection
Collect a larger batch for market analysis:
{"location": "remote","results_wanted": 300,"max_pages": 20}
Collect All Available Jobs
Use 0 to scrape all available jobs within your page cap:
{"location": "san-francisco","results_wanted": 0,"max_pages": 50}
Sample Output
{"id": "3813253","title": "Senior Software Engineer, Integrations","company": "Paragon","description_text": "About Paragon ...","description_html": "<p>About Paragon ...</p>","companyDescription": "The Integration Infrastructure Platform for scaling your product's connectors","companySize": "SIZE_51_200","companyLogoUrl": "https://...","companyBadges": ["Actively Hiring", "Top Responder"],"location": "Los Angeles","salary": "$160k – $200k • 0.01% – 0.05%","jobType": "full-time","primaryRoleTitle": "Software Engineer","atsSource": "GREENHOUSE","autoPosted": false,"remote": "Yes","remoteConfig": {},"acceptedRemoteLocationNames": ["United States"],"yearsExperienceMin": 5,"yearsExperienceMax": 8,"postedDate": 1769777533,"applyUrl": "https://wellfound.com/company/useparagon/jobs/senior-software-engineer-integrations","companyUrl": "https://wellfound.com/company/useparagon","source": "json"}
Tips for Best Results
Control Run Size
- Start with `results_wanted: 20` for quick validation.
- Increase `results_wanted` for production runs; the actor will respect your value.
- Keep `max_pages` aligned with your volume goal.
Use Reliable Proxy Settings
- Use residential proxies for higher success rates on protected pages.
- Keep proxy settings stable across scheduled runs for consistency.
Choose Good Start Targets
- Prefer active locations like `united-states` or `remote`.
- Use `startUrl` when you need a specific Wellfound path.
Integrations
Connect your dataset with:
- Google Sheets — Share and analyze hiring data quickly.
- Airtable — Build searchable startup hiring databases.
- Slack — Send run summaries and alerts.
- Webhooks — Push fresh records into your own systems.
- Make — Automate downstream enrichment and reporting.
- Zapier — Trigger actions in your business apps.
Export Formats
- JSON — API and engineering workflows
- CSV — Spreadsheet analysis
- Excel — Business reporting
- XML — System integrations
- RSS — Feed-based automation
- HTML — Quick browser review
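These formats are also available from the Apify API: the dataset items endpoint accepts a format query parameter, so a download can be scripted as in the sketch below (dataset ID and token are placeholders).

```python
import requests

DATASET_ID = "<DATASET_ID>"          # from the run's default dataset
APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"   # needed for private datasets

# Request the dataset items as CSV; other formats include json, xlsx, xml, rss, and html.
url = f"https://api.apify.com/v2/datasets/{DATASET_ID}/items"
params = {"format": "csv", "token": APIFY_TOKEN}

response = requests.get(url, params=params, timeout=60)
response.raise_for_status()

with open("wellfound_jobs.csv", "wb") as f:
    f.write(response.content)
```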
Frequently Asked Questions
Is this actor limited to 20 results?
No. 20 is only the default value used for Apify QA compatibility. You can set results_wanted to any number, or 0 for all available jobs.
Can I collect more than one page?
Yes. Use max_pages to control how many paginated pages are visited.
Why are some fields empty on some jobs?
Some listings do not expose every attribute. Empty fields usually mean the source listing did not provide that value.
What is the difference between description_text and description_html?
description_text is normalized plain text. description_html preserves HTML formatting for rendering and publishing.
Can I run this on a schedule?
Yes. Use Apify schedules to run daily or hourly and keep your dataset up to date.
What if a run gets blocked?
Enable or improve proxy settings and retry. Residential proxy groups typically improve reliability.
Support
For issues or feature requests, contact support through Apify Console actor discussions.
Legal Notice
This actor is provided for legitimate data collection and analysis use cases. You are responsible for complying with Wellfound terms, applicable laws, and internal data governance policies.