Built In Jobs Scraper
Pricing: Pay per event
Developer: Stas Persiianenko
Extract thousands of tech job listings from Built In — including salary ranges, company details, location data, required experience, benefits packages, and full job descriptions — without writing a single line of code.
What Does Built In Jobs Scraper Do?
Built In Jobs Scraper crawls builtin.com/jobs and extracts structured data from every job listing. For each position it retrieves:
- 🏢 Company information — name, profile URL, logo URL
- 💼 Job basics — title, employment type, direct-apply flag
- 📍 Location — city, state, country, GPS coordinates (lat/lng)
- 💰 Salary — min/max range, currency, period (hourly/annual/monthly)
- 🗓️ Dates — date posted, valid-through expiry
- 🏭 Industries — all tags assigned by Built In (e.g. Software, SaaS, FinTech)
- 🎁 Benefits — full list (401K, equity, dental, parental leave, etc.)
- 🎓 Requirements — months of experience, education level required
- 📝 Description — full plain-text job description (optional)
The scraper uses structured data (Schema.org JobPosting JSON-LD) embedded in each listing page, so the output is consistently typed and requires no brittle HTML parsing.
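To illustrate the technique (this is a minimal sketch, not the actor's actual source code): a JobPosting JSON-LD block can be pulled out of a listing page's HTML with nothing more than a regex and a JSON parser.

```python
import json
import re

# Sketch of JSON-LD extraction: find <script type="application/ld+json">
# blocks and return the first Schema.org JobPosting object.
LDJSON_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL,
)

def extract_job_posting(html: str):
    """Return the first JobPosting JSON-LD object found in the page, or None."""
    for match in LDJSON_RE.finditer(html):
        try:
            data = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue
        # A JSON-LD script may hold a single object or a list of objects
        candidates = data if isinstance(data, list) else [data]
        for obj in candidates:
            if isinstance(obj, dict) and obj.get("@type") == "JobPosting":
                return obj
    return None

html = """<html><head><script type="application/ld+json">
{"@type": "JobPosting", "title": "Software Engineer",
 "baseSalary": {"value": {"minValue": 100000, "maxValue": 275000}}}
</script></head></html>"""
job = extract_job_posting(html)
print(job["title"])  # Software Engineer
```

Because the fields come from a typed JSON object rather than scraped HTML nodes, a site redesign that leaves the structured data intact does not break extraction.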
Who Is It For?
👩💼 Recruiters & Talent Sourcers
Monitor which companies are actively hiring for specific roles. Identify salary benchmarks by role and location. Build candidate outreach lists based on company size and industry.
📊 HR & Compensation Analysts
Pull salary data across hundreds of tech roles in minutes. Compare compensation by employment type, experience level, or geography. Track salary inflation trends over time with scheduled runs.
🚀 Job Seekers & Career Coaches
Aggregate listings from multiple search queries into a single spreadsheet. Filter by direct-apply, salary range, or required experience. Export to Airtable or Google Sheets for organised job hunting.
🤖 Data Scientists & Researchers
Build training datasets for NLP models (job description classification, salary prediction). Research tech hiring trends by industry, city, or technology stack.
🏢 Sales & GTM Teams
Identify companies that are scaling their engineering teams as a signal for SaaS outreach. Enrich prospect lists with real-time headcount-growth signals.
Why Use This Scraper?
- Schema.org data — uses structured JobPosting JSON-LD for typed, reliable fields
- No browser required — fast HTTP-only extraction (no Playwright overhead)
- Salary coverage — extracts min/max salary ranges where available
- Benefit lists — 20-50 benefits per listing, individually parsed
- GPS coordinates — latitude/longitude for mapping or geocoding workflows
- Paginated search — automatically crawls all pages for a search query
What Data Can You Extract?
| Field | Type | Example |
|---|---|---|
| `id` | string | "4457551" |
| `url` | string | "https://builtin.com/job/software-engineer/4457551" |
| `title` | string | "Software Engineer" |
| `companyName` | string | "Oso" |
| `companyUrl` | string | "https://builtin.com/company/oso" |
| `companyLogoUrl` | string | "https://builtin.com/sites/..." |
| `location` | string | "New York, New York" |
| `city` | string | "New York" |
| `state` | string | "New York" |
| `country` | string | "USA" |
| `latitude` | number | 40.7130466 |
| `longitude` | number | -74.0072301 |
| `datePosted` | string | "2026-05-05" |
| `validThrough` | string | "2026-06-04T00:04:37+00:00" |
| `employmentType` | string | "FULL_TIME" |
| `salaryMin` | number | 100000 |
| `salaryMax` | number | 275000 |
| `salaryCurrency` | string | "USD" |
| `salaryPeriod` | string | "YEAR" |
| `industries` | array | ["Security","Software","IaaS"] |
| `benefits` | array | ["401(K)","Company equity","Dental insurance",...] |
| `directApply` | boolean | false |
| `experienceMonths` | number | 108 (= 9 years) |
| `educationLevel` | string | "Bachelor Degree" |
| `description` | string | Full plain-text job description |
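If you consume the dataset in Python, the record shape above can be modeled as a TypedDict for editor completion and type checking. This is a convenience sketch mirroring the field table, not something shipped with the actor; fields are optional because salary and experience data are not always present.

```python
from typing import List, Optional, TypedDict

class BuiltInJob(TypedDict, total=False):
    """One dataset record, mirroring the field table above."""
    id: str
    url: str
    title: str
    companyName: str
    companyUrl: str
    companyLogoUrl: str
    location: str
    city: str
    state: str
    country: str
    latitude: float
    longitude: float
    datePosted: str
    validThrough: str
    employmentType: str
    salaryMin: Optional[float]
    salaryMax: Optional[float]
    salaryCurrency: str
    salaryPeriod: str
    industries: List[str]
    benefits: List[str]
    directApply: bool
    experienceMonths: Optional[int]
    educationLevel: str
    description: str

# Partial records type-check fine thanks to total=False
job: BuiltInJob = {"id": "4457551", "title": "Software Engineer", "salaryMin": 100000}
print(job["title"])  # Software Engineer
```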
How Much Does It Cost to Scrape Built In Jobs?
Built In Jobs Scraper uses pay-per-result pricing — you only pay for jobs actually extracted. There is no charge for empty runs or failed requests.
Pricing tiers
| Tier | Cost per job |
|---|---|
| Free (first 20 results/month) | $0.0023 |
| Bronze (pay-as-you-go) | $0.002 |
| Silver | $0.00156 |
| Gold | $0.0012 |
| Platinum | $0.0008 |
| Diamond | $0.00056 |
Example costs
| Scenario | Jobs | Estimated cost |
|---|---|---|
| Quick market research | 50 | ~$0.10 |
| Weekly job feed | 500 | ~$1.00 |
| Bulk salary analysis | 2,000 | ~$4.00 |
| Full index snapshot | 10,000 | ~$20.00 |
Free plan estimate: Apify's free plan includes $5 in monthly platform credits, enough for approximately 2,500 job listings per month at the Bronze rate ($0.002/job).
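With pay-per-result pricing, cost planning is simple arithmetic: jobs scraped multiplied by the tier rate. A quick sketch, with rates copied from the table above:

```python
# Per-job rates from the pricing tiers table above
RATES = {
    "bronze": 0.002,
    "silver": 0.00156,
    "gold": 0.0012,
    "platinum": 0.0008,
    "diamond": 0.00056,
}

def estimated_cost(jobs: int, tier: str = "bronze") -> float:
    """Pay-per-result: total cost = jobs scraped x tier rate."""
    return round(jobs * RATES[tier], 2)

print(estimated_cost(500))     # 1.0  (weekly job feed at Bronze)
print(estimated_cost(10_000))  # 20.0 (full index snapshot at Bronze)
```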
How to Scrape Built In Jobs — Step by Step
- Open the actor on the Apify platform and click Try for free.
- Set a Start URL — paste any Built In jobs page URL (e.g. https://builtin.com/jobs?search=machine+learning). Leave it as the default https://builtin.com/jobs to scrape all recent postings.
- (Optional) Enter Keywords — type a job title or skill keyword if you prefer not to construct the URL manually.
- Set Max Jobs — enter how many listings you want (e.g. 200).
- Toggle Include Description — enable to get the full job description text.
- Click Start — the scraper runs and populates your dataset in real time.
- Export — download results as JSON, CSV, Excel, or connect to Zapier / Make.
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `startUrls` | array | [{"url":"https://builtin.com/jobs"}] | Built In jobs page URLs to crawl with automatic pagination |
| `keywords` | string | — | Job title keyword search (used when Start URLs is empty) |
| `maxJobs` | integer | 100 | Maximum number of job listings to extract |
| `includeDescription` | boolean | true | Include full plain-text job description in output |
| `maxRequestRetries` | integer | 3 | Retry limit for failed HTTP requests |
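Putting the parameters together, a typical run input looks like this (the search query is just an example):

```json
{
  "startUrls": [{ "url": "https://builtin.com/jobs?search=machine+learning" }],
  "maxJobs": 200,
  "includeDescription": true,
  "maxRequestRetries": 3
}
```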
Supported start URL formats
- https://builtin.com/jobs — all jobs
- https://builtin.com/jobs?search=data+scientist — keyword search
- https://builtin.com/remote-jobs — remote only
- https://builtin.com/jobs/chicago — city filter
- https://builtin.com/jobs/san-francisco — city filter
Output Example
```json
{
  "id": "4457551",
  "url": "https://builtin.com/job/software-engineer/4457551",
  "title": "Software Engineer",
  "companyName": "Oso",
  "companyUrl": "https://builtin.com/company/oso",
  "companyLogoUrl": "https://builtin.com/sites/www.builtin.com/files/2024-12/Oso.jpeg",
  "location": "New York, New York",
  "city": "New York",
  "state": "New York",
  "country": "USA",
  "latitude": 40.7130466,
  "longitude": -74.0072301,
  "datePosted": "2026-05-05",
  "validThrough": "2026-06-04T00:04:37+00:00",
  "employmentType": "FULL_TIME",
  "salaryMin": 100000,
  "salaryMax": 275000,
  "salaryCurrency": "USD",
  "salaryPeriod": "YEAR",
  "industries": ["Security", "Software", "Infrastructure as a Service (IaaS)"],
  "benefits": ["401(K)", "Company equity", "Dental insurance", "Health insurance", "Unlimited vacation policy"],
  "directApply": false,
  "experienceMonths": 108,
  "educationLevel": "Bachelor Degree",
  "description": "Old problem, new $25B+ market\n\nCompanies like AWS, Stripe, and Twilio..."
}
```
Tips & Best Practices
- 🔍 Use specific search URLs for targeted scraping — builtin.com/jobs?search=react+developer returns more relevant results than scraping all jobs and filtering afterward.
- 📅 Schedule weekly runs to maintain a fresh jobs feed — Built In listings expire after ~30 days, so weekly is sufficient for most use cases.
- 💡 Disable descriptions when you only need job metadata — set `includeDescription: false` to halve response sizes and speed up large runs.
- 🗺️ Use GPS coordinates for geospatial analysis — the lat/lng fields are directly usable in mapping tools like Mapbox, Google Maps, or Tableau.
- 📊 Filter by `salaryMin` in downstream tools — not all jobs list salary, but those that do are the most useful for compensation benchmarking.
- 🔄 Combine multiple start URLs — add builtin.com/jobs?search=backend and builtin.com/jobs?search=frontend as separate start URLs in one run to collect both in parallel.
- ⚡ Use `maxJobs` to cap spend on exploratory runs — start with 20-50 jobs to validate your search URL before scaling to thousands.
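The salary-filtering tip above takes only a few lines of plain Python once you have a JSON export. The records below are sample data; in practice you would load the actor's export, e.g. `jobs = json.load(open("export.json"))` (illustrative file name).

```python
# Sample records in the actor's output shape (illustrative data)
jobs = [
    {"title": "Software Engineer", "salaryMin": 150000, "salaryMax": 200000},
    {"title": "Product Manager", "salaryMin": None},
    {"title": "SRE", "salaryMin": 90000, "salaryMax": 120000},
]

# Keep only listings that disclose a salary at or above the floor;
# records with salaryMin of None are excluded by the first check.
benchmark = [j for j in jobs if j.get("salaryMin") and j["salaryMin"] >= 100000]
print([j["title"] for j in benchmark])  # ['Software Engineer']
```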
Integrations
📊 Export to Google Sheets for salary tracking
Use Apify's Google Sheets integration to push job data directly into a spreadsheet. Set up a scheduled run every Monday and your salary tracker auto-updates with the latest listings.
🔔 Slack alerts for new jobs at target companies
Combine with Apify's webhook notifications to receive a Slack message whenever a new job appears from a company on your watchlist.
🔗 Enrich LinkedIn Sales Navigator leads
Export companies from job listings → import into LinkedIn Sales Navigator → reach hiring managers the week they posted a role.
🗂️ Airtable recruiting database
Connect via Zapier: new job → filter by salary ≥ $X and required skills → create Airtable record. Build your own ATS with zero coding.
📧 Email digest with Make (formerly Integromat)
Trigger a Make scenario on run completion → filter jobs by your criteria → send a formatted daily email with the best matches.
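The Slack-alert integration above boils down to: on run completion, check the webhook's event type and match jobs against a watchlist. A hedged sketch follows; Apify webhooks POST a JSON payload whose `eventType` marks the run state (e.g. "ACTOR.RUN.SUCCEEDED"), but verify field names against the current Apify docs. The watchlist and helper name are illustrative.

```python
import json

WATCHLIST = {"Oso", "Stripe"}  # illustrative target companies

def alert_lines(payload: str, jobs: list) -> list:
    """Build Slack-style alert lines for watchlist companies on a successful run."""
    event = json.loads(payload)
    # Only act on successful runs; ignore failures and timeouts
    if event.get("eventType") != "ACTOR.RUN.SUCCEEDED":
        return []
    return [
        f"New role at {j['companyName']}: {j['title']} ({j['url']})"
        for j in jobs
        if j.get("companyName") in WATCHLIST
    ]

payload = '{"eventType": "ACTOR.RUN.SUCCEEDED"}'
jobs = [{"companyName": "Oso", "title": "Software Engineer",
         "url": "https://builtin.com/job/software-engineer/4457551"}]
print(alert_lines(payload, jobs))
```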
API Usage
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/builtin-jobs-scraper').call({
    startUrls: [{ url: 'https://builtin.com/jobs?search=software+engineer' }],
    maxJobs: 100,
    includeDescription: true,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} jobs`);
items.forEach(job => {
    console.log(`${job.title} @ ${job.companyName} — ${job.location}`);
    if (job.salaryMin) console.log(`  Salary: $${job.salaryMin}–$${job.salaryMax} ${job.salaryPeriod}`);
});
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("automation-lab/builtin-jobs-scraper").call(run_input={
    "startUrls": [{"url": "https://builtin.com/jobs?search=data+scientist"}],
    "maxJobs": 200,
    "includeDescription": False,
})

dataset_items = client.dataset(run["defaultDatasetId"]).list_items().items
for job in dataset_items:
    salary = (f"${job.get('salaryMin')}–${job.get('salaryMax')} {job.get('salaryPeriod')}"
              if job.get('salaryMin') else "Not listed")
    print(f"{job['title']} @ {job['companyName']} | {job['location']} | {salary}")
```
cURL
```shell
# Start the scraper
curl -X POST "https://api.apify.com/v2/acts/automation-lab~builtin-jobs-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"startUrls": [{"url": "https://builtin.com/jobs?search=backend+engineer"}], "maxJobs": 50, "includeDescription": true}'

# Get results (replace DATASET_ID with the defaultDatasetId from the response above)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"
```
Use With Claude and Other AI Assistants (MCP)
Built In Jobs Scraper is available as an MCP (Model Context Protocol) tool, letting AI assistants like Claude run the scraper and analyse results in one conversation.
Claude Code (terminal)
```shell
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/builtin-jobs-scraper"
```
Claude Desktop / Cursor / VS Code
Add to your MCP config (claude_desktop_config.json or .cursor/mcp.json):
```json
{
  "mcpServers": {
    "apify": {
      "type": "http",
      "url": "https://mcp.apify.com?tools=automation-lab/builtin-jobs-scraper",
      "headers": { "Authorization": "Bearer YOUR_APIFY_TOKEN" }
    }
  }
}
```
Example Claude prompts
"Scrape 200 software engineer jobs from Built In and find the top 10 companies hiring the most. Show their average salary ranges."
"Pull 100 remote Python jobs from builtin.com, then list all unique benefits offered and rank them by frequency."
"Get all product manager roles posted in the last week on Built In and create a comparison table sorted by salary."
Legality — Is Scraping Built In Legal?
Built In's publicly accessible job listings contain data that companies have intentionally published for public viewing. Scraping publicly available job data for personal research, market analysis, and non-commercial purposes is generally considered lawful in most jurisdictions.
The builtin.com robots.txt permits crawling of job listing pages. This scraper:
- Does not require login or bypass any access controls
- Respects rate limits with automatic delays between requests
- Only collects data that is publicly visible to any visitor
You are responsible for ensuring your use of the extracted data complies with applicable laws, Built In's Terms of Service, and data privacy regulations (GDPR, CCPA) in your jurisdiction. Do not use extracted personal data (such as contact information) for unsolicited outreach.
FAQ — Frequently Asked Questions
How many jobs can I scrape in one run?
There is no hard limit. The scraper handles pagination automatically. In practice, builtin.com has roughly 10,000–50,000 active listings at any time. For very large runs (over 5,000 jobs), consider splitting by search query to manage memory and cost.
Does it scrape all cities or just one?
The default builtin.com/jobs URL covers all cities. Use city-specific URLs like builtin.com/jobs/chicago or builtin.com/jobs/new-york to restrict to a single metro area.
Why are some salary fields null?
Not all companies list salary ranges. Built In encourages it but doesn't require it. Roughly 40–60% of listings include salary data.
Can I filter by remote, hybrid, or on-site?
Yes — construct a filtered URL on Built In's website (use the "Remote" dropdown), then copy that URL as your start URL. The scraper will respect the filter.
The scraper returned fewer jobs than expected — why?
Built In paginates results at 24–25 per page. If your `maxJobs` is less than the total matching results, the scraper stops early (by design). Also, some listing pages may lack JSON-LD data and are skipped with a warning in the log.
I'm getting 403 errors on some pages. How do I fix it?
Built In occasionally rate-limits high-frequency requests. Reduce the run frequency or lower `maxJobs`. The scraper uses automatic retries (`maxRequestRetries`), which resolves transient blocks automatically.
Related Scrapers
Looking for job data from other platforms?
- Glassdoor Jobs Scraper — salary reviews + job listings from Glassdoor
- LinkedIn Jobs Scraper — LinkedIn job postings with company details
- Indeed Jobs Scraper — Indeed listings with apply links
- Wellfound Scraper — AngelList/Wellfound startup jobs
Built by Automation Lab — your source for reliable, well-maintained data extraction tools.