Monster Job Scraper | US Salary & Remote
Pricing: from $1.00 / 1,000 results
Scrape monster.com - one of the largest U.S. job boards with 1M+ listings. Structured salary data, remote work detection, and job change monitoring. Incremental mode detects new and changed listings.
Developer: Black Falcon Data
What does Monster Job Scraper do?
Monster Job Scraper extracts structured job data from all 14 Monster country portals (US, UK, DE, CA, FR, AT, NL, BE, IE, SE, ES, IT, CH, LU) — including structured salary (min/max/currency/period), geo-coordinates, full descriptions, company metadata, remote-work indicators, employment type, industry classification, and the original ATS platform each listing was sourced from (e.g. GreenHouseAPI, Teamtailor). It supports keyword search, location filters, company filters, recency filters, and accepts Monster URLs directly via Start URLs.
Key features
- 14-country support — single actor, all Monster portals. URL detection, country-specific sitemaps, monster.com path canonicalization handled automatically.
- Cross-run cache — every detail page is cached for 180 days using a sliding TTL anchored on sitemap activity. Recurring queries return cached results in milliseconds with zero scrape.do credits used. Cache is shared across all your runs of this actor. Verified savings: 100% credit reduction on warm queries (run-time drops from ~60s to ~15s).
- Incremental mode — combines with the cache to track only new or changed listings across runs via a stable stateKey.
- Structured salary — min/max amounts plus currency and unit period (annual, hourly, etc.) when Monster supplies them. Falls back to the formatted `salaryText` for display.
- Detail enrichment — full HTML and Markdown descriptions, company logo URLs, requirements and benefits sections (heuristic extraction), industry classification, and `postedVia` showing which ATS platform the job was originally sourced from (Workday, Greenhouse, Ashby, Lever, etc.).
- Compact mode — AI-agent and MCP-friendly payloads with core fields only.
- Start URLs — paste Monster search or detail URLs directly. Country, query, and filters are inferred from the URL. Compatible with incremental mode.
What data can you extract from monster.com?
Each result includes:
- Core listing: `jobId`, `title`, `url`, `applyUrl`, `postedDate`
- Location: `location` (formatted string), `city`, `state`, `country`, plus geo-coordinates as `latitude`/`longitude`
- Company: `company`, `companyId`, `companyUrl`, `companyLogo`
- Salary (both formats): `salaryText` (display string) and structured `salaryMin`/`salaryMax`/`salaryCurrency`/`salaryUnit`
- Job metadata: `employmentType`, `industry`, `isRemote`, `postedVia` (the ATS platform — GreenHouseAPI, Teamtailor, Workday, etc. — that originally posted the job, surfacing Monster's aggregation layer)
- Description: `description` (plain text), `descriptionHtml`, `descriptionMarkdown`, plus heuristically extracted `requirements` and `benefits` sections
- Tracking: `contentHash`, `scrapedAt`, `source`, `portalUrl`
- Incremental: `changeType`, `isRepost`, `repostOfId`, `repostDetectedAt` (when incremental mode is enabled)
In standard mode, all fields are always present — unavailable data points are returned as null, never omitted. In compact mode, only core fields are returned.
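Because standard-mode fields are always present, consumers can index records directly and only need a `None` check on the value. A minimal sketch (the two records below are hypothetical illustrations of the documented shape, not real listings):

```python
# Standard-mode records: unavailable data points are None (JSON null),
# never omitted, so plain key access is always safe.
records = [
    {"jobId": "a1", "title": "Backend Engineer",
     "salaryMin": 94800, "salaryMax": 148200, "salaryCurrency": "USD"},
    {"jobId": "b2", "title": "QA Analyst",
     "salaryMin": None, "salaryMax": None, "salaryCurrency": None},
]

# No .get() fallback needed -- only the value requires a None check.
with_salary = [r["jobId"] for r in records if r["salaryMin"] is not None]
print(with_salary)  # → ['a1']
```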
Input
The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.
Key parameters:
- `query` — Job search keywords. Leave blank if using Start URLs.
- `startUrls` — Paste Monster search or detail URLs directly. Country, query, and filters are inferred from the URL.
- `country` — Market to search. One of: US, GB, DE, CA, FR, AT, NL, BE, IE, SE, ES, IT, CH, LU. (default: `"US"`)
- `location` — City, state, or region.
- `radius` — Search radius around location in miles. (default: `30`)
- `companyName` — Filter to jobs at a specific employer.
- `employmentType` — FULL_TIME, PART_TIME, CONTRACT, INTERN, TEMP, REMOTE.
- `datePosted` — TODAY, LAST_2_DAYS, LAST_WEEK, LAST_2_WEEKS, LAST_MONTH.
- `maxResults` — Maximum total results. Up to 1,500 per run. (default: `25`)
- `includeDetails` — Fetch full job details. (default: `true`)
- `descriptionMaxLength` — Truncate description to N chars. 0 = no truncation. (default: `0`)
- `compact` — Core fields only (for AI-agent/MCP workflows). (default: `false`)
- `incrementalMode` — Compare against previous run state. (default: `false`)
- `stateKey` — Stable identifier for the tracked universe (required when `incrementalMode` is on).
- `skipReposts` — Exclude listings detected as reposts of previously seen jobs.
- `maxConcurrency` — Parallel detail fetches. (default: `5`, max: `20`)
- `maxRequestRetries` — Retries per detail request on transient errors. (default: `2`)
- `purgeCacheOnStart` — One-time sweep to delete stale cache entries (run periodically, e.g. monthly).
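Start URLs can also be combined with incremental mode (the feature list notes they are compatible). A hypothetical input sketching that combination — the object-with-`url` shape follows the usual Apify start-URL convention and should be confirmed against the actor's input schema; the URL itself is a made-up example:

```json
{
  "startUrls": [{ "url": "https://www.monster.de/jobs/suche?q=data+engineer" }],
  "incrementalMode": true,
  "stateKey": "de-data-engineer",
  "maxResults": 100
}
```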
Input examples
Basic search — Keyword-driven search with a result cap.
→ Full payload per result — all standard fields populated where the source provides them.
```json
{
  "query": "software engineer",
  "maxResults": 50
}
```
Filtered search — Narrow results with advanced filters — only matching jobs are returned.
→ Same field set as basic search; fewer, more relevant rows.
```json
{
  "query": "software engineer",
  "employmentType": "FULL_TIME",
  "datePosted": "TODAY",
  "maxResults": 100
}
```
Incremental tracking — Only emit jobs that changed since the previous run with this stateKey.
→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set `emitUnchanged: true` to include unchanged records as well.

```json
{
  "query": "software engineer",
  "maxResults": 200,
  "incrementalMode": true,
  "stateKey": "software-engineer-tracker"
}
```
Compact filtered output — Combine filters with compact mode for a lightweight AI-agent or MCP data source.
→ Core fields only — ideal for piping into LLMs or downstream tools without token overhead.
```json
{
  "query": "software engineer",
  "employmentType": "FULL_TIME",
  "datePosted": "TODAY",
  "maxResults": 50,
  "compact": true
}
```
Output
Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.
Example job record
```json
{
  "jobId": "b06725d4c8b444ee1031bd119612121273d84c82273846377305e96ef673664b",
  "title": "Experienced Computer Engineer/Software Developer",
  "company": "Naval Nuclear Laboratory",
  "companyId": "7772fa42-3e72-4275-a075-2f4518024f15",
  "companyUrl": null,
  "companyLogo": "https://media.monster.com/logos/naval-nuclear-laboratory.png",
  "location": "Niskayuna, NY, US",
  "city": "Niskayuna",
  "state": "NY",
  "country": "US",
  "latitude": 42.819444,
  "longitude": -73.901389,
  "isRemote": false,
  "salaryText": "USD 94,800 - 148,200 / year",
  "salaryMin": 94800,
  "salaryMax": 148200,
  "salaryCurrency": "USD",
  "salaryUnit": "year",
  "employmentType": "FULL_TIME",
  "industry": "Engineering Services",
  "postedVia": "Workday",
  "description": "Working at the Naval Nuclear Laboratory we foster pride in belonging to an organization whose culture is made up of these core values: Trust, Empowerment, and Collaboration...",
  "descriptionHtml": "<p>Working at the Naval Nuclear Laboratory...</p>",
  "descriptionMarkdown": "Working at the Naval Nuclear Laboratory...",
  "requirements": "Bachelor's degree in Computer Science or related field. 3+ years of experience with C++, Python, or Java...",
  "benefits": "Comprehensive medical, dental, and vision coverage. 401(k) with company match...",
  "skills": null,
  "url": "https://www.monster.com/job-openings/experienced-computer-engineer-software-developer-niskayuna-ny--44f4f9a4-723b-4829-97ae-8a6885313e9f",
  "applyUrl": "https://career-hcm03.ns2cloud.com/sfcareer/jobreqcareer?jobId=7334",
  "postedDate": "2026-02-24",
  "scrapedAt": "2026-04-25T13:20:29.080Z",
  "source": "monster.com",
  "portalUrl": "https://www.monster.com",
  "changeType": null
}
```
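As a sketch of working with an exported dataset, the snippet below computes a remote share and salary midpoints from records shaped like the example above. The two inline records are hypothetical stand-ins for a real JSON export:

```python
import json

# Hypothetical mini-dataset mirroring the documented record shape.
raw = """[
  {"jobId": "j1", "state": "NY", "isRemote": false,
   "salaryMin": 94800, "salaryMax": 148200, "salaryUnit": "year"},
  {"jobId": "j2", "state": "CA", "isRemote": true,
   "salaryMin": null, "salaryMax": null, "salaryUnit": null}
]"""
data = json.loads(raw)

# Share of listings flagged remote, and salary midpoints where present.
remote_share = sum(r["isRemote"] for r in data) / len(data)
midpoints = [(r["salaryMin"] + r["salaryMax"]) / 2
             for r in data if r["salaryMin"] is not None]
print(remote_share)  # → 0.5
print(midpoints)     # → [121500.0]
```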
How to scrape monster.com
- Go to Monster Job Scraper | US Salary & Remote in Apify Console.
- Enter a search keyword and optional location filter.
- Set `maxResults` to control how many results you need.
- Enable `includeDetails` if you need full descriptions and company data.
- Click Start and wait for the run to finish.
- Export the dataset as JSON, CSV, or Excel.
Use cases
- Extract job data from monster.com for market research and competitive analysis.
- Track salary trends across regions and categories over time.
- Monitor new and changed listings on scheduled runs without processing the full dataset every time.
- Build outreach lists using company details and apply URLs from listings.
- Research company hiring patterns, employer profiles, and industry distribution.
- Feed structured data into AI agents, MCP tools, and automated pipelines using compact mode.
- Export clean, structured data to dashboards, spreadsheets, or data warehouses.
- Analyze skill demand across listings using skill tags where the source provides them.
How much does it cost to scrape monster.com?
Monster Job Scraper | US Salary & Remote uses pay-per-event pricing. You pay a small fee when the run starts and then for each result that is actually produced.
- Run start: $0.005 per run
- Per result: $0.001 per job record
Example costs:
- 10 results: $0.015
- 100 results: $0.105
- 500 results: $0.505
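The event pricing reduces to a one-line formula. A sketch, working in mills ($0.001 units) so the arithmetic stays exact:

```python
def run_cost_usd(n_results: int) -> float:
    # $0.005 run-start fee (5 mills) + $0.001 per result (1 mill each).
    mills = 5 + n_results
    return mills / 1000

print(run_cost_usd(10))   # → 0.015
print(run_cost_usd(100))  # → 0.105
print(run_cost_usd(500))  # → 0.505
```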
Example: recurring monitoring savings
These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.
Example setup: 100 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.
| Churn rate | Full re-scrape run cost | Incremental run cost | Savings vs full re-scrape | Monthly cost after baseline |
|---|---|---|---|---|
| 5% — stable niche query | $0.11 | $0.01 | $0.10 (90%) | $0.30 |
| 15% — moderate broad query | $0.11 | $0.02 | $0.09 (81%) | $0.60 |
| 30% — high-volume aggregator | $0.11 | $0.03 | $0.07 (67%) | $1.05 |
Full re-scrape monthly cost at daily polling: $3.15. First month with incremental costs $0.40 / $0.68 / $1.12 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.
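The first-month figures can be reproduced with the same per-event arithmetic, under the table's assumptions (100 results, 30 daily runs, one full-cost baseline run). Costs are tallied in mills ($0.001 units) to avoid float rounding:

```python
def first_month_usd(churn: float, results: int = 100, runs: int = 30) -> float:
    # Mills: 5 per run start ($0.005), 1 per emitted result ($0.001).
    baseline = 5 + results                    # first run scrapes everything
    incremental = 5 + round(results * churn)  # later runs emit only churn
    return (baseline + (runs - 1) * incremental) / 1000

print([first_month_usd(c) for c in (0.05, 0.15, 0.30)])
# → [0.395, 0.685, 1.12]
```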
FAQ
Which countries does this actor support?
All 14 Monster portals: US, UK, DE, CA, FR, AT, NL, BE, IE, SE, ES, IT, CH, LU. Pick a market with the `country` input parameter (use `GB` for the UK), or paste any monster.com / monster.co.uk / monster.de / etc. URL via Start URLs and the actor detects the country automatically. Salary is returned in the local currency (USD, GBP, EUR, CAD, SEK, CHF) when Monster supplies it.
How does the cross-run cache work?
Every detail page the actor fetches is stored in a shared key-value store, keyed by Monster's stable `jobId`. On subsequent runs of any user querying the same job, the cached payload is returned in milliseconds — no scrape.do credit, no network round-trip. The cache uses a sliding 180-day TTL anchored on sitemap activity: as long as Monster still lists the job in its sitemap, the cached entry stays fresh. Listings that drop out of Monster age out of the cache automatically. Run with `purgeCacheOnStart: true` (e.g. monthly via Apify Schedule) to do a full sweep of expired entries.
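The sliding-TTL behaviour can be sketched as a small in-memory model. This is illustrative only, not the actor's actual implementation (the real cache lives in an Apify key-value store); `record_sighting`, `get_fresh`, and `purge` are hypothetical names:

```python
TTL_SECONDS = 180 * 24 * 3600  # 180 days

cache: dict[str, dict] = {}  # jobId -> {"payload": ..., "last_seen": epoch secs}

def record_sighting(job_id: str, now: float, payload=None) -> None:
    # Every sitemap sighting resets the clock -> sliding TTL.
    entry = cache.setdefault(job_id, {"payload": payload, "last_seen": now})
    entry["last_seen"] = now
    if payload is not None:
        entry["payload"] = payload

def get_fresh(job_id: str, now: float):
    entry = cache.get(job_id)
    if entry is not None and now - entry["last_seen"] <= TTL_SECONDS:
        return entry["payload"]
    return None  # expired or never cached -> fetch the page (and pay for it)

def purge(now: float) -> None:
    # What purgeCacheOnStart does conceptually: sweep expired entries.
    expired = [k for k, v in cache.items() if now - v["last_seen"] > TTL_SECONDS]
    for job_id in expired:
        del cache[job_id]
```

A job that stays in Monster's sitemap keeps getting sightings and never expires; once it drops out, its entry ages past the 180-day mark and the purge sweep removes it.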
How many results can I get from monster.com?
The number of results depends on the search query and available listings on monster.com. Use the `maxResults` parameter to control how many results are returned per run.
Does Monster Job Scraper | US Salary & Remote support recurring monitoring?
Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.
Can I integrate Monster Job Scraper | US Salary & Remote with other apps?
Yes. Monster Job Scraper | US Salary & Remote works with Apify's integrations to connect with tools like Zapier, Make, Google Sheets, Slack, and more. You can also use webhooks to trigger actions when a run completes.
Can I use Monster Job Scraper | US Salary & Remote with the Apify API?
Yes. You can start runs, manage inputs, and retrieve results programmatically through the Apify API. Client libraries are available for JavaScript, Python, and other languages.
Can I use Monster Job Scraper | US Salary & Remote through an MCP Server?
Yes. Apify provides an MCP Server that lets AI assistants and agents call this actor directly. Use compact mode and `descriptionMaxLength` to keep payloads manageable for LLM context windows.
Is it legal to scrape monster.com?
This actor extracts publicly available data from monster.com. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.
Your feedback
If you have questions, need a feature, or found a bug, please open an issue on the actor's page in Apify Console. Your feedback helps us improve.
You might also like
- Adzuna Job Scraper — Scrape adzuna.com — the global job board with 20+ country markets. Structured salary.
- Arbeitsagentur Scraper — German Jobs — Scrape arbeitsagentur.de — Germany’s official employment portal with 1M+ listings. Contact data.
- Bayt.com Scraper — Jobs from the Middle East — Scrape bayt.com — the leading Middle East job board. Salary data, experience requirements.
- Bumeran Scraper — Scrape bumeran.com.ar — the largest job board across 8 LATAM countries. Work modality, contract.
- Cadremploi Job Scraper — Scrape cadremploi.fr — French management and executive jobs. Salary ranges, apply links.
- Dice.com Job Scraper — U.S. Tech Jobs — Scrape dice.com — the leading U.S. tech job board. Structured salary (min/max/currency).
- Drushim Scraper — Israel Job Listings — Scrape drushim.co.il — Israel’s leading job board. Geo-coordinates per listing, multi-filter search.
- Duunitori Scraper — Finland Job Listings — Scrape duunitori.fi — Finland’s largest job board with 22,000+ listings. Salary ranges, employment.