InfoJobs Jobs Scraper
Extract job listings from InfoJobs with structured output ready for analytics, lead pipelines, and hiring intelligence workflows. Use a single search URL input plus crawl limits for a simple, production-friendly setup.
Features
- Job listing extraction — Collect titles, company names, locations, contracts, salary data, and publication timestamps.
- Simple input setup — Run with only `searchUrl`, `results_wanted`, and `max_pages`.
- Pagination support — Automatically walks pages until the target result count or page limit is reached.
- Clean output records — Removes empty values from each dataset item for cleaner downstream usage.
- Production-ready dataset — Stable record structure designed for exports and automations.
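The "clean output records" behavior above can be illustrated with a small helper that drops empty values from an item before it is saved; `clean_record` is a hypothetical name for illustration, not the actor's internal function:

```python
def clean_record(item: dict) -> dict:
    """Drop keys whose values are None, empty strings, or empty collections."""
    return {k: v for k, v in item.items() if v not in (None, "", [], {})}

raw = {
    "title": "Senior Software Developer C++",
    "city": "Madrid",
    "salary_min": None,   # listing published no salary
    "states": [],
    "company_name": "",
}
print(clean_record(raw))  # only "title" and "city" survive
```

Note that legitimate falsy values such as `0` or `false` are kept; only genuinely empty values are removed.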
Use Cases
Hiring Intelligence
Track open roles by category or keyword and analyze hiring trends over time.
Competitive Monitoring
Monitor which companies are hiring, where they hire, and how compensation ranges evolve.
Market Research
Build datasets for labor market studies, salary trend tracking, and demand analysis.
Lead Generation
Collect company and role opportunities for recruitment agencies and B2B outreach teams.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `searchUrl` | String | No | `https://www.infojobs.net/jobsearch/search-results/list.xhtml` | Base search URL whose query parameters are used as filters. |
| `results_wanted` | Integer | No | `20` | Maximum number of items to save. |
| `max_pages` | Integer | No | `5` | Maximum number of pages to request. |
| `proxyConfiguration` | Object | No | `{ "useApifyProxy": false }` | Optional proxy settings for reliability. |
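For illustration, a filtered `searchUrl` can be assembled with standard URL encoding. The `keyword`, `sortBy`, and `sinceDate` parameter names below come from the URL-driven example in this README; any other filter names are assumptions you should verify against a real InfoJobs search URL:

```python
from urllib.parse import urlencode

BASE = "https://www.infojobs.net/jobsearch/search-results/list.xhtml"

def build_search_url(**filters: str) -> str:
    """Append query-string filters (e.g. keyword, sortBy, sinceDate) to the base search URL."""
    return f"{BASE}?{urlencode(filters)}" if filters else BASE

url = build_search_url(keyword="java", sortBy="PUBLICATION_DATE", sinceDate="_24_HOURS")
print(url)
```

Pass the resulting string as the `searchUrl` input value.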
Output Data
Each dataset item can contain the following fields:
| Field | Type | Description |
|---|---|---|
| `id` | String | Unique offer identifier. |
| `title` | String | Job title. |
| `description_text` | String | Cleaned, formatted description text. |
| `description_html` | String | Sanitized HTML description using basic tags (`p`, `br`, `strong`, `ul`, `li`). |
| `city` | String | Offer location city. |
| `offer_url` | String | Absolute offer link. |
| `application_origin` | String | Application origin label from the offer URL. |
| `contract_type` | String | Contract type label. |
| `workday` | String | Workday/journey label. |
| `teleworking` | String | Work modality. |
| `published_at` | String | Publication timestamp. |
| `company_name` | String | Company name. |
| `company_url` | String | Company profile URL. |
| `salary_min` | Number | Minimum salary value, when available. |
| `salary_max` | Number | Maximum salary value, when available. |
| `salary_period` | String | Salary period (for example, `year`). |
| `salary_currency` | String | Salary currency code. |
| `salary_type` | String | Salary type (for example, `gross`). |
| `states` | Array | Offer state tags. |
| `upsellings` | Array | Listing promotion tags. |
| `executive` | Boolean | Executive offer flag. |
| `newbo_id` | String | Internal offer identifier. |
| `search_page` | Number | Source page number. |
| `sort_by` | String | Sorting mode used for the run. |
| `since_date` | String | Publication window used for the run. |
| `only_foreign_country` | Boolean | Foreign-country flag used for the run. |
Usage Examples
Basic Run
```json
{
  "results_wanted": 20,
  "max_pages": 5
}
```
URL-Driven Run
```json
{
  "searchUrl": "https://www.infojobs.net/jobsearch/search-results/list.xhtml?keyword=java&sortBy=PUBLICATION_DATE&sinceDate=_24_HOURS",
  "results_wanted": 30,
  "max_pages": 6
}
```
Sample Output
```json
{
  "id": "6acc847e4b435aa2e4daed9ab3f673",
  "title": "Senior Software Developer C++",
  "description_text": "MISION DEL PUESTO\nRealizara de manera cualificada...",
  "description_html": "<p><strong>MISION DEL PUESTO</strong><br>Realizara de manera cualificada...</p>",
  "city": "Madrid",
  "offer_url": "https://www.infojobs.net/madrid/senior-software-developer-c/of-i6acc847e4b435aa2e4daed9ab3f673",
  "contract_type": "Indefinido",
  "teleworking": "Hibrido",
  "published_at": "2026-03-24T10:22:10Z",
  "company_name": "Example Company",
  "salary_min": 45000,
  "salary_max": 60000,
  "salary_currency": "EUR",
  "executive": false,
  "search_page": 1,
  "sort_by": "PUBLICATION_DATE",
  "since_date": "ANY",
  "only_foreign_country": false
}
```
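Because empty values are stripped per item, records do not all share the same keys, so a CSV export should tolerate missing fields. A minimal sketch using the standard library; the sample rows are illustrative, not real output:

```python
import csv
import io

items = [
    {"id": "a1", "title": "Dev C++", "city": "Madrid", "salary_min": 45000},
    {"id": "b2", "title": "Data Analyst", "city": "Barcelona"},  # no salary published
]

# Take the union of keys across items so rows with missing fields still align.
fields = sorted({key for item in items for key in item})
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields, restval="")
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
```

`restval=""` fills absent fields with an empty cell instead of raising an error.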
Tips for Best Results
Start Small
- Begin with `results_wanted: 20` for quick validation.
- Increase volume once your workflow is confirmed.
Tune Filters
- Put filter parameters directly in `searchUrl` to control keyword, date, sorting, and location.
Control Runtime
- Use `max_pages` as a safety cap.
- Keep `results_wanted` and `max_pages` aligned with your expected output size.
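The interplay of the two caps can be sketched as follows. `fetch_page` is a stand-in for the actor's internal page fetcher, not a real API; the sketch also shows the duplicate-ID skipping described in the FAQ:

```python
def crawl(fetch_page, results_wanted: int = 20, max_pages: int = 5) -> list:
    """Walk pages until results_wanted items are collected or the max_pages cap is hit."""
    collected, seen_ids = [], set()
    for page in range(1, max_pages + 1):
        for item in fetch_page(page):
            if item["id"] in seen_ids:  # duplicate offer IDs are skipped
                continue
            seen_ids.add(item["id"])
            collected.append(item)
            if len(collected) >= results_wanted:
                return collected
    return collected

# Fake fetcher for illustration: 10 unique items per page.
fake = lambda page: [{"id": f"{page}-{i}"} for i in range(10)]
print(len(crawl(fake, results_wanted=25, max_pages=5)))  # 25
```

Whichever limit is reached first ends the run, which is why the two values should be sized together.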
Improve Reliability
- Enable proxy configuration for larger or scheduled runs.
- Re-run with narrower filters if source-side throttling appears.
Integrations
Connect scraped data with:
- Google Sheets — Build tracking dashboards.
- Airtable — Create searchable recruiting tables.
- Make — Automate post-processing and notifications.
- Zapier — Trigger workflows after each run.
- Webhooks — Send data to your own services.
Export Formats
- JSON — Developer-friendly structured output.
- CSV — Spreadsheet analysis.
- Excel — Business reporting.
- XML — System-to-system integrations.
Frequently Asked Questions
How many offers can I collect?
You can collect as many offers as are available within your configured `results_wanted` and `max_pages` limits.
Can I run this daily?
Yes. It is suitable for scheduled runs and recurring monitoring workflows.
Why are some salary fields missing?
Some listings do not publish compensation data, so salary fields may be absent in those records.
Can I target specific keywords?
Yes. Add the `keyword` parameter directly in `searchUrl`.
Does this include duplicate prevention?
Yes. Duplicate offer IDs are skipped during a run.
Why are some fields not present in every item?
Empty values are removed from each output item to keep records clean.
Support
For issues or feature requests, open an issue through the actor's Issues tab on Apify.
Legal Notice
This actor is intended for legitimate data collection and analysis. You are responsible for complying with website terms, local regulations, and applicable data-use policies.