Infojobs Scraper
Harvest job postings from Spain's largest employment platform. Extract listings, positions, salaries, and company details. Ideal for job aggregation, recruitment analytics, talent research, and market intelligence. Automate your job board data collection at scale.

Pricing: Pay per usage

Rating: 0.0 (0 reviews)

Developer: Shahid Irfan (Maintained by Community)

Actor stats: 0 bookmarks · 2 total users · 1 monthly active user · last modified 3 days ago
InfoJobs Jobs Scraper

Extract job listings from InfoJobs with structured output ready for analytics, lead pipelines, and hiring intelligence workflows. Use a single search URL input plus crawl limits for a simple, production-friendly setup.

Features

  • Job listing extraction — Collect titles, company names, locations, contracts, salary data, and publication timestamps.
  • Simple input setup — Run with only searchUrl, results_wanted, and max_pages.
  • Pagination support — Automatically walks result pages until the target result count or the page limit is reached.
  • Clean output records — Removes empty values from each dataset item for cleaner downstream usage.
  • Production-ready dataset — Stable record structure designed for exports and automations.
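The clean-output and duplicate-skipping behaviors described above can be sketched as a small post-processing step, assuming records arrive as plain dicts (the helper names here are illustrative, not the actor's internals):

```python
def clean_record(record: dict) -> dict:
    """Drop keys with empty values (None, "", [], {}), keeping real falsy values like 0 and False."""
    return {k: v for k, v in record.items() if v not in (None, "", [], {})}


def dedupe_by_id(records: list[dict]) -> list[dict]:
    """Keep only the first occurrence of each offer id, mirroring duplicate skipping during a run."""
    seen, unique = set(), []
    for rec in records:
        if rec.get("id") not in seen:
            seen.add(rec.get("id"))
            unique.append(rec)
    return unique
```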

Use Cases

Hiring Intelligence

Track open roles by category or keyword and analyze hiring trends over time.

Competitive Monitoring

Monitor which companies are hiring, where they hire, and how compensation ranges evolve.

Market Research

Build datasets for labor market studies, salary trend tracking, and demand analysis.

Lead Generation

Collect company and role opportunities for recruitment agencies and B2B outreach teams.


Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| searchUrl | String | No | https://www.infojobs.net/jobsearch/search-results/list.xhtml | Base search URL whose query parameters are used as filters. |
| results_wanted | Integer | No | 20 | Maximum number of items to save. |
| max_pages | Integer | No | 5 | Maximum number of pages to request. |
| proxyConfiguration | Object | No | { "useApifyProxy": false } | Optional proxy settings for reliability. |
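Since every parameter is optional, a run input can be normalized against the defaults above before use. A minimal sketch (field names and defaults come from the table; the helper itself is illustrative, not part of the actor):

```python
DEFAULT_INPUT = {
    "searchUrl": "https://www.infojobs.net/jobsearch/search-results/list.xhtml",
    "results_wanted": 20,
    "max_pages": 5,
    "proxyConfiguration": {"useApifyProxy": False},
}


def normalize_input(user_input: dict) -> dict:
    # Fill in the table defaults for any parameter the caller omitted.
    merged = {**DEFAULT_INPUT, **user_input}
    # Guard against nonsensical limits.
    merged["results_wanted"] = max(1, int(merged["results_wanted"]))
    merged["max_pages"] = max(1, int(merged["max_pages"]))
    return merged
```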

Output Data

Each dataset item can contain the following fields:

| Field | Type | Description |
| --- | --- | --- |
| id | String | Unique offer identifier. |
| title | String | Job title. |
| description_text | String | Cleaned formatted description text. |
| description_html | String | Sanitized HTML description using basic tags (p, br, strong, ul, li). |
| city | String | Offer location city. |
| offer_url | String | Absolute offer link. |
| application_origin | String | Application origin label from offer URL. |
| contract_type | String | Contract type label. |
| workday | String | Workday/journey label. |
| teleworking | String | Work modality. |
| published_at | String | Publication timestamp. |
| company_name | String | Company name. |
| company_url | String | Company profile URL. |
| salary_min | Number | Minimum salary value when available. |
| salary_max | Number | Maximum salary value when available. |
| salary_period | String | Salary period (for example year). |
| salary_currency | String | Salary currency code. |
| salary_type | String | Salary type (for example gross). |
| states | Array | Offer state tags. |
| upsellings | Array | Listing promotion tags. |
| executive | Boolean | Executive offer flag. |
| newbo_id | String | Internal offer identifier. |
| search_page | Number | Source page number. |
| sort_by | String | Sorting mode used for the run. |
| since_date | String | Publication window used for the run. |
| only_foreign_country | Boolean | Foreign-country flag used for the run. |

Usage Examples

Basic Run

{
  "results_wanted": 20,
  "max_pages": 5
}

URL-Driven Run

{
  "searchUrl": "https://www.infojobs.net/jobsearch/search-results/list.xhtml?keyword=java&sortBy=PUBLICATION_DATE&sinceDate=_24_HOURS",
  "results_wanted": 30,
  "max_pages": 6
}

Sample Output

{
  "id": "6acc847e4b435aa2e4daed9ab3f673",
  "title": "Senior Software Developer C++",
  "description_text": "MISION DEL PUESTO\nRealizara de manera cualificada...",
  "description_html": "<p><strong>MISION DEL PUESTO</strong><br>Realizara de manera cualificada...</p>",
  "city": "Madrid",
  "offer_url": "https://www.infojobs.net/madrid/senior-software-developer-c/of-i6acc847e4b435aa2e4daed9ab3f673",
  "contract_type": "Indefinido",
  "teleworking": "Hibrido",
  "published_at": "2026-03-24T10:22:10Z",
  "company_name": "Example Company",
  "salary_min": 45000,
  "salary_max": 60000,
  "salary_currency": "EUR",
  "executive": false,
  "search_page": 1,
  "sort_by": "PUBLICATION_DATE",
  "since_date": "ANY",
  "only_foreign_country": false
}

Tips for Best Results

Start Small

  • Begin with results_wanted: 20 for quick validation.
  • Increase volume once your workflow is confirmed.

Tune Filters

  • Put filter parameters directly in searchUrl to control keyword, date, sorting, and location.
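Filters can be attached to the base URL with standard query-string tools; parameter names like keyword, sortBy, and sinceDate follow the URL-driven example above:

```python
from urllib.parse import urlencode

BASE = "https://www.infojobs.net/jobsearch/search-results/list.xhtml"


def build_search_url(**filters: str) -> str:
    # Encode keyword/date/sort filters into the searchUrl query string.
    return f"{BASE}?{urlencode(filters)}" if filters else BASE


url = build_search_url(keyword="java", sortBy="PUBLICATION_DATE", sinceDate="_24_HOURS")
```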

Control Runtime

  • Use max_pages as a safety cap.
  • Keep results_wanted and max_pages aligned to your expected output size.
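The relationship between the two caps can be checked up front. Assuming roughly 20 listings per results page (an estimate, not a documented constant):

```python
import math


def pages_needed(results_wanted: int, per_page: int = 20) -> int:
    """Smallest page count that can yield the requested number of results."""
    return math.ceil(results_wanted / per_page)

# e.g. results_wanted=30 needs at least 2 pages, so max_pages=6 leaves headroom.
```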

Improve Reliability

  • Enable proxy configuration for larger or scheduled runs.
  • Re-run with narrower filters if source-side throttling appears.
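A proxy-enabled input might look like the fragment below; the apifyProxyGroups value is an example and depends on your Apify plan:

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```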

Integrations

Connect scraped data with:

  • Google Sheets — Build tracking dashboards.
  • Airtable — Create searchable recruiting tables.
  • Make — Automate post-processing and notifications.
  • Zapier — Trigger workflows after each run.
  • Webhooks — Send data to your own services.

Export Formats

  • JSON — Developer-friendly structured output.
  • CSV — Spreadsheet analysis.
  • Excel — Business reporting.
  • XML — System-to-system integrations.

Frequently Asked Questions

How many offers can I collect?

You can collect as many offers as are available, up to your configured results_wanted and max_pages limits.

Can I run this daily?

Yes. It is suitable for scheduled runs and recurring monitoring workflows.

Why are some salary fields missing?

Some listings do not publish compensation data, so salary fields may be absent in those records.

Can I target specific keywords?

Yes. Add the keyword parameter directly in searchUrl.

Does this include duplicate prevention?

Yes. Duplicate offer IDs are skipped during a run.

Why are some fields not present in every item?

Empty values are removed from each output item to keep records clean.


Support

For issues or feature requests, open an issue on the actor's page in the Apify Console.

This actor is intended for legitimate data collection and analysis. You are responsible for complying with website terms, local regulations, and applicable data-use policies.