Bumeran Jobs Scraper

Scrape job listings from Bumeran in seconds. Extract job titles, descriptions, salaries, companies & locations automatically. Perfect for job boards, market research & recruitment automation. Save hours on manual data collection.

Pricing: Pay per usage
Developer: Shahid Irfan (Maintained by Community)

Extract job listings from Bumeran Argentina with a simple input model. Search by Bumeran URL or combine a keyword with a location to collect structured job data for research, monitoring, and analysis.

Features

  • Simple input model — Run with startUrl, keyword, and location instead of a large filter form.
  • Rich job records — Collect titles, company details, locations, work mode, seniority, descriptions, and public URLs.
  • Detail enrichment — Optionally gather full job descriptions, company metadata, and screening questions.
  • URL support — Start from a Bumeran listing URL or a direct job detail URL.
  • QA-friendly defaults — Falls back to INPUT.json when the public search fields are left empty.

Use Cases

Job Market Research

Track open roles in a city or region and analyze which companies are hiring for specific skills.

Competitive Hiring Monitoring

Monitor hiring activity by keyword and location to understand demand across teams, sectors, and markets.

Lead Generation

Build datasets of companies and job openings for recruiting, outreach, and staffing workflows.

Salary and Role Analysis

Collect enriched listings to study role seniority, work mode, and hiring patterns over time.

Input Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrl | String | No | https://www.bumeran.com.ar/empleos.html | Bumeran listing URL or direct job detail URL. |
| keyword | String | No | developer | Search keyword used when no specific URL is provided. |
| location | String | No | Buenos Aires | Province or locality name used to narrow the search. |
| collectDetails | Boolean | No | true | Enrich listings with full job details. |
| results_wanted | Integer | No | 20 | Maximum number of jobs to collect. |
| max_pages | Integer | No | 2 | Safety cap for pagination. |
| proxyConfiguration | Object | No | {"useApifyProxy": false} | Optional Apify Proxy configuration. |
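If you build run inputs programmatically, the documented defaults above can be merged with a partial input before calling the actor. This is an illustrative sketch; the `apply_input_defaults` helper and `INPUT_DEFAULTS` constant are not part of the actor, they simply mirror the table.

```python
# Documented defaults for the actor's input fields (from the table above).
INPUT_DEFAULTS = {
    "startUrl": "https://www.bumeran.com.ar/empleos.html",
    "keyword": "developer",
    "location": "Buenos Aires",
    "collectDetails": True,
    "results_wanted": 20,
    "max_pages": 2,
    "proxyConfiguration": {"useApifyProxy": False},
}

def apply_input_defaults(user_input: dict) -> dict:
    """Merge a partial run input with the documented defaults.

    Keys the caller supplies win; anything omitted falls back to the
    defaults listed in the parameter table.
    """
    merged = dict(INPUT_DEFAULTS)
    merged.update({k: v for k, v in user_input.items() if v is not None})
    return merged

run_input = apply_input_defaults({"keyword": "analista", "results_wanted": 50})
```

Passing the merged dictionary as the actor's run input keeps local experiments consistent with what the actor would assume anyway.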

Output Data

Each dataset item can include the following fields:

| Field | Type | Description |
|---|---|---|
| id | Integer | Bumeran job identifier. |
| title | String | Job title. |
| company | String | Company name. |
| location | String | Human-readable job location. |
| area | String | Main job area. |
| subarea | String | Job subarea. |
| modality | String | Work mode such as remote or hybrid. |
| seniority | String | Seniority level. |
| job_type | String | Job type such as full-time or part-time. |
| published_date | String | Publication date shown by Bumeran. |
| description_text | String | Plain text version of the job description. |
| questions | Array | Screening questions, when available. |
| url | String | Public Bumeran job URL. |
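The field list maps naturally onto a typed record for downstream processing. The `BumeranJob` dataclass below is an illustrative sketch mirroring the table, not a type shipped with the actor; all fields are optional because any of them may be missing from a given item.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BumeranJob:
    """One dataset item, mirroring the output fields documented above."""
    id: Optional[int] = None
    title: Optional[str] = None
    company: Optional[str] = None
    location: Optional[str] = None
    area: Optional[str] = None
    subarea: Optional[str] = None
    modality: Optional[str] = None
    seniority: Optional[str] = None
    job_type: Optional[str] = None
    published_date: Optional[str] = None
    description_text: Optional[str] = None
    questions: List[str] = field(default_factory=list)
    url: Optional[str] = None

    @classmethod
    def from_item(cls, item: dict) -> "BumeranJob":
        # Keep only known fields; ignore any extras the actor may add later.
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in item.items() if k in known})

job = BumeranJob.from_item({"id": 1, "title": "Dev", "questions": ["¿Años de experiencia?"]})
```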

Usage Examples

Search by Keyword and Location

{
  "keyword": "developer",
  "location": "Buenos Aires",
  "results_wanted": 20
}

Start From a Listing URL

{
  "startUrl": "https://www.bumeran.com.ar/empleos-en-buenos-aires-busqueda-developer.html",
  "results_wanted": 20,
  "max_pages": 2
}

Collect a Single Job From Its URL

{
  "startUrl": "https://www.bumeran.com.ar/empleos/fullstack-developer-sr-acciona-it-1118181775.html"
}
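As the sample URL suggests, the numeric job id appears as the final dash-separated token before `.html`. A small helper can recover it; this sketch assumes Bumeran keeps that URL shape, and `job_id_from_url` is a hypothetical name, not part of the actor.

```python
import re
from typing import Optional

def job_id_from_url(url: str) -> Optional[int]:
    """Extract the trailing numeric id from a Bumeran job detail URL.

    Assumes URLs end in '-<digits>.html', as in the example above.
    """
    match = re.search(r"-(\d+)\.html$", url)
    return int(match.group(1)) if match else None

job_id = job_id_from_url(
    "https://www.bumeran.com.ar/empleos/fullstack-developer-sr-acciona-it-1118181775.html"
)
```

This is handy for deduplicating jobs collected from both listing pages and direct URLs.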

Sample Output

{
  "id": 1118181775,
  "title": "Fullstack Developer Sr",
  "company": "ACCIONA IT",
  "location": "Capital Federal, Buenos Aires, Argentina",
  "area": "Tecnología, Sistemas y Telecomunicaciones",
  "subarea": "Programación",
  "modality": "Remoto",
  "seniority": "Senior",
  "job_type": "Full-time",
  "published_date": "05-03-2026",
  "description_text": "En Acciona IT estamos buscando un Fullstack Developer SR...",
  "questions": [
    "¿Cuántos años de experiencia tenés trabajando con Node?"
  ],
  "url": "https://www.bumeran.com.ar/empleos/fullstack-developer-sr-acciona-it-1118181775.html"
}

Tips for Best Results

Use Specific Keywords

  • Start with role names like developer, analista, or marketing.
  • Pair the keyword with a real city or province name for more focused results.

Use Bumeran URLs When You Have Them

  • A listing URL helps preserve Bumeran's own filtering context.
  • A direct job URL is the fastest way to collect one job record.

Keep QA and Local Tests Fast

  • The bundled defaults are set to 20 results and 2 pages for fast verification.
  • Leave collectDetails enabled when you need richer job records.

Proxy Configuration

For reliable results, residential proxies are recommended:

{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
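When building run inputs in code, the proxy block above can be attached to any input dictionary. A minimal sketch; `with_residential_proxy` is an illustrative helper, and whether the RESIDENTIAL group is available depends on your Apify plan.

```python
def with_residential_proxy(run_input: dict) -> dict:
    """Return a copy of the run input with Apify residential proxy enabled.

    The proxyConfiguration shape follows the JSON snippet above.
    """
    out = dict(run_input)
    out["proxyConfiguration"] = {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],
    }
    return out

configured = with_residential_proxy({"keyword": "developer", "results_wanted": 20})
```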

Integrations

Connect your data with:

  • Google Sheets — Export for analysis
  • Airtable — Build searchable databases
  • Slack — Get notifications
  • Webhooks — Send to custom endpoints
  • Make — Create automated workflows
  • Zapier — Trigger actions

Export Formats

Download data in multiple formats:

  • JSON — For developers and APIs
  • CSV — For spreadsheet analysis
  • Excel — For business reporting
  • XML — For system integrations
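If you post-process the dataset yourself rather than using Apify's built-in exports, the CSV conversion is straightforward with the standard library. A sketch assuming items shaped like the sample output; `items_to_csv` is a hypothetical helper, not an actor feature.

```python
import csv
import io
from typing import List

def items_to_csv(items: List[dict], fields: List[str]) -> str:
    """Serialize dataset items to CSV with a fixed column order.

    Missing fields become empty cells; extra fields are ignored.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for item in items:
        writer.writerow({f: item.get(f, "") for f in fields})
    return buf.getvalue()

csv_text = items_to_csv(
    [{"id": 1118181775, "title": "Fullstack Developer Sr", "company": "ACCIONA IT"}],
    ["id", "title", "company"],
)
```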

Frequently Asked Questions

What happens if I leave startUrl, keyword, and location empty?

The actor falls back to the local INPUT.json configuration. This keeps local runs and QA checks reproducible without hardcoding those values in the actor logic.

Can I still scrape a filtered Bumeran URL?

Yes. If you provide a Bumeran listing URL, the actor uses that URL context and extracts matching jobs from it.

Does the actor produce public job URLs?

Yes. Each enriched item includes the public Bumeran job URL whenever it is available.

How many items can I collect?

You can collect as many items as Bumeran exposes for your query. In practice the count is capped by results_wanted, max_pages, and the number of listings the site returns.

Can I scrape multiple pages?

Yes, the actor automatically handles pagination to reach your desired result count.
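The stopping logic implied by results_wanted and max_pages can be sketched with a stand-in page fetcher. This is a simplified illustration of the described behavior, not the actor's actual code; `fetch_page` is a hypothetical callable.

```python
def collect_jobs(fetch_page, results_wanted: int = 20, max_pages: int = 2) -> list:
    """Paginate until results_wanted items are collected or max_pages is hit.

    fetch_page(page_number) stands in for the actor's real page fetch;
    it should return a (possibly empty) list of job dicts.
    """
    jobs = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)
        if not batch:
            break  # no more listings available
        jobs.extend(batch)
        if len(jobs) >= results_wanted:
            return jobs[:results_wanted]
    return jobs

# Fake fetcher for demonstration: 15 jobs per page, endless pages.
demo = collect_jobs(lambda p: [{"id": p * 100 + i} for i in range(15)],
                    results_wanted=20, max_pages=2)
```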

What if data is missing?

Some fields may be empty if the source doesn't provide that information.
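Because any field can be absent, downstream code should read items defensively. A tiny sketch; the `describe` helper and its fallback strings are illustrative.

```python
def describe(item: dict) -> str:
    """Build a one-line summary, tolerating missing or null fields."""
    title = item.get("title") or "Untitled role"
    company = item.get("company") or "Unknown company"
    location = item.get("location") or "Unspecified location"
    return f"{title} at {company} ({location})"

summary = describe({"title": "Fullstack Developer Sr", "company": None})
```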

Support

For issues or feature requests, contact support through the Apify Console.

Legal Notice

This actor is designed for legitimate data collection purposes. Users are responsible for ensuring compliance with website terms of service and applicable laws. Use data responsibly and respect rate limits.