Bumeran Jobs Scraper

Scrape job listings from Bumeran in seconds. Extract job titles, descriptions, salaries, companies & locations automatically. Perfect for job boards, market research & recruitment automation. Save hours on manual data collection.

Pricing: Pay per usage
Developer: Shahid Irfan
Last modified: 2 days ago
Extract job listings from Bumeran Argentina with a simple input model. Search by Bumeran URL or combine a keyword with a location to collect structured job data for research, monitoring, and analysis.
Features
- Simple input model — Run with `startUrl`, `keyword`, and `location` instead of a large filter form.
- Rich job records — Collect titles, company details, locations, work mode, seniority, descriptions, and public URLs.
- Detail enrichment — Optionally gather full job descriptions, company metadata, and screening questions.
- URL support — Start from a Bumeran listing URL or a direct job detail URL.
- QA-friendly defaults — Falls back to `INPUT.json` when the public search fields are left empty.
Use Cases
Job Market Research
Track open roles in a city or region and analyze which companies are hiring for specific skills.
Competitive Hiring Monitoring
Monitor hiring activity by keyword and location to understand demand across teams, sectors, and markets.
Lead Generation
Build datasets of companies and job openings for recruiting, outreach, and staffing workflows.
Salary and Role Analysis
Collect enriched listings to study role seniority, work mode, and hiring patterns over time.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `startUrl` | String | No | `https://www.bumeran.com.ar/empleos.html` | Bumeran listing URL or direct job detail URL. |
| `keyword` | String | No | `developer` | Search keyword used when no specific URL is provided. |
| `location` | String | No | `Buenos Aires` | Province or locality name used to narrow the search. |
| `collectDetails` | Boolean | No | `true` | Enrich listings with full job details. |
| `results_wanted` | Integer | No | `20` | Maximum number of jobs to collect. |
| `max_pages` | Integer | No | `2` | Safety cap for pagination. |
| `proxyConfiguration` | Object | No | `{"useApifyProxy": false}` | Optional Apify Proxy configuration. |
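The defaults in the table above can be mirrored in a small helper that fills in whatever you leave out of the run input. This is an illustrative sketch for preparing inputs on your side, not part of the actor's API; the actor performs its own defaulting internally:

```python
# Sketch: merge a partial run input with the documented defaults.
# DEFAULTS mirrors the parameter table above; build_run_input is a
# hypothetical helper, not something the actor exposes.
DEFAULTS = {
    "startUrl": "https://www.bumeran.com.ar/empleos.html",
    "keyword": "developer",
    "location": "Buenos Aires",
    "collectDetails": True,
    "results_wanted": 20,
    "max_pages": 2,
    "proxyConfiguration": {"useApifyProxy": False},
}

def build_run_input(overrides=None):
    """Return a full run input, filling unspecified fields with the defaults."""
    run_input = dict(DEFAULTS)
    run_input.update(overrides or {})
    return run_input

run_input = build_run_input({"keyword": "analista", "results_wanted": 50})
print(run_input["keyword"])      # analista
print(run_input["max_pages"])    # 2
```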
Output Data
Each dataset item can include the following fields:
| Field | Type | Description |
|---|---|---|
| `id` | Integer | Bumeran job identifier. |
| `title` | String | Job title. |
| `company` | String | Company name. |
| `location` | String | Human-readable job location. |
| `area` | String | Main job area. |
| `subarea` | String | Job subarea. |
| `modality` | String | Work mode such as remote or hybrid. |
| `seniority` | String | Seniority level. |
| `job_type` | String | Job type such as full-time or part-time. |
| `published_date` | String | Publication date shown by Bumeran. |
| `description_text` | String | Plain text version of the job description. |
| `questions` | Array | Screening questions, when available. |
| `url` | String | Public Bumeran job URL. |
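A common first step when post-processing items is normalizing the `published_date` string. The sample output further below suggests a `DD-MM-YYYY` display format; treat that as an assumption and verify it against your own runs:

```python
from datetime import datetime

def normalize_item(item):
    """Return a copy of a dataset item with published_date converted to ISO 8601.

    Assumes Bumeran's DD-MM-YYYY display format (an assumption based on the
    sample output); leaves the field untouched if it does not parse.
    """
    out = dict(item)
    raw = item.get("published_date")
    if raw:
        try:
            out["published_date"] = datetime.strptime(raw, "%d-%m-%Y").date().isoformat()
        except ValueError:
            pass  # unknown format: keep the original string
    return out

normalized = normalize_item({"id": 1118181775, "published_date": "05-03-2026"})
print(normalized["published_date"])  # 2026-03-05
```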
Usage Examples
Basic Keyword Search
```json
{
  "keyword": "developer",
  "location": "Buenos Aires",
  "results_wanted": 20
}
```
Start From a Listing URL
```json
{
  "startUrl": "https://www.bumeran.com.ar/empleos-en-buenos-aires-busqueda-developer.html",
  "results_wanted": 20,
  "max_pages": 2
}
```
Collect a Single Job From Its URL
```json
{
  "startUrl": "https://www.bumeran.com.ar/empleos/fullstack-developer-sr-acciona-it-1118181775.html"
}
```
Sample Output
```json
{
  "id": 1118181775,
  "title": "Fullstack Developer Sr",
  "company": "ACCIONA IT",
  "location": "Capital Federal, Buenos Aires, Argentina",
  "area": "Tecnología, Sistemas y Telecomunicaciones",
  "subarea": "Programación",
  "modality": "Remoto",
  "seniority": "Senior",
  "job_type": "Full-time",
  "published_date": "05-03-2026",
  "description_text": "En Acciona IT estamos buscando un Fullstack Developer SR...",
  "questions": ["¿Cuántos años de experiencia tenés trabajando con Node?"],
  "url": "https://www.bumeran.com.ar/empleos/fullstack-developer-sr-acciona-it-1118181775.html"
}
```
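Once the dataset is exported as JSON, records like the sample above can be filtered with plain Python. This sketch keeps remote, senior-level roles; the field values (`Remoto`, `Senior`) are taken from the sample output and may vary between listings:

```python
def remote_senior(items):
    """Filter dataset items down to remote, senior-level roles."""
    return [
        item for item in items
        if item.get("modality") == "Remoto" and item.get("seniority") == "Senior"
    ]

# Minimal items for illustration; real records carry the full field set.
items = [
    {"title": "Fullstack Developer Sr", "modality": "Remoto", "seniority": "Senior"},
    {"title": "Analista Jr", "modality": "Presencial", "seniority": "Junior"},
]
print([i["title"] for i in remote_senior(items)])  # ['Fullstack Developer Sr']
```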
Tips for Best Results
Use Specific Keywords
- Start with role names like `developer`, `analista`, or `marketing`.
- Pair the keyword with a real city or province name for more focused results.
Use Bumeran URLs When You Have Them
- A listing URL helps preserve Bumeran's own filtering context.
- A direct job URL is the fastest way to collect one job record.
Keep QA and Local Tests Fast
- The bundled defaults are set to `20` results and `2` pages for fast verification.
- Leave `collectDetails` enabled when you need richer job records.
Proxy Configuration
For reliable results, residential proxies are recommended:
```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Integrations
Connect your data with:
- Google Sheets — Export for analysis
- Airtable — Build searchable databases
- Slack — Get notifications
- Webhooks — Send to custom endpoints
- Make — Create automated workflows
- Zapier — Trigger actions
Export Formats
Download data in multiple formats:
- JSON — For developers and APIs
- CSV — For spreadsheet analysis
- Excel — For business reporting
- XML — For system integrations
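If you download the JSON export and need CSV for a tool that is not listed, the conversion takes only a few lines of standard-library Python. A sketch, with column names taken from the Output Data table above:

```python
import csv
import io

def items_to_csv(items, fields=("id", "title", "company", "location", "url")):
    """Serialize dataset items to CSV text, keeping only the selected columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields), extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

csv_text = items_to_csv([
    {"id": 1118181775, "title": "Fullstack Developer Sr", "company": "ACCIONA IT",
     "location": "Capital Federal, Buenos Aires, Argentina",
     "url": "https://www.bumeran.com.ar/empleos/fullstack-developer-sr-acciona-it-1118181775.html",
     "modality": "Remoto"},  # fields outside the selection are ignored
])
print(csv_text.splitlines()[0])  # id,title,company,location,url
```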
Frequently Asked Questions
What happens if I leave `startUrl`, `keyword`, and `location` empty?
The actor falls back to the local `INPUT.json` configuration. This keeps local runs and QA checks reproducible without hardcoding those values in the actor logic.
Can I still scrape a filtered Bumeran URL?
Yes. If you provide a Bumeran listing URL, the actor uses that URL context and extracts matching jobs from it.
Does the actor produce public job URLs?
Yes. Each enriched item includes the public Bumeran job URL whenever it is available.
How many items can I collect?
As many as Bumeran exposes for your search. Use `results_wanted` and `max_pages` to set the cap; the defaults (`20` results, `2` pages) are tuned for fast test runs, so raise both for full collections.
Can I scrape multiple pages?
Yes, the actor automatically handles pagination to reach your desired result count.
What if data is missing?
Some fields may be empty if the source doesn't provide that information.
Support
For issues or feature requests, contact support through the Apify Console.
Legal Notice
This actor is designed for legitimate data collection purposes. Users are responsible for ensuring compliance with website terms of service and applicable laws. Use data responsibly and respect rate limits.