Computrabajo Jobs Scraper
Efficiently extract detailed job data from Computrabajo, the leading job portal in Latin America. Gather job titles, descriptions, companies, and locations with speed. To ensure the most reliable performance and prevent blocking, the use of residential proxies is strongly recommended.
Developer: Shahid Irfan
Last modified: 14 days ago
Extract job listings from Computrabajo across Latin America with structured, export-ready output. Collect job titles, companies, locations, salary details, posting dates, tags, and descriptions in one automated run. Built for recruitment teams, market analysts, and job intelligence workflows.
Features
- Multi-country coverage — Scrape Computrabajo listings from major LATAM markets.
- Complete job records — Collect core fields and enriched listing details in one dataset.
- Pagination support — Continue through result pages automatically until your target is reached.
- Deduplicated output — Reduce duplicate records during collection.
- Flexible exports — Download results as JSON, CSV, and other Apify-supported formats.
- Proxy-ready runs — Use proxy configuration for more stable large-scale extraction.
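The deduplication behavior described above can be sketched as a first-seen filter keyed by listing id. This is an illustrative helper, not the actor's internal code; the fallback to `url` when `id` is missing is an assumption based on the field availability notes later in this README.

```python
def dedupe_jobs(jobs):
    """Keep the first occurrence of each listing, keyed by id (url as fallback)."""
    seen = set()
    unique = []
    for job in jobs:
        key = job.get("id") or job.get("url")
        if key in seen:
            continue
        seen.add(key)
        unique.append(job)
    return unique

jobs = [
    {"id": "A1", "title": "Analista"},
    {"id": "A1", "title": "Analista"},         # duplicate from a later page
    {"url": "https://co.computrabajo.com/x"},  # no id, keyed by url instead
]
print(len(dedupe_jobs(jobs)))  # 2
```

The same pattern is useful on your side when merging datasets from several runs.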
Use Cases
Recruitment Pipeline Building
Create fresh job datasets for candidate sourcing and talent mapping by country, city, or role category. Keep internal recruiting dashboards up to date without manual copy-paste.
Labor Market Analysis
Track demand signals across industries using volume, location, and job type data. Compare hiring trends over time for workforce planning and reporting.
Competitive Hiring Monitoring
Follow hiring activity from specific companies or sectors. Use repeated runs to detect changes in open roles and expansion patterns.
Job Aggregator Data Feeds
Populate external job boards, lead databases, or internal search tools with normalized job listing data. Export results in formats your downstream tools already support.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `searchUrl` | String | Yes | `https://ar.computrabajo.com/empleos-de-administracion-y-oficina` | Direct Computrabajo results URL to scrape. |
| `maxJobs` | Integer | No | `20` | Maximum jobs to collect. Use `0` for an expanded run up to platform limits. |
| `includeFullDescription` | Boolean | No | `true` | Include extended description content when available. |
| `proxyConfiguration` | Object | No | Apify Residential Proxy preset | Proxy settings for improved reliability on protected pages. |
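One way to assemble a run input from the defaults in the table above is to merge user overrides onto them before submitting. The `build_input` helper and its URL check are illustrative assumptions, not part of the actor:

```python
DEFAULTS = {"maxJobs": 20, "includeFullDescription": True}

def build_input(search_url, **overrides):
    """Merge overrides onto the documented defaults; searchUrl is required."""
    if not search_url.startswith("https://") or "computrabajo.com" not in search_url:
        raise ValueError("searchUrl must be a Computrabajo results URL")
    payload = {"searchUrl": search_url, **DEFAULTS, **overrides}
    if payload["maxJobs"] < 0:
        raise ValueError("maxJobs must be 0 (expanded run) or a positive limit")
    return payload

payload = build_input(
    "https://mx.computrabajo.com/empleos-de-tecnologia-sistemas",
    maxJobs=80,
)
print(payload["maxJobs"])  # 80
```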
Output Data
Each dataset item can include:
| Field | Type | Description |
|---|---|---|
| `id` | String | Listing identifier when available. |
| `title` | String | Job title. |
| `company` | String | Hiring company name. |
| `companyId` | String | Company identifier when available. |
| `location` | String | Job location (city/region). |
| `salary` | String | Salary text or normalized range when provided. |
| `jobType` | String | Employment type or contract label. |
| `postedDate` | String | Posted date label from the listing. |
| `updatedAt` | String | Last update timestamp/label when available. |
| `descriptionHtml` | String | Rich job description content. |
| `descriptionText` | String | Plain-text description. |
| `applyUrl` | String | Application URL when available. |
| `tags` | Array | Skills or keywords attached to the listing. |
| `url` | String | Direct listing URL. |
| `source` | String | Source reference for extracted record type. |
| `scrapedAt` | String | ISO timestamp of extraction. |
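Since every field except the core ones may be absent, downstream code should read items defensively. A minimal sketch of normalizing a dataset item into a typed record (the `JobRecord` class is hypothetical, not something the actor exports):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class JobRecord:
    title: str
    company: str
    location: str
    url: str
    scraped_at: Optional[datetime]

    @classmethod
    def from_item(cls, item):
        """Build a record from a raw dataset item, tolerating missing fields."""
        ts = item.get("scrapedAt")
        return cls(
            title=item.get("title", ""),
            company=item.get("company", ""),
            location=item.get("location", ""),
            url=item.get("url", ""),
            # scrapedAt is an ISO-8601 string; convert the trailing Z for fromisoformat
            scraped_at=datetime.fromisoformat(ts.replace("Z", "+00:00")) if ts else None,
        )

rec = JobRecord.from_item(
    {"title": "Analista", "scrapedAt": "2026-02-19T09:30:22.145Z"}
)
print(rec.title)  # Analista
```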
Usage Examples
Basic Run
Collect the first 20 jobs from a category URL:
```json
{
  "searchUrl": "https://ar.computrabajo.com/empleos-de-administracion-y-oficina",
  "maxJobs": 20
}
```
Country-Specific Hiring Snapshot
Collect up to 80 listings from a technology vertical:
```json
{
  "searchUrl": "https://mx.computrabajo.com/empleos-de-tecnologia-sistemas",
  "maxJobs": 80,
  "includeFullDescription": true
}
```
Proxy-Enabled Production Run
Use residential proxies for improved reliability at scale:
```json
{
  "searchUrl": "https://co.computrabajo.com/empleos-de-ventas-en-bogota",
  "maxJobs": 150,
  "includeFullDescription": true,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Sample Output
```json
{
  "id": "F5DDB4D356B9B34A61373E686DCF3405",
  "title": "Analista de Reclutamiento",
  "company": "Empresa ABC",
  "location": "Bogota, Cundinamarca",
  "salary": "3000000 - 4200000 COP",
  "jobType": "Tiempo completo",
  "postedDate": "Hace 2 dias",
  "descriptionText": "Importante compania del sector requiere analista de reclutamiento...",
  "tags": ["reclutamiento", "talento humano", "seleccion"],
  "url": "https://co.computrabajo.com/ofertas-de-trabajo/oferta-de-trabajo-de-analista-de-reclutamiento-F5DDB4D356B9B34A61373E686DCF3405",
  "scrapedAt": "2026-02-19T09:30:22.145Z"
}
```
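Note that `postedDate` is a relative Spanish label ("Hace 2 dias" means "2 days ago"). If you need absolute dates for trend analysis, you can resolve the label against the run date. The label set handled below is an assumption; Computrabajo may use other phrasings:

```python
import re
from datetime import date, timedelta

def parse_posted_date(label, today):
    """Resolve labels like 'Hoy', 'Ayer', 'Hace 2 dias' to an absolute date.
    Returns None for labels this sketch does not recognize."""
    label = label.strip().lower()
    if label == "hoy":          # "today"
        return today
    if label == "ayer":         # "yesterday"
        return today - timedelta(days=1)
    m = re.match(r"hace (\d+) d[ií]as?", label)  # "N days ago", with or without accent
    if m:
        return today - timedelta(days=int(m.group(1)))
    return None

print(parse_posted_date("Hace 2 dias", date(2026, 2, 19)))  # 2026-02-17
```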
Tips for Best Results
Start with a Real Results URL
- Open Computrabajo in your browser and copy a working results URL.
- Prefer category/search pages with active listings.
Keep Initial Runs Small
- Begin with `maxJobs: 20` to validate inputs quickly.
- Increase limits after confirming the output structure you need.
Use Proxy Configuration for Larger Runs
- Enable residential proxies for better stability during longer sessions.
- Keep country/domain alignment in your `searchUrl` for consistent results.
Plan Recurring Collection
- Schedule runs daily or weekly to monitor hiring changes.
- Use consistent URLs between runs for cleaner trend comparisons.
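When you run the actor on a schedule, the recurring datasets can be diffed by listing id to surface newly posted and removed roles. A minimal sketch (the `diff_runs` helper is illustrative and assumes both runs used the same `searchUrl`):

```python
def diff_runs(previous, current):
    """Compare two runs keyed by listing id; return (new_ids, removed_ids)."""
    prev_ids = {job["id"] for job in previous}
    curr_ids = {job["id"] for job in current}
    return sorted(curr_ids - prev_ids), sorted(prev_ids - curr_ids)

prev_run = [{"id": "A"}, {"id": "B"}]
curr_run = [{"id": "B"}, {"id": "C"}]
print(diff_runs(prev_run, curr_run))  # (['C'], ['A'])
```

New ids indicate fresh postings; removed ids suggest roles that were filled or taken down.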
Integrations
Connect your dataset with:
- Google Sheets — Share and review extracted jobs with non-technical teams.
- Airtable — Build searchable hiring intelligence bases.
- Looker Studio / BI tools — Visualize role and location trends.
- Webhooks — Trigger downstream workflows after each run.
- Zapier / Make — Automate no-code pipelines.
Export Formats
- JSON — API and custom application workflows.
- CSV — Spreadsheet and analytics workflows.
- Excel — Business reporting and stakeholder handoff.
- XML — Legacy integrations where needed.
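Apify generates these exports for you, but if you post-process the JSON dataset yourself, flattening selected fields to CSV is straightforward with the standard library. This sketch writes empty cells for the missing fields discussed in the FAQ below:

```python
import csv
import io

def to_csv(items, fields):
    """Write the chosen fields of each dataset item to CSV text;
    fields absent from an item become empty cells."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for item in items:
        writer.writerow({f: item.get(f, "") for f in fields})
    return buf.getvalue()

rows = to_csv(
    [{"title": "Analista", "company": "Empresa ABC"}],  # no salary published
    ["title", "company", "salary"],
)
print(rows)
```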
Frequently Asked Questions
How many jobs can I collect in one run?
Set maxJobs to your target. For QA and fast validation, start with 20, then scale up as needed.
Can I scrape any Computrabajo country domain?
Yes. Use a valid Computrabajo URL for your target country (for example ar, mx, co, pe, cl domains).
Why do some records have missing fields?
Some listings do not publish every attribute (for example salary or tags). Missing fields are normal and source-dependent.
Is proxy configuration required?
Small tests may work without advanced setup, but proxy configuration is recommended for higher reliability and larger runs.
Can I schedule this actor?
Yes. You can run on-demand or schedule recurring runs in Apify to maintain fresh job datasets.
Support
For issues or feature requests, use the Apify Console issue/support channels for this actor.
Legal Notice
This actor is intended for legitimate data collection and research workflows. You are responsible for complying with website terms, local laws, and data-use requirements in your jurisdiction.