Jobberman Job Scraper
Pricing: Pay per usage
Extract job listings effortlessly with the Jobberman Job Scraper. This lightweight actor is optimized for speed and accuracy on Jobberman sites. To prevent blocking and ensure high-quality results, using residential proxies is strongly recommended. Streamline your recruitment data collection today!
Developer: Shahid Irfan
Last modified: 4 days ago
Extract comprehensive job listings from Jobberman.com, Africa's leading job search platform. This powerful web scraper automates the collection of job data including titles, companies, locations, descriptions, and more. Perfect for job market analysis, recruitment agencies, data analysts, and anyone requiring structured employment data from Jobberman.
Keywords: job scraper, Jobberman scraper, job listings extraction, job data scraping, employment data, job search automation, web scraping jobs, Jobberman API alternative, job market data, recruitment data, job postings scraper, career opportunities scraper, employment listings, job board scraper, African job market.
Table of Contents
- Overview
- Key Features
- Input
- Usage Examples
- Output
- How It Works
- Configuration & Runtime Notes
- Troubleshooting
- FAQ
- Files in This Repository
- License
Overview
The Jobberman Job Scraper is designed to extract job listings data from Jobberman.com, one of Africa's premier job search platforms. This Apify actor streamlines the process of scraping job postings, gathering detailed information such as job titles, company names, locations, posting dates, and full descriptions. It's ideal for job market research, recruitment agencies, data analysts, and professionals needing organized job data from Jobberman.
The scraper handles pagination automatically, supports advanced filtering options, and ensures reliable data extraction with configurable proxy and cookie settings for optimal performance.
Key Features
- Comprehensive Job Data Extraction: Scrapes job titles, companies, locations, posting dates, and full descriptions.
- Advanced Search & Filtering: Supports keyword searches, location-based filtering, and date-based job posting filters.
- Pagination Handling: Automatically navigates through multiple pages of job listings to collect extensive data.
- Detailed Job Descriptions: Optionally extracts rich HTML and plain text descriptions from individual job pages.
- Proxy & Cookie Support: Configurable proxy settings and custom cookies for bypassing restrictions and ensuring successful scraping.
- Safe & Polite Scraping: Built-in limits on results and pages to prevent overloading the target site.
- Data Sanitization: Cleans and normalizes extracted data, including HTML sanitization for safe storage.
- Flexible Input Options: Accepts JSON input for easy configuration and customization of scraping parameters.
Input
Configure the scraper using a JSON input object. All fields are optional, allowing for flexible job data extraction tailored to your needs. The input supports various parameters to refine your job search and control the scraping behavior.
| Field | Type | Description | Default |
|---|---|---|---|
| keyword | string | The job title or keywords to search for on Jobberman.com. Example: "data analyst" or "marketing manager". | - |
| location | string | Geographic location to filter job results. Supports cities, states, or regions. Example: "Lagos", "Abuja", or "Nigeria". | - |
| posted_date | string | Filter jobs by how recently they were posted. Options: "24h" (last 24 hours), "7d" (last 7 days), "30d" (last 30 days). | - |
| startUrl | string | A complete Jobberman.com search URL to begin scraping from. Overrides automatic URL construction from keyword and location. | - |
| results_wanted | integer | The maximum number of job records to extract. The scraper stops once this limit is reached. | 50 |
| max_pages | integer | Maximum number of search result pages to visit. Acts as a safeguard against excessive scraping. | 10 |
| collectDetails | boolean | When true, the scraper visits each job's detail page to gather the full description and additional metadata. | false |
| cookies | string \| object | Custom cookies for requests, useful for handling consent banners or site-specific requirements. Can be a header string or a JSON object. | - |
| proxyConfiguration | object | Proxy settings for the scraping process. Use platform-provided proxies for reliable and geo-targeted scraping. | - |
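Since all fields are optional, the actor has to fill in defaults and discard invalid values before scraping starts. A minimal sketch of that normalization, mirroring the defaults in the table above (the field names match the input schema, but the validation logic itself is an illustrative assumption, not the actor's actual implementation):

```javascript
// Hypothetical input normalizer mirroring the defaults in the table above.
// The validation rules are assumptions for illustration only.
function normalizeInput(raw = {}) {
    const allowedDates = ['24h', '7d', '30d'];
    return {
        keyword: raw.keyword ?? null,
        location: raw.location ?? null,
        // Reject unsupported posted_date values rather than passing them through.
        posted_date: allowedDates.includes(raw.posted_date) ? raw.posted_date : null,
        startUrl: raw.startUrl ?? null,
        results_wanted: Number.isInteger(raw.results_wanted) ? raw.results_wanted : 50,
        max_pages: Number.isInteger(raw.max_pages) ? raw.max_pages : 10,
        collectDetails: raw.collectDetails === true,
    };
}
```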
Usage Examples
Here are practical examples of how to configure and run the Jobberman Job Scraper for different use cases.
Example 1: Basic Job Search
Extract recent software engineering jobs in Lagos:
```json
{
  "keyword": "software engineer",
  "location": "Lagos",
  "posted_date": "7d",
  "results_wanted": 20,
  "max_pages": 5,
  "collectDetails": false
}
```
Example 2: Detailed Job Extraction
Get full job descriptions for marketing positions in Nigeria:
```json
{
  "keyword": "marketing",
  "location": "Nigeria",
  "results_wanted": 50,
  "max_pages": 10,
  "collectDetails": true,
  "cookies": "consent=yes; region=ng"
}
```
Example 3: Custom Start URL
Scrape from a specific pre-filtered search page:
```json
{
  "startUrl": "https://www.jobberman.com/jobs?q=data+science&l=remote&created_at=30+days",
  "results_wanted": 30,
  "collectDetails": true
}
```
Example 4: Large-Scale Data Collection
Collect extensive job data with proxy support:
```json
{
  "keyword": "finance",
  "location": "Africa",
  "posted_date": "30d",
  "results_wanted": 200,
  "max_pages": 50,
  "collectDetails": true,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Output
The scraper saves extracted job data to the default Apify dataset. Each job record is a JSON object containing structured information. Results can be exported in various formats (JSON, CSV, Excel) directly from the Apify platform.
Sample Output Record:
```json
{
  "url": "https://www.jobberman.com/listings/software-engineer-abc123",
  "title": "Senior Software Engineer",
  "company": "Tech Innovations Ltd",
  "location": "Lagos, Nigeria",
  "date_posted": "2 days ago",
  "description_html": "<p>We are looking for a skilled software engineer to join our team...</p>",
  "description_text": "We are looking for a skilled software engineer to join our team...",
  "job_type": "Full Time",
  "salary_range": "NGN 500,000 - NGN 800,000",
  "category": "Information Technology"
}
```
Field Descriptions:
- url: Direct link to the job posting on Jobberman.com.
- title: The job position title.
- company: Name of the hiring company.
- location: Job location or work arrangement (e.g., "Remote").
- date_posted: When the job was posted (relative or absolute date).
- description_html: Full job description in sanitized HTML format.
- description_text: Plain text version of the job description.
- job_type: Employment type (Full Time, Part Time, Contract, etc.).
- salary_range: Salary information, if available.
- category: Job category or industry sector.
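The description_text field is the plain-text counterpart of description_html. The actor's real sanitizer is not shown in this README; the following is only an assumed, minimal sketch of such a conversion, stripping tags while preserving block-level line breaks:

```javascript
// Assumed sketch of deriving description_text from description_html.
// Not the actor's actual sanitizer; shown for illustration only.
function htmlToText(html) {
    return html
        .replace(/<\s*(br|\/p|\/div|\/li)\s*\/?>/gi, '\n') // keep block breaks as newlines
        .replace(/<[^>]+>/g, '')                           // drop all remaining tags
        .replace(/&nbsp;/g, ' ')                           // decode a few common entities
        .replace(/&amp;/g, '&')
        .replace(/[ \t]+/g, ' ')                           // collapse runs of spaces
        .trim();
}
```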
How It Works
The Jobberman Job Scraper operates in a systematic manner to ensure efficient and reliable data extraction:
- URL Construction: Builds or uses the provided search URL based on input parameters.
- Page Navigation: Visits job listing pages, following pagination links up to the specified max_pages limit.
- Link Collection: Extracts URLs of individual job postings from search results.
- Data Extraction: Scrapes basic job information from listing cards.
- Detail Collection (Optional): If enabled, visits each job detail page for comprehensive information.
- Data Processing: Sanitizes and normalizes extracted data, including HTML cleaning.
- Result Storage: Saves structured job records to the Apify dataset.
- Termination: Stops when results_wanted limit is reached or all pages are processed.
The scraper respects configured limits to ensure polite and responsible web scraping practices.
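Step 1 (URL construction) can be sketched as follows. The q, l, and created_at query parameters match the startUrl shown in Example 3 above; the mapping from the posted_date filter values to created_at strings is an assumption about how the actor translates its filters, not a documented contract:

```javascript
// Sketch of search-URL construction; parameter names follow Example 3's startUrl,
// but the posted_date-to-created_at mapping is an assumption.
function buildSearchUrl({ keyword, location, posted_date } = {}) {
    const url = new URL('https://www.jobberman.com/jobs');
    if (keyword) url.searchParams.set('q', keyword);
    if (location) url.searchParams.set('l', location);
    const dateMap = { '24h': '24 hours', '7d': '7 days', '30d': '30 days' };
    if (posted_date && dateMap[posted_date]) {
        url.searchParams.set('created_at', dateMap[posted_date]);
    }
    return url.toString(); // URLSearchParams serializes spaces as '+'
}
```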
Configuration & Runtime Notes
- Polite Scraping: Use results_wanted and max_pages to control scraping intensity and avoid overloading Jobberman.com servers.
- Cookie Management: For sites with consent requirements, provide appropriate cookies to ensure access to full content.
- Proxy Usage: Leverage Apify's proxy services for geo-targeting or to handle IP-based restrictions.
- HTML Sanitization: Descriptions are cleaned to remove unsafe elements while preserving formatting and links.
- Performance: Detail collection increases runtime but provides richer data. Disable for faster, basic extractions.
- Error Handling: The scraper includes robust error handling for network issues and site changes.
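Because the cookies input accepts either a raw Cookie header string or a JSON object, the two forms must be normalized to one representation before requests are made. A minimal assumed helper (the actor's real handling may differ):

```javascript
// Assumed helper normalizing both accepted cookie input forms
// (header string or object) to a single Cookie header string.
function toCookieHeader(cookies) {
    if (!cookies) return '';
    if (typeof cookies === 'string') return cookies.trim();
    return Object.entries(cookies)
        .map(([name, value]) => `${name}=${value}`)
        .join('; ');
}
```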
Troubleshooting
- No Results Returned: Check keyword and location parameters. Try a broader search or use startUrl with a known working search page.
- Early Termination: Verify results_wanted and max_pages settings. Increase limits if needed for larger datasets.
- Geographic Blocking: Use proxy configuration with appropriate geographic groups to access region-specific content.
- Incomplete Descriptions: Enable collectDetails and ensure sufficient page load time in actor settings.
- Cookie Issues: If encountering access restrictions, provide valid cookies that match the site's requirements.
- Performance Problems: For large-scale scraping, reduce collectDetails or use higher memory/CPU allocations on the platform.
FAQ
Can I scrape jobs from specific companies?
While direct company filtering isn't supported, you can use keywords that include company names or filter results post-extraction.
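Post-extraction filtering, as suggested above, can be done on the exported dataset records. A minimal sketch using the field names from the sample output (filterByCompany is a hypothetical helper, not part of the actor):

```javascript
// Hypothetical post-extraction filter over exported dataset records;
// record field names match the sample output above.
function filterByCompany(records, companyName) {
    const needle = companyName.toLowerCase();
    return records.filter((r) => (r.company || '').toLowerCase().includes(needle));
}
```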
How often should I run the scraper?
Job postings change frequently, so daily or weekly runs are recommended depending on your data freshness needs.
Is the data real-time?
The scraper extracts data as it appears on Jobberman.com at the time of the run. For real-time monitoring, schedule regular actor runs.
Can I export data in formats other than JSON?
Yes, Apify supports exporting datasets to CSV, Excel, XML, and other formats directly from the platform.
What if Jobberman changes their website structure?
The scraper is designed to be resilient, but significant site changes may require updates. Check actor logs and contact support if issues persist.
Is there a limit to how many jobs I can scrape?
The actor has configurable limits (results_wanted, max_pages) to prevent excessive scraping. Contact Apify for higher limits if needed.
Files in This Repository
- src/main.js — Main scraper implementation and entry point.
- INPUT.json — Sample input configuration for testing and reference.
- package.json — Project dependencies and scripts.
- README.md — This documentation file.
License
This project is licensed under the MIT License. See the LICENSE file in the repository for full details.
For additional support, check the actor run logs, review Apify documentation, or reach out to the community. Happy scraping! 🚀