Wuzzuf Jobs Scraper
Extract job listings from Wuzzuf.net, the leading job platform for Egypt and the MENA region. This high-performance scraper collects comprehensive job data including titles, companies, locations, salaries, requirements, and full descriptions.
Key Features
- Smart Data Extraction: Uses JSON API as primary source with automatic HTML fallback for maximum reliability
- Comprehensive Data: Extracts job title, company, location, salary, job type, career level, skills, posting date, and full descriptions
- Advanced Filtering: Search by keyword, location, category, career level, and employment type
- Efficient Pagination: Automatically handles multi-page results with configurable limits
- Structured Output: Clean, consistent JSON format ready for analysis and integration
- Production Ready: Built with enterprise-grade error handling and retry logic
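The JSON-first extraction with automatic HTML fallback can be sketched roughly as follows. This is an illustrative pattern only, not the actor's actual internals; both parsing branches are hypothetical stand-ins:

```python
import json

def extract_job(raw_body: str) -> dict:
    """Try to parse a listing response as JSON first; fall back to HTML.

    Both branches are simplified stand-ins for the actor's real parsers,
    shown only to demonstrate the JSON-primary/HTML-fallback pattern.
    """
    try:
        data = json.loads(raw_body)  # primary source: JSON API response
        return {"title": data.get("title"), "source": "json"}
    except json.JSONDecodeError:
        # Fallback: naive HTML scrape (a real parser would use a DOM library)
        title = None
        if "<h1>" in raw_body:
            title = raw_body.split("<h1>")[1].split("</h1>")[0]
        return {"title": title, "source": "html"}

print(extract_job('{"title": "Senior Software Engineer"}'))
print(extract_job("<h1>Accountant</h1>"))
```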
Use Cases
- Job market research and analysis for Egypt and MENA region
- Salary benchmarking and compensation studies
- Recruitment pipeline automation
- Skills demand tracking and trend analysis
- Job aggregation platforms and job boards
- Career guidance and job recommendation systems
Input Configuration
Configure the scraper using these parameters:
Search Parameters
| Parameter | Type | Description | Example |
|---|---|---|---|
| keyword | String | Search for specific job titles or keywords | "software engineer", "accountant" |
| location | String | Filter jobs by city or region | "Cairo", "Alexandria", "Dubai" |
| category | String | Filter by job category | "IT/Software Development" |
| careerLevel | String | Filter by experience level | "Entry Level", "Experienced" |
| jobType | String | Filter by employment type | "Full Time", "Remote" |
| maxJobAge | String | Filter jobs by posting age | "7 days", "30 days", "90 days", "all" |
| startUrl | String | Custom Wuzzuf search URL (overrides other filters) | "https://wuzzuf.net/search/jobs/?q=..." |
Control Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| results_wanted | Integer | 100 | Maximum number of jobs to collect (1-1000) |
| max_pages | Integer | 20 | Maximum search pages to process (~15 jobs per page) |
| collectDetails | Boolean | true | Extract full job descriptions and details |
| maxJobAge | String | all | Only include jobs posted within the specified time frame |
| proxyConfiguration | Object | Residential | Proxy settings for reliable scraping |
Input Example
```json
{
  "keyword": "software engineer",
  "location": "Cairo",
  "category": "IT/Software Development",
  "careerLevel": "Experienced",
  "jobType": "Full Time",
  "maxJobAge": "30 days",
  "results_wanted": 50,
  "max_pages": 5,
  "collectDetails": true
}
```
Output Format
Each job listing contains the following structured data:
```json
{
  "title": "Senior Software Engineer",
  "company": "Tech Company Egypt",
  "location": "Maadi, Cairo, Egypt",
  "salary": "Confidential",
  "job_type": "Full Time",
  "career_level": "Experienced",
  "date_posted": "3 hours ago",
  "skills": ["JavaScript", "React", "Node.js", "MongoDB"],
  "description_html": "<div>Full HTML job description...</div>",
  "description_text": "Plain text job description...",
  "url": "https://wuzzuf.net/jobs/p/...",
  "scraped_at": "2025-12-14T10:30:00.000Z"
}
```
Output Fields
| Field | Type | Description |
|---|---|---|
| title | String | Job title or position name |
| company | String | Hiring company name |
| location | String | Job location (city, country) |
| salary | String | Salary range or "Confidential" |
| job_type | String | Employment type (Full Time, Part Time, etc.) |
| career_level | String | Required experience level |
| date_posted | String | When the job was posted |
| skills | Array | Required skills and technologies |
| description_html | String | Full job description with HTML formatting |
| description_text | String | Plain text version of description |
| url | String | Direct link to job posting |
| scraped_at | String | ISO timestamp of data extraction |
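Once downloaded, dataset items in this format can be analyzed directly. As a minimal sketch of the skills-demand-tracking use case, counting occurrences across the skills array might look like this (the sample items below are invented to mimic the schema above):

```python
from collections import Counter

def top_skills(items: list[dict], n: int = 3) -> list[tuple[str, int]]:
    """Count skill occurrences across scraped job items, using the
    `skills` array field documented above."""
    counts = Counter(skill for item in items for skill in item.get("skills", []))
    return counts.most_common(n)

# Sample items mimicking the output schema (invented for illustration)
items = [
    {"title": "Senior Software Engineer", "skills": ["JavaScript", "React", "Node.js"]},
    {"title": "Frontend Developer", "skills": ["JavaScript", "React"]},
    {"title": "Backend Developer", "skills": ["Node.js", "MongoDB"]},
]
print(top_skills(items))
```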
How to Use
Running on Apify Platform
- Navigate to the Apify Console
- Search for "Wuzzuf Jobs Scraper" in the Store
- Click "Try for free"
- Configure your search parameters in the Input tab
- Click "Start" to begin scraping
- Download results in JSON, CSV, Excel, or HTML format
API Integration
Use the Apify API to integrate job scraping into your applications:
```javascript
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({
    token: 'YOUR_API_TOKEN',
});

const run = await client.actor('YOUR_ACTOR_ID').call({
    keyword: 'data scientist',
    location: 'Cairo',
    results_wanted: 100,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Python Example
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('YOUR_ACTOR_ID').call(run_input={
    'keyword': 'marketing manager',
    'location': 'Dubai',
    'results_wanted': 50,
})

dataset = client.dataset(run['defaultDatasetId'])
items = dataset.list_items().items
```
Performance and Costs
| Jobs Scraped | Compute Units | Runtime |
|---|---|---|
| 50 jobs (details) | ~0.02 CU | ~1-2 minutes |
| 100 jobs (details) | ~0.04 CU | ~2-4 minutes |
| 500 jobs (details) | ~0.15 CU | ~10-15 minutes |
Note: Actual costs may vary based on proxy usage and network conditions. Scraping without details (collectDetails: false) is significantly faster.
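Working from the table above, detail scraping runs at very roughly 0.0003-0.0004 CU per job. A pre-run estimate (an approximation for budgeting, not a billing guarantee; the per-job rate is inferred from the table, not published) can be computed as:

```python
CU_PER_JOB = 0.0004  # rough per-job rate inferred from the table above

def estimate_cu(jobs: int) -> float:
    """Approximate compute units for a detail-scraping run of `jobs` jobs."""
    return round(jobs * CU_PER_JOB, 3)

print(estimate_cu(100))  # roughly matches the 100-job row above
```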
Best Practices
Optimal Configuration
- Use Specific Keywords: Narrow searches return more relevant results faster
- Set Reasonable Limits: Start with results_wanted: 100 to control costs
- Enable Proxy: Use residential proxies for best reliability
- Schedule Regular Runs: Set up automated scraping to track new job postings
Error Handling
The scraper includes robust error handling:
- Automatic retries for failed requests (3 attempts)
- Graceful fallback from JSON API to HTML parsing
- Session management to handle rate limiting
- Detailed logging for troubleshooting
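A retry loop in the spirit of the 3-attempt policy above can be sketched as follows. The fetch callable is a hypothetical placeholder for the actor's request logic, and the exponential backoff timing is an assumption, not the actor's documented schedule:

```python
import time

def fetch_with_retries(fetch, url: str, attempts: int = 3, base_delay: float = 1.0):
    """Call `fetch(url)` up to `attempts` times with exponential backoff.

    `fetch` stands in for whatever HTTP call the actor makes; the real actor
    additionally falls back from the JSON API to HTML parsing on failure.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch(url)
        except Exception as exc:  # in practice, catch specific network errors
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_error

# Demo: a fake fetch that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky_fetch(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"url": url, "status": 200}

print(fetch_with_retries(flaky_fetch, "https://wuzzuf.net/search/jobs/?q=test", base_delay=0.01))
```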
Limitations
- Respects the website's rate limits and robots.txt
- Requires active Apify account and compute units
- Some job details may be behind authentication
- Output language depends on Wuzzuf's default (primarily Arabic and English)
Frequently Asked Questions
Can I scrape jobs from specific companies?
Yes, use the keyword parameter with the company name, or provide a custom startUrl filtering by company.
What languages are supported?
The scraper extracts content in both Arabic and English as provided by Wuzzuf.
How often can I run this scraper?
You can run it as frequently as needed. For job monitoring, we recommend daily or weekly schedules.
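When monitoring on a schedule, new postings between runs can be identified by comparing the url field of each item. A simple sketch, assuming you persist the previous run's URLs yourself (the sample data below is invented):

```python
def new_postings(current: list[dict], seen_urls: set[str]) -> list[dict]:
    """Return items whose `url` was not present in a previous run."""
    return [item for item in current if item.get("url") not in seen_urls]

# Invented sample data illustrating two runs
previous_run_urls = {"https://wuzzuf.net/jobs/p/111", "https://wuzzuf.net/jobs/p/222"}
today = [
    {"title": "Accountant", "url": "https://wuzzuf.net/jobs/p/222"},
    {"title": "Data Analyst", "url": "https://wuzzuf.net/jobs/p/333"},
]
print(new_postings(today, previous_run_urls))  # only the Data Analyst posting
```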
Can I export data to Google Sheets?
Yes, Apify integrations support direct export to Google Sheets, Excel, CSV, JSON, and more.
Is this scraper compliant with terms of service?
This scraper is designed for legitimate use cases like market research and recruitment. Users are responsible for ensuring their usage complies with applicable terms of service and regulations.
Support and Feedback
- Report issues on GitHub
- Contact via Apify Support
- Join the Apify Discord community