
Job Scraper
A comprehensive job scraping actor for Apify that collects job listings from multiple platforms including LinkedIn, Glassdoor, Google Jobs, Bayt, and Naukri.
Description
This Apify actor allows you to search and collect job listings from multiple job sites in a single run. It provides detailed job information with intelligent data processing, making it ideal for job market research, recruitment efforts, and career exploration.
The actor implements robust error handling and retry mechanisms to ensure reliable results even when dealing with anti-scraping measures. Data is automatically processed, enriched, and categorized to provide actionable insights into the job market.
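For example, a run can be started from Python with the Apify client along these lines (a minimal sketch; the actor ID is a placeholder and the input values are only illustrative):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Minimal illustrative input; see the Input Parameters table below for all options.
run_input = {
    "search_term": "software engineer",
    "location": "New York, NY",
    "site_names": ["linkedin", "glassdoor", "google"],
    "results_wanted": 20,
}

# "username/jobs-scraper" is a placeholder; use the actor ID shown on its Apify page.
run = client.actor("username/jobs-scraper").call(run_input=run_input)

# Read the scraped listings from the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("title"), "-", item.get("company"))
```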
Features
- Scrape job listings from multiple platforms in one run
- Customize search parameters (location, job type, remote, etc.)
- Retrieve detailed job information including salary data when available
- Convert salaries to annual format for easier comparison
- Proxy support for avoiding rate limits
- Resilient scraping with automatic retries
- Site-by-site approach to ensure some results even if one site fails
- Detailed error reporting and suggestions
- Automatic job categorization and data enrichment
- Clean data output by removing null/empty fields
- Job statistics and summary included in results
Input Parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| `site_names` | Array | List of job sites to scrape from (supports "linkedin", "glassdoor", "google", "bayt", "naukri") | `["linkedin", "glassdoor", "google"]` |
| `search_term` | String | Job search keywords (e.g., "software engineer", "data scientist") | Required |
| `location` | String | Job location (e.g., "San Francisco, CA", "New York, NY") | Required |
| `results_wanted` | Integer | Number of job listings to retrieve per site | 20 |
| `hours_old` | Integer | Only show jobs posted within this many hours | 72 |
| `country` | String | Country for Glassdoor searches (e.g., "USA", "UK", "Canada") | "USA" |
| `distance` | Integer | Maximum distance from the location in miles | 50 |
| `job_type` | String | Type of job to search for ("fulltime", "parttime", "internship", "contract") | null |
| `is_remote` | Boolean | Only show remote jobs | false |
| `offset` | Integer | Number of results to skip (useful for pagination) | 0 |
| `proxies` | Array | List of proxies to use for scraping (format: "user:pass@host:port" or "host:port") | [] |
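As a quick reference, the smallest valid input only needs the two required parameters; everything else falls back to the defaults listed above:

```json
{
  "search_term": "data scientist",
  "location": "San Francisco, CA"
}
```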
Output
The actor outputs a dataset of job listings with the following information:
- Job title
- Company name
- Location (city, state, country)
- Remote status
- Job type (full-time, part-time, etc.)
- Salary information (when available)
- Job URL
- Job description
- Date posted
- Job category (automatically derived from title)
- Date scraped
- Search query used
- And more depending on the job board
Output Format Example
{"jobs": [{"company": "Cartesia","company_url": "https://www.linkedin.com/company/cartesia-ai","title": "Software Engineer, India","date_posted": "2025-07-27","job_url": "https://www.linkedin.com/jobs/view/4227402416","skills": ["Python", "React", "AWS", "Docker", "PostgreSQL"],"job_type": "Full-time","experience_range": "2-4 years","location": "Bengaluru, Karnataka, India","id": "li-4227402416","site": "linkedin","interval": "yearly","min_amount": 1200000,"max_amount": 1800000,"currency": "INR","is_remote": false,"job_level": "Mid-level","job_function": "Engineering","company_industry": "Artificial Intelligence","company_rating": 4.4,"work_from_home_type": "Hybrid","category": "Software Engineering"}],}
Advanced Features
Data Enrichment
The actor provides additional data enrichment:
- Automatic job categorization based on title keywords
- Derivation of job type from title if not explicitly provided
- Detection of remote jobs based on title and description
- Default values for missing fields
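The exact keyword lists are internal to the actor, but the enrichment behaves roughly like the following sketch (the keywords and rules here are assumptions for illustration only):

```python
# Illustrative approximation of keyword-based enrichment; the actor's real
# keyword lists and rules are not published.
CATEGORY_KEYWORDS = {
    "Software Engineering": ["engineer", "developer", "programmer"],
    "Data Science": ["data scientist", "machine learning", "analytics"],
    "Design": ["designer", "ux", "ui"],
}

def categorize(title: str) -> str:
    lowered = title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "Other"

def looks_remote(title: str, description: str) -> bool:
    # Remote detection based on title and description text.
    text = f"{title} {description}".lower()
    return "remote" in text or "work from home" in text

print(categorize("Senior Software Engineer"))              # Software Engineering
print(looks_remote("Data Analyst", "Fully remote role"))   # True
```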
Clean Output
The `clean_output` parameter controls how the data is returned: when enabled, null and empty fields are removed from each job record before it is written to the dataset.
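In effect, a cleaned record is roughly equivalent to applying a filter like this hypothetical helper:

```python
def clean_record(record: dict) -> dict:
    # Drop keys whose values are None or empty strings/containers, while
    # keeping meaningful falsy values such as 0 and False.
    return {k: v for k, v in record.items() if v not in (None, "", [], {})}

print(clean_record({"title": "Data Engineer", "salary_source": None, "skills": []}))
# {'title': 'Data Engineer'}
```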
Site-by-Site Scraping
The actor scrapes each site individually, which means:
- If one site fails, the others will still be scraped
- More detailed error reporting for each site
- Random delays between sites to avoid detection
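Conceptually, the per-site loop works along these lines (a simplified sketch; the function names are placeholders):

```python
import random
import time

def scrape_all_sites(sites, scrape_site):
    # Each site is scraped independently so one failure does not abort the run;
    # errors are collected per site for the detailed error report.
    results, errors = [], {}
    for site in sites:
        try:
            results.extend(scrape_site(site))
        except Exception as exc:
            errors[site] = str(exc)
        time.sleep(random.uniform(2, 8))  # random delay between sites to avoid detection
    return results, errors
```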
Retry Mechanism
The actor includes an automatic retry system:
- Configurable number of retries per site
- Increasing delays between retries
- Proxy rotation between retries (if multiple proxies provided)
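The retry behaviour can be pictured with a sketch like this (simplified; the actual delays and limits are configured inside the actor):

```python
import itertools
import time

def scrape_with_retries(scrape_once, proxies, max_retries=3):
    # Rotate through the supplied proxies (if any) and wait longer after
    # each failed attempt before trying again.
    proxy_cycle = itertools.cycle(proxies or [None])
    for attempt in range(1, max_retries + 1):
        try:
            return scrape_once(proxy=next(proxy_cycle))
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(5 * attempt)  # increasing delay between retries
```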
Error Handling
Improved error handling with:
- Detailed error messages for each site
- Specific suggestions for different error types
- Comprehensive logging for troubleshooting
Usage Tips
- Handling Site Blocking
  - Many job sites implement anti-scraping measures
  - Using proxies is highly recommended to avoid IP blocking
  - Spread requests over time by reducing the number of sites you scrape at once
- Google Jobs Scraping
  - For Google Jobs, the search query format is important
  - The actor will automatically generate a Google-specific search term if not provided
- Improving Results
  - Use specific search terms and locations for better results
  - Setting `linkedin_fetch_description` to `true` provides more detailed job descriptions but is slower
  - For Glassdoor, setting the correct `country` parameter is important
Example Usage
{"easy_apply": false,"enforce_annual_salary": true,"is_remote": true,"location": "India","results_wanted": 50,"search_term": "software engineer","site_names": ["linkedin","glassdoor","google","bayt","naukri"],"hours_old": 72,"country": "USA","distance": 50,"description_format": "markdown","offset": 0}
Troubleshooting
If you encounter a 429 error (rate limiting) or 403 error (forbidden), try:
- Using proxies
- Reducing the `results_wanted` value
- Waiting some time between runs
- Targeting fewer job sites at once
- Increasing the `max_retries` value
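For example, a rate-limit-friendly input might combine several of these tips; the proxy entries below are placeholders in the format the `proxies` parameter expects:

```json
{
  "search_term": "software engineer",
  "location": "New York, NY",
  "site_names": ["linkedin", "glassdoor"],
  "results_wanted": 20,
  "proxies": ["user:pass@203.0.113.10:8000", "203.0.113.11:8000"]
}
```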
Limitations
- LinkedIn has limitations on which parameters can be used together
- Google Jobs requires a specific search term format
- Rate limiting may occur when scraping a large number of jobs without proxies
- Some job sites may block scraping attempts even with proxies
- Search results may vary compared to manual searches due to personalization