Job Scraper

A comprehensive job scraping actor for Apify that collects job listings from multiple platforms including LinkedIn, Glassdoor, Google Jobs, Bayt, and Naukri.

Description

This Apify actor allows you to search and collect job listings from multiple job sites in a single run. It provides detailed job information with intelligent data processing, making it ideal for job market research, recruitment efforts, and career exploration.

The actor implements robust error handling and retry mechanisms to ensure reliable results even when dealing with anti-scraping measures. Data is automatically processed, enriched, and categorized to provide actionable insights into the job market.

Features

  • Scrape job listings from multiple platforms in one run
  • Customize search parameters (location, job type, remote, etc.)
  • Retrieve detailed job information including salary data when available
  • Convert salaries to annual format for easier comparison
  • Proxy support for avoiding rate limits
  • Resilient scraping with automatic retries
  • Site-by-site approach to ensure some results even if one site fails
  • Detailed error reporting and suggestions
  • Automatic job categorization and data enrichment
  • Clean data output by removing null/empty fields
  • Job statistics and summary included in results

Input Parameters

| Parameter | Type | Description | Default |
|---|---|---|---|
| site_names | Array | List of job sites to scrape from (supports "linkedin", "glassdoor", "google", "bayt", "naukri") | ["linkedin", "glassdoor", "google"] |
| search_term | String | Job search keywords (e.g., "software engineer", "data scientist") | Required |
| location | String | Job location (e.g., "San Francisco, CA", "New York, NY") | Required |
| results_wanted | Integer | Number of job listings to retrieve per site | 20 |
| hours_old | Integer | Only show jobs posted within this many hours | 72 |
| country | String | Country for Glassdoor searches (e.g., "USA", "UK", "Canada") | "USA" |
| distance | Integer | Maximum distance from the location in miles | 50 |
| job_type | String | Type of job to search for ("fulltime", "parttime", "internship", "contract") | null |
| is_remote | Boolean | Only show remote jobs | false |
| offset | Integer | Number of results to skip (useful for pagination) | 0 |
| proxies | Array | List of proxies to use for scraping (format: "user:pass@host:port" or "host:port") | [] |
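These parameters map directly onto the actor's run input when you start it programmatically. Below is a minimal sketch using the Apify Python client (`apify-client`); the API token and actor ID are placeholders, and only parameters documented above are used.

```python
from apify_client import ApifyClient

# Placeholder token and actor ID -- substitute your own values.
client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "site_names": ["linkedin", "glassdoor", "google"],
    "search_term": "data scientist",
    "location": "New York, NY",
    "results_wanted": 20,
    "hours_old": 72,
    "country": "USA",
    "distance": 50,
    "is_remote": False,
}

# Start the actor and wait for the run to finish.
run = client.actor("<ACTOR_ID>").call(run_input=run_input)

# The scraped jobs land in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("title"), "-", item.get("company"))
```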

Output

The actor outputs a dataset of job listings with the following information:

  • Job title
  • Company name
  • Location (city, state, country)
  • Remote status
  • Job type (full-time, part-time, etc.)
  • Salary information (when available)
  • Job URL
  • Job description
  • Date posted
  • Job category (automatically derived from title)
  • Date scraped
  • Search query used
  • And more depending on the job board

Output Format Example

```json
{
  "jobs": [
    {
      "company": "Cartesia",
      "company_url": "https://www.linkedin.com/company/cartesia-ai",
      "title": "Software Engineer, India",
      "date_posted": "2025-07-27",
      "job_url": "https://www.linkedin.com/jobs/view/4227402416",
      "skills": ["Python", "React", "AWS", "Docker", "PostgreSQL"],
      "job_type": "Full-time",
      "experience_range": "2-4 years",
      "location": "Bengaluru, Karnataka, India",
      "id": "li-4227402416",
      "site": "linkedin",
      "interval": "yearly",
      "min_amount": 1200000,
      "max_amount": 1800000,
      "currency": "INR",
      "is_remote": false,
      "job_level": "Mid-level",
      "job_function": "Engineering",
      "company_industry": "Artificial Intelligence",
      "company_rating": 4.4,
      "work_from_home_type": "Hybrid",
      "category": "Software Engineering"
    }
  ]
}
```
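Once you have downloaded the dataset items, the fields shown above can be filtered like any other records. The snippet below is only an illustration: it keeps remote listings whose yearly minimum salary meets a floor, assumes the items were already fetched (for example with the client call shown earlier), and treats missing salary fields as "no salary data", since not every board returns them.

```python
def filter_jobs(items, min_yearly_salary=0):
    """Keep remote jobs whose yearly minimum salary meets the floor.

    Salary fields (interval, min_amount) are not present for every
    listing, so missing values are treated as no salary data.
    """
    selected = []
    for job in items:
        if not job.get("is_remote"):
            continue
        if job.get("interval") == "yearly" and (job.get("min_amount") or 0) >= min_yearly_salary:
            selected.append(job)
    return selected


# The record shown above is not remote, so it is filtered out.
sample = [{"title": "Software Engineer, India", "is_remote": False,
           "interval": "yearly", "min_amount": 1200000}]
print(filter_jobs(sample, min_yearly_salary=1000000))  # -> []
```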

Advanced Features

Data Enrichment

The actor provides additional data enrichment:

  • Automatic job categorization based on title keywords (see the sketch after this list)
  • Derivation of job type from title if not explicitly provided
  • Detection of remote jobs based on title and description
  • Default values for missing fields
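The actor's exact keyword rules are not documented, so the sketch below only illustrates the general idea of deriving a category from title keywords; the keyword lists are made up for the example.

```python
# Hypothetical keyword map -- the actor's real categories and keywords
# are not documented, so this only illustrates the approach.
CATEGORY_KEYWORDS = {
    "Software Engineering": ["engineer", "developer", "programmer"],
    "Data": ["data scientist", "data analyst", "machine learning"],
    "Design": ["designer", "ux", "ui"],
}

def categorize(title):
    """Return the first category whose keyword appears in the title."""
    lowered = title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "Other"

print(categorize("Software Engineer, India"))  # -> "Software Engineering"
```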

Clean Output

The clean_output parameter controls whether null and empty fields are stripped from each job record before it is returned, so the dataset contains only populated values (see the sketch below).
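As an illustration, the cleanup step can be thought of as dropping keys whose values are None, empty strings, or empty lists; the actor's actual implementation may differ in the details.

```python
def clean_record(job):
    """Drop keys whose values are None, empty strings, or empty lists."""
    return {key: value for key, value in job.items()
            if value is not None and value != "" and value != []}

print(clean_record({"title": "Data Analyst", "skills": [], "company_url": None}))
# -> {"title": "Data Analyst"}
```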

Site-by-Site Scraping

The actor scrapes each site individually, which means:

  • If one site fails, the others will still be scraped
  • More detailed error reporting for each site
  • Random delays between sites to avoid detection (see the sketch below)
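A rough outline of that per-site loop is shown below; scrape_site is a hypothetical stand-in for the real scraping code, and the delay range is illustrative only.

```python
import random
import time

def scrape_site(site, search_term, location):
    """Placeholder for the real per-site scraping logic (hypothetical)."""
    return []

def scrape_all(sites, search_term, location):
    """Scrape each site in isolation so one failure does not abort the run."""
    results, errors = [], {}
    for site in sites:
        try:
            results.extend(scrape_site(site, search_term, location))
        except Exception as exc:
            # Record the failure for this site and keep going with the rest.
            errors[site] = str(exc)
        # Random delay between sites to look less like automated traffic.
        time.sleep(random.uniform(2, 6))
    return results, errors
```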

Retry Mechanism

The actor includes an automatic retry system (a simplified version is sketched after this list):

  • Configurable number of retries per site
  • Increasing delays between retries
  • Proxy rotation between retries (if multiple proxies provided)
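The exact retry policy is not documented; the sketch below only shows the general pattern of increasing delays and rotating through a proxy list between attempts, using a hypothetical fetch callable.

```python
import time

def fetch_with_retries(fetch, proxies=None, max_retries=3, base_delay=5):
    """Call fetch() up to max_retries times, waiting longer after each failure.

    `fetch` is a hypothetical callable that accepts an optional proxy.
    If proxies are supplied, a different one is used on each attempt.
    """
    last_error = None
    for attempt in range(max_retries):
        proxy = proxies[attempt % len(proxies)] if proxies else None
        try:
            return fetch(proxy=proxy)
        except Exception as exc:
            last_error = exc
            time.sleep(base_delay * (attempt + 1))  # increasing delay between retries
    raise last_error
```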

Error Handling

Improved error handling with:

  • Detailed error messages for each site
  • Specific suggestions for different error types
  • Comprehensive logging for troubleshooting

Usage Tips

  1. Handling Site Blocking

    • Many job sites implement anti-scraping measures
    • Using proxies is highly recommended to avoid IP blocking
    • Spread requests over time by reducing the number of sites you scrape at once
  2. Google Jobs Scraping

    • For Google Jobs, the search query format is important
    • The actor will automatically generate a Google-specific search term if not provided
  3. Improving Results

    • Use specific search terms and locations for better results
    • Setting linkedin_fetch_description to true provides more detailed job descriptions but is slower
    • For Glassdoor, setting the correct country parameter is important

Example Usage

```json
{
  "easy_apply": false,
  "enforce_annual_salary": true,
  "is_remote": true,
  "location": "India",
  "results_wanted": 50,
  "search_term": "software engineer",
  "site_names": ["linkedin", "glassdoor", "google", "bayt", "naukri"],
  "hours_old": 72,
  "country": "USA",
  "distance": 50,
  "description_format": "markdown",
  "offset": 0
}
```

Troubleshooting

If you encounter a 429 error (rate limiting) or 403 error (forbidden), try:

  1. Using proxies
  2. Reducing the number of results_wanted
  3. Waiting some time between runs
  4. Targeting fewer job sites at once
  5. Increasing the max_retries value

Limitations

  • LinkedIn has limitations on which parameters can be used together
  • Google Jobs requires a specific search term format
  • Rate limiting may occur when scraping a large number of jobs without proxies
  • Some job sites may block scraping attempts even with proxies
  • Search results may vary compared to manual searches due to personalization