Naukri Job Scraper

Pricing

from $12.00 / 1,000 results

Extracts and normalizes job listings from Naukri.com using search inputs such as keyword, location, and job count. Returns structured job data, including roles, company info, experience, skills, and descriptions, ready in dataset/CSV format for analysis or automation.


Developer

Komala Maran

Maintained by Community


This actor is designed for scraping Naukri job listings: you provide search input (keyword, location, max jobs, etc.), the actor runs the underlying scraper, and it pushes filtered job records to its dataset.

What this actor does

  1. Initializes the Apify Actor runtime.
  2. Reads input from Actor.getInput().
  3. Builds scraper input with defaults.
  4. Reads the run dataset items.
  5. Maps each item to a compact output schema.
  6. Pushes normalized results to this actor's dataset.
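The six steps above can be sketched as follows. The Apify calls (`init`, `getInput`, `pushData`, `exit`) are passed in as an `actor` object and the scraper invocation as `callScraper`, so this shows only the shape of the pipeline; the function names and injected structure are assumptions, not the actual `main.js` source.

```javascript
// Minimal sketch of the actor pipeline described above (assumed names).
async function runActor(actor, callScraper) {
  await actor.init();                                  // 1. initialize the runtime
  const input = (await actor.getInput()) ?? {};        // 2. read input
  const scraperInput = {                               // 3. build scraper input with defaults
    searchQuery: 'developer',
    location: 'india',
    ...input,
  };
  const items = await callScraper(scraperInput);       // 4. read the run's dataset items
  const results = items.map((item) => ({               // 5. map to a compact output schema
    title: item.title ?? null,
    companyName: item.companyDetail?.name ?? item.staticCompanyName ?? null,
  }));
  await actor.pushData(results);                       // 6. push normalized results
  await actor.exit();
}
```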

Project structure

  • main.js - Actor logic (input mapping, scraper call, output transformation).
  • package.json - Node package metadata and runtime dependencies.
  • Dockerfile - Container image definition used to run the actor.

Runtime and dependencies

  • Node.js via apify/actor-node:20 Docker base image.
  • Main dependency: apify (JavaScript SDK).
  • Start command: npm start (runs node main.js).

Input

The actor accepts the following input fields:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| searchQuery | string | "developer" | Keyword/job title query. |
| location | string | "india" | Target location for jobs. |
| maximumJobs | number | 20 | Maximum jobs requested from the underlying scraper. |
| platform | string | "naukri" | Platform passed to the underlying scraper. |
| startUrls | array | [] | Optional start URLs for scraping. |
| includeAmbitionBoxDetails | boolean | false | Whether to include AmbitionBox-related details from the source actor. |
| proxy | object | { useApifyProxy: true, apifyProxyGroups: ["RESIDENTIAL"] } | Proxy setup passed to the source actor. |

Example input

{
  "searchQuery": "data scientist",
  "location": "Bengaluru",
  "maximumJobs": 50,
  "platform": "naukri",
  "startUrls": [],
  "includeAmbitionBoxDetails": false,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
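Applying the documented defaults amounts to a shallow merge where any field you supply overrides the default. This is an illustrative sketch (the constant and function names are assumptions), not the actor's actual code:

```javascript
// Documented defaults for the actor's input fields.
const DEFAULT_INPUT = {
  searchQuery: 'developer',
  location: 'india',
  maximumJobs: 20,
  platform: 'naukri',
  startUrls: [],
  includeAmbitionBoxDetails: false,
  proxy: { useApifyProxy: true, apifyProxyGroups: ['RESIDENTIAL'] },
};

// Shallow merge: user-supplied fields override the defaults.
function buildScraperInput(userInput = {}) {
  return { ...DEFAULT_INPUT, ...userInput };
}
```

Note that the merge is shallow: supplying a partial `proxy` object replaces the default proxy configuration entirely rather than merging into it.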

Output dataset schema

Each output item is normalized to the fields below:

  • companyName
  • applyCount
  • roleCategory
  • jobRole
  • companyDetail
  • functionalArea
  • description
  • industry
  • url
  • title
  • walkIn
  • maximumExperience
  • minimumExperience
  • locations
  • keySkills
  • shortDescription

Field mapping notes

  • companyName prefers item.companyDetail.name, then item.staticCompanyName.
  • description prefers item.jobDescription, then item.description.
  • url prefers item.staticUrl, then item.jdURL, then item.applyUrl.
  • locations is taken from item.locations or fallback item.location.
  • keySkills is taken from item.tagsAndSkills or fallback item.keySkills.
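The fallback order in the notes above can be expressed as a small mapping function over a source item. This is a sketch covering a few representative fields (the function name is an assumption; the source field names are as documented):

```javascript
// Map one source item to the normalized schema, using the documented
// fallback order for each field; missing fields become null.
function mapJobItem(item) {
  return {
    title: item.title ?? null,
    companyName: item.companyDetail?.name ?? item.staticCompanyName ?? null,
    description: item.jobDescription ?? item.description ?? null,
    url: item.staticUrl ?? item.jdURL ?? item.applyUrl ?? null,
    locations: item.locations ?? item.location ?? null,
    keySkills: item.tagsAndSkills ?? item.keySkills ?? null,
  };
}
```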

Error handling behavior

  • If the called scraper run is not SUCCEEDED, the actor fails with run ID details.
  • Any runtime exception is caught and reported using Actor.fail.
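The status check behaves roughly like the guard below, a minimal sketch assuming the run object exposes `id` and `status` fields (the function name is hypothetical; in the actor, the thrown error would be caught and reported via Actor.fail):

```javascript
// Fail fast if the called scraper run did not finish successfully,
// including the run ID in the error message.
function assertRunSucceeded(run) {
  if (run.status !== 'SUCCEEDED') {
    throw new Error(`Scraper run ${run.id} finished with status ${run.status}`);
  }
  return run;
}
```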

Important limitations

  • Output fields are transformed and may omit source fields not included in the mapping.
  • Current fetch limit for reading items from the source dataset is 99999.

Script entrypoint

  • Main entrypoint: main.js
  • NPM script: npm start