# Naukri Job Scraper

Pricing: from $12.00 / 1,000 results
Extract and normalize job listings from Naukri.com using search inputs such as keyword, location, and job count. Returns structured job data, including roles, company info, experience, skills, and descriptions, ready in dataset/CSV format for analysis or automation.
Developer: Komala Maran
This actor scrapes jobs from Naukri.com: you provide search input (keyword, location, max jobs, etc.), the actor runs the underlying scraper, and pushes filtered, normalized job records to its dataset.
## What this actor does

- Initializes the Apify Actor runtime.
- Reads input from `Actor.getInput()`.
- Builds scraper input with defaults.
- Calls the underlying scraper and waits for the run to finish.
- Reads the run's dataset items.
- Maps each item to a compact output schema.
- Pushes normalized results to this actor's dataset.
## Project structure

- `main.js`: Actor logic (input mapping, scraper call, output transformation).
- `package.json`: Node package metadata and runtime dependencies.
- `Dockerfile`: Container image definition used to run the actor.
## Runtime and dependencies

- Node.js via the `apify/actor-node:20` Docker base image.
- Main dependency: `apify` (JavaScript SDK).
- Start command: `npm start` (runs `node main.js`).
## Input

The actor accepts the following input fields:

| Field | Type | Default | Description |
|---|---|---|---|
| `searchQuery` | string | `"developer"` | Keyword/job title query. |
| `location` | string | `"india"` | Target location for jobs. |
| `maximumJobs` | number | `20` | Maximum number of jobs requested from the underlying scraper. |
| `platform` | string | `"naukri"` | Platform passed to the underlying scraper. |
| `startUrls` | array | `[]` | Optional start URLs for scraping. |
| `includeAmbitionBoxDetails` | boolean | `false` | Whether to include AmbitionBox-related details from the source actor. |
| `proxy` | object | `{ "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] }` | Proxy setup passed to the source actor. |
## Example input

```json
{
  "searchQuery": "data scientist",
  "location": "Bengaluru",
  "maximumJobs": 50,
  "platform": "naukri",
  "startUrls": [],
  "includeAmbitionBoxDetails": false,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
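Applying the defaults from the table can be sketched as a small merge helper. This is a minimal sketch: the function name `buildScraperInput` is illustrative and not part of the actor's documented API.

```javascript
// Merge user input with the documented defaults before calling the
// underlying scraper. Fields the user supplies win; anything missing
// falls back to the defaults listed in the input table.
function buildScraperInput(userInput = {}) {
  const defaults = {
    searchQuery: 'developer',
    location: 'india',
    maximumJobs: 20,
    platform: 'naukri',
    startUrls: [],
    includeAmbitionBoxDetails: false,
    proxy: { useApifyProxy: true, apifyProxyGroups: ['RESIDENTIAL'] },
  };
  return { ...defaults, ...userInput };
}
```

A spread-based merge like this is shallow, which matches the table: a user-supplied `proxy` object replaces the default wholesale rather than being merged key by key.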
## Output dataset schema

Each output item is normalized to the fields below:

- `companyName`
- `applyCount`
- `roleCategory`
- `jobRole`
- `companyDetail`
- `functionalArea`
- `description`
- `industry`
- `url`
- `title`
- `walkIn`
- `maximumExperience`
- `minimumExperience`
- `locations`
- `keySkills`
- `shortDescription`
## Field mapping notes

- `companyName` prefers `item.companyDetail.name`, then `item.staticCompanyName`.
- `description` prefers `item.jobDescription`, then `item.description`.
- `url` prefers `item.staticUrl`, then `item.jdURL`, then `item.applyUrl`.
- `locations` is taken from `item.locations`, with `item.location` as fallback.
- `keySkills` is taken from `item.tagsAndSkills`, with `item.keySkills` as fallback.
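The fallback chains above can be sketched as a mapping helper. The function name `normalizeItem` is illustrative, and only the fields with documented fallbacks are shown; the real transformation also carries over the remaining schema fields.

```javascript
// Map one raw scraper item to (part of) the compact output schema,
// applying the documented fallback chains. The ?? operator keeps the
// first value in each chain that is not null/undefined.
function normalizeItem(item) {
  return {
    companyName: item.companyDetail?.name ?? item.staticCompanyName,
    description: item.jobDescription ?? item.description,
    url: item.staticUrl ?? item.jdURL ?? item.applyUrl,
    locations: item.locations ?? item.location,
    keySkills: item.tagsAndSkills ?? item.keySkills,
  };
}
```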
## Error handling behavior

- If the called scraper run does not finish with status `SUCCEEDED`, the actor fails with run ID details.
- Any runtime exception is caught and reported via `Actor.fail()`.
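The status check can be sketched as a small guard. This is a hypothetical helper, not the actor's actual code: it throws, whereas the real actor surfaces the failure through `Actor.fail()`.

```javascript
// Throw if the underlying scraper run did not succeed, including the
// run ID so the failed run can be looked up in the Apify console.
function ensureRunSucceeded(run) {
  if (run.status !== 'SUCCEEDED') {
    throw new Error(`Scraper run ${run.id} finished with status ${run.status}`);
  }
  return run;
}
```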
## Important limitations

- Output fields are transformed; source fields not included in the mapping are omitted.
- The current fetch limit for reading items from the source dataset is `99999`.
## Script entrypoint

- Main entrypoint: `main.js`
- NPM script: `npm start`