Naukri Job Scraper — India's Largest Job Board

Scrape naukri.com - India's largest job board with 500K+ active listings. Salary data, recruiter contact details, skill requirements, and company profiles. Incremental mode detects new and changed listings. Compact output for AI agents and MCP workflows.

Pricing: from $2.00 / 1,000 results
Rating: 0.0 (0 reviews)
Developer: Black Falcon Data (Maintained by Community)

Actor stats

  • Bookmarked: 2
  • Total users: 369
  • Monthly active users: 122
  • Last modified: a day ago

What does Naukri.com Jobs Feed do?

Naukri.com Jobs Feed extracts structured job data from naukri.com — including salary data, apply URLs, company metadata, full descriptions, and skill tags. It supports keyword search, location filters, and controllable result limits, so you can run the same query consistently over time. The actor also offers detail enrichment (full descriptions and company metadata) where the source provides them.

New to Apify? Sign up free and use the included $5 monthly platform credit to test this actor.

Key features

  • ♻️ Incremental mode — recurring runs emit only NEW / UPDATED / REAPPEARED records — UNCHANGED and EXPIRED are opt-in. First run builds the baseline; subsequent runs emit and charge only for the diff. Pair with notifications for daily "new jobs" alerts to your hiring team. Saves 80–95% on daily monitoring.
  • 🔔 Notifications — Telegram, Slack, Discord, WhatsApp Cloud API, generic webhook — out of the box. Pair with incremental + notifyOnlyChanges for daily "new Naukri jobs" pings to your hiring channel.
  • 🔗 Paste-mode — paste any naukri.com URL straight from your browser: a single-job page, a search-results URL, or an "X jobs in Y" SEO URL. Mix freely with keyword and jobIds in the same run.
  • 👤 Recruiter-spam filter — postedBy: "Company" drops listings posted by 3rd-party consultants and surfaces only direct-employer postings. Use "Consultant" for the inverse, "Both" (default) for everything.
  • 🚶 Walk-in interview details — for walk-in jobs, full structured walkInDetail — HR contact name + phone, venue address, walk-in dates, daily timing, sat/sun working flags, and a Google Maps URL when the employer published one.
  • ⭐ AmbitionBox enrichment — full AmbitionBox payload on every job — company info, top 3 employee reviews with likes/dislikes, role-level avg/min/max CTC, 30+ benefits list, and award badges. Industry-leading employer intelligence.
  • 📥 Triple-format descriptions — plain text + raw HTML + Markdown for every job description — pick the format your downstream RAG / LLM / search pipeline wants. No re-parsing.
  • 📦 Compact mode — AI-agent and MCP-friendly compact payloads with core fields only — pipe straight into your ATS, salary-benchmarking tool, or LLM context without parsing extras.
  • 🛡️ State lock — concurrent runs on the same stateKey are blocked by a 30-min soft lock — prevents incremental-state corruption when schedulers overlap.
  • 📧 Email + phone extraction — best-effort regex extraction of contact emails and Indian / international phone numbers from job descriptions, combined with structured walk-in HR phone numbers where available.
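Best-effort extraction of this kind typically relies on a couple of regexes. The actor's actual patterns are not published, so the sketch below is purely illustrative of the approach:

```python
import re

# Illustrative patterns only -- the actor's real extraction logic is not published.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
# Indian mobile numbers: optional +91 or 0 prefix, then 10 digits starting 6-9.
IN_PHONE_RE = re.compile(r"(?:\+91[\s-]?|0)?[6-9]\d{9}\b")

def extract_contacts(description: str) -> dict:
    """Pull candidate emails and Indian phone numbers out of a job description."""
    return {
        "emails": EMAIL_RE.findall(description),
        "phones": IN_PHONE_RE.findall(description),
    }

text = "Walk in with resume. Contact hr@example.com or call +91-9876543210."
print(extract_contacts(text))
```

As the "best-effort" label suggests, patterns like these will miss unusual number formats and can over-match; the structured walkInDetail HR phone, where present, is the more reliable source.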

What data can you extract from naukri.com?

Each result includes core listing fields (jobId, title, experienceText, minimumExperience, maximumExperience, salary, salaryMin, salaryMax, and more), detail fields when enrichment is enabled (description, descriptionHtml, descriptionMarkdown, roleCategory, and jobRole), apply information (applyCount), and company metadata (companyName, companyId, companyWebsite, and companyDescription). In standard mode, all fields are always present — unavailable data points are returned as null, never omitted. In compact mode, only core fields are returned.

Enable detail enrichment in the input to get richer fields such as full descriptions and company metadata where the source provides them.

Input

The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.

Key parameters:

  • keyword — Job search keyword (e.g. 'python developer', 'data analyst'). Not required if jobIds is provided.
  • skills — Filter by skill names (e.g. ['Python', 'Django', 'AWS']). Appended to the search keyword for server-side matching.
  • jobIds — Fetch specific jobs by their Naukri job IDs (skips keyword search). Each ID is fetched individually.
  • startUrls — Paste any naukri.com URL straight from your browser. Four shapes work: (1) single-job pages like https://www.naukri.com/job-listings-...-220426501512, (2) search-results like https://www.naukri.com/jobs?keyword=devops&location=mumbai, (3) 'X jobs in Y' SEO URLs like https://www.naukri.com/python-developer-jobs-in-bangalore (optionally -1-to-3-years), and (4) keyword-only SEO URLs like https://www.naukri.com/python-developer-jobs. Mix freely with keyword and jobIds; results dedupe by jobId.
  • ignoreUrlFailures — When true (default), unparseable startUrls are logged and skipped. When false, the run fails fast on the first bad URL. (default: true)
  • location — Filter by location (e.g. 'Bangalore', 'Mumbai', 'Delhi')
  • experienceMin — Minimum years of experience filter
  • experienceMax — Maximum years of experience filter
  • salary — Salary range filter in lakhs per annum. Note: Naukri's API does not enforce this filter strictly — many matched jobs still hide their actual salary (salaryDetail.hideSalary=true).
  • sortBy — Sort results by relevance or posting date
  • workMode — Filter by work arrangement
  • freshness — Only show jobs posted within this many days (1, 3, 7, 15, or 30)
  • ...and 35 more parameters
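The four startUrls shapes listed above can be told apart mechanically. A rough sketch, with patterns inferred from the documented examples (the actor's own URL parser may differ):

```python
import re

def classify_naukri_url(url: str) -> str:
    """Rough classification of the four supported naukri.com URL shapes.
    Inferred from the documented examples; the actor's own parser may differ."""
    path = url.split("naukri.com/", 1)[-1]
    if path.startswith("job-listings-"):
        return "single-job"      # e.g. /job-listings-...-220426501512
    if path.startswith("jobs?"):
        return "search-results"  # e.g. /jobs?keyword=devops&location=mumbai
    if re.search(r"-jobs-in-[a-z-]+", path):
        return "seo-location"    # e.g. /python-developer-jobs-in-bangalore
    if re.search(r"-jobs(?:$|\?)", path):
        return "seo-keyword"     # e.g. /python-developer-jobs
    return "unknown"

print(classify_naukri_url("https://www.naukri.com/python-developer-jobs-in-bangalore"))
```

Unparseable URLs fall into the "unknown" bucket, which is what the ignoreUrlFailures flag governs on the actor's side: log and skip by default, or fail fast when set to false.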

Input examples

Basic search — Keyword-driven search with a result cap.

→ Full payload per result — all standard fields populated where the source provides them.

{
  "keyword": "python developer",
  "maxResults": 50
}

Incremental tracking — Only emit jobs that changed since the previous run with this stateKey.

→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set emitUnchanged: true to include unchanged records as well.

{
  "keyword": "python developer",
  "maxResults": 200,
  "incremental": true,
  "stateKey": "python-developer-tracker"
}

Compact filtered output — Combine filters with compact mode for a lightweight AI-agent or MCP data source.

→ Core fields only — ideal for piping into LLMs or downstream tools without token overhead.

{
  "keyword": "python developer",
  "workMode": "office",
  "maxResults": 50,
  "compact": true
}

Output

Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.

Example job record

{
  "jobId": "100426916818",
  "title": "Custom Software Engineer",
  "companyName": "Accenture",
  "companyId": 27117,
  "experienceText": "0-1 Yrs",
  "minimumExperience": 0,
  "maximumExperience": 1,
  "salary": "Not disclosed",
  "salaryMin": null,
  "salaryMax": null,
  "salaryCurrency": null,
  "location": "Chennai",
  "skills": [
    "software engineer",
    "kubernetes",
    "rest",
    "restful",
    "load balancing",
    "api gateway",
    "hibernate",
    "redis"
  ],
  "createdDate": "2026-04-10T10:28:15.508Z",
  "portalUrl": "https://www.naukri.com/job-listings-custom-software-engineer-accenture-solutions-pvt-ltd-chennai-0-to-1-years-100426916818",
  "logoPath": "https://img.naukimg.com/logo_images/groups/v1/10476.gif",
  "industry": "IT Services & Consulting",
  "viewCount": 2369,
  "companyWebsite": null,
  "ambitionBox": {
    "url": "https://www.ambitionbox.com/reviews/accenture-reviews?utm_campaign=srp_ratings&utm_medium=desktop&utm_source=naukri",
    "rating": "3.7",
    "reviewsCount": 73082
  },
  "description": "Project Role : Custom Software Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements...",
  "descriptionHtml": "<b>Project Role :</b>Custom Software Engineer<br /><b><br /><br />Project Role Description :</b>Analyze, design, code and test multiple components of application code across one or more clients. Perfo...",
  "descriptionMarkdown": "**Project Role :**Custom Software Engineer\n**Project Role Description :**Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhance...",
  "roleCategory": "Software Development",
  "functionalArea": "Engineering - Software & QA",
  "jobRole": "Search Engineer",
  "employmentType": "Full Time, Permanent",
  "educationUG": [
    "B.Tech / B.E. in Any Specialization"
  ],
  "educationPG": [
    "Any Postgraduate"
  ],
  "applyCount": 719,
  "vacancy": 0,
  "wfhType": "office",
  "companyDescription": "About Accenture<br><br> <br><br>Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills acro...",
  "scrapedAt": "2026-04-12T21:33:15.812Z",
  "searchKeyword": "software engineer"
}

In compact mode, output is reduced to core fields: jobId, title, companyName, location, salary, experienceText, skills, createdDate, walkinJob, and consultant.
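Compact mode does this trimming server-side, but if you already hold full records, the same projection is trivial to reproduce locally. A sketch using the field list above:

```python
# Core fields kept in compact mode, per the list above.
COMPACT_FIELDS = [
    "jobId", "title", "companyName", "location", "salary",
    "experienceText", "skills", "createdDate", "walkinJob", "consultant",
]

def to_compact(record: dict) -> dict:
    """Project a full job record down to the compact-mode field set.
    Missing fields come back as None, matching the never-omitted convention."""
    return {field: record.get(field) for field in COMPACT_FIELDS}

full = {"jobId": "100426916818", "title": "Custom Software Engineer",
        "companyName": "Accenture", "viewCount": 2369}
print(to_compact(full)["jobId"])
```

Requesting compact mode from the actor is still preferable when cost or LLM context size matters, since you are not charged tokens (or bandwidth) for fields you immediately discard.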

Incremental fields

When incremental: true, each record also carries:

  • changeType — one of NEW, UPDATED, UNCHANGED, REAPPEARED, EXPIRED. Default output covers NEW / UPDATED / REAPPEARED; set emitUnchanged: true or emitExpired: true to opt into the others.
  • firstSeenAt, lastSeenAt — ISO-8601 timestamps tracking the listing across runs.
  • isRepost, repostOfId, repostDetectedAt — populated when a new listing matches the tracked content of a previously expired one. Set skipReposts: true to drop detected reposts from the output.
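Downstream, changeType is the field you branch on. A minimal sketch of routing a run's dataset items by change type (the sample records here are invented for illustration):

```python
def split_by_change(items: list) -> dict:
    """Group incremental-mode records by their changeType field."""
    groups = {}
    for item in items:
        groups.setdefault(item.get("changeType", "UNKNOWN"), []).append(item)
    return groups

# Invented sample records for illustration.
items = [
    {"jobId": "1", "changeType": "NEW"},
    {"jobId": "2", "changeType": "UPDATED"},
    {"jobId": "3", "changeType": "NEW"},
]
groups = split_by_change(items)
print(len(groups.get("NEW", [])))  # records worth pinging the hiring channel about
```

With the default output (NEW / UPDATED / REAPPEARED only), every item is actionable; grouping like this mainly matters once you opt into emitUnchanged or emitExpired.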

How to scrape naukri.com

  1. Go to Naukri.com Jobs Feed in Apify Console.
  2. Enter a search keyword and optional location filter.
  3. Set maxResults to control how many results you need.
  4. Enable fetchDetails if you need full descriptions and company data.
  5. Click Start and wait for the run to finish.
  6. Export the dataset as JSON, CSV, or Excel.

Use cases

  • Extract job data from naukri.com for market research and competitive analysis.
  • Track salary trends across regions and categories over time.
  • Monitor new and changed listings on scheduled runs without processing the full dataset every time.
  • Auto-apply or feed apply URLs into your ATS / hiring pipeline.
  • Research company hiring patterns, employer profiles, and industry distribution.
  • Feed structured data into AI agents, MCP tools, and automated pipelines using compact mode.
  • Export clean, structured data to dashboards, spreadsheets, or data warehouses.
  • Analyze skill demand across listings using structured skill tags.
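The skill-demand analysis in the last bullet falls out of the structured skills arrays directly. A sketch over invented sample records:

```python
from collections import Counter

def skill_demand(jobs: list) -> Counter:
    """Count how often each skill tag appears across job records."""
    counts = Counter()
    for job in jobs:
        # Lowercase so "Python" and "python" count as one skill.
        counts.update(s.lower() for s in job.get("skills", []))
    return counts

# Invented sample records for illustration.
jobs = [
    {"skills": ["Python", "AWS"]},
    {"skills": ["python", "Django"]},
]
print(skill_demand(jobs).most_common(1))  # [('python', 2)]
```

Run the same aggregation over incremental snapshots to watch demand for a skill trend over time.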

How much does it cost to scrape naukri.com?

Naukri.com Jobs Feed uses pay-per-event pricing. You pay a small fee when the run starts and then for each result that is actually produced.

  • Run start: $0.01 per run
  • Per result: $0.002 per job record

Example costs:

  • 10 results: $0.03
  • 100 results: $0.21
  • 500 results: $1.01
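The example costs follow directly from the two events above ($0.01 run start plus $0.002 per result):

```python
def run_cost(results: int, start_fee: float = 0.01, per_result: float = 0.002) -> float:
    """Cost of a single run under the pay-per-event pricing above."""
    return round(start_fee + results * per_result, 2)

print(run_cost(10), run_cost(100), run_cost(500))  # 0.03 0.21 1.01
```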

Example: recurring monitoring savings

These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.

Example setup: 100 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.

Churn rate                      Full re-scrape run cost   Incremental run cost   Savings vs full re-scrape   Monthly cost after baseline
5% — stable niche query         $0.21                     $0.02                  $0.19 (90%)                 $0.60
15% — moderate broad query      $0.21                     $0.04                  $0.17 (81%)                 $1.20
30% — high-volume aggregator    $0.21                     $0.07                  $0.14 (67%)                 $2.10

Full re-scrape monthly cost at daily polling: $6.30. First month with incremental costs $0.79 / $1.37 / $2.24 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.
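The table's figures all derive from the same event pricing. A sketch of the monthly comparison, using the example setup (100 results per run, 30 runs per month):

```python
def monthly_comparison(results: int, churn: float, runs: int = 30,
                       start_fee: float = 0.01, per_result: float = 0.002) -> dict:
    """Compare full re-scrape vs incremental monthly cost under the pricing above.
    churn is the fraction of results that changed since the previous run."""
    full_run = start_fee + results * per_result
    inc_run = start_fee + round(results * churn) * per_result
    return {
        "full_monthly": round(full_run * runs, 2),
        "incremental_monthly": round(inc_run * runs, 2),
        # First month pays one full baseline run, then (runs - 1) incremental runs.
        "first_month_incremental": round(full_run + inc_run * (runs - 1), 2),
    }

print(monthly_comparison(100, 0.05))
```

Plugging in churn values of 0.05 / 0.15 / 0.30 reproduces the $0.60 / $1.20 / $2.10 monthly costs and the $0.79 / $1.37 / $2.24 first-month figures quoted above.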

FAQ

How many results can I get from naukri.com?

The number of results depends on the search query and available listings on naukri.com. Use the maxResults parameter to control how many results are returned per run.

Does Naukri.com Jobs Feed support recurring monitoring?

Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.

Can I integrate Naukri.com Jobs Feed with other apps?

Yes. Naukri.com Jobs Feed works with Apify's integrations to connect with tools like Zapier, Make, Google Sheets, Slack, and more. You can also use webhooks to trigger actions when a run completes.

Can I use Naukri.com Jobs Feed with the Apify API?

Yes. You can start runs, manage inputs, and retrieve results programmatically through the Apify API. Client libraries are available for JavaScript, Python, and other languages.

Can I use Naukri.com Jobs Feed through an MCP Server?

Yes. Apify provides an MCP Server that lets AI assistants and agents call this actor directly. Use compact mode and descriptionMaxLength to keep payloads manageable for LLM context windows.

This actor extracts publicly available data from naukri.com. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.

Your feedback

If you have questions, need a feature, or found a bug, please open an issue on the actor's page in Apify Console. Your feedback helps us improve.

You might also like

Getting started with Apify

New to Apify? Create a free account with $5 credit — no credit card required.

  1. Sign up — $5 platform credit included
  2. Open this actor and configure your input
  3. Click Start — export results as JSON, CSV, or Excel

Need more later? See Apify pricing.