Naukri Job Scraper

Scrape job listings from Naukri.com, India's largest job board. Extract title, company, salary, location, experience, skills & description. Export JSON/CSV/Excel. No API key needed.

Pricing: Pay per event
Rating: 0.0 (0 reviews)

Developer: Stas Persiianenko (maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 7
  • Monthly active users: 3
  • Last modified: 20 hours ago


Scrape job listings from Naukri.com — India's #1 job board with 18 million+ active listings. Enter keywords and locations to collect job data in bulk. No API key required, no login needed. Export to JSON, CSV, or Excel in one click.

What does it do?

Naukri Job Scraper automates job searches on Naukri.com and extracts structured data from listings. You supply keywords (e.g. "Python Developer", "Product Manager") and an optional location filter. The scraper pages through search results and returns one clean record per job.

Each record includes:

  • Job title, company name, and direct job URL
  • Location (city / multiple cities for remote-friendly roles)
  • Salary range when disclosed (INR per annum in Lakhs)
  • Experience required (e.g. "2-5 Yrs", "Fresher")
  • Required skills as a parsed list
  • Full job description (HTML)
  • Employment type (Full Time, Contract, etc.)
  • Work mode (Remote, Hybrid, Work from Office)
  • Company logo URL
  • Posting date in ISO 8601 format

Who is it for?

  • HR & Talent Acquisition teams tracking competitor hiring or benchmarking salaries
  • Job boards and aggregators building India-specific job feeds
  • Market researchers analyzing India's tech job market, salary trends, or in-demand skills
  • Data scientists and analysts building job market datasets for ML or NLP projects
  • Developers and startups powering job alert tools, resume matchers, or career products
  • Recruiters monitoring specific companies for new openings

Why use this scraper?

No account needed — no login, no cookies, no API key setup
Multiple keywords in one run — search "Python Developer" and "Data Scientist" simultaneously
Experience and location filters — narrow results before scraping
Up to 1,000 jobs per keyword — the scraper pages through Naukri's search results automatically (20 jobs per page)
Pay-per-result pricing — only pay for jobs actually scraped
India residential proxy — uses India-geolocated proxies to bypass geo-restrictions
Structured output — every field is clean and typed (no raw HTML dumps)

Data extracted

Field                Type     Description
jobId                string   Naukri internal job identifier
title                string   Job title as posted
company              string   Company name
url                  string   Direct job listing URL on Naukri.com
location             string   City or multiple cities (comma-separated)
workMode             string   "Remote", "Hybrid", or null (Work from Office)
salary               string   Salary range string (e.g. "5-10 Lacs PA") or null if not disclosed
salaryMin            number   Minimum salary in INR per annum (e.g. 500000)
salaryMax            number   Maximum salary in INR per annum (e.g. 1000000)
experienceRequired   string   Experience range (e.g. "2-5 Yrs", "Fresher")
experienceMin        number   Minimum years of experience (parsed integer)
experienceMax        number   Maximum years of experience (parsed integer)
postedDate           string   ISO 8601 posting date (e.g. "2026-03-25T16:22:36.000Z")
description          string   Full job description (may contain HTML tags)
skills               array    List of required skill/keyword tags
employmentType       string   Employment type (e.g. "Full Time, Permanent")
vacancies            number   Number of open positions (when shown)
logoUrl              string   Company logo image URL
scrapedAt            string   ISO 8601 timestamp when the record was scraped
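For downstream code, the schema above can be written down as a type. A minimal Python sketch (the `NaukriJob` name and the use of `total=False` are illustrative choices, not part of the actor):

```python
from typing import List, Optional, TypedDict

class NaukriJob(TypedDict, total=False):
    """One scraped Naukri job record, following the field table above."""
    jobId: str
    title: str
    company: str
    url: str
    location: str
    workMode: Optional[str]       # "Remote" / "Hybrid" / None for Work from Office
    salary: Optional[str]         # e.g. "5-10 Lacs PA"; None when undisclosed
    salaryMin: Optional[int]      # INR per annum
    salaryMax: Optional[int]
    experienceRequired: str       # e.g. "2-5 Yrs", "Fresher"
    experienceMin: Optional[int]
    experienceMax: Optional[int]
    postedDate: str               # ISO 8601
    description: str              # may contain HTML tags
    skills: List[str]
    employmentType: str
    vacancies: Optional[int]
    logoUrl: str
    scrapedAt: str                # ISO 8601

# Partial record is fine because total=False
job: NaukriJob = {"jobId": "250326927010", "title": "Senior Data Scientist", "skills": ["python"]}
```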

Pricing

This actor uses Pay-per-event (PPE) pricing — you only pay for what you scrape.

Plan       Cost per job
Free       $0.0023
Bronze     $0.0020
Silver     $0.00156
Gold       $0.0012
Platinum   $0.0008
Diamond    $0.00056

Plus a flat $0.005 start fee, charged once per run regardless of how many jobs are scraped.

Example costs at Free tier:

  • 100 jobs → ~$0.23
  • 1,000 jobs → ~$2.30
  • 10,000 jobs → ~$23.00

Proxy costs (India residential) are included.
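The arithmetic above can be sketched as a quick estimator. The rates are copied from the pricing table; `estimate_cost` is an illustrative helper, not part of the actor:

```python
# Per-job rates by plan (USD), from the pricing table above
RATES = {
    "Free": 0.0023, "Bronze": 0.0020, "Silver": 0.00156,
    "Gold": 0.0012, "Platinum": 0.0008, "Diamond": 0.00056,
}
START_FEE = 0.005  # flat fee, charged once per run

def estimate_cost(jobs: int, plan: str = "Free", runs: int = 1) -> float:
    """Estimated USD cost: per-job rate times jobs, plus one start fee per run."""
    return round(jobs * RATES[plan] + runs * START_FEE, 4)

print(estimate_cost(10_000, "Free"))  # -> 23.005
print(estimate_cost(10_000, "Gold"))  # -> 12.005
```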

How to use

Step 1: Configure

  1. Open the actor on Apify
  2. Enter one or more keywords in Keywords — each runs as a separate search
  3. Optionally enter a Location to filter by city
  4. Set Max jobs per keyword (default 40, max 1000)
  5. Optionally filter by experience range

Step 2: Run

Click Start and wait. A search for 100 jobs typically finishes in under 60 seconds.

Step 3: Export

Once complete, click Export to download as JSON, CSV, or Excel. You can also connect results to Google Sheets via Zapier or the Apify Google Sheets integration.

Input

{
  "keywords": ["Python Developer", "Data Scientist"],
  "location": "Bangalore",
  "maxJobsPerKeyword": 100,
  "experienceMin": 2,
  "experienceMax": 8,
  "maxRequestRetries": 3
}

Input fields

Field               Type      Default    Description
keywords            array     required   Job keywords to search. Multiple keywords run in parallel.
location            string    ""         City or region filter. Leave blank for all India.
maxJobsPerKeyword   integer   40         Max jobs to scrape per keyword. Each Naukri page returns 20 jobs.
experienceMin       integer   (none)     Filter: minimum years of experience required.
experienceMax       integer   (none)     Filter: maximum years of experience required.
maxRequestRetries   integer   3          Number of retries on failed requests.
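The defaults and limits in the table can be applied client-side before calling the actor. A sketch (the `normalize_input` helper and its validation rules are inferred from the table, not provided by the actor):

```python
def normalize_input(raw: dict) -> dict:
    """Fill defaults from the input table and enforce documented limits."""
    if not raw.get("keywords"):
        raise ValueError("keywords is required and must be a non-empty list")
    out = {
        "keywords": list(raw["keywords"]),
        "location": raw.get("location", ""),
        # Default 40; the documented maximum is 1000 jobs per keyword
        "maxJobsPerKeyword": max(1, min(int(raw.get("maxJobsPerKeyword", 40)), 1000)),
        "maxRequestRetries": int(raw.get("maxRequestRetries", 3)),
    }
    # Experience filters are optional; include them only when set
    for key in ("experienceMin", "experienceMax"):
        if raw.get(key) is not None:
            out[key] = int(raw[key])
    return out

print(normalize_input({"keywords": ["Python Developer"], "maxJobsPerKeyword": 5000}))
```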

Output

Each run produces a dataset of job records. Here is a sample record:

{
  "jobId": "250326927010",
  "title": "Senior Data Scientist",
  "company": "Microsoft India",
  "url": "https://www.naukri.com/job-listings-senior-data-scientist-microsoft-india-bangalore-5-to-10-years-250326927010",
  "location": "Bengaluru",
  "workMode": "Hybrid",
  "salary": "30-50 Lacs PA",
  "salaryMin": 3000000,
  "salaryMax": 5000000,
  "experienceRequired": "5-10 Yrs",
  "experienceMin": 5,
  "experienceMax": 10,
  "postedDate": "2026-03-25T16:22:36.000Z",
  "description": "<p>We are looking for a Senior Data Scientist to join our...</p>",
  "skills": ["python", "machine learning", "sql", "deep learning", "tensorflow"],
  "employmentType": "Full Time, Permanent",
  "vacancies": 3,
  "logoUrl": "https://img.naukimg.com/logo_images/v2/mobile/1234.gif",
  "scrapedAt": "2026-03-26T04:06:50.847Z"
}
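The relationship between `salary` and `salaryMin`/`salaryMax` in the record above (1 Lac = 100,000 INR) can be reproduced with a small parser. This is an illustrative sketch, not the actor's internal logic:

```python
import re
from typing import Optional, Tuple

# Matches strings like "5-10 Lacs PA" or "30-50 Lacs PA"
_SALARY_RE = re.compile(r"(\d+(?:\.\d+)?)\s*-\s*(\d+(?:\.\d+)?)\s*Lacs?\s*PA", re.I)
LAKH = 100_000  # 1 Lac = 100,000 INR

def parse_salary(salary: Optional[str]) -> Tuple[Optional[int], Optional[int]]:
    """Convert a Naukri salary string to (salaryMin, salaryMax) in INR per annum."""
    if not salary:
        return None, None  # undisclosed salary, the common case
    m = _SALARY_RE.search(salary)
    if not m:
        return None, None
    return int(float(m.group(1)) * LAKH), int(float(m.group(2)) * LAKH)

print(parse_salary("30-50 Lacs PA"))  # -> (3000000, 5000000)
print(parse_salary(None))             # -> (None, None)
```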

Tips for best results

Use specific keywords — "React Developer" returns more targeted results than "Developer"

Use city names Naukri recognizes — "Bangalore" (not "Bengaluru"), "Delhi NCR", "Mumbai", "Hyderabad", "Pune", "Chennai", "Noida", "Gurgaon"

Scrape multiple cities separately — run with location: "Bangalore" then location: "Mumbai" to compare markets

Filter by experience — use experienceMin and experienceMax to target specific seniority levels

Expect null salaries — most Indian companies do not disclose salaries on Naukri; salary: null is normal

Large scrapes work well — the actor handles 1,000+ jobs per run reliably with automatic retries

Schedule for freshness — run daily to capture new postings. Naukri refreshes listings continuously.
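Since most records carry null salaries, any aggregation over exported data should skip undisclosed values. A sketch of computing the average disclosed salary midpoint (field names follow the output schema; the sample records are made up):

```python
from statistics import mean

def avg_salary_midpoint(records):
    """Average of (salaryMin + salaryMax) / 2 over records that disclose a salary."""
    midpoints = [
        (r["salaryMin"] + r["salaryMax"]) / 2
        for r in records
        if r.get("salaryMin") is not None and r.get("salaryMax") is not None
    ]
    return mean(midpoints) if midpoints else None

records = [
    {"title": "Data Scientist", "salaryMin": 3_000_000, "salaryMax": 5_000_000},
    {"title": "Backend Engineer", "salaryMin": None, "salaryMax": None},  # undisclosed
    {"title": "ML Engineer", "salaryMin": 2_000_000, "salaryMax": 3_000_000},
]
print(avg_salary_midpoint(records))  # -> 3250000.0
```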

Integrations

Zapier

Connect Naukri Job Scraper to 5,000+ apps via Zapier. Common workflows:

  • New Naukri jobs → Google Sheets row
  • Jobs matching criteria → Slack notification
  • Daily job digest → Email via Gmail

Google Sheets

Use the Apify Google Sheets integration to push results directly to a spreadsheet. Schedule the scraper daily for an auto-updating job tracker.

Webhooks

Configure a webhook on the actor to POST the dataset to any HTTP endpoint after each run. Use this for custom pipelines or databases.
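In practice an Apify webhook POSTs a JSON payload describing the finished run; your endpoint typically reads the dataset ID from it and then fetches the items. A sketch of the extraction step (the payload shape, with the run under a `resource` key holding `defaultDatasetId`, is my understanding of Apify's webhook format; verify against the Apify docs):

```python
import json

def dataset_id_from_webhook(body: bytes) -> str:
    """Pull the run's default dataset ID out of an Apify webhook payload."""
    payload = json.loads(body)
    # Assumption: the run object arrives under "resource" in the webhook payload
    return payload["resource"]["defaultDatasetId"]

# Example payload, trimmed to the fields used here
sample = json.dumps({
    "eventType": "ACTOR.RUN.SUCCEEDED",
    "resource": {"id": "RUN_ID", "defaultDatasetId": "DATASET_ID"},
}).encode()
print(dataset_id_from_webhook(sample))  # -> DATASET_ID
```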

Apify API

Results are accessible via the Apify API immediately after a run completes.

API usage

Node.js

const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

// Top-level await is not available in CommonJS, so wrap the calls in an async IIFE
(async () => {
    const run = await client.actor('automation-lab/naukri-scraper').call({
        keywords: ['Python Developer', 'Machine Learning Engineer'],
        location: 'Bangalore',
        maxJobsPerKeyword: 200,
        experienceMin: 3,
    });

    const dataset = await client.dataset(run.defaultDatasetId).listItems();
    console.log(dataset.items);
})();

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run_input = {
    "keywords": ["Python Developer", "Machine Learning Engineer"],
    "location": "Bangalore",
    "maxJobsPerKeyword": 200,
    "experienceMin": 3,
}

run = client.actor("automation-lab/naukri-scraper").call(run_input=run_input)

# Iterate over the scraped job records in the run's default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], item["company"], item["salary"])

cURL

curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~naukri-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "keywords": ["Python Developer"],
    "location": "Bangalore",
    "maxJobsPerKeyword": 100
  }'

Then retrieve results:

curl "https://api.apify.com/v2/datasets/RUN_DEFAULT_DATASET_ID/items?token=YOUR_API_TOKEN&format=json"

Use with AI assistants (MCP)

Naukri Job Scraper supports the Model Context Protocol (MCP), which lets AI assistants like Claude run the scraper and analyze results directly.

To connect:

  1. Get your Apify API token
  2. Add the Apify MCP server to your AI assistant's MCP config
  3. Ask questions like: "Scrape the top 50 Python Developer jobs in Bangalore and tell me the average salary range"

The AI assistant will run the scraper, receive the data, and give you instant analysis.

Legality and terms of service

Web scraping public data is generally legal in most jurisdictions. Naukri.com displays job listings publicly without requiring login, and this actor only accesses publicly available search results and listing data — the same information any visitor sees in their browser.

This actor does not:

  • Access private or login-gated pages
  • Bypass any CAPTCHA or security measure in an automated way
  • Store or redistribute scraped data beyond the user's own Apify storage
  • Impersonate real users or make excessive requests

Users are responsible for complying with Naukri.com's Terms of Service and applicable data protection laws (GDPR, DPDP Act, etc.) in their jurisdiction. Always use scraped data ethically and in accordance with local laws.

FAQ

Q: Does it scrape all Naukri results or just the first page?
A: The scraper paginates through results automatically. Set maxJobsPerKeyword to control how many jobs to collect. The default is 40 (2 pages). Set it up to 1000 for bulk scrapes.

Q: Why is salary null for most jobs?
A: Most Indian employers choose not to disclose salaries on Naukri — this is normal. The scraper correctly captures salary when it is disclosed (about 10-20% of listings).

Q: Can I scrape jobs from a specific company?
A: Yes — use the company name as a keyword, e.g. keywords: ["Infosys"] or keywords: ["TCS data analyst"]. This returns jobs at that company matching the keyword.

Q: Does it work for all job categories?
A: Yes — IT, finance, marketing, HR, healthcare, engineering, and all other categories on Naukri.com are supported.

Q: How fresh is the data?
A: Data is live from Naukri.com at the time of the run. Jobs posted minutes ago appear in results. Schedule the actor daily or hourly for fresh monitoring.

Q: Can I use experience filters?
A: Yes — set experienceMin and experienceMax to filter by years of experience. For example, experienceMin: 5, experienceMax: 10 returns mid-to-senior level roles.

Q: The run returned 0 jobs. What went wrong?
A: This can happen if (1) the keyword has no results on Naukri, (2) the location spelling doesn't match Naukri's format (try "Bangalore" not "Bengaluru"), or (3) a temporary proxy connectivity issue. Try running again or adjusting the keyword/location.

Q: How much does it cost to scrape 10,000 jobs?
A: At the Free tier: ~$23. At Gold tier: ~$12. Plus $0.005 per run start fee.

Q: Can I run it on a schedule?
A: Yes — use Apify's built-in scheduler to run the actor daily, weekly, or at any custom interval. Results accumulate in the dataset across runs.