Naukri.com Jobs Scraper
Pricing
from $8.00 / 1,000 results
Naukri.com Jobs Scraper
Scrape job listings from Naukri.com, India's largest job portal. Search by keyword, location, experience, work type, and sort order. Extract job title, company, salary, skills, description, ratings, and 15+ structured fields per listing. Fast API-based extraction, no browser needed.
🇮🇳 Naukri.com Jobs Scraper
🚀 Pull Indian job listings from Naukri.com in minutes. Title, company, salary, skills, experience, work type, ratings. No login, no API key.
🕒 Last updated: 2026-05-08 · 📊 15+ fields per listing · 🔍 Keyword + location + experience filters · 🚫 No auth required
Pull live job listings from Naukri.com, India's largest job portal. The actor accepts a search keyword plus location, experience, work-type, and sort filters, paginates through results, and returns one structured record per job ready for talent sourcing, recruitment intelligence, salary research, or labor market analysis.
Every run fetches data live so you get the current state of Naukri at run time, not a stale dump. Records include the job title, company name, salary range, required skills, experience range, location, work type (office/remote/hybrid), company rating, post date, full job description, and the canonical Naukri URL.
| 👥 Built for | 🎯 Primary use cases |
|---|---|
| Indian recruiters | Source candidates and benchmark roles |
| Recruitment agencies | Build pipelines by skill and location |
| HR analytics | Track salary bands and demand by skill |
| Talent acquisition | Monitor competitor hiring activity |
| Researchers | Study Indian labor market trends |
| Lead-gen and CRM | Source company contacts for B2B sales |
📋 What the Naukri Scraper does
- 🔍 Keyword search. Pass any Naukri search query (e.g. `python developer`, `data analyst`, `react`).
- 📍 Location filter. Filter by Indian city (`bangalore`, `mumbai`, `delhi`, `hyderabad`, etc.).
- ⏳ Experience filter. Set the minimum years of experience required.
- 🏠 Work type filter. Office, Remote/Work-from-Home, or Hybrid.
- 🔄 Sort options. Relevance or Date Posted.
- 💰 Salary data. Salary range as advertised on the listing.
The scraper walks Naukri's API surface for your filter combination, fetches each job listing, and pushes structured records to the dataset. It runs without a browser for fast, lightweight pulls.
💡 Why it matters: Naukri.com lists millions of Indian jobs but its UI is paginated, JS-rendered, and lacks bulk export. A live, structured pull beats manual sourcing for recruiting, HR analytics, and competitive intelligence at scale.
🎬 Full Demo
🚧 Coming soon: a 3-minute walkthrough showing setup, a live run, and how to pipe results into Greenhouse via Apify integrations.
⚙️ Input
| Field | Type | Name | Description |
|---|---|---|---|
| `keyword` | string | Search Keyword | Required. Job title or keyword (e.g. `python developer`, `data analyst`). |
| `maxItems` | integer | Max Items | Free users: limited to 10 items (preview). Paid users: optional, max 1,000,000. |
| `location` | string | Location | Optional. Indian city (e.g. `bangalore`, `mumbai`, `delhi`, `hyderabad`). |
| `experience` | integer | Minimum Experience (years) | Optional. Minimum years of experience required. |
| `sortBy` | enum | Sort By | `relevance` or `date`. |
| `workType` | enum | Work Type | `office`, `remote`, `hybrid`, or empty for all. |
Example 1. Python developer roles in Bangalore, 3+ years experience.

```json
{
  "keyword": "python developer",
  "location": "bangalore",
  "experience": 3,
  "sortBy": "date",
  "maxItems": 50
}
```
Example 2. Remote data analyst roles, sorted by relevance.

```json
{
  "keyword": "data analyst",
  "workType": "remote",
  "sortBy": "relevance",
  "maxItems": 100
}
```
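If you assemble inputs programmatically, a quick client-side sanity check can catch typos before you spend a run. The sketch below validates the fields from the input table above; the checks and limits are illustrative, not enforced by the Actor itself.

```python
# Sketch: sanity-check a run input before submitting it.
# Field names follow the input table above; limits are illustrative.

VALID_SORT = {"relevance", "date"}
VALID_WORK_TYPE = {"office", "remote", "hybrid", ""}

def validate_input(run_input: dict) -> list:
    """Return a list of problems; an empty list means the input looks OK."""
    problems = []
    if not str(run_input.get("keyword", "")).strip():
        problems.append("keyword is required")
    if run_input.get("sortBy", "relevance") not in VALID_SORT:
        problems.append("sortBy must be 'relevance' or 'date'")
    if run_input.get("workType", "") not in VALID_WORK_TYPE:
        problems.append("workType must be 'office', 'remote', 'hybrid', or empty")
    max_items = run_input.get("maxItems", 10)
    if not isinstance(max_items, int) or not 1 <= max_items <= 1_000_000:
        problems.append("maxItems must be an integer between 1 and 1,000,000")
    return problems

print(validate_input({"keyword": "python developer", "sortBy": "date", "maxItems": 50}))  # []
```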
⚠️ Good to Know: the location field is matched as Naukri matches it. Use the standard city slug (e.g. `bangalore`, not `bengaluru`).
📊 Output
The dataset returns one structured record per job listing. Each record carries identifiers, title, company, salary, skills, experience, location, work type, company rating, post date, full description, and a back-reference URL. Consume the dataset as JSON, CSV, Excel, XML, or RSS via the Apify console or API.
🧾 Schema
| Field | Type | Example |
|---|---|---|
🆔 jobId | string | 230325000123456 |
📝 title | string | Senior Python Developer |
🏢 company | string | Acme Tech Pvt Ltd |
⭐ companyRating | number or null | 4.2 |
💰 salary | string | 15-25 LPA |
📍 location | string | Bangalore |
🏠 workType | string | Hybrid |
⏳ experience | string | 5-8 years |
🛠️ skills | array | ["Python", "Django", "PostgreSQL", "AWS"] |
📃 description | string | We are looking for a Senior Python Developer... |
📅 postedAt | ISO datetime | 2026-04-12T00:00:00.000Z |
📅 postedAgo | string | 26d |
🔢 applicationsCount | number | 1245 |
🏷️ industry | string | IT Services & Consulting |
🔗 jobUrl | string (url) | https://www.naukri.com/job-listings-... |
📅 scrapedAt | ISO datetime | 2026-05-08T12:00:00.000Z |
📦 Sample records
1. Typical record (senior Python role in Bangalore)
```json
{
  "jobId": "230325000123456",
  "title": "Senior Python Developer",
  "company": "Acme Tech Pvt Ltd",
  "companyRating": 4.2,
  "salary": "15-25 LPA",
  "location": "Bangalore",
  "workType": "Hybrid",
  "experience": "5-8 years",
  "skills": ["Python", "Django", "PostgreSQL", "AWS"],
  "description": "We are looking for a Senior Python Developer with 5+ years of experience...",
  "postedAt": "2026-04-12T00:00:00.000Z",
  "postedAgo": "26d",
  "applicationsCount": 1245,
  "industry": "IT Services & Consulting",
  "jobUrl": "https://www.naukri.com/job-listings-senior-python-developer-acme-tech-bangalore-230325000123456",
  "scrapedAt": "2026-05-08T12:00:00.000Z"
}
```
2. Remote data analyst role
```json
{
  "jobId": "230401000234567",
  "title": "Data Analyst",
  "company": "DataPro Solutions",
  "companyRating": 3.8,
  "salary": "8-12 LPA",
  "location": "Remote",
  "workType": "Remote",
  "experience": "2-4 years",
  "skills": ["SQL", "Python", "Tableau", "Excel"],
  "postedAt": "2026-05-01T00:00:00.000Z",
  "postedAgo": "1w",
  "applicationsCount": 567,
  "industry": "Analytics / KPO",
  "jobUrl": "https://www.naukri.com/job-listings-data-analyst-datapro-remote-230401000234567",
  "scrapedAt": "2026-05-08T12:00:00.000Z"
}
```
3. Sparse record (junior listing, minimal fields)
```json
{
  "jobId": "230507000345678",
  "title": "Junior React Developer",
  "company": "Startup XYZ",
  "salary": null,
  "location": "Hyderabad",
  "experience": "0-2 years",
  "skills": ["React", "JavaScript"],
  "postedAt": "2026-05-07T00:00:00.000Z",
  "postedAgo": "1d",
  "jobUrl": "https://www.naukri.com/job-listings-junior-react-developer-startup-xyz-hyderabad-230507000345678",
  "scrapedAt": "2026-05-08T12:00:00.000Z"
}
```
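The `salary` field is a display string, so downstream analysis usually needs it normalized. Here is a minimal parsing sketch for the `X-Y LPA` shape shown in the sample records above; real listings also carry nulls and variants like "Not disclosed", so treat this as a starting point rather than a complete parser.

```python
import re

def parse_salary_lpa(salary):
    """Parse a Naukri-style 'X-Y LPA' salary string into (min, max) lakhs per annum.

    Returns (None, None) for missing or non-matching values (null,
    'Not disclosed', etc.).
    """
    if not salary:
        return (None, None)
    match = re.search(r"(\d+(?:\.\d+)?)\s*-\s*(\d+(?:\.\d+)?)\s*LPA", salary)
    if not match:
        return (None, None)
    return (float(match.group(1)), float(match.group(2)))

print(parse_salary_lpa("15-25 LPA"))       # (15.0, 25.0)
print(parse_salary_lpa(None))              # (None, None)
print(parse_salary_lpa("Not disclosed"))   # (None, None)
```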
✨ Why choose this Actor
| Capability | |
|---|---|
| 🎯 | Built for the job. Scoped specifically to Naukri.com so you skip the parser engineering entirely. |
| 🔖 | Structured output. Clean, typed fields ready for analysis, dashboards, or downstream pipelines. |
| ⚡ | Fast. API-based extraction (no browser) returns results in seconds, not minutes. |
| 🔁 | Always fresh. Every run pulls live data, so the dataset reflects Naukri as of run time. |
| 🌐 | No infra to manage. Apify handles proxies, retries, scaling, scheduling, and storage. |
| 🛡️ | Reliable. Battle-tested across many runs and edge cases, with graceful error handling. |
| 🚫 | No code required. Configure in the UI, run from CLI, schedule via cron, or call from any language with the Apify SDK. |
📊 Production-grade structured Indian job data without the engineering overhead of building and maintaining your own scraper.
📈 How it compares to alternatives
| Approach | Cost | Coverage | Refresh | Filters | Setup |
|---|---|---|---|---|---|
| ⭐ Naukri Jobs Scraper (this Actor) | $5 free credit, then pay-per-use | Full Naukri catalog | Live per run | Keyword, location, experience, work type | ⚡ 2 min |
| Build your own scraper | Engineering hours | Full once built | Whenever you maintain it | Custom code | 🐢 Days to weeks |
| Paid recruiter ATS feeds | $$$ monthly | Vendor-defined | Periodic | Vendor-defined | ⏳ Hours |
| Manual searches | Hours per check | Limited | Stale | Manual | 🕒 Variable |
Pick this Actor when you want broad coverage, source-native filtering, and no pipeline maintenance.
🚀 How to use
- 📝 Sign up. Create a free account with $5 credit (takes 2 minutes).
- 🌐 Open the Actor. Go to the Naukri.com Jobs Scraper page on the Apify Store.
- 🎯 Set filters. Set a keyword and pick location, experience, and work type, then set `maxItems`.
- 🚀 Run it. Click Start and let the Actor collect your data.
- 📥 Download. Grab your results in the Dataset tab as CSV, Excel, JSON, or XML.
⏱️ Total time from signup to downloaded dataset: 3-5 minutes. No coding required.
🌟 Beyond business use cases
Data like this powers more than commercial workflows. The same structured records support research, education, civic projects, and personal initiatives.
🔌 Automating Naukri Jobs Scraper
This Actor exposes a REST endpoint, so you can drive it from any language or workflow tool.
- Node.js - call it via the Apify JS SDK.
- Python - call it via the Apify Python SDK.
- REST - hit it directly through the Apify v2 API.
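As a sketch of the Python route, the snippet below builds a run input, starts the Actor via the Apify client, and iterates the resulting dataset. The Actor ID shown is illustrative (copy the real one from this Actor's store page), and you need `pip install apify-client` plus an API token from the Apify console.

```python
def build_run_input(keyword, location=None, experience=None,
                    work_type=None, sort_by="relevance", max_items=100):
    """Assemble the Actor input, omitting filters that were not set."""
    run_input = {"keyword": keyword, "sortBy": sort_by, "maxItems": max_items}
    if location:
        run_input["location"] = location
    if experience is not None:
        run_input["experience"] = experience
    if work_type:
        run_input["workType"] = work_type
    return run_input

if __name__ == "__main__":
    # Requires `pip install apify-client` and your token from the Apify console.
    from apify_client import ApifyClient

    client = ApifyClient("YOUR_APIFY_TOKEN")
    # Actor ID is illustrative -- copy the real one from the store page.
    run = client.actor("parseforge/naukri-jobs-scraper").call(
        run_input=build_run_input(
            "python developer", location="bangalore",
            experience=3, sort_by="date", max_items=50,
        )
    )
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["title"], "-", item.get("company"))
```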
Schedules. Use Apify Scheduler to capture daily snapshots of new job postings. Combine with the Apify dataset diff tools to track new and changed listings between runs.
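If you prefer to diff snapshots yourself rather than rely on the dataset diff tools, comparing two runs by `jobId` is a few lines of plain Python. This is an in-memory sketch assuming both snapshots fit in memory and that `jobId` is stable between runs:

```python
def diff_runs(previous_items, current_items, key="jobId"):
    """Split two dataset snapshots into new, removed, and unchanged job IDs."""
    prev_ids = {item[key] for item in previous_items}
    curr_ids = {item[key] for item in current_items}
    return {
        "new": sorted(curr_ids - prev_ids),
        "removed": sorted(prev_ids - curr_ids),
        "unchanged": sorted(prev_ids & curr_ids),
    }

yesterday = [{"jobId": "a1"}, {"jobId": "b2"}]
today = [{"jobId": "b2"}, {"jobId": "c3"}]
print(diff_runs(yesterday, today))
# {'new': ['c3'], 'removed': ['a1'], 'unchanged': ['b2']}
```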
❓ Frequently Asked Questions
💳 Do I need a paid Apify plan to run this actor?
No. You can start right now on the free Apify plan, which includes $5 in monthly credit. That is enough to run the scraper several times and explore the output. Paid plans unlock higher item caps, more concurrent runs, and larger datasets. Create a free Apify account here.
🚨 What happens if my run fails or returns no results?
Failed runs are not charged. If Naukri changes its API, proxies get rate-limited, or your filters match nothing, re-run the actor or open our contact form and we will look into it.
📏 How many items can I scrape per run?
Free users are limited to 10 items per run so you can preview the output. Paid users can raise maxItems up to 1,000,000 per run.
🕒 How fresh is the data?
Every run fetches live data at the moment of execution. There is no cache or delay: records reflect what Naukri returned at run time. Schedule the actor to maintain a rolling snapshot.
🧑‍💻 Can I call this actor from my own code?
Yes. Apify exposes every actor as a REST endpoint and ships first-class SDKs for Node.js and Python. You can start a run, read the dataset, and handle webhooks from your own app in a few lines.
📤 How do I export the data?
Every Apify dataset can be downloaded in one click as CSV, JSON, JSONL, Excel, HTML, XML, or RSS. You can also pull results programmatically via the Apify API or stream into BigQuery, S3, and other destinations through built-in integrations.
📅 Can I schedule the actor to run automatically?
Yes. Use the Apify scheduler to run the actor on any cadence, from hourly to monthly. Results are saved to your dataset and can be delivered to webhooks, email, Slack, cloud storage, or automation tools such as Zapier and Make.
🏪 Can I use the data commercially?
Yes. The scraped data is yours to use in your own internal pipelines, products, and reports, subject to the terms of service of the source site and applicable Indian privacy laws.
💼 Which plan should I pick for production use?
Apify's Starter and Scale plans are designed for production workloads. They give you faster instances, more concurrent runs, and higher proxy quotas. Pick the plan that matches your dataset size and refresh cadence.
🛠️ The data I need is not in the output. Can you add it?
Most likely yes. Open the contact form and tell us which field you need. We add fields all the time when there is a clear use case and the source page exposes the data.
⚖️ Is scraping Naukri legal?
This Actor only collects data from publicly accessible Naukri.com pages, the same content any visitor can read. Public web scraping is generally legal in most jurisdictions for non-personal data, but laws vary by country and use case. You are responsible for compliance with the source site's Terms of Service and applicable law (including India's Information Technology Act and any sectoral data-protection rules).
🔌 Integrate with any app
Naukri Jobs Scraper connects to any cloud service via Apify integrations:
- Make - Automate multi-step workflows
- Zapier - Connect with 5,000+ apps
- Slack - Get run notifications in your channels
- Airbyte - Pipe results into your warehouse
- GitHub - Trigger runs from commits and releases
- Google Drive - Export datasets straight to Sheets
You can also use webhooks to trigger downstream actions when a run finishes.
🔗 Recommended Actors
- 💼 Indeed Scraper - Job listings with compensation and benefits
- 💼 Glassdoor Scraper - Company reviews, salaries, and ratings
- 💼 LinkedIn Profile Scraper - LinkedIn profiles with experience and skills
- 💼 Greenhouse Jobs Scraper - Company-specific Greenhouse postings
- 🏢 IndiaMART Scraper - Indian B2B supplier listings
💡 Pro Tip: browse the complete ParseForge collection for more reference-data scrapers.
🆘 Need Help? Open our contact form to request a new scraper, propose a custom project, or report an issue.
⚠️ Disclaimer. This Actor is an independent tool and is not affiliated with, endorsed by, or sponsored by Naukri.com or Info Edge (India) Ltd. All trademarks mentioned are the property of their respective owners. The scraper accesses only publicly available pages and is intended for legitimate research, analytics, and recruitment use. Users are responsible for compliance with the source site's Terms of Service and applicable law.