
Naukri Job(s) Scraper
Scrape Naukri job listings with full details: jobIds, company data, work mode, education, salary brackets, experience ranges, locations, skills, and apply links, ready for analytics pipelines.
Pricing: $1.00 / 1,000 results
Naukri.com Job Scraper
Accelerate your talent intelligence – Capture, analyze, and monitor Naukri.com job listings at scale with enterprise-grade reliability. Whether you are tracking hiring demand, enriching recruiting platforms, or conducting market research, our scraper delivers fresh, structured job intelligence while minimizing manual effort.
"From newly posted openings to deep-dive job detail pages, we turn Naukri's listings into your competitive advantage."
Overview
The Naukri.com Job Scraper is your all-in-one utility for extracting hiring data from Naukri.com. Ideal for recruiting teams, workforce planners, and market researchers, it tracks search result pages and individual job details across India. With straightforward configuration and structured outputs, it's perfect for anyone building job intelligence pipelines or talent analytics.
What does Naukri.com Job Scraper do?
The Naukri.com Job Scraper is a powerful tool that enables you to:
Comprehensive Data Collection
- Job Search Results
- Capture structured job cards from Naukri search result pages
- Track pagination automatically to cover entire result sets
- Extract metadata such as job title, company, location, and posting highlights
- Job Detail Pages
- Scrape full job descriptions and requirements from individual Naukri job postings
- Collect recruiter/company contact information when available
- Preserve benefits, salary/compensation snippets, and application methods
- Market Insights
- Monitor hiring demand across Indian cities, industries, and job families
- Build time-series datasets to benchmark recruiting trends
- Feed downstream analytics, enrichment, and ATS/CRM workflows
Advanced Scraping Capabilities
- Pagination Handling: Automatically navigates through Naukri search results
- Efficient Processing: Processes only new or updated postings in subsequent runs
- Change Detection: Detects new openings and updates to existing job ads
- Incremental Data Collection: Build comprehensive hiring datasets over time
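The incremental behaviour described above can be approximated in your own post-processing as well. A minimal sketch, assuming you persist a set of previously seen job IDs between runs (the `seenIds` store here is hypothetical; the actor manages its own state internally):

```javascript
// Sketch: keep only postings whose jobId has not appeared in an earlier run.
// `seenIds` would normally be loaded from a file or key-value store between runs.
function selectNewJobs(items, seenIds) {
  const fresh = [];
  for (const item of items) {
    if (!seenIds.has(item.jobId)) {
      seenIds.add(item.jobId);
      fresh.push(item);
    }
  }
  return fresh;
}

const seen = new Set(["190925913023"]); // IDs from previous runs
const run1 = selectNewJobs(
  [{ jobId: "190925913023" }, { jobId: "200000000001" }],
  seen
);
// run1 contains only the previously unseen posting
```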
Flexible Scraping Options
- Job Search Results: Extract job listings by keywords, location, and filters
  - Example: https://www.naukri.com/development-programmer-jobs?k=development%2C%20programmer&nignbevent_src=jobsearchDeskGNB
- Filtered Searches: Combine remote, experience, salary, function, and city filters
  - Example: https://www.naukri.com/data-jobs?k=data&wfhType=2&experience=6&functionAreaIdGid=5&ctcFilter=3to6&ctcFilter=6to10&ctcFilter=10to15&cityTypeGid=9508&glbl_qcrc=1028&jobPostType=1
- Individual Job Details: Target specific job postings using direct URLs (mobile API compatible)
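Filter URLs like the ones above are plain query strings, so start URLs can be assembled programmatically. A sketch using Node's built-in URL class; the parameter names (`k`, `wfhType`, `experience`, `ctcFilter`) are taken from the example URLs in this document, and other filter keys are not documented here:

```javascript
// Build a filtered Naukri search URL from a path slug, keyword, and filter map.
function buildSearchUrl(slug, keyword, filters = {}) {
  const url = new URL(`https://www.naukri.com/${slug}`);
  url.searchParams.set("k", keyword);
  for (const [key, values] of Object.entries(filters)) {
    // Repeated values (e.g. several ctcFilter brackets) become repeated params.
    for (const value of [].concat(values)) {
      url.searchParams.append(key, value);
    }
  }
  return url.toString();
}

const remoteDataJobs = buildSearchUrl("data-jobs", "data", {
  wfhType: "2",
  experience: "6",
  ctcFilter: ["3to6", "6to10"],
});
// e.g. https://www.naukri.com/data-jobs?k=data&wfhType=2&experience=6&ctcFilter=3to6&ctcFilter=6to10
```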
This tool is ideal for:
- Recruiting intelligence and competitive hiring analysis
- Talent market research across industries and geographies
- Workforce planning and compensation benchmarking
- Building job scraping pipelines for ATS/CRM enrichment
- Monitoring hiring signals for business development & sales
Features
- Comprehensive Data Extraction: Job metadata, descriptions, and employer insights
- Dual Scraping Modes:
- Search Results: Scrape all jobs from Naukri search result pages
- Individual Job Details: Target specific postings using job detail URLs
- Flexible Input: Supports multiple input formats:
- Search result URLs (keyword, location, filters)
- Direct job detail URLs
- Automatic Pagination: Handles multi-page result sets automatically
- Efficient Processing: Concurrent scraping with configurable concurrency settings
- Reliable Performance: Built-in retries, throttling, and proxy support
- Structured Data Export: Download job data in JSON or CSV for analytics
Supported Scenario Types
The Naukri.com Job Scraper can extract data from multiple job-hunting flows:
- Search Result Pages – Keyword/location queries with optional filters
  - Example: https://www.naukri.com/data-jobs?k=data
  - Fields: jobId, title, company, experience, salary, location, postedOn, etc.
- Filtered Market Research Runs – Jobs narrowed by remote, experience, compensation, and industry filters
  - Example: https://www.naukri.com/data-jobs?k=data&wfhType=2&experience=6&functionAreaIdGid=3&functionAreaIdGid=5&ctcFilter=3to6&ctcFilter=6to10&cityTypeGid=9508
  - Fields: jobId, filterContext, workMode, functionArea, ctcBuckets, etc.
- Job Detail Pages (Mobile API) – Full descriptions, requirements, and company insights
  - Example: https://www.nma.mobi/post/v4/job/<JOB_ID>?src=jobsearchios
  - Fields: jobId, jobDetails, ambitionBoxDetails, basicInfo, apply, etc.
Each scenario returns a structured payload consistent across runs, making it straightforward to pipe into your analytics stack.
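Because every scenario carries the same jobId, search results can be joined with detail payloads. A minimal sketch that pulls the numeric jobId out of a desktop staticUrl; the trailing-digits pattern is an assumption based on the example URLs in this document:

```javascript
// Naukri desktop job URLs end with the numeric job identifier.
function extractJobId(staticUrl) {
  const match = staticUrl.match(/-(\d+)$/);
  return match ? match[1] : null;
}

const id = extractJobId(
  "https://www.naukri.com/job-listings-data-architect-iospl-technology-services-private-limited-kolkata-pune-chennai-10-to-15-years-190925913023"
);
// id === "190925913023"; it can then be substituted into the mobile detail URL
```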
Quick Start
- Sign up for Apify: Create your free account at apify.com.
- Find the Scraper: Search for "Naukri.com Job Scraper" in the Apify Store.
- Configure Input: Set your Naukri search URLs or direct job URLs in the input schema.
- Run the Scraper: Execute the scraper on Apify or locally with Node.js/TSX.
- Data Collection: Export raw job data as JSON or CSV for downstream processing.
Input Configuration
Here's an example of how to set up the input for the Naukri.com Job Scraper:
{
  "startUrls": [
    "https://www.naukri.com/development-programmer-jobs?k=development%2C%20programmer&nignbevent_src=jobsearchDeskGNB",
    "https://www.naukri.com/data-jobs?k=data&wfhType=2&experience=6&functionAreaIdGid=5&ctcFilter=3to6&ctcFilter=6to10&ctcFilter=10to15&cityTypeGid=9508&glbl_qcrc=1028&jobPostType=1"
  ],
  "maxConcurrency": 10,
  "minConcurrency": 1,
  "maxRequestRetries": 100,
  "proxyConfiguration": {"useApifyProxy": true}
}
Input Fields Explanation
- startUrls: Array of strings containing any of these formats:
  - Search URL: "https://www.naukri.com/data-jobs?k=data"
  - Filtered search URL: "https://www.naukri.com/data-jobs?k=data&wfhType=2&experience=6&functionAreaIdGid=5&ctcFilter=3to6&ctcFilter=6to10&ctcFilter=10to15&cityTypeGid=9508&glbl_qcrc=1028&jobPostType=1"
  - Job detail URL: "https://www.nma.mobi/post/v4/job/<JOB_ID>?src=jobsearchios"
- maxItems: Maximum number of results to scrape (default: 1000).
- maxConcurrency: Maximum number of pages processed simultaneously (default: 10).
- minConcurrency: Minimum number of pages processed simultaneously (default: 1).
- maxRequestRetries: Number of retries for failed requests (default: 100).
- proxyConfiguration: Proxy settings for consistent scraping performance.
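The defaults above can also be applied client-side before starting a run. A minimal sketch; `buildInput` is a hypothetical helper, not part of the actor, and the default values simply mirror the table above:

```javascript
// Merge user-supplied options over the documented defaults.
const DEFAULTS = {
  maxItems: 1000,
  maxConcurrency: 10,
  minConcurrency: 1,
  maxRequestRetries: 100,
  proxyConfiguration: { useApifyProxy: true },
};

function buildInput(startUrls, overrides = {}) {
  if (!Array.isArray(startUrls) || startUrls.length === 0) {
    throw new Error("startUrls must be a non-empty array of URL strings");
  }
  return { startUrls, ...DEFAULTS, ...overrides };
}

const input = buildInput(
  ["https://www.naukri.com/data-jobs?k=data"],
  { maxConcurrency: 5 }
);
// input.maxConcurrency is 5; every other field falls back to its default
```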
Output Structure
The scraper provides structured information about Naukri job postings. Outputs are normalized for both search results and job detail pages. A sample job detail record:
{
  "template": "",
  "savedJobFlag": 0,
  "education": {
    "ug": ["B.Tech/B.E. in Any Specialization"],
    "pg": ["M.Tech in Any Specialization"],
    "ppg": ["Doctorate Not Required"],
    "degreeCombination": "ugorpgorppg",
    "premiumProcessed": false,
    "label": "",
    "isSchool": null
  },
  "hideApplyButton": false,
  "applyCount": 105,
  "groupId": 11720113,
  "wfhLabel": "Temp. WFH due to covid",
  "description": "<p>Location: Remote-<span>Delhi / NCR,Bangalore/Bengaluru,Hyderabad/Secunderabad,Chennai,Pune,Kolkata,Ahmedabad,Mumbai</span></p>Notice Period: Immediate<br /><br />iSource Services is hiring for one of their client for the position of Data Architect.<br /><br />About the Role - <br />Experience in architecting with AWS or Azure Cloud Data Platform<br />Successfully implemented large scale data warehouse data lake solutions in snowflake or AWS Redshift<br />Be proficient in Data modelling and data architecture design experienced in reviewing 3rd Normal Form and Dimensional models.<br />Implementing Master data management, process design and implementation<br />Implementing Data quality solutions including processes<br />IOT Design using AWS or Azure Cloud platforms<br />Designing and implementing machine learning solutions as part of high-volume data ingestion and transformation<br />Working with structured and unstructured data including geo-spatial data<br />Experience in technologies like python, SQL, no SQL, KAFKA, Elastic Search<br />Experience using snowflake, informatica, azure logic apps, azure functions, azure storage, azure data lake and azure search.<br />",
  "staticCompanyName": "iospl-technology-services-jobs-careers-124248766",
  "roleCategory": "DBA / Data warehousing",
  "industry": "IT Services & Consulting",
  "staticUrl": "https://www.naukri.com/job-listings-data-architect-iospl-technology-services-private-limited-kolkata-pune-chennai-10-to-15-years-190925913023",
  "title": "Data Architect",
  "mode": "jp",
  "tagLabels": [],
  "walkIn": false,
  "maximumExperience": 15,
  "jobRole": "Data warehouse Architect / Consultant",
  "logStr": "--jobsearchios-0-F-0-1---",
  "viewCount": 428,
  "jobType": "fulltime",
  "minimumExperience": 10,
  "isTopGroup": 0,
  "clientLogo": "https://img.naukimg.com/logo_images/groups/v1/mobile/11720113.gif",
  "employmentType": "Full Time, Permanent",
  "wfhType": "1",
  "banner": "https://img.naukimg.com/logo_images/groups/v1/mobile/11720113.gif",
  "microsite": false,
  "companyDetail": {
    "name": "IOSPL Technology Services Private Limited",
    "websiteUrl": "",
    "details": "leading client",
    "address": ".",
    "media": {"ppt": [], "video": [], "photos": []},
    "hiringFor": "Leading Client"
  },
  "jobIconType": "",
  "shortDescription": "Be proficient in Data modelling and data architecture design experienced in reviewing 3rd Normal Form and Dimensional models|Experience in architecting with AWS or Azure Cloud Data Platform|Experience in technologies like python,SQL,no SQL,KAFKA,Elastic Search|Experience using snowflake,informatica,azure logic apps,azure functions,azure storage,azure data lake and azure search",
  "consent": false,
  "jobId": "190925913023",
  "companyId": 124248766,
  "createdDate": "2025-09-19 11:06:06",
  "consultant": true,
  "brandingTags": [],
  "functionalArea": "Engineering - Software & QA",
  "fatFooter": {},
  "experienceText": "10-15 Yrs",
  "showRecruiterDetail": false,
  "locations": [
    {"localities": [], "label": "Kolkata", "url": "https://www.naukri.com/jobs-in-kolkata"},
    {"localities": [], "label": "Pune", "url": "https://www.naukri.com/jobs-in-pune"},
    {"localities": [], "label": "Chennai", "url": "https://www.naukri.com/jobs-in-chennai"}
  ],
  "keySkills": {
    "other": [
      {"clickable": "", "label": "Pyspark"},
      {"clickable": "", "label": "snowflake"},
      {"clickable": "azure", "label": "Azure"},
      {"clickable": "", "label": "azure data lake"},
      {"clickable": "", "label": "azure logic apps"},
      {"clickable": "sql", "label": "SQL"},
      {"clickable": "", "label": "azure functions"},
      {"clickable": "", "label": "azure storage"},
      {"clickable": "", "label": "Databricks"},
      {"clickable": "etl informatica", "label": "informatica"},
      {"clickable": "python", "label": "Python"},
      {"clickable": "salesforce", "label": "Salesforce"}
    ],
    "preferred": [{"clickable": "data architecture", "label": "Data architecture"}]
  },
  "vacancy": 1,
  "salaryDetail": {
    "minimumSalary": 1500000,
    "maximumSalary": 2000000,
    "currency": "INR",
    "hideSalary": true,
    "variablePercentage": 0,
    "label": "15-20 Lacs"
  },
  "board": "1",
  "basicInfo": {
    "title": "Data Architect",
    "logoPath": "https://img.naukimg.com/logo_images/groups/v1/mobile/11720113.gif",
    "logoPathV3": "https://img.naukimg.com/logo_images/groups/v1/mobile/11720113.gif",
    "jobId": "190925913023",
    "currency": "INR",
    "footerPlaceholderLabel": "14 Days Ago",
    "footerPlaceholderColor": "grey",
    "companyName": "IOSPL Technology Services Private Limited",
    "isSaved": false,
    "tagsAndSkills": "Data architecture,Pyspark,snowflake,Azure,azure data lake,azure logic apps,SQL,azure functions",
    "placeholders": [
      {"type": "experience", "label": "10-15 Yrs"},
      {"type": "salary", "label": "Not disclosed"},
      {"type": "location", "label": "Temp. WFH - Kolkata, Pune, Chennai"}
    ],
    "companyId": 124248766,
    "jdURL": "/job-listings-data-architect-iospl-technology-services-private-limited-kolkata-pune-chennai-10-to-15-years-190925913023",
    "staticUrl": "iospl-technology-services-jobs-careers-124248766",
    "jobDescription": "Be proficient in Data modelling and data architecture design experienced in reviewing 3rd Normal Form and Dimensional models<br><br>Experience in architecting with AWS or Azure Cloud Data Platform<br><br>Experience in technologies like python,SQL,no SQL,KAFKA,Elastic Search<br><br>Experience using snowflake,informatica,azure logic apps,azure functions,azure storage,azure data lake and azure search",
    "showMultipleApply": false,
    "groupId": 11720113,
    "isTopGroup": 0,
    "createdDate": 1758260166924,
    "hiringFor": "Leading Client",
    "consultant": true,
    "hideClientName": false,
    "mode": "jp",
    "clientLogo": "https://img.naukimg.com/logo_images/groups/v1/mobile/11720113.gif",
    "board": "1",
    "salaryDetail": {"minimumSalary": 0, "maximumSalary": 0, "currency": "INR", "hideSalary": true, "variablePercentage": 0},
    "experienceText": "10-15 Yrs",
    "minimumExperience": "10",
    "maximumExperience": "15",
    "saved": false
  }
}
Output Fields Explanation
Top-Level Fields
- template: Internal template identifier returned by the mobile API (empty string when not set).
- savedJobFlag: Numeric flag indicating whether the job is saved (0 = not saved).
- education: Object describing education requirements.
  - education.ug, education.pg, education.ppg: Arrays listing accepted undergraduate, postgraduate, and post-postgraduate qualifications.
  - education.degreeCombination: Logical expression describing which education levels satisfy the requirement (e.g., ugorpgorppg).
  - education.premiumProcessed: Boolean-like flag showing whether education info was enriched by Naukri premium processing.
  - education.label: Preformatted string version of the requirement (empty when not provided).
  - education.isSchool: Indicates whether the requirement targets school qualifications (typically null).
- hideApplyButton: Controls whether the mobile UI should hide the apply button.
- applyCount: Total number of applications submitted via Naukri for the posting.
- groupId: Internal identifier for the employer or consultant group that owns the posting.
- wfhLabel: Human-readable description of the work-from-home arrangement extracted from filters (Temp. WFH due to covid, etc.).
- description: HTML job description as rendered in the mobile app.
- staticCompanyName: URL slug for the company/consultant microsite.
- roleCategory: Naukri role category classification (e.g., DBA / Data warehousing).
- industry: Industry vertical assigned by Naukri (e.g., IT Services & Consulting).
- staticUrl: Canonical desktop-friendly path for the job detail page.
- title: Primary job title label.
- mode: Channel in which the job is served (e.g., jp for job portal listings).
- tagLabels: Array of marketing or badge labels associated with the listing.
- walkIn: Boolean flag indicating whether the posting is for a walk-in interview event.
- maximumExperience / minimumExperience: Maximum and minimum required experience (years).
- jobRole: Specific job role classification (e.g., Data warehouse Architect / Consultant).
- logStr: Tracking string used by Naukri analytics to record search filters and UI placement.
- viewCount: Number of times the job has been viewed in the mobile app.
- jobType: Employment type summary (fulltime, contract, etc.).
- isTopGroup: Indicates whether the job belongs to a premium "top company" grouping.
- clientLogo / banner: URLs of square and banner logos shown in the app.
- employmentType: Human-readable employment arrangement text (e.g., Full Time, Permanent).
- wfhType: Encoded work-from-home type (e.g., 1 = temporary WFH).
- microsite: true when the job links to a branded microsite experience.
- companyDetail: Object summarizing employer or consultant information.
  - companyDetail.name: Display name in the listing header.
  - companyDetail.websiteUrl: Employer website (empty when not shared).
  - companyDetail.details: Short descriptive tagline.
  - companyDetail.address: Free-form address string.
  - companyDetail.media: Nested lists of rich-media assets (PowerPoints, videos, photos) for branding.
  - companyDetail.hiringFor: Label indicating whether the consultant is hiring for a client.
- jobIconType: Reserved field controlling icon overlays (empty when unused).
- shortDescription: Concise summary of the job responsibilities and tech stack.
- consent: Indicates whether user consent is required to show recruiter details.
- jobId: Naukri job identifier used for follow-up requests.
- companyId: Numeric identifier of the employer/consultant profile.
- createdDate: Timestamp of job creation in Naukri systems (a date-time string at the top level; basicInfo.createdDate carries the epoch-millisecond equivalent).
- consultant: true if the posting is published by a recruitment consultant rather than a direct employer.
- brandingTags: Array of branding badges applied to the listing.
- functionalArea: High-level functional classification (e.g., Engineering - Software & QA).
- fatFooter: Object reserved for supplemental footer metadata (empty when not supplied).
- experienceText: Human-readable experience requirement (10-15 Yrs).
- showRecruiterDetail: Controls visibility of recruiter contact information.
- locations: Array describing job locations.
  - locations[].label: City/region label displayed to the applicant.
  - locations[].url: Link to filtered search results for the location.
  - locations[].localities: Additional locality names (empty array when not provided).
- keySkills: Object listing skill tags.
  - keySkills.other[]: General skills with label and optional clickable slug for search deep-links.
  - keySkills.preferred[]: Preferred/priority skills, same structure as other.
- vacancy: Number of open positions the job represents.
- salaryDetail: Object containing compensation metadata for the public card.
  - salaryDetail.minimumSalary / salaryDetail.maximumSalary: Salary boundaries in the specified currency (0 when undisclosed).
  - salaryDetail.currency: ISO currency code.
  - salaryDetail.hideSalary: Boolean flag determining whether the exact salary is hidden from applicants.
  - salaryDetail.variablePercentage: Percentage of variable pay (0 when unspecified).
  - salaryDetail.label: Human-readable salary bracket (e.g., 15-20 Lacs).
- board: Identifier for the job board or channel (e.g., 1 for the primary portal).
- basicInfo: Snapshot used for list cards and quick views.
  - basicInfo.title: Title displayed in the card.
  - basicInfo.logoPath / basicInfo.logoPathV3: Logo assets for different app layouts.
  - basicInfo.jobId: Mirrors the main jobId for quick access.
  - basicInfo.currency: Currency code applied to salary fields in the card.
  - basicInfo.footerPlaceholderLabel / basicInfo.footerPlaceholderColor: Text and styling hints for footer badges (e.g., "14 Days Ago").
  - basicInfo.companyName: Display name within the card header.
  - basicInfo.isSaved / basicInfo.saved: Flags indicating whether the current user saved the job.
  - basicInfo.tagsAndSkills: Comma-separated list of prominent skills.
  - basicInfo.placeholders[]: Array of key-value objects used to render experience, salary, and location chips.
    - basicInfo.placeholders[].type: Category of the placeholder (experience, salary, location).
    - basicInfo.placeholders[].label: Human-readable value for that category.
  - basicInfo.companyId: Employer/consultant identifier duplicated for quick fetches.
  - basicInfo.jdURL: Relative link to the desktop job description page.
  - basicInfo.staticUrl: Slug for the associated company/consultant microsite.
  - basicInfo.jobDescription: Short HTML snippet summarizing responsibilities (sanitized for cards).
  - basicInfo.showMultipleApply: Indicates whether multiple job applications can be submitted simultaneously.
  - basicInfo.groupId / basicInfo.isTopGroup: Mirrors group metadata for rapid filtering.
  - basicInfo.createdDate: Timestamp of listing creation in milliseconds.
  - basicInfo.hiringFor: Label describing the hiring relationship (e.g., "Leading Client").
  - basicInfo.consultant / basicInfo.hideClientName: Flags relevant when a consultant hires on behalf of a client.
  - basicInfo.mode: Publishing mode indicator (typically jp).
  - basicInfo.clientLogo: Client logo displayed when the consultant hides their own brand.
  - basicInfo.board: Board/channel identifier for card rendering.
  - basicInfo.salaryDetail: Nested version of the public salary info used on cards.
    - basicInfo.salaryDetail.minimumSalary, maximumSalary, currency, hideSalary, variablePercentage: Same semantics as the top-level salary detail but specific to card presentation.
  - basicInfo.experienceText, basicInfo.minimumExperience, basicInfo.maximumExperience: Readable and numeric experience requirements reused by the card view.
These explanations mirror the example payload so you can map each field directly when integrating the scraper output into downstream systems.
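To load such a payload into an analytics table, you typically flatten a handful of fields. A sketch of one possible mapping, using only fields shown in the example payload; the chosen columns are illustrative, not a fixed schema:

```javascript
// Reduce a full job payload to a flat analytics row.
function toAnalyticsRow(job) {
  return {
    jobId: job.jobId,
    title: job.title,
    company: job.companyDetail?.name ?? null,
    minExpYears: job.minimumExperience,
    maxExpYears: job.maximumExperience,
    // Respect hideSalary: publish the bracket label only when it is public.
    salaryLabel: job.salaryDetail?.hideSalary
      ? "Not disclosed"
      : job.salaryDetail?.label ?? null,
    locations: (job.locations ?? []).map((loc) => loc.label).join("; "),
    skills: (job.keySkills?.other ?? []).map((skill) => skill.label),
  };
}

const row = toAnalyticsRow({
  jobId: "190925913023",
  title: "Data Architect",
  companyDetail: { name: "IOSPL Technology Services Private Limited" },
  minimumExperience: 10,
  maximumExperience: 15,
  salaryDetail: { hideSalary: true, label: "15-20 Lacs" },
  locations: [{ label: "Kolkata" }, { label: "Pune" }],
  keySkills: { other: [{ label: "Pyspark" }] },
});
// row.salaryLabel is "Not disclosed" because hideSalary is true
```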
Explore More Scrapers
If you found this Apify Scraper useful, be sure to check out our other powerful scrapers and actors at memo23's Apify profile. We offer a wide range of tools to enhance your web scraping and automation needs across various platforms and use cases.
Support
- For issues or feature requests, please use the Issues section of this actor.
- If you need customization or have questions, feel free to contact the author:
- Author's website: https://muhamed-didovic.github.io/
- Email: muhamed.didovic@gmail.com
Additional Services
- Request customization or whole dataset: muhamed.didovic@gmail.com
- If you need anything else scraped, or this actor customized, email: muhamed.didovic@gmail.com
- For API services of this scraper (no Apify fee, just usage fee for the API), contact: muhamed.didovic@gmail.com