# Naukri Job Scraper — Apify Actor (Selenium)
Turn Naukri listings into instant, structured data — no coding required. This Apify‑ready actor scrapes job headlines and (optionally) full job descriptions from Naukri.com and delivers clean JSON/CSV outputs you can download or pipe to analytics, ATS, or spreadsheets.
## Why use this actor
- Plug & play: enter a job title and location, set max results, and run.
- Non‑technical friendly: simple inputs, immediate JSON/CSV output, and a SUMMARY snapshot for quick metrics.
- Reliable at scale: configurable parallel workers (1–25) and pagination support to collect data across pages.
- Anti‑detection & stability: human‑like delays, home‑page warming, stealthy Chrome options, retries and robust selectors.
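The anti-detection bullet above relies on randomized pauses between page actions. A minimal sketch of that idea, assuming a helper like the hypothetical `human_delay` below (the name and timing bounds are illustrative, not the actor's actual code):

```python
import random
import time

def human_delay(min_s=1.5, max_s=4.0):
    """Sleep for a random, human-like interval between page actions."""
    pause = random.uniform(min_s, max_s)
    time.sleep(pause)
    return pause
```

Randomizing the pause (rather than sleeping a fixed interval) makes request timing look less like a bot's.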
## Key features
- Extracts: title, company, experience, location, salary, job URL, and full description (if enabled).
- Pagination aware: navigates Naukri pages to reach `max_jobs`.
- Fetch details toggle: set `fetch_details=true` to visit each job page and capture full descriptions and missing salary info.
- Exports: JSON dataset (default) and CSV download for easy Excel/Sheets import.
- Includes `output_schema.json` describing the exact output fields for the actor UI.
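The pagination arithmetic is straightforward: assuming roughly 20 listings per Naukri results page (an assumption for illustration; the actor reads the real count from the page), reaching `max_jobs` takes about `ceil(max_jobs / 20)` page visits. A hypothetical helper:

```python
import math

def pages_needed(max_jobs, per_page=20):
    # Number of result pages to visit before max_jobs listings are collected.
    return math.ceil(max_jobs / per_page)
```

So `max_jobs=50` means three page loads before detail fetching even starts, which is why larger runs take noticeably longer.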
## Inputs (what you configure)
| Field | Type | Notes |
|---|---|---|
| `query` | string | Job keyword (e.g., "Python Developer") — required |
| `location` | string | City or region (e.g., "Nagpur") — required |
| `max_jobs` | integer | Max results to collect (1–500) |
| `fetch_details` | boolean | `true` to fetch full descriptions (slower) |
| `headless` | boolean | Set `false` locally to view the browser and solve CAPTCHAs |
| `workers` | integer | Parallel detail fetchers (1–25) — higher values use more CPU/RAM |
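Put together, a complete run input looks like this (all values are examples only — adjust them to your search):

```json
{
  "query": "Python Developer",
  "location": "Nagpur",
  "max_jobs": 100,
  "fetch_details": true,
  "headless": true,
  "workers": 2
}
```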
## Output
- The actor produces a JSON dataset (array of job objects). See `output_schema.json` in the repo for the exact schema (title, company, experience, location, salary, link, description, scraped_at).
- A `SUMMARY` key‑value store record contains quick run stats.
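A single dataset item has roughly this shape (every value below is an illustrative placeholder, not a real listing):

```json
{
  "title": "Python Developer",
  "company": "Example Tech Pvt Ltd",
  "experience": "2-5 Yrs",
  "location": "Nagpur",
  "salary": "Not disclosed",
  "link": "https://www.naukri.com/job-listings-example",
  "description": "Full job description text (populated only when fetch_details=true)",
  "scraped_at": "2025-01-01T12:00:00Z"
}
```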
## Quick start — on Apify
- Upload this repository as an actor on Apify or link it to your GitHub repo.
- Ensure the Docker build includes Chrome (the included Dockerfile does this).
- In the actor input, set `query`, `location`, `max_jobs`, and `fetch_details` as needed.
- Run the actor and download the dataset from the actor run page.
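Once the actor is published, you can also trigger it programmatically. Below is a minimal sketch using only the Python standard library against Apify's REST `run-sync-get-dataset-items` endpoint, which starts a run, waits for it, and returns the dataset items in one call; the actor ID `your-username~naukri-jobs-scraper` is a placeholder you must replace with your own:

```python
import json
import os
from urllib import request

API_BASE = "https://api.apify.com/v2"

def build_request(actor_id, token, run_input):
    # run-sync-get-dataset-items starts the actor, waits, and returns dataset items.
    url = f"{API_BASE}/acts/{actor_id}/run-sync-get-dataset-items?token={token}&format=json"
    data = json.dumps(run_input).encode("utf-8")
    return request.Request(url, data=data, headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    # Placeholder actor ID -- use your-username~actor-name from the Apify console.
    req = build_request(
        "your-username~naukri-jobs-scraper",
        os.environ["APIFY_TOKEN"],
        {"query": "Python Developer", "location": "Nagpur", "max_jobs": 50},
    )
    with request.urlopen(req) as resp:
        jobs = json.load(resp)
    print(f"Fetched {len(jobs)} jobs")
```

For long runs with `fetch_details=true`, prefer the asynchronous run endpoints (or the official `apify-client` package) so the HTTP call does not time out waiting.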
## Tips & best practices
- Use `fetch_details=true` only when you need full descriptions — it visits each job page and increases runtime.
- Start with `workers=2` and increase if your Apify plan and container resources allow it.
- If you hit a CAPTCHA locally, run with `headless=false` and manually solve it once to warm cookies.
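The `workers` setting controls how many detail pages are fetched in parallel. Conceptually it is a thread pool like the sketch below — the `fetch_description` stub is hypothetical; the real actor drives Chrome via Selenium for each job page:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_description(job_url):
    # Placeholder: the real actor opens job_url in Chrome and parses the page.
    return f"description for {job_url}"

def fetch_all(job_urls, workers=2):
    # Each worker handles one job page at a time; more workers use more CPU/RAM.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_description, job_urls))
```

Because each worker holds its own browser session, memory grows roughly linearly with `workers` — which is why starting at 2 and scaling up is the safer path.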
## Support & customization
Want extra fields, filters, or cloud scheduling? I can customize selectors, add filters, or integrate results with Google Sheets / AWS — open an issue or contact the developer.
Enjoy automated, reliable job market data from Naukri — faster sourcing, better insights, and less manual work.