WorkIndia Jobs Scraper
Seamlessly extract postings from WorkIndia, India's premier job portal for blue- and grey-collar roles. This actor rapidly gathers job descriptions, required skills, and employer data. Ideal for recruiters building robust datasets of the Indian job market!
Developer: Shahid Irfan
Collect fresh job listings from WorkIndia with rich details such as salary, company, location, experience, openings, and more. Run by URL, keyword, city, or a combined filter and get clean, structured output ready for analysis. This actor is designed for fast recurring job monitoring and lead generation workflows.
Features
- URL-first scraping — Start from WorkIndia listing URLs or a single job detail URL.
- Keyword and city filters — Narrow results with search terms and city targeting.
- Paginated collection — Control depth with `results_wanted` and `max_pages`.
- Rich job fields — Collect detailed hiring information for each job.
- Clean dataset output — Null and empty values are removed from records.
Use Cases
Recruitment Intelligence
Track hiring activity by city, role type, and salary trends to support recruiting strategy.
Job Aggregation
Build your own searchable jobs dataset for internal tools, dashboards, or outbound campaigns.
Lead Generation
Find active companies with open positions and prioritize outreach by role category and location.
Market Monitoring
Monitor shifts in demand across industries and experience bands over time.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | String | No | `https://www.workindia.in/jobs/` | WorkIndia listing URL or job detail URL. |
| `keyword` | String | No | `delivery` | Optional keyword filter for job search. |
| `city` | String | No | `delhi` | Optional city filter. |
| `results_wanted` | Integer | No | `20` | Maximum number of records to save. |
| `max_pages` | Integer | No | `10` | Maximum pages to paginate. |
| `includeDetails` | Boolean | No | `true` | Enrich each listing with detailed fields. |
| `proxyConfiguration` | Object | No | `{"useApifyProxy": false}` | Optional proxy settings. |
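Because every parameter is optional, a partial input is equivalent to the fully expanded one with the documented defaults filled in. A minimal Python sketch of that merge — the default values come from the table above, not from the actor's source code:

```python
# Defaults from the input table above. The actor applies these
# server-side; this is only an illustration of the effective input.
DEFAULTS = {
    "url": "https://www.workindia.in/jobs/",
    "keyword": "delivery",
    "city": "delhi",
    "results_wanted": 20,
    "max_pages": 10,
    "includeDetails": True,
    "proxyConfiguration": {"useApifyProxy": False},
}

def effective_input(user_input: dict) -> dict:
    """Merge a partial run input with the documented defaults."""
    merged = dict(DEFAULTS)
    merged.update(user_input)
    return merged

run_input = effective_input({"city": "mumbai", "results_wanted": 50})
```

Supplying only `city` and `results_wanted` therefore still paginates up to 10 pages and enriches details, per the defaults.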
Output Data
Each dataset item can include:
| Field | Type | Description |
|---|---|---|
| `job_id` | Integer | Unique WorkIndia job ID. |
| `profile_job_title` | String | Job title. |
| `branch_company_name` | String | Company name. |
| `branch_location_city_name` | String | Job city. |
| `branch_location_name` | String | Area/locality. |
| `profile_salary_structure` | String | Salary text/range. |
| `job_experience` | String | Experience requirement. |
| `profile_qualification_required` | String | Qualification requirement. |
| `profile_industry_display_name` | String | Job industry/category. |
| `employment_type` | String | Employment type. |
| `created_at` | String | Published timestamp. |
| `expiry` | String | Expiry date. |
| `api_detail_url` | String | Job detail source link used internally. |
| `source_url` | String | Input source URL for the run. |
| `collected_at` | String | Collection timestamp. |
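The "clean dataset output" behavior noted under Features (null and empty values removed from records) can be mimicked with a small filter. This is an illustrative sketch, not the actor's actual code:

```python
def clean_record(record: dict) -> dict:
    """Drop keys whose values are None or empty strings/collections,
    matching the documented 'clean dataset output' behavior."""
    return {
        key: value
        for key, value in record.items()
        if value is not None and value != "" and value != [] and value != {}
    }

raw = {"job_id": 9738578, "profile_job_title": "Delivery Boy", "expiry": None, "job_experience": ""}
cleaned = clean_record(raw)  # only job_id and profile_job_title remain
```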
Usage Examples
Default Jobs Feed
```json
{
  "url": "https://www.workindia.in/jobs/",
  "results_wanted": 20,
  "max_pages": 5
}
```
City + Keyword Search
```json
{
  "keyword": "delivery",
  "city": "delhi",
  "results_wanted": 50,
  "max_pages": 10
}
```
Industry URL Extraction
```json
{
  "url": "https://www.workindia.in/delivery-jobs-in-delhi/",
  "results_wanted": 30,
  "includeDetails": true
}
```
Single Job Detail URL
```json
{
  "url": "https://www.workindia.in/jobs/delivery_boy-rohini-delhi-9738578/"
}
```
Sample Output
```json
{
  "job_id": 9738578,
  "profile_job_title": "Delivery Boy",
  "branch_company_name": "Blinkit",
  "branch_location_city_name": "delhi",
  "branch_location_name": "Rohini",
  "profile_salary_structure": "Rs. 40000 - Rs. 75000",
  "job_experience": "fresher",
  "profile_qualification_required": "Tenth Pass",
  "profile_industry_display_name": "Delivery",
  "employment_type": "FULL_TIME",
  "created_at": "2026-04-17T06:58:00Z",
  "expiry": "2026-08-13",
  "source_url": "https://www.workindia.in/delivery-jobs-in-delhi/",
  "collected_at": "2026-04-17T09:30:00.000Z"
}
```
Tips For Best Results
Start Small First
- Start with `results_wanted: 20` for quick validation.
- Increase limits after confirming output quality.
Use Strong Input URLs
- Prefer canonical WorkIndia listing URLs.
- Use city or category URLs for tighter result relevance.
Balance Page Depth
- Use `max_pages` as a safety cap to prevent over-collection.
- Tune both `results_wanted` and `max_pages` together.
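The two caps work together: collection stops at whichever limit is reached first. A minimal sketch of that stopping rule (the function and page structure are illustrative, not the actor's internals):

```python
def collect(pages: list[list[dict]], results_wanted: int, max_pages: int) -> list[dict]:
    """Collect records page by page, stopping at whichever cap is hit first."""
    collected: list[dict] = []
    for page_number, page in enumerate(pages, start=1):
        if page_number > max_pages:
            break  # page-depth safety cap reached
        for record in page:
            if len(collected) >= results_wanted:
                return collected  # record cap reached
            collected.append(record)
    return collected
```

So with 10 results per page, `results_wanted: 20` ends the run after two pages even if `max_pages` allows five.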
Use Proxy Only When Needed
- Keep defaults for normal runs.
- Enable proxy configuration if your environment requires it.
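If your environment does need one, `proxyConfiguration` follows the standard Apify proxy input shape. For example — the `apifyProxyGroups` value shown is illustrative and depends on your plan:

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```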
Integrations
Connect output with:
- Google Sheets — Share and review job data quickly.
- Airtable — Build a searchable hiring database.
- Make — Automate enrichment and notifications.
- Zapier — Trigger downstream actions from new jobs.
- Webhooks — Send run results to your internal systems.
Export Formats
- JSON — Best for APIs and developers.
- CSV — Best for spreadsheets and BI tools.
- Excel — Best for business reporting.
- XML — Best for legacy integrations.
Frequently Asked Questions
Can I use either keyword or URL input?
Yes. You can run with URL only, keyword only, or both together.
Does it support pagination?
Yes. Pagination is controlled by `results_wanted` and `max_pages`.
Can I scrape one specific job URL?
Yes. Provide a WorkIndia job detail URL to collect one enriched record.
Why are some optional fields missing in output?
If a source record does not include a value, that empty field is removed from output.
Can I use this for scheduled monitoring?
Yes. Schedule the actor and compare datasets over time.
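For scheduled monitoring, new postings can be found by diffing `job_id` sets between consecutive runs. A minimal sketch — dataset loading is left out, and records are plain dicts shaped as in the output table:

```python
def new_jobs(previous_run: list[dict], current_run: list[dict]) -> list[dict]:
    """Return records from the current run whose job_id was not seen before."""
    seen = {record["job_id"] for record in previous_run}
    return [record for record in current_run if record["job_id"] not in seen]

# Example: job 3 appeared between runs.
prev = [{"job_id": 1}, {"job_id": 2}]
curr = [{"job_id": 2}, {"job_id": 3}]
fresh = new_jobs(prev, curr)  # -> [{"job_id": 3}]
```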
Support
For issues or feature requests, contact support through the Apify Console.
Legal Notice
This actor is designed for legitimate data collection purposes. Users are responsible for complying with website terms, applicable laws, and data-use regulations in their jurisdiction.