
WorkIndia Jobs Scraper

Pricing: Pay per usage

Seamlessly extract job postings from WorkIndia, one of India's leading job portals for blue- and grey-collar roles. The actor rapidly gathers job descriptions, skills, and employer data. Ideal for recruiters building robust datasets of the Indian job market.

Rating: 0.0 (0 reviews)

Developer: Shahid Irfan
Maintained by Community

Actor stats

  • Bookmarked: 0
  • Total users: 6
  • Monthly active users: 4
  • Last modified: 23 days ago


Collect fresh job listings from WorkIndia with rich details such as salary, company, location, experience, openings, and more. Run it by URL, keyword, city, or a combined filter, and get clean, structured output ready for analysis. The actor is built for fast, recurring job monitoring and lead-generation workflows.

Features

  • URL-first scraping — Start from WorkIndia listing URLs or a single job detail URL.
  • Keyword and city filters — Narrow results with search terms and city targeting.
  • Paginated collection — Control depth with results_wanted and max_pages.
  • Rich job fields — Collect detailed hiring information for each job.
  • Clean dataset output — Null and empty values are removed from records.
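
The paginated-collection behavior described above can be sketched as a simple stopping rule. The `fetch_page` helper below is hypothetical (it stands in for whatever actually retrieves one page of listings); only the `results_wanted`/`max_pages` semantics come from this actor's documentation:

```python
def collect_jobs(fetch_page, results_wanted=20, max_pages=10):
    """Paginate until either cap is hit, mirroring results_wanted/max_pages.

    `fetch_page` is a hypothetical helper: page number -> list of job dicts.
    """
    collected = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)
        if not batch:  # the site has no more results
            break
        collected.extend(batch)
        if len(collected) >= results_wanted:
            break
    return collected[:results_wanted]

# Example with a stubbed source serving 8 jobs per page across 3 pages:
pages = {p: [{"job_id": p * 100 + i} for i in range(8)] for p in range(1, 4)}
jobs = collect_jobs(lambda p: pages.get(p, []), results_wanted=20, max_pages=10)
```

With 24 stubbed jobs available, the run stops at the `results_wanted` cap of 20 rather than the `max_pages` cap.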

Use Cases

Recruitment Intelligence

Track hiring activity by city, role type, and salary trends to support recruiting strategy.

Job Aggregation

Build your own searchable jobs dataset for internal tools, dashboards, or outbound campaigns.

Lead Generation

Find active companies with open positions and prioritize outreach by role category and location.

Market Monitoring

Monitor shifts in demand across industries and experience bands over time.


Input Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | String | No | https://www.workindia.in/jobs/ | WorkIndia listing URL or job detail URL. |
| keyword | String | No | delivery | Optional keyword filter for job search. |
| city | String | No | delhi | Optional city filter. |
| results_wanted | Integer | No | 20 | Maximum number of records to save. |
| max_pages | Integer | No | 10 | Maximum pages to paginate. |
| includeDetails | Boolean | No | true | Enrich each listing with detailed fields. |
| proxyConfiguration | Object | No | {"useApifyProxy": false} | Optional proxy settings. |
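
As a sketch, the documented defaults can be merged with per-run overrides before starting the actor. The Apify client call in the trailing comment is illustrative only, and the actor ID is a placeholder, not the real one:

```python
# Defaults taken from the input-parameter table above.
DEFAULTS = {
    "url": "https://www.workindia.in/jobs/",
    "keyword": "delivery",
    "city": "delhi",
    "results_wanted": 20,
    "max_pages": 10,
    "includeDetails": True,
    "proxyConfiguration": {"useApifyProxy": False},
}

def build_input(**overrides):
    """Merge user overrides onto the documented defaults."""
    return {**DEFAULTS, **overrides}

run_input = build_input(city="mumbai", results_wanted=50)
# With the Apify Python client, the run would then look roughly like:
#   from apify_client import ApifyClient
#   client = ApifyClient("<APIFY_TOKEN>")
#   run = client.actor("<ACTOR_ID>").call(run_input=run_input)
```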

Output Data

Each dataset item can include:

| Field | Type | Description |
|---|---|---|
| job_id | Integer | Unique WorkIndia job id. |
| profile_job_title | String | Job title. |
| branch_company_name | String | Company name. |
| branch_location_city_name | String | Job city. |
| branch_location_name | String | Area/locality. |
| profile_salary_structure | String | Salary text/range. |
| job_experience | String | Experience requirement. |
| profile_qualification_required | String | Qualification requirement. |
| profile_industry_display_name | String | Job industry/category. |
| employment_type | String | Employment type. |
| created_at | String | Published timestamp. |
| expiry | String | Expiry date. |
| api_detail_url | String | Job detail source link used internally. |
| source_url | String | Input source URL for the run. |
| collected_at | String | Collection timestamp. |
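
The "clean dataset output" behavior (null and empty values removed from records) can be mirrored on your side like this minimal sketch when post-processing exported items:

```python
def clean_record(record):
    """Drop null/empty values, mirroring the actor's clean dataset output."""
    return {k: v for k, v in record.items() if v not in (None, "", [], {})}

item = {
    "job_id": 9738578,
    "profile_job_title": "Delivery Boy",
    "expiry": None,
    "branch_location_name": "",
}
cleaned = clean_record(item)
```

This is also why downstream code should read optional fields defensively, e.g. `record.get("expiry")` rather than `record["expiry"]`.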

Usage Examples

Default Jobs Feed

{
  "url": "https://www.workindia.in/jobs/",
  "results_wanted": 20,
  "max_pages": 5
}

Keyword And City Search

{
  "keyword": "delivery",
  "city": "delhi",
  "results_wanted": 50,
  "max_pages": 10
}

Industry URL Extraction

{
  "url": "https://www.workindia.in/delivery-jobs-in-delhi/",
  "results_wanted": 30,
  "includeDetails": true
}

Single Job Detail URL

{
  "url": "https://www.workindia.in/jobs/delivery_boy-rohini-delhi-9738578/"
}

Sample Output

{
  "job_id": 9738578,
  "profile_job_title": "Delivery Boy",
  "branch_company_name": "Blinkit",
  "branch_location_city_name": "delhi",
  "branch_location_name": "Rohini",
  "profile_salary_structure": "Rs. 40000 - Rs. 75000",
  "job_experience": "fresher",
  "profile_qualification_required": "Tenth Pass",
  "profile_industry_display_name": "Delivery",
  "employment_type": "FULL_TIME",
  "created_at": "2026-04-17T06:58:00Z",
  "expiry": "2026-08-13",
  "source_url": "https://www.workindia.in/delivery-jobs-in-delhi/",
  "collected_at": "2026-04-17T09:30:00.000Z"
}
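
Since `profile_salary_structure` is free text, analysis usually needs numeric bounds. A minimal parsing sketch, assuming the "Rs. <min> - Rs. <max>" format seen in the sample output (other formats may appear and would need extra handling):

```python
import re

def parse_salary(text):
    """Extract (min, max) from strings like 'Rs. 40000 - Rs. 75000'.

    Assumes the 'Rs. <min> - Rs. <max>' format from the sample output;
    returns (value, value) for a single figure and (None, None) otherwise.
    """
    numbers = [int(n) for n in re.findall(r"\d+", text or "")]
    if len(numbers) >= 2:
        return numbers[0], numbers[1]
    if len(numbers) == 1:
        return numbers[0], numbers[0]
    return None, None

low, high = parse_salary("Rs. 40000 - Rs. 75000")
```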

Tips For Best Results

Start Small First

  • Start with results_wanted: 20 for quick validation.
  • Increase limits after confirming output quality.

Use Strong Input URLs

  • Prefer canonical WorkIndia listing URLs.
  • Use city or category URLs for tighter result relevance.

Balance Page Depth

  • Use max_pages as a safety cap to prevent over-collection.
  • Tune both results_wanted and max_pages together.

Use Proxy Only When Needed

  • Keep defaults for normal runs.
  • Enable proxy configuration if your environment requires it.

Integrations

Connect output with:

  • Google Sheets — Share and review job data quickly.
  • Airtable — Build a searchable hiring database.
  • Make — Automate enrichment and notifications.
  • Zapier — Trigger downstream actions from new jobs.
  • Webhooks — Send run results to your internal systems.

Export Formats

  • JSON — Best for APIs and developers.
  • CSV — Best for spreadsheets and BI tools.
  • Excel — Best for business reporting.
  • XML — Best for legacy integrations.
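
Each of these formats maps to Apify's standard dataset-items endpoint via its `format` query parameter. A small sketch of building the export URL; the dataset ID below is a placeholder, and private datasets additionally require an API token:

```python
API_BASE = "https://api.apify.com/v2"

def dataset_export_url(dataset_id, fmt="json"):
    """Build an Apify dataset export URL for the formats listed above."""
    allowed = {"json", "csv", "xlsx", "xml"}
    if fmt not in allowed:
        raise ValueError(f"unsupported format: {fmt}")
    return f"{API_BASE}/datasets/{dataset_id}/items?format={fmt}"

# "abc123" is a placeholder dataset ID for illustration.
url = dataset_export_url("abc123", "csv")
```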

Frequently Asked Questions

Can I use either keyword or URL input?

Yes. You can run with URL only, keyword only, or both together.

Does it support pagination?

Yes. Pagination is controlled by results_wanted and max_pages.

Can I scrape one specific job URL?

Yes. Provide a WorkIndia job detail URL to collect one enriched record.

Why are some optional fields missing in output?

If a source record has no value for a field, that field is omitted from the output record rather than emitted as null or empty.

Can I use this for scheduled monitoring?

Yes. Schedule the actor and compare datasets over time.
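
Comparing datasets over time typically means diffing runs by `job_id`. A minimal sketch, assuming two lists of records exported from consecutive scheduled runs:

```python
def diff_runs(previous, current):
    """Return (new_jobs, expired_jobs) between two runs, keyed by job_id."""
    prev_ids = {r["job_id"] for r in previous}
    curr_ids = {r["job_id"] for r in current}
    new_jobs = [r for r in current if r["job_id"] not in prev_ids]
    expired = [r for r in previous if r["job_id"] not in curr_ids]
    return new_jobs, expired

# Illustrative records from two hypothetical daily runs:
yesterday = [{"job_id": 1}, {"job_id": 2}]
today = [{"job_id": 2}, {"job_id": 3}]
new_jobs, expired = diff_runs(yesterday, today)
```

Jobs present today but not yesterday are new postings; jobs that disappeared are candidates for having been filled or expired.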


Support

For issues or feature requests, contact support through the Apify Console.

This actor is designed for legitimate data collection purposes. Users are responsible for complying with website terms, applicable laws, and data-use regulations in their jurisdiction.