Timesjobs Scraper 💼

Pricing: Pay per usage

Extract job listings efficiently from Timesjobs, a leading Indian career portal. This lightweight actor is designed for fast data collection. For optimal stability and to prevent blocking, the use of residential proxies is strongly recommended.

Developer: Shahid Irfan (Maintained by Community)

Last modified: 15 days ago

TimesJobs Job Scraper

Extract job listings from TimesJobs quickly and reliably. Collect structured job data such as title, company, location, skills, salary, posting date, and description at scale. Useful for hiring research, job monitoring, and market analysis.

Features

  • Targeted Search β€” Filter by keyword, location, and experience range.
  • Automatic Pagination β€” Collects jobs automatically until results_wanted is reached.
  • Detailed Records β€” Includes job metadata and full description fields.
  • Clean Output β€” Structured dataset ready for analysis and automation.
  • Flexible Runs β€” Works for quick checks and large collection runs.
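
The automatic pagination mentioned above can be sketched as a simple ceiling division. This is a sketch, not the actor's internals; the page size of 20 is an assumption, since TimesJobs' real page size is derived at runtime:

```python
import math

def pages_needed(results_wanted: int, page_size: int = 20) -> int:
    """How many listing pages must be fetched to cover results_wanted.

    page_size=20 is an assumed value for illustration; the actor
    determines the actual page size from the site's responses.
    """
    if results_wanted <= 0:
        return 0
    return math.ceil(results_wanted / page_size)
```

For example, `results_wanted: 200` with 20 results per page would translate into 10 page fetches.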

Use Cases

Recruitment Research

Build role-specific talent maps by collecting listings across locations and experience bands.

Job Market Monitoring

Track demand, salary patterns, and role trends over time with repeatable data collection.

Lead Generation

Find active hiring companies and open roles for outbound recruitment and staffing workflows.

Career Intelligence

Analyze which skills and requirements appear most often for your target roles.


Input Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrl | String | No | — | Optional TimesJobs search URL. If provided, it can seed search filters. |
| keyword | String | No | "software developer" | Job keyword or role title to search. |
| location | String | No | "Bengaluru" | City/location filter. |
| experience | String | No | "0-5" | Experience range in min-max format. |
| results_wanted | Integer | No | 20 | Number of jobs to collect. Pagination is auto-calculated internally. |
| proxyConfiguration | Object | No | Direct mode | Optional proxy settings. Direct API mode is fastest; proxy fallback helps when blocking is detected. |
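
The defaults and the min-max experience format in the table can be merged and sanity-checked before launching a run. `validate_run_input` is a hypothetical helper for illustration, not part of the actor:

```python
def validate_run_input(run_input: dict) -> dict:
    """Apply the documented defaults and check the experience range.

    Hypothetical pre-flight helper; the actor applies its own defaults
    server-side, so this is only a local sanity check.
    """
    merged = {
        "keyword": "software developer",
        "location": "Bengaluru",
        "experience": "0-5",
        "results_wanted": 20,
        **run_input,
    }
    lo, sep, hi = merged["experience"].partition("-")
    if sep != "-" or not (lo.isdigit() and hi.isdigit()) or int(lo) > int(hi):
        raise ValueError("experience must be in 'min-max' format, e.g. '0-5'")
    return merged
```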

Output Data

Each dataset item may include the following fields:

| Field | Type | Description |
|---|---|---|
| title | String | Job title. |
| company | String | Hiring company name. |
| experience | String | Experience requirement text. |
| location | String | Job location text. |
| skills | Array | Skill keywords list. |
| salary | String | Salary information when available. |
| job_type | String | Employment/job type. |
| date_posted | String | Posted date text. |
| description_html | String | Job description in HTML format. |
| description_text | String | Job description in plain text format. |
| url | String | Job detail URL. |
| job_id | String | TimesJobs job identifier. |
| company_description | String | Company description when available. |
| address | String | Address field when available. |
| vacancies | Number | Vacancy count when available. |
| external_job_url | String | External apply/job URL when available. |
| source | String | Source marker. |
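
Dataset items shaped like the table above aggregate easily. A minimal sketch counting skill frequency across items (the kind of analysis the Career Intelligence use case describes), assuming `items` is a list of dicts and tolerating missing `skills` fields:

```python
from collections import Counter

def top_skills(items, n=5):
    """Return the n most frequent skill keywords across dataset items.

    Listings without a skills field are skipped rather than failing.
    """
    counts = Counter()
    for item in items:
        counts.update(item.get("skills") or [])
    return counts.most_common(n)
```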

Usage Examples

Basic Search

{
  "keyword": "software developer",
  "location": "Bengaluru",
  "experience": "0-5",
  "results_wanted": 20
}

Higher Volume Collection

{
  "keyword": "data analyst",
  "location": "Mumbai",
  "experience": "2-8",
  "results_wanted": 200
}

Start From Search URL

{
  "startUrl": "https://www.timesjobs.com/candidate/job-search.html?searchType=personalizedSearch&from=submit&txtKeywords=python%20developer&txtLocation=Pune",
  "results_wanted": 50
}

Sample Output

{
  "title": "Software Engineer",
  "company": "Example Technologies",
  "experience": "2 - 5 Yrs",
  "location": "Bengaluru",
  "skills": ["JavaScript", "Node.js", "SQL"],
  "salary": "6.00 LPA - 10.00 LPA",
  "job_type": "Onsite",
  "date_posted": "14 Feb, 2026",
  "description_html": "<p>Role details...</p>",
  "description_text": "Role details...",
  "url": "https://www.timesjobs.com/job-detail/...",
  "job_id": "12345678",
  "company_description": null,
  "address": "Bengaluru",
  "vacancies": 2,
  "external_job_url": null,
  "source": "api"
}

Tips for Best Results

Start Small

  • Begin with results_wanted: 20 to validate filters quickly.
  • Increase volume after confirming result quality.

Improve Relevance

  • Use specific keywords like "react developer" instead of broad terms.
  • Combine keyword, location, and experience for tighter targeting.

Improve Reliability

  • Keep default direct mode for best speed.
  • Enable proxy only if you observe blocking in your runs.
  • Run in batches if collecting very large datasets.
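
Running in batches, as suggested above, can be as simple as splitting a large target into per-run `results_wanted` values. The batch size of 100 here is an arbitrary example, not a documented limit:

```python
def batch_sizes(total: int, batch: int = 100):
    """Split a large collection target into smaller per-run targets."""
    sizes = []
    while total > 0:
        sizes.append(min(batch, total))
        total -= sizes[-1]
    return sizes
```

For example, a target of 250 jobs would become three runs of 100, 100, and 50.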

Integrations

Connect output data with:

  • Google Sheets β€” Build reports and dashboards.
  • Airtable β€” Create searchable hiring databases.
  • Make β€” Trigger downstream automation.
  • Zapier β€” Connect with CRM and alerting workflows.
  • Webhooks β€” Send data to custom services.

Export Formats

  • JSON β€” Best for APIs and apps.
  • CSV β€” Best for spreadsheets.
  • Excel β€” Best for business reporting.
  • XML β€” Best for system interoperability.
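
If you post-process the JSON export yourself rather than using the built-in exports, list fields such as `skills` need flattening before they fit a CSV row. A minimal sketch with Python's standard `csv` module, joining list values with `"; "`:

```python
import csv
import io

def items_to_csv(items, fields=("title", "company", "location", "skills")):
    """Flatten dataset items to CSV text; list fields are joined with '; '."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields))
    writer.writeheader()
    for item in items:
        row = {}
        for f in fields:
            value = item.get(f)
            row[f] = "; ".join(value) if isinstance(value, list) else value
        writer.writerow(row)
    return buf.getvalue()
```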

Frequently Asked Questions

How many jobs can I collect in one run?

You can request as many jobs as are available, but run time increases with volume and detail depth.

Do I need to set page numbers manually?

No. Pagination is handled automatically based on results_wanted.

Why are some fields empty?

Some listings do not provide every field, so null or fallback values can appear.

Can I schedule this actor?

Yes. You can schedule recurring runs in Apify for daily or hourly monitoring.

Can I export to CSV?

Yes. Dataset exports are available in JSON, CSV, Excel, and more.


Support

For issues or feature requests, use the Apify Console.

This actor is intended for legitimate data collection use cases. You are responsible for complying with applicable laws and the target website terms. Use collected data responsibly.