Timesjobs Scraper 💼
Pricing: Pay per usage
Extract job listings efficiently from Timesjobs, a leading Indian career portal. This lightweight actor is designed for fast data collection. For optimal stability and to prevent blocking, the use of residential proxies is strongly recommended.
Developer: Shahid Irfan
Last modified: 15 days ago
TimesJobs Job Scraper
Extract job listings from TimesJobs quickly and reliably. Collect structured job data such as title, company, location, skills, salary, posting date, and description at scale. Useful for hiring research, job monitoring, and market analysis.
Features
- Targeted Search: Filter by keyword, location, and experience range.
- Automatic Pagination: Collects jobs automatically until `results_wanted` is reached.
- Detailed Records: Includes job metadata and full description fields.
- Clean Output: Structured dataset ready for analysis and automation.
- Flexible Runs: Works for quick checks and large collection runs.
Use Cases
Recruitment Research
Build role-specific talent maps by collecting listings across locations and experience bands.
Job Market Monitoring
Track demand, salary patterns, and role trends over time with repeatable data collection.
Lead Generation
Find active hiring companies and open roles for outbound recruitment and staffing workflows.
Career Intelligence
Analyze which skills and requirements appear most often for your target roles.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `startUrl` | String | No | (none) | Optional TimesJobs search URL. If provided, it can seed search filters. |
| `keyword` | String | No | `"software developer"` | Job keyword or role title to search. |
| `location` | String | No | `"Bengaluru"` | City/location filter. |
| `experience` | String | No | `"0-5"` | Experience range in `min-max` format. |
| `results_wanted` | Integer | No | `20` | Number of jobs to collect. Pagination is auto-calculated internally. |
| `proxyConfiguration` | Object | No | Direct mode | Optional proxy settings. Direct API mode is fastest; the proxy fallback helps when blocking is detected. |
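Since pagination is derived from `results_wanted`, it can be reasoned about with a small sketch. This helper is hypothetical (not the actor's code), and the page size of 25 listings per search page is an assumption; the actor's real value may differ:

```python
import math

def pages_needed(results_wanted: int, page_size: int = 25) -> int:
    """Number of search pages required to collect results_wanted jobs,
    assuming a fixed (hypothetical) page size."""
    return max(1, math.ceil(results_wanted / page_size))

print(pages_needed(20))   # 1 page covers the default of 20 results
print(pages_needed(200))  # 8 pages for a higher-volume run
```

This is why run time grows with `results_wanted`: each extra page adds a request, plus one detail request per job.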
Output Data
Each dataset item may include the following fields:
| Field | Type | Description |
|---|---|---|
| `title` | String | Job title. |
| `company` | String | Hiring company name. |
| `experience` | String | Experience requirement text. |
| `location` | String | Job location text. |
| `skills` | Array | Skill keywords list. |
| `salary` | String | Salary information when available. |
| `job_type` | String | Employment/job type. |
| `date_posted` | String | Posted date text. |
| `description_html` | String | Job description in HTML format. |
| `description_text` | String | Job description in plain text format. |
| `url` | String | Job detail URL. |
| `job_id` | String | TimesJobs job identifier. |
| `company_description` | String | Company description when available. |
| `address` | String | Address field when available. |
| `vacancies` | Number | Vacancy count when available. |
| `external_job_url` | String | External apply/job URL when available. |
| `source` | String | Source marker. |
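Because optional fields can be missing or `null`, downstream code should not assume every key is present. A minimal, hypothetical normalization helper (not part of the actor) that fills gaps with `None` and guarantees `skills` is always a list:

```python
# Field list taken from the output table above.
EXPECTED_FIELDS = [
    "title", "company", "experience", "location", "skills", "salary",
    "job_type", "date_posted", "description_html", "description_text",
    "url", "job_id", "company_description", "address", "vacancies",
    "external_job_url", "source",
]

def normalize_item(raw: dict) -> dict:
    """Return a dict with every expected field, defaulting missing ones
    to None and coercing a missing skills field to an empty list."""
    item = {field: raw.get(field) for field in EXPECTED_FIELDS}
    if item["skills"] is None:
        item["skills"] = []
    return item

job = normalize_item({"title": "Software Engineer", "company": "Example Technologies"})
print(job["skills"])  # []
print(job["salary"])  # None
```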
Usage Examples
Basic Search
```json
{
  "keyword": "software developer",
  "location": "Bengaluru",
  "experience": "0-5",
  "results_wanted": 20
}
```
Higher Volume Collection
```json
{
  "keyword": "data analyst",
  "location": "Mumbai",
  "experience": "2-8",
  "results_wanted": 200
}
```
Start From Search URL
```json
{
  "startUrl": "https://www.timesjobs.com/candidate/job-search.html?searchType=personalizedSearch&from=submit&txtKeywords=python%20developer&txtLocation=Pune",
  "results_wanted": 50
}
```
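The `startUrl` above carries the same filters as the explicit `keyword`/`location` parameters, encoded in the query string. A standard-library sketch of how those filters could be read back (the helper itself is hypothetical, but `txtKeywords` and `txtLocation` are the parameter names visible in the URL above):

```python
from urllib.parse import urlparse, parse_qs

def filters_from_start_url(start_url: str) -> dict:
    """Extract keyword/location filters from a TimesJobs search URL."""
    params = parse_qs(urlparse(start_url).query)
    return {
        "keyword": params.get("txtKeywords", [None])[0],
        "location": params.get("txtLocation", [None])[0],
    }

url = ("https://www.timesjobs.com/candidate/job-search.html"
       "?searchType=personalizedSearch&from=submit"
       "&txtKeywords=python%20developer&txtLocation=Pune")
print(filters_from_start_url(url))
# {'keyword': 'python developer', 'location': 'Pune'}
```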
Sample Output
```json
{
  "title": "Software Engineer",
  "company": "Example Technologies",
  "experience": "2 - 5 Yrs",
  "location": "Bengaluru",
  "skills": ["JavaScript", "Node.js", "SQL"],
  "salary": "6.00 LPA - 10.00 LPA",
  "job_type": "Onsite",
  "date_posted": "14 Feb, 2026",
  "description_html": "<p>Role details...</p>",
  "description_text": "Role details...",
  "url": "https://www.timesjobs.com/job-detail/...",
  "job_id": "12345678",
  "company_description": null,
  "address": "Bengaluru",
  "vacancies": 2,
  "external_job_url": null,
  "source": "api"
}
```
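The `salary` field is free text, so analysis usually needs a parsing step. A hypothetical parser for the `"min LPA - max LPA"` shape seen in the sample output; real listings use varied formats, so anything else falls through to `None`:

```python
import re

def parse_salary_lpa(salary):
    """Parse a 'min LPA - max LPA' string into (min, max) floats,
    returning None for any other format."""
    match = re.match(r"\s*([\d.]+)\s*LPA\s*-\s*([\d.]+)\s*LPA\s*$", salary or "")
    if not match:
        return None
    return float(match.group(1)), float(match.group(2))

print(parse_salary_lpa("6.00 LPA - 10.00 LPA"))  # (6.0, 10.0)
print(parse_salary_lpa("Not disclosed"))          # None
```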
Tips for Best Results
Start Small
- Begin with `results_wanted: 20` to validate filters quickly.
- Increase volume after confirming result quality.
Improve Relevance
- Use specific keywords like `"react developer"` instead of broad terms.
- Combine `keyword`, `location`, and `experience` for tighter targeting.
Improve Reliability
- Keep default direct mode for best speed.
- Enable proxy only if you observe blocking in your runs.
- Run in batches if collecting very large datasets.
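Batching a very large collection can be sketched as splitting the total into per-run `results_wanted` values. This planner is purely illustrative (scheduling the individual runs is up to you), and the batch size of 100 is an arbitrary assumption:

```python
def batch_sizes(total: int, batch_size: int = 100) -> list:
    """Split a large results_wanted total into per-run batch sizes."""
    full, remainder = divmod(total, batch_size)
    return [batch_size] * full + ([remainder] if remainder else [])

print(batch_sizes(250))  # [100, 100, 50]
```

Each entry would become the `results_wanted` of one scheduled run, keeping individual runs short and easy to retry.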
Integrations
Connect output data with:
- Google Sheets: Build reports and dashboards.
- Airtable: Create searchable hiring databases.
- Make: Trigger downstream automation.
- Zapier: Connect with CRM and alerting workflows.
- Webhooks: Send data to custom services.
Export Formats
- JSON: Best for APIs and apps.
- CSV: Best for spreadsheets.
- Excel: Best for business reporting.
- XML: Best for system interoperability.
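One wrinkle when exporting to CSV is that `skills` is an array, which flat formats cannot hold directly. A hypothetical standard-library sketch of how such a record could be flattened for a spreadsheet (the Apify export does its own flattening; this only illustrates the idea):

```python
import csv
import io

def jobs_to_csv(jobs: list) -> str:
    """Render job dicts as CSV text, joining the skills list with '; '."""
    fieldnames = ["title", "company", "location", "skills", "salary"]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    for job in jobs:
        row = dict(job)
        row["skills"] = "; ".join(job.get("skills") or [])
        writer.writerow(row)
    return buffer.getvalue()

# Prints a header row plus one flattened job row.
print(jobs_to_csv([{"title": "Software Engineer", "company": "Example Technologies",
                    "location": "Bengaluru", "skills": ["JavaScript", "Node.js"],
                    "salary": "6.00 LPA - 10.00 LPA"}]))
```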
Frequently Asked Questions
How many jobs can I collect in one run?
You can request as many jobs as are available, but run time increases with volume and detail depth.
Do I need to set page numbers manually?
No. Pagination is handled automatically based on results_wanted.
Why are some fields empty?
Some listings do not provide every field, so null or fallback values can appear.
Can I schedule this actor?
Yes. You can schedule recurring runs in Apify for daily or hourly monitoring.
Can I export to CSV?
Yes. Dataset exports are available in JSON, CSV, Excel, and more.
Support
For issues or feature requests, use the Apify Console.
Legal Notice
This actor is intended for legitimate data collection use cases. You are responsible for complying with applicable laws and the target website terms. Use collected data responsibly.