# Timesjobs Scraper
Extract job listings efficiently from Timesjobs, a leading Indian career portal. This lightweight actor is designed for fast data collection. For optimal stability and to prevent blocking, the use of residential proxies is strongly recommended.
Developer: Shahid Irfan
Extract job listings from Timesjobs.com efficiently and reliably: a high-performance job scraper for recruiters, job seekers, and data analysts.
## Overview
The Timesjobs Scraper is a powerful automation tool that extracts job listings from Timesjobs.com, one of India's leading job portals. This scraper enables you to collect comprehensive job data including titles, company names, locations, required skills, experience requirements, salary ranges, and full job descriptions.
### Why Use This Scraper?
- Fast & Efficient: Quickly extract hundreds of job listings in minutes
- Comprehensive Data: Get detailed information including skills, experience, salary, and full descriptions
- Flexible Filtering: Search by keyword, location, and experience level
- Reliable Extraction: Built with robust parsing logic to handle various page structures
- Structured Output: Receive data in clean, structured JSON format ready for analysis
## Features

| Feature | Description |
|---|---|
| Advanced Filtering | Filter jobs by keyword, location, and experience range |
| Detailed Information | Extract job titles, companies, skills, salary, descriptions, and more |
| Pagination Support | Automatically navigate through multiple pages of search results |
| Structured Data | Export data in JSON, CSV, Excel, or other formats |
| High Performance | Optimized for speed with concurrent request handling |
| Proxy Support | Built-in proxy rotation to ensure reliable scraping |
## Use Cases

### For Recruiters & HR Professionals
- Build comprehensive talent databases
- Monitor competitor job postings
- Analyze market salary trends
- Track skill demand across industries
### For Job Seekers
- Aggregate job listings matching your criteria
- Monitor new opportunities in your field
- Compare job requirements across companies
- Track salary ranges for specific roles
### For Data Analysts & Researchers
- Conduct labor market research
- Analyze hiring trends and patterns
- Study skill requirements across industries
- Generate employment market reports
## Input Configuration
Configure the scraper using these parameters:
| Parameter | Type | Description | Example |
|---|---|---|---|
| `keyword` | String | Job title or skills to search for | `"software developer"` |
| `location` | String | City or region to filter jobs | `"Bengaluru"` |
| `experience` | String | Experience range in years (format: `"min-max"`) | `"0-5"` |
| `results_wanted` | Integer | Maximum number of jobs to extract | `100` |
| `max_pages` | Integer | Maximum number of pages to scrape (safety limit) | `10` |
| `collectDetails` | Boolean | Visit job detail pages for full descriptions | `true` |
| `startUrl` | String | Custom Timesjobs search URL (optional) | `"https://www.timesjobs.com/..."` |
| `proxyConfiguration` | Object | Proxy settings for reliable scraping | See Apify Proxy docs |
### Example Input

```json
{
  "keyword": "python developer",
  "location": "Bengaluru",
  "experience": "2-5",
  "results_wanted": 100,
  "max_pages": 10,
  "collectDetails": true,
  "proxyConfiguration": { "useApifyProxy": true }
}
```
## Output Format
The scraper returns structured data for each job listing:
| Field | Type | Description |
|---|---|---|
| `title` | String | Job title or position name |
| `company` | String | Hiring company or organization name |
| `experience` | String | Required years of experience |
| `location` | String | Job location (city or cities) |
| `skills` | Array | List of required skills and technologies |
| `salary` | String | Salary range or compensation details |
| `job_type` | String | Employment type (Full-time, Contract, etc.) |
| `date_posted` | String | When the job was posted |
| `description_html` | String | Full job description (HTML) |
| `description_text` | String | Full job description (plain text) |
| `url` | String | Direct link to the job listing |
### Example Output

```json
{
  "title": "Senior Python Developer",
  "company": "Tech Solutions Pvt Ltd",
  "experience": "3 - 5 Yrs",
  "location": "Bengaluru, Pune, Mumbai",
  "skills": ["Python", "Django", "REST API", "PostgreSQL", "AWS"],
  "salary": "8 - 12 Lakhs",
  "job_type": "Full Time",
  "date_posted": "Posted 2 days ago",
  "description_html": "<p>We are looking for...</p>",
  "description_text": "We are looking for an experienced Python developer...",
  "url": "https://www.timesjobs.com/job-detail/..."
}
```
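Because each record follows this structured schema, the output is easy to post-process. For example, a short sketch (assuming only the `skills` field shown above; the sample data is illustrative) that tallies how often each skill appears across scraped listings:

```javascript
// Tally how often each skill appears across scraped job records.
// Assumes the output schema shown above (a `skills` array per job).
function skillFrequency(jobs) {
  const counts = {};
  for (const job of jobs) {
    for (const skill of job.skills ?? []) {
      counts[skill] = (counts[skill] ?? 0) + 1;
    }
  }
  return counts;
}

// Illustrative sample records in the scraper's output shape.
const jobs = [
  { title: 'Senior Python Developer', skills: ['Python', 'Django', 'AWS'] },
  { title: 'Data Engineer', skills: ['Python', 'AWS', 'Spark'] },
];

console.log(skillFrequency(jobs));
// { Python: 2, Django: 1, AWS: 2, Spark: 1 }
```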
## How to Use

### Option 1: Using the Apify Platform
- Navigate to the Timesjobs Scraper on Apify
- Configure your search parameters in the input form
- Click "Start" to begin scraping
- Download your data in JSON, CSV, Excel, or other formats
### Option 2: Using the Apify API

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const input = {
    keyword: 'software developer',
    location: 'Bengaluru',
    experience: '2-5',
    results_wanted: 100,
    collectDetails: true,
};

const run = await client.actor('YOUR_ACTOR_ID').call(input);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
### Option 3: Using the Apify CLI

```bash
apify call YOUR_ACTOR_ID --input '{"keyword": "data scientist", "location": "Mumbai", "results_wanted": 50}'
```
## Configuration Tips

### Optimizing Performance
- `results_wanted`: Set a reasonable limit (50-200) for faster runs
- `max_pages`: Use this as a safety limit to prevent excessive scraping
- `collectDetails`: Disable if you only need basic job information
- `proxyConfiguration`: Always use proxies for reliable, uninterrupted scraping
### Best Practices
- Start with a small number of results to test your configuration
- Use specific keywords for more relevant results
- Combine keyword and location filters for targeted searches
- Enable `collectDetails` only when you need full job descriptions
- Use Apify Proxy to avoid rate limiting and IP blocks
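Before launching a large run, it can help to sanity-check the input against the parameter table. The sketch below (a hypothetical `validateInput` helper, not part of the actor, which performs its own validation) illustrates the kinds of checks worth making:

```javascript
// Minimal input sanity check for the parameters documented above.
// Hypothetical helper, for illustration only.
function validateInput(input) {
  const errors = [];
  if (!input.keyword && !input.startUrl) {
    errors.push('Provide either "keyword" or "startUrl".');
  }
  if (input.experience && !/^\d+-\d+$/.test(input.experience)) {
    errors.push('"experience" must use the "min-max" format, e.g. "0-5".');
  }
  if (input.results_wanted != null && input.results_wanted <= 0) {
    errors.push('"results_wanted" must be a positive integer.');
  }
  return errors;
}

console.log(validateInput({ keyword: 'python developer', experience: '2-5', results_wanted: 100 }));
// []
```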
## Data Export Options
Export your scraped data in multiple formats:
- JSON - Perfect for programmatic processing
- CSV - Ideal for Excel and data analysis tools
- Excel - Ready for immediate analysis and reporting
- HTML Table - Quick viewing in web browsers
- RSS Feed - For automated monitoring
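All of these formats are served by the Apify dataset-items endpoint through its `format` query parameter. A small helper (a sketch; the dataset ID is a placeholder you get from a finished run) that builds the export URL for a given format:

```javascript
// Build an Apify dataset export URL for a given format.
// Format names follow the Apify API's `format` query parameter.
const EXPORT_FORMATS = ['json', 'csv', 'xlsx', 'html', 'rss'];

function datasetExportUrl(datasetId, format) {
  if (!EXPORT_FORMATS.includes(format)) {
    throw new Error(`Unsupported format: ${format}`);
  }
  return `https://api.apify.com/v2/datasets/${datasetId}/items?format=${format}`;
}

console.log(datasetExportUrl('YOUR_DATASET_ID', 'csv'));
// https://api.apify.com/v2/datasets/YOUR_DATASET_ID/items?format=csv
```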
## Technical Details

### Architecture
The scraper is built using modern web scraping best practices:
- API-first: Queries the official TimesJobs JSON search endpoint for speed and reliability, with detail enrichment via the public job detail API.
- HTML fallback: If the API is blocked, it falls back to HTML parsing of provided URLs to salvage results.
- Efficient HTML Parsing: Extracts data directly from HTML structure
- Pagination Handling: Automatically navigates through result pages
- Error Recovery: Built-in retry logic for failed requests
- Data Validation: Ensures output data quality and consistency
- Proxy Rotation: Supports proxy configuration for reliable scraping
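The error-recovery behaviour described above follows the common retry-with-exponential-backoff pattern. An illustrative sketch (the actor's actual internals may differ; `withRetry` is a hypothetical helper):

```javascript
// Retry an async operation with exponential backoff.
// Illustrative sketch of the retry pattern described above.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}
```

A request would then be wrapped as, e.g., `await withRetry(() => fetchPage(url))`, where `fetchPage` stands in for whatever fetch function is used.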
### Performance
- Speed: Scrapes 50-100 jobs per minute (depending on configuration)
- Concurrency: Handles multiple requests simultaneously
- Memory: Optimized for efficient memory usage
- Reliability: Built-in error handling and retry mechanisms
## Frequently Asked Questions
### How many jobs can I scrape?
You can scrape as many jobs as needed. However, we recommend setting reasonable limits (100-500 jobs per run) for optimal performance.
### Does this scraper require proxies?
While not mandatory, using proxies (especially Apify Proxy) is highly recommended for reliable, uninterrupted scraping.
### How fresh is the data?
The scraper fetches real-time data directly from Timesjobs.com, ensuring you get the most current job listings.
### Can I schedule regular scraping?
Yes! Use Apify's scheduling feature to run the scraper daily, weekly, or at custom intervals.
### What if the scraper stops working?
The scraper is regularly maintained and updated. If you encounter issues, please report them through Apify support.
## Support & Feedback
Need help or have suggestions?
- Issues: Report bugs or request features
- Questions: Contact through Apify platform
- Updates: The scraper is regularly maintained to ensure compatibility
## Legal & Ethics
This scraper is provided for legitimate use cases such as:
- Job market research
- Recruitment and talent acquisition
- Academic research
- Personal job hunting
Important: Always comply with:
- Timesjobs.com Terms of Service
- Applicable data protection laws (GDPR, etc.)
- Ethical web scraping practices
- Rate limiting and respectful scraping
## Why Choose This Scraper?

| Benefit | Details |
|---|---|
| Reliable | Tested and maintained regularly |
| Fast | Optimized for high-performance extraction |
| Easy to Use | Simple configuration, no coding required |
| Comprehensive | Extracts all relevant job information |
| Flexible | Customizable for various use cases |
## Getting Started
Ready to start scraping Timesjobs?
- Try it now on the Apify platform
- Configure your search criteria
- Start extracting job data in minutes
Start scraping Timesjobs today and unlock valuable job market insights!