Foundit Jobs Scraper
Pricing
Pay per usage
Introducing the Foundit Jobs Scraper, a lightweight actor for efficiently scraping job listings from Foundit (formerly Monster). It is fast and robust; for the best results and reliable data extraction, residential proxies are strongly advised. Streamline your recruitment data gathering today!
Rating: 5.0 (2)
Developer: Shahid Irfan
Last modified: 18 days ago
Extract comprehensive job data from Foundit.in with ease. Collect job listings including titles, companies, locations, salaries, and descriptions at scale. Perfect for job market research, recruitment, and career analysis.
Features
- Comprehensive Job Data — Extract titles, companies, salaries, and full descriptions
- Flexible Search — Search by keyword, location, or custom URLs
- Automatic Pagination — Collect thousands of jobs across multiple pages
- Fast and Reliable — API-based extraction for consistent results
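The automatic-pagination feature above can be illustrated with a simplified loop. This is a sketch only, not the actor's actual implementation; `fetch_page` is a hypothetical stand-in for the real Foundit API call, and the two limits mirror the `results_wanted` and `max_pages` input parameters.

```python
# Sketch of automatic pagination: walk pages until either limit is hit.
# fetch_page is a hypothetical stand-in for the real Foundit API request.

def collect_jobs(fetch_page, results_wanted=20, max_pages=20):
    """Gather jobs page by page until either limit is reached."""
    jobs = []
    for page_no in range(1, max_pages + 1):
        page_jobs = fetch_page(page_no)
        if not page_jobs:  # empty page means no more results
            break
        for job in page_jobs:
            job["page_no"] = page_no  # matches the page_no output field
            jobs.append(job)
            if len(jobs) >= results_wanted:
                return jobs
    return jobs

# Example with a fake two-page source:
pages = {1: [{"title": "A"}, {"title": "B"}], 2: [{"title": "C"}]}
result = collect_jobs(lambda p: pages.get(p, []), results_wanted=3)
```

Stopping on the first empty page avoids wasted requests when a search has fewer results than `max_pages` allows.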
Use Cases
Job Market Research
Analyze current job trends and salary ranges across industries. Understand demand for specific skills and locations to make informed career decisions.
Recruitment Intelligence
Find qualified candidates by monitoring new job postings. Track competitor hiring patterns and requirements to build better talent strategies.
Career Planning
Research job opportunities and requirements in your field. Compare salaries and benefits across companies to negotiate better offers.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | String | No | https://www.foundit.in/search/software-engineer-jobs-in-bangalore | Foundit search URL to derive filters from |
| keyword | String | No | software engineer | Job search keyword(s) |
| location | String | No | Bangalore | Job location filter |
| results_wanted | Integer | No | 20 | Max jobs to collect |
| max_pages | Integer | No | 20 | Max API pages to visit |
| proxyConfiguration | Object | No | - | Optional Apify proxy settings |
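Since every parameter is optional, a run input is just the documented defaults plus your overrides. A minimal sketch of assembling and sanity-checking one (the `build_input` helper is illustrative, not part of the actor):

```python
# Documented defaults from the input-parameter table above.
DEFAULTS = {
    "url": "https://www.foundit.in/search/software-engineer-jobs-in-bangalore",
    "keyword": "software engineer",
    "location": "Bangalore",
    "results_wanted": 20,
    "max_pages": 20,
}

def build_input(**overrides):
    """Merge user overrides onto the defaults and validate the limits."""
    run_input = {**DEFAULTS, **overrides}
    for key in ("results_wanted", "max_pages"):
        if not isinstance(run_input[key], int) or run_input[key] < 1:
            raise ValueError(f"{key} must be a positive integer")
    return run_input

run_input = build_input(keyword="data scientist", location="Mumbai", results_wanted=100)
```

The resulting dictionary is what you would pass as the actor's JSON input in the Apify Console or via the API.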
Output Data
Each item in the dataset contains:
| Field | Type | Description |
|---|---|---|
| source | String | Data source identifier |
| api_source | String | API endpoint used |
| url | String | Job listing URL |
| apply_url | String | Direct application URL |
| title | String | Job title |
| company | String | Company name |
| location | String | Job location |
| salary | String | Salary information |
| experience | String | Required experience |
| skills | Array | Required skills |
| employment_type | String | Full-time, part-time, etc. |
| industry | String | Industry category |
| function | String | Job function |
| occupational_category | String | Occupational category |
| description_html | String | HTML job description |
| description_text | String | Plain text job description |
| date_posted | String | Posting date |
| valid_through | String | Application deadline |
| job_id | String | Unique job identifier |
| page_no | Number | Page number where found |
| keyword | String | Search keyword used |
| input_location | String | Location filter used |
| scraped_at | String | Timestamp of scraping |
Usage Examples
Basic Job Search
Extract jobs by keyword and location:
```json
{
  "keyword": "software engineer",
  "location": "Bangalore",
  "results_wanted": 50
}
```
Advanced Filtering
Use a specific search URL for precise results:
```json
{
  "url": "https://www.foundit.in/search/data-scientist-jobs-in-mumbai",
  "results_wanted": 100,
  "max_pages": 5
}
```
Large Scale Collection
Collect extensive job data with proxy configuration:
```json
{
  "keyword": "marketing manager",
  "location": "Delhi",
  "results_wanted": 500,
  "max_pages": 50,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Sample Output
```json
{
  "source": "foundit",
  "api_source": "searchResultsPage",
  "url": "https://www.foundit.in/job/software-engineer-12345",
  "apply_url": "https://www.foundit.in/apply/12345",
  "title": "Senior Software Engineer",
  "company": "Tech Solutions Pvt Ltd",
  "location": "Bangalore, Karnataka",
  "salary": "₹8,00,000 - ₹15,00,000 P.A.",
  "experience": "3-5 years",
  "skills": ["Java", "Spring Boot", "Microservices"],
  "employment_type": "Full Time",
  "industry": "IT Services",
  "function": "Engineering",
  "occupational_category": "Software Development",
  "description_html": "<p>Join our dynamic team...</p>",
  "description_text": "Join our dynamic team as a Senior Software Engineer...",
  "date_posted": "2024-01-15",
  "valid_through": "2024-02-15",
  "job_id": "12345",
  "page_no": 1,
  "keyword": "software engineer",
  "input_location": "Bangalore",
  "scraped_at": "2024-01-15T10:30:00Z"
}
```
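Because the `skills` field is an array, dataset items lend themselves to simple aggregation once downloaded. A small post-processing sketch (the `items` list here is made-up sample data shaped like the records above):

```python
from collections import Counter

# Made-up sample items shaped like the actor's dataset records.
items = [
    {"title": "Senior Software Engineer", "skills": ["Java", "Spring Boot", "Microservices"]},
    {"title": "Backend Engineer", "skills": ["Java", "Kafka"]},
]

# Tally how often each skill appears across all collected jobs.
skill_counts = Counter(s for item in items for s in item.get("skills", []))
top = skill_counts.most_common(1)  # → [("Java", 2)]
```

Using `item.get("skills", [])` keeps the tally robust when a posting omits the field, as the FAQ below notes can happen.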
Tips for Best Results
Choose Effective Keywords
- Use specific job titles (e.g., "data scientist" vs "data")
- Include location in keywords when possible
- Test different keyword variations
Optimize Collection Size
- Start with smaller batches (20-50) for testing
- Increase gradually for production runs
- Balance speed with data requirements
Use Proxy Configuration
- Enable residential proxies for better success rates
- Especially important for large-scale collections
- Reduces blocking and improves reliability
Integrations
Connect your data with:
- Google Sheets — Export for analysis and reporting
- Airtable — Build searchable job databases
- Slack — Get notifications for new opportunities
- Webhooks — Send data to custom endpoints
- Make — Create automated job monitoring workflows
- Zapier — Trigger actions based on job criteria
Export Formats
Download data in multiple formats:
- JSON — For developers and APIs
- CSV — For spreadsheet analysis
- Excel — For business reporting
- XML — For system integrations
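If you prefer to convert downloaded JSON items to CSV yourself rather than use the hosted export, the standard library is enough. A minimal sketch with made-up sample items:

```python
import csv
import io

# Made-up items shaped like the actor's dataset records.
items = [
    {"title": "Senior Software Engineer", "company": "Tech Solutions Pvt Ltd", "location": "Bangalore"},
    {"title": "Data Scientist", "company": "Acme Analytics", "location": "Mumbai"},
]

# Write the chosen fields to an in-memory CSV.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "company", "location"])
writer.writeheader()
writer.writerows(items)
csv_text = buf.getvalue()
```

Restricting `fieldnames` to the columns you need keeps the spreadsheet manageable, since full records include long fields like `description_html`.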
Frequently Asked Questions
How many jobs can I collect?
You can collect all available jobs matching your criteria. The practical limit depends on the search parameters and website availability.
Can I search for jobs in multiple locations?
Yes, you can run separate searches for different locations or use broader location terms to capture multiple areas.
What if some job fields are empty?
Some fields may be empty if the job posting doesn't include that information. The actor collects all available data.
How often should I run the scraper?
Run frequency depends on your needs. Job postings change regularly, so daily or weekly runs are common for monitoring.
Can I filter by salary or experience?
Use specific keywords in your search or filter results after collection using the output data.
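Post-collection salary filtering means parsing strings like "₹8,00,000 - ₹15,00,000 P.A." into numbers first. A hedged sketch (the `salary_bounds` helper is illustrative and assumes the Indian comma-grouped format shown in the sample output):

```python
import re

def salary_bounds(salary):
    """Return (low, high) in rupees, or None if no figures are present."""
    figures = [int(n.replace(",", "")) for n in re.findall(r"[\d,]+\d", salary or "")]
    if not figures:
        return None
    return min(figures), max(figures)

# Made-up items; jobs without salary data are skipped by the filter.
jobs = [
    {"title": "A", "salary": "₹8,00,000 - ₹15,00,000 P.A."},
    {"title": "B", "salary": ""},
]
well_paid = [j for j in jobs if (b := salary_bounds(j["salary"])) and b[0] >= 500000]
```

Returning `None` for postings with no salary figures lets you decide explicitly whether to keep or drop them.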
Support
For issues or feature requests, contact support through the Apify Console.
Legal Notice
This actor is designed for legitimate data collection purposes. Users are responsible for ensuring compliance with website terms of service and applicable laws. Use data responsibly and respect rate limits.