# JobStreet Scraper

A simple and lightweight scraper for JobStreet, designed to quickly extract essential job-posting data. It provides a clean, minimal set of columns for easy integration. Important: for a smooth and reliable run, this actor requires residential proxies.

- Pricing: pay per usage
- Developer: Shahid Irfan
# JobStreet Jobs Scraper

## Overview
The JobStreet Jobs Scraper is an efficient tool designed to extract job listings from JobStreet, one of the leading job portals. This scraper collects detailed information about job opportunities, including titles, companies, locations, posting dates, and descriptions, making it ideal for job market research, recruitment analytics, and data-driven insights.
## Features
- Comprehensive Data Extraction: Captures key job details such as title, company, location, posting date, and full descriptions.
- Flexible Search Parameters: Supports keyword-based searches, location filters, and date-based filtering to target specific job listings.
- Pagination Handling: Automatically navigates through multiple pages of search results to gather extensive data.
- Customizable Limits: Allows setting maximum jobs or pages to control the scope of scraping.
- Proxy Support: Integrates with proxy configurations to ensure reliable and uninterrupted data collection.
- Output Formats: Provides data in structured JSON format, suitable for further processing or integration.
## Input Configuration
The scraper accepts input parameters to customize the scraping process. Below is a detailed list of supported input fields:
| Field | Type | Required | Description |
|---|---|---|---|
| `keyword` | String | Yes | The job title or keywords to search for (e.g., "software engineer"). |
| `location` | String | No | The geographic location to filter jobs by (e.g., "Singapore"). |
| `posted_date` | String | No | Filter jobs by posting date. Options: `"24h"`, `"7d"`, `"30d"`, `"anytime"`. |
| `maxJobs` | Number | No | The maximum number of jobs to scrape. Scraping stops once this limit is reached. |
| `maxPages` | Number | No | A safety limit on the number of listing pages to visit. |
| `cookies` | String | No | Custom cookies to handle consent dialogs or access restricted content. |
| `cookiesJson` | Object | No | Custom cookies supplied as a JSON object. |
| `proxyConfiguration` | Object | No | Standard proxy configuration for enhanced scraping reliability. |
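As a sketch of how these input fields fit together, the helper below assembles a run-input dictionary and validates the `posted_date` options. The function name and validation rules are illustrative, not part of the actor itself:

```python
ALLOWED_POSTED_DATES = {"24h", "7d", "30d", "anytime"}

def build_input(keyword, location=None, posted_date=None,
                max_jobs=None, max_pages=None):
    """Build a run-input dict for the scraper, validating the fields above."""
    # keyword is the only required field
    if not keyword or not keyword.strip():
        raise ValueError("keyword is required")
    run_input = {"keyword": keyword.strip()}
    if location:
        run_input["location"] = location
    if posted_date:
        if posted_date not in ALLOWED_POSTED_DATES:
            raise ValueError(
                f"posted_date must be one of {sorted(ALLOWED_POSTED_DATES)}"
            )
        run_input["posted_date"] = posted_date
    if max_jobs is not None:
        run_input["maxJobs"] = max_jobs
    if max_pages is not None:
        run_input["maxPages"] = max_pages
    return run_input
```

The resulting dictionary can be pasted into the actor's input editor or passed to an API call.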
## Output Schema
The scraper outputs a dataset of job listings in JSON format. Each item in the dataset includes the following fields:
| Field | Type | Description |
|---|---|---|
| `title` | String | The job title. |
| `company` | String | The name of the hiring company. |
| `location` | String | The job location. |
| `date_posted` | String | The date when the job was posted. |
| `description_html` | String | The job description in HTML format. |
| `description_text` | String | The job description in plain text. |
| `url` | String | The direct URL to the job posting. |
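Each item already carries both `description_html` and `description_text`, so you rarely need to parse HTML yourself. If you only keep the HTML field, a minimal plain-text fallback using Python's standard library might look like this (an illustrative sketch, not part of the actor):

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML fragment, ignoring all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html):
    """Strip tags from description_html and normalize whitespace."""
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(" ".join(parser.parts).split())
```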
## Usage

### Running the Scraper
To use the JobStreet Jobs Scraper:
1. Access the Actor: Navigate to the actor's page on the Apify platform.
2. Configure Inputs: Provide the required and optional input parameters as described above.
3. Run the Actor: Execute the scraper and monitor the progress.
4. Retrieve Results: Download the dataset from the actor's storage once the run completes.
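These steps can also be scripted. The sketch below mirrors the `apify-client` Python interface (`client.actor(...).call(run_input=...)` followed by `client.dataset(...).iterate_items()`); the actor ID in the usage note is a placeholder, not the actor's real identifier:

```python
def fetch_jobs(client, actor_id, run_input):
    """Run the actor and return all items from its default dataset.

    `client` is expected to expose the apify-client interface:
    client.actor(id).call(run_input=...) returns run metadata, and
    client.dataset(id).iterate_items() yields the scraped items.
    """
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

With the real client this would be called as `fetch_jobs(ApifyClient("<your-token>"), "<actor-id>", {"keyword": "data analyst"})`, where both the token and actor ID are placeholders.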
### Example Usage
#### Basic Search

Scrapes job listings for "data analyst" positions in Malaysia:

```json
{
  "keyword": "data analyst",
  "location": "Malaysia"
}
```
#### Advanced Search with Limits

Scrapes up to 100 developer jobs posted in the last 7 days in Singapore, visiting at most 10 listing pages:

```json
{
  "keyword": "developer",
  "location": "Singapore",
  "posted_date": "7d",
  "maxJobs": 100,
  "maxPages": 10
}
```
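The interaction between `maxJobs` and `maxPages` can be sketched as a simple collection loop. This is an illustrative model of the stopping behavior described above, not the actor's actual code:

```python
def collect_jobs(pages, max_jobs=None, max_pages=None):
    """Model of the actor's limit handling.

    `pages` is an iterable of result pages, each a list of job dicts.
    Collection stops as soon as either limit is hit.
    """
    jobs = []
    for page_no, page in enumerate(pages, start=1):
        # maxPages caps how many listing pages are visited
        if max_pages is not None and page_no > max_pages:
            break
        for job in page:
            # maxJobs caps the total number of collected jobs
            if max_jobs is not None and len(jobs) >= max_jobs:
                return jobs
            jobs.append(job)
    return jobs
```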
## Integration Tips
- API Access: Use Apify's API to run the actor programmatically and retrieve results.
- Scheduling: Set up recurring runs for continuous data collection.
- Data Processing: Export results to CSV or integrate with analytics tools for deeper insights.
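For the CSV-export tip above, a minimal sketch using only the standard library (the chosen column subset is an assumption; adjust it to your needs):

```python
import csv
import io

# Columns to export; description fields are dropped here as an example.
FIELDS = ["title", "company", "location", "date_posted", "url"]

def to_csv(items):
    """Render dataset items as a CSV string, ignoring extra fields."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()
```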
## Configuration Best Practices
- Keyword Optimization: Use specific keywords to narrow down results and improve relevance.
- Location Specificity: Provide precise locations to target regional job markets.
- Date Filtering: Use `posted_date` to focus on recent opportunities.
- Limits and Safety: Set `maxJobs` and `maxPages` to manage resource usage and avoid excessive data.
- Proxy Usage: Enable proxies for large-scale scraping to prevent IP blocks.
## Troubleshooting
- No Results: Ensure keywords and locations are correctly spelled and relevant.
- Incomplete Data: Check proxy settings if encountering access restrictions.
- Performance Issues: Reduce `maxPages` or increase wait times if the scraper is running slowly.
## Support
For questions, issues, or feature requests, please refer to the Apify platform's support resources or contact the actor maintainer.

