Kariyer Job Scraper
Pricing: Pay per usage
Meet the Kariyer Job Scraper, your lightweight tool for extracting job postings from Kariyer.net, designed for speed and simplicity. To ensure uninterrupted scraping and optimal performance, using residential proxies is highly recommended. Start collecting job data effortlessly!
Developer: Shahid Irfan
Kariyer.net Jobs Scraper
Extract job listings from Kariyer.net with rich structured output for analytics, recruitment workflows, and market monitoring. Collect job cards plus detailed description data in one run, including description_html and description_text. Built for fast, repeatable data collection with clean records and empty values removed.
Features
- Rich job coverage — Collects title, company, locations, work model, posting date, and the direct job URL
- Description included — Returns both `description_html` and `description_text` when available
- Detail-enriched output — Adds deep detail blocks such as candidate criteria, company detail, SEO, and salary budget
- Clean dataset records — Automatically removes null and empty values from output
- Deduplicated results — Prevents repeated jobs during pagination
- Flexible start options — Use ready search URLs or build search from keyword and location
- Scalable collection — Control total jobs and page depth for small tests or larger runs
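The deduplication behavior can be approximated with a small sketch. This is illustrative only (not the actor's actual source); it keys on the `id` field from the output schema and assumes pages are processed in order:

```python
def dedupe_by_id(jobs, seen=None):
    """Yield only jobs whose 'id' has not been seen before.

    Illustrative sketch of pagination dedup: 'seen' is shared across
    pages so repeated listings are emitted only once.
    """
    seen = set() if seen is None else seen
    for job in jobs:
        job_id = job.get("id")
        if job_id is None or job_id in seen:
            continue  # skip repeats that pagination may return
        seen.add(job_id)
        yield job

# Two simulated result pages with one overlapping job (id 2)
page1 = [{"id": 1, "title": "A"}, {"id": 2, "title": "B"}]
page2 = [{"id": 2, "title": "B"}, {"id": 3, "title": "C"}]
seen = set()
unique = list(dedupe_by_id(page1, seen)) + list(dedupe_by_id(page2, seen))
# unique contains ids 1, 2, 3 exactly once
```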
Use Cases
Talent Market Research
Track active roles, demand patterns, and job requirements across cities and sectors. Use structured fields to compare hiring trends over time.
Recruitment Intelligence
Build internal datasets of relevant openings for benchmarking. Review required qualifications, work model, and role levels quickly.
Job Board Monitoring
Monitor fresh listings and sponsorship visibility in one pipeline. Feed output into reporting dashboards or alerts.
Data Enrichment Pipelines
Use the detail blocks to enrich existing recruitment datasets. Combine company, criteria, SEO, and description attributes in downstream systems.
Content and Salary Analysis
Analyze position language and compensation-related metadata where available. Use text descriptions for classification and NLP tasks.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrls | Array | No | ["https://www.kariyer.net/is-ilanlari"] | One or more Kariyer.net listing/search URLs |
| keyword | String | No | — | Fallback keyword used when startUrls is empty |
| location | String | No | — | Fallback location used when startUrls is empty |
| max_job_age | String | No | "all" | Backward-compatible age filter ("24 hours", "7 days", "30 days", "all") |
| results_wanted | Integer | No | 20 | Maximum number of jobs to save |
| max_pages | Integer | No | 20 | Maximum pages to process per start URL |
| proxyConfiguration | Object | No | {"useApifyProxy": false} | Proxy configuration for reliability |
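One way to picture the `max_job_age` filter is as a cutoff timestamp derived from the allowed string values. The mapping below is a hypothetical sketch for downstream use (the actual filtering happens inside the actor):

```python
from datetime import datetime, timedelta, timezone

# Assumed mapping of max_job_age values to lookback windows;
# "all" means no cutoff at all.
AGE_WINDOWS = {
    "24 hours": timedelta(hours=24),
    "7 days": timedelta(days=7),
    "30 days": timedelta(days=30),
    "all": None,
}

def cutoff_for(max_job_age, now=None):
    """Return the oldest acceptable posting datetime, or None for 'all'."""
    window = AGE_WINDOWS.get(max_job_age)
    if window is None:
        return None  # "all" (or an unknown value) imposes no cutoff
    now = now or datetime.now(timezone.utc)
    return now - window
```

For example, `cutoff_for("7 days")` yields a UTC datetime one week in the past, which you could compare against each record's `posting_date`.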
Output Data
Each dataset item includes core fields plus enriched detail sections.
| Field | Type | Description |
|---|---|---|
| id | Integer | Job ID |
| title | String | Job title |
| company | String | Company name |
| url | String | Job URL |
| company_url | String | Company profile URL |
| location | String | Primary location text |
| all_locations | String | Extended location text |
| posting_date | String | Posting date |
| posted_relative | String | Relative publish text |
| employment_type | String | Employment type text |
| work_model | String | Work model value |
| is_easy_apply | Boolean | Easy apply flag |
| is_sponsored | Boolean | Sponsored listing flag |
| sector_names | Array | Sector names |
| district_names | Array | District names |
| description_html | String | Full description HTML |
| description_text | String | Plain-text description |
| detail_job_general_information | Object | Detailed general job block |
| detail_job_position_information | Object | Position detail block |
| detail_job_candidate_criteria | Object | Candidate criteria detail block |
| detail_job_company_information | Object | Company detail block |
| detail_job_statistics | Object | Job statistics block |
| detail_job_salary_budget | Object | Salary budget block |
| detail_job_seo | Object | SEO detail block |
| search_url | String | Source search URL |
| search_page | Integer | Source page number |
| crawled_at | String | Crawl timestamp |
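The "empty values removed" behavior described above can be mirrored with a short helper, useful if you post-process records from other sources into the same shape. This is an illustrative sketch, not the actor's internal code:

```python
EMPTY = (None, "", [], {})

def strip_empty(value):
    """Recursively drop None, empty strings, empty lists, and empty dicts.

    Note: booleans and zeros survive, so flags like is_easy_apply=False
    are kept.
    """
    if isinstance(value, dict):
        cleaned = {k: strip_empty(v) for k, v in value.items()}
        return {k: v for k, v in cleaned.items() if v not in EMPTY}
    if isinstance(value, list):
        return [v for v in (strip_empty(v) for v in value) if v not in EMPTY]
    return value

record = {"id": 1, "company_url": "", "detail_job_seo": {}, "title": "X"}
cleaned = strip_empty(record)
# cleaned == {"id": 1, "title": "X"}
```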
Usage Examples
Basic Run
```json
{
  "results_wanted": 20
}
```
URL-Based Collection
```json
{
  "startUrls": ["https://www.kariyer.net/is-ilanlari/istanbul"],
  "results_wanted": 50,
  "max_pages": 5
}
```
Keyword and Location Fallback
```json
{
  "keyword": "yazılım mühendisi",
  "location": "İstanbul",
  "results_wanted": 100,
  "max_pages": 10
}
```
Multi-URL Collection
```json
{
  "startUrls": [
    "https://www.kariyer.net/is-ilanlari/ankara",
    "https://www.kariyer.net/is-ilanlari/izmir"
  ],
  "results_wanted": 150,
  "max_pages": 8
}
```
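Run inputs like these can also be assembled programmatically, for example with the Apify API client for Python. The helper below is a hypothetical sketch that reproduces the keyword/location fallback rule (they are ignored when `startUrls` is provided); the actor ID and token are placeholders:

```python
def build_run_input(start_urls=None, keyword=None, location=None,
                    results_wanted=20, max_pages=20):
    """Assemble actor input; keyword/location apply only when
    startUrls is empty, matching the fallback described above."""
    run_input = {"results_wanted": results_wanted, "max_pages": max_pages}
    if start_urls:
        run_input["startUrls"] = start_urls
    else:
        if keyword:
            run_input["keyword"] = keyword
        if location:
            run_input["location"] = location
    return run_input

run_input = build_run_input(keyword="yazılım mühendisi",
                            location="İstanbul",
                            results_wanted=100, max_pages=10)

# To launch a run (requires `pip install apify-client`, an API token,
# and this actor's ID, both shown here as placeholders):
# from apify_client import ApifyClient
# client = ApifyClient("<APIFY_TOKEN>")
# run = client.actor("<ACTOR_ID>").call(run_input=run_input)
# for item in client.dataset(run["defaultDatasetId"]).iterate_items():
#     print(item.get("title"), item.get("company"))
```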
Sample Output
```json
{
  "id": 4385220,
  "title": "Harita Mühendisi",
  "company": "Alkazan İnş. End. Tic. Ltd. Şti",
  "url": "https://www.kariyer.net/is-ilani/alkazan-ins-end-tic-ltd-sti-harita-muhendisi-4385220",
  "location": "Bursa",
  "employment_type": "Tam Zamanlı",
  "work_model": "OnSite",
  "posting_date": "2026-02-18",
  "posted_relative": "1 saat",
  "description_html": "<p>...</p>",
  "description_text": "Alkazan; Sağlık, inşaat, gıda...",
  "sector_names": ["Gıda", "İnşaat"],
  "detail_job_candidate_criteria": {"educationLevelText": ["Lisans"]},
  "search_url": "https://www.kariyer.net/is-ilanlari",
  "search_page": 1,
  "crawled_at": "2026-02-18T16:00:00.000Z"
}
```
Tips for Best Results
Start with a Small Test
- Run with `results_wanted: 20` first
- Confirm output structure for your use case
- Then scale up pages and total results
Prefer Targeted Start URLs
- Use city or niche URLs in `startUrls`
- This improves relevance and reduces post-filtering effort
- Keep separate runs for separate verticals
Tune Volume Carefully
- Increase `max_pages` only as needed
- Keep `results_wanted` aligned with your downstream capacity
- Run scheduled jobs for continuous monitoring
Proxy Configuration
For stronger reliability in larger runs:
```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```
Integrations
Connect your dataset with:
- Google Sheets — Share and analyze job data quickly
- Airtable — Build searchable hiring databases
- Looker Studio — Create trend dashboards
- Slack — Send notifications for new matching jobs
- Make — Automate enrichment and routing
- Zapier — Trigger actions in business tools
Export Formats
- JSON — For APIs and development workflows
- CSV — For spreadsheet analysis
- Excel — For reporting and handoff
- XML — For system integrations
Frequently Asked Questions
Does the actor return descriptions?
Yes. It returns both description_html and description_text when available for a listing.
Can I scrape multiple search pages?
Yes. Use max_pages to control per-URL pagination depth.
Can I scrape multiple URLs in one run?
Yes. Provide multiple entries in startUrls.
Why are some fields missing on certain records?
Some listings may not expose every field. Empty and null values are automatically removed from output.
Is data deduplicated?
Yes. Duplicate jobs are filtered during collection.
Can I run this on a schedule?
Yes. You can schedule runs in Apify Console for daily or hourly updates.
Support
For issues or feature requests, use the Apify Console issue/support channels.
Legal Notice
This actor is intended for legitimate data collection and research purposes. You are responsible for complying with website terms, local regulations, and applicable laws when using collected data.