Kariyer Job Scraper

Meet the Kariyer Job Scraper, a lightweight tool for extracting job postings from Kariyer.net, designed for speed and simplicity. To ensure uninterrupted scraping and optimal performance, using residential proxies is highly recommended. Start collecting job data effortlessly!

Pricing: Pay per usage
Rating: 0.0 (0 reviews)
Developer: Shahid Irfan (Maintained by Community)
Actor stats: 0 bookmarks · 6 total users · 1 monthly active user · last modified 20 days ago

Kariyer.net Jobs Scraper

Extract job listings from Kariyer.net with rich structured output for analytics, recruitment workflows, and market monitoring. Collect job cards plus detailed description data in one run, including description_html and description_text. Built for fast, repeatable data collection with clean records and empty values removed.

Features

  • Rich job coverage — Collects title, company, locations, work model, posting date, and direct job URL
  • Description included — Returns both description_html and description_text when available
  • Detail-enriched output — Adds deep detail blocks like candidate criteria, company detail, SEO, and salary budget
  • Clean dataset records — Automatically removes null and empty values from output
  • Deduplicated results — Prevents repeated jobs during pagination
  • Flexible start options — Use ready search URLs or build search from keyword and location
  • Scalable collection — Control total jobs and page depth for small tests or larger runs
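The "clean dataset records" behavior listed above can be pictured as a small recursive filter. The sketch below is illustrative only; `clean_record` is a hypothetical helper, not the actor's actual implementation.

```python
def clean_record(value):
    """Recursively drop None values and empty strings, lists, and dicts.

    Illustrative sketch of the 'clean dataset records' feature; the
    actor's real implementation may differ.
    """
    if isinstance(value, dict):
        cleaned = {k: clean_record(v) for k, v in value.items()}
        return {k: v for k, v in cleaned.items() if v not in (None, "", [], {})}
    if isinstance(value, list):
        cleaned = [clean_record(v) for v in value]
        return [v for v in cleaned if v not in (None, "", [], {})]
    return value


record = {
    "title": "Harita Mühendisi",
    "company_url": "",          # empty -> removed
    "salary": None,             # null -> removed
    "detail_job_seo": {},       # empty block -> removed
    "sector_names": ["Gıda", "İnşaat"],
}
print(clean_record(record))
```

Note that boolean fields such as `is_easy_apply: false` survive the filter, since only genuinely empty values are dropped.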

Use Cases

Talent Market Research

Track active roles, demand patterns, and job requirements across cities and sectors. Use structured fields to compare hiring trends over time.

Recruitment Intelligence

Build internal datasets of relevant openings for benchmarking. Review required qualifications, work model, and role levels quickly.

Job Board Monitoring

Monitor fresh listings and sponsorship visibility in one pipeline. Feed output into reporting dashboards or alerts.

Data Enrichment Pipelines

Use the detail blocks to enrich existing recruitment datasets. Combine company, criteria, SEO, and description attributes in downstream systems.

Content and Salary Analysis

Analyze position language and compensation-related metadata where available. Use text descriptions for classification and NLP tasks.


Input Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrls | Array | No | ["https://www.kariyer.net/is-ilanlari"] | One or more Kariyer listing/search URLs |
| keyword | String | No | (none) | Fallback keyword if startUrls is empty |
| location | String | No | (none) | Fallback location if startUrls is empty |
| max_job_age | String | No | "all" | Backward-compatible age filter ("24 hours", "7 days", "30 days", "all") |
| results_wanted | Integer | No | 20 | Maximum number of jobs to save |
| max_pages | Integer | No | 20 | Maximum pages to process per start URL |
| proxyConfiguration | Object | No | {"useApifyProxy": false} | Proxy configuration for reliability |
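To see how these defaults combine with user-supplied input, the following sketch merges overrides over the documented defaults and validates `max_job_age`. `build_run_input` is a hypothetical client-side helper, not part of the actor itself, which applies its own input schema server-side.

```python
# Documented defaults from the parameter table above.
DEFAULTS = {
    "startUrls": ["https://www.kariyer.net/is-ilanlari"],
    "max_job_age": "all",
    "results_wanted": 20,
    "max_pages": 20,
    "proxyConfiguration": {"useApifyProxy": False},
}

VALID_AGES = ("24 hours", "7 days", "30 days", "all")


def build_run_input(**overrides):
    """Merge user overrides over the documented defaults (sketch only)."""
    run_input = {**DEFAULTS, **overrides}
    if run_input["max_job_age"] not in VALID_AGES:
        raise ValueError(f"Unsupported max_job_age: {run_input['max_job_age']!r}")
    return run_input


print(build_run_input(results_wanted=50, max_pages=5))
```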

Output Data

Each dataset item includes core fields plus enriched detail sections.

| Field | Type | Description |
|---|---|---|
| id | Integer | Job ID |
| title | String | Job title |
| company | String | Company name |
| url | String | Job URL |
| company_url | String | Company profile URL |
| location | String | Primary location text |
| all_locations | String | Extended location text |
| posting_date | String | Posting date |
| posted_relative | String | Relative publish text |
| employment_type | String | Employment type text |
| work_model | String | Work model value |
| is_easy_apply | Boolean | Easy apply flag |
| is_sponsored | Boolean | Sponsored listing flag |
| sector_names | Array | Sector names |
| district_names | Array | District names |
| description_html | String | Full description HTML |
| description_text | String | Plain-text description |
| detail_job_general_information | Object | Detailed general job block |
| detail_job_position_information | Object | Position detail block |
| detail_job_candidate_criteria | Object | Candidate criteria detail block |
| detail_job_company_information | Object | Company detail block |
| detail_job_statistics | Object | Job statistics block |
| detail_job_salary_budget | Object | Salary budget block |
| detail_job_seo | Object | SEO detail block |
| search_url | String | Source search URL |
| search_page | Integer | Source page number |
| crawled_at | String | Crawl timestamp |
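Because the `detail_*` fields are nested objects, a small flattening step helps when moving items into tabular formats. The sketch below (`flatten_item` is a hypothetical helper, not provided by the actor) converts nested blocks into dot-separated column keys.

```python
def flatten_item(item, parent_key="", sep="."):
    """Flatten nested detail blocks (e.g. detail_job_candidate_criteria)
    into dot-separated keys suitable for CSV or spreadsheet columns."""
    flat = {}
    for key, value in item.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_item(value, full_key, sep))
        else:
            flat[full_key] = value
    return flat


sample = {
    "id": 4385220,
    "detail_job_candidate_criteria": {"educationLevelText": ["Lisans"]},
}
print(flatten_item(sample))
```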

Usage Examples

Basic Run

{
  "results_wanted": 20
}

URL-Based Collection

{
  "startUrls": [
    "https://www.kariyer.net/is-ilanlari/istanbul"
  ],
  "results_wanted": 50,
  "max_pages": 5
}

Keyword and Location Fallback

{
  "keyword": "yazılım mühendisi",
  "location": "İstanbul",
  "results_wanted": 100,
  "max_pages": 10
}

Multi-URL Collection

{
  "startUrls": [
    "https://www.kariyer.net/is-ilanlari/ankara",
    "https://www.kariyer.net/is-ilanlari/izmir"
  ],
  "results_wanted": 150,
  "max_pages": 8
}

Sample Output

{
  "id": 4385220,
  "title": "Harita Mühendisi",
  "company": "Alkazan İnş. End. Tic. Ltd. Şti",
  "url": "https://www.kariyer.net/is-ilani/alkazan-ins-end-tic-ltd-sti-harita-muhendisi-4385220",
  "location": "Bursa",
  "employment_type": "Tam Zamanlı",
  "work_model": "OnSite",
  "posting_date": "2026-02-18",
  "posted_relative": "1 saat",
  "description_html": "<p>...</p>",
  "description_text": "Alkazan; Sağlık, inşaat, gıda...",
  "sector_names": ["Gıda", "İnşaat"],
  "detail_job_candidate_criteria": {
    "educationLevelText": ["Lisans"]
  },
  "search_url": "https://www.kariyer.net/is-ilanlari",
  "search_page": 1,
  "crawled_at": "2026-02-18T16:00:00.000Z"
}
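The `posting_date` field in the sample above pairs naturally with the `max_job_age` input. As a client-side illustration of how such an age window could be checked, the sketch below uses the documented values; `within_age` is a hypothetical helper, not the actor's internal filter.

```python
from datetime import datetime, timedelta, timezone

# Age windows matching the documented max_job_age values.
AGE_WINDOWS = {
    "24 hours": timedelta(days=1),
    "7 days": timedelta(days=7),
    "30 days": timedelta(days=30),
}


def within_age(posting_date, max_job_age, now=None):
    """Return True if a job's posting_date falls inside the age window."""
    if max_job_age == "all":
        return True
    now = now or datetime.now(timezone.utc)
    posted = datetime.strptime(posting_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return now - posted <= AGE_WINDOWS[max_job_age]


now = datetime(2026, 2, 20, tzinfo=timezone.utc)
print(within_age("2026-02-18", "7 days", now))  # job is 2 days old
```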

Tips for Best Results

Start with a Small Test

  • Run with results_wanted: 20 first
  • Confirm output structure for your use case
  • Then scale up pages and total results

Prefer Targeted Start URLs

  • Use city or niche URLs in startUrls
  • This improves relevance and reduces post-filtering effort
  • Keep separate runs for separate verticals

Tune Volume Carefully

  • Increase max_pages only as needed
  • Keep results_wanted aligned with your downstream capacity
  • Run scheduled jobs for continuous monitoring

Proxy Configuration

For stronger reliability in larger runs:

{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Integrations

Connect your dataset with:

  • Google Sheets — Share and analyze job data quickly
  • Airtable — Build searchable hiring databases
  • Looker Studio — Create trend dashboards
  • Slack — Send notifications for new matching jobs
  • Make — Automate enrichment and routing
  • Zapier — Trigger actions in business tools

Export Formats

  • JSON — For APIs and development workflows
  • CSV — For spreadsheet analysis
  • Excel — For reporting and handoff
  • XML — For system integrations
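Outside Apify Console, dataset items can also be written to CSV with Python's standard `csv` module. A minimal sketch, assuming items shaped like the output table above; list fields such as `sector_names` are joined into a single cell.

```python
import csv
import io


def to_csv(items, fieldnames):
    """Write dataset items to a CSV string (sketch only).

    Lists are joined with ', '; fields not in fieldnames are ignored.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    for item in items:
        row = {
            k: ", ".join(v) if isinstance(v, list) else v
            for k, v in item.items()
        }
        writer.writerow(row)
    return buf.getvalue()


items = [{"title": "Harita Mühendisi", "company": "Alkazan",
          "sector_names": ["Gıda", "İnşaat"]}]
print(to_csv(items, ["title", "company", "sector_names"]))
```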

Frequently Asked Questions

Does the actor return descriptions?

Yes. It returns both description_html and description_text when available for a listing.

Can I scrape multiple search pages?

Yes. Use max_pages to control per-URL pagination depth.

Can I scrape multiple URLs in one run?

Yes. Provide multiple entries in startUrls.

Why are some fields missing on certain records?

Some listings may not expose every field. Empty and null values are automatically removed from output.

Is data deduplicated?

Yes. Duplicate jobs are filtered during collection.
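Conceptually, deduplication keys on the job `id`, as in the sketch below. This is an illustration of the behavior, not the actor's internal logic.

```python
def dedupe_jobs(jobs):
    """Yield each job once, keyed by its Kariyer job id (sketch only)."""
    seen = set()
    for job in jobs:
        if job["id"] in seen:
            continue  # job already seen on an earlier page
        seen.add(job["id"])
        yield job


jobs = [
    {"id": 1, "title": "A"},
    {"id": 2, "title": "B"},
    {"id": 1, "title": "A"},  # repeated across pages
]
print(list(dedupe_jobs(jobs)))
```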

Can I run this on a schedule?

Yes. You can schedule runs in Apify Console for daily or hourly updates.


Support

For issues or feature requests, use the Apify Console issue/support channels.


This actor is intended for legitimate data collection and research purposes. You are responsible for complying with website terms, local regulations, and applicable laws when using collected data.