Kununu Jobs Scraper
The Kununu Jobs Scraper is a lightweight actor for efficiently scraping job listings from Kununu. It is fast and simple to use. For the best results and reliable data extraction without blocking, residential proxies are strongly advised. Get the job data you need!
Extract comprehensive job listings from Kununu with structured fields ready for analysis, monitoring, and lead generation workflows. Collect job titles, companies, locations, salary ranges, posting dates, and full descriptions at scale. Built for reliable job market research across Germany, Austria, and Switzerland.
Features
- Comprehensive Job Data — Collect core listing fields plus rich detail content in one dataset.
- Description Coverage — Includes both description_html and cleaned description_text.
- Smart Pagination — Automatically continues through result pages until your target count is reached.
- Flexible Filtering — Search by keyword, location, employment type, and career level.
- Deduplicated Output — Prevents duplicate job entries in final results.
- Production Ready — Works with Apify scheduling, datasets, integrations, and API access.
Use Cases
Job Market Research
Track demand for specific roles and locations over time. Build historical datasets to identify hiring trends and skill demand shifts.
Recruiting Intelligence
Monitor active roles from target employers and regions. Use structured outputs to support outreach planning and talent mapping.
Salary Benchmarking
Compare salary ranges across titles, locations, and companies. Feed compensation analytics dashboards with standardized salary fields.
Job Alert Automation
Run on schedule and notify teams when new roles match your criteria. Connect output to messaging or workflow tools.
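As a sketch of the filtering step, assuming the dataset records from a scheduled run are already loaded into a Python list called items (date_posted is an ISO date string such as 2026-02-06, as in the output documented below):

```python
from datetime import date, timedelta

def select_new_jobs(items, days=1):
    """Return jobs whose date_posted falls within the last `days` days.

    `items` is assumed to be a list of dataset records from this actor;
    date_posted is an ISO date string, so lexical comparison works.
    """
    cutoff = (date.today() - timedelta(days=days)).isoformat()
    return [job for job in items if (job.get("date_posted") or "") >= cutoff]

# Example: build a short alert message for a messaging or workflow tool.
# new_jobs = select_new_jobs(items)
# alert = "\n".join(f'{j["title"]} at {j["company"]} ({j["url"]})' for j in new_jobs)
```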
BI and Reporting
Export normalized data into dashboards for weekly or monthly reporting. Combine with internal datasets for deeper workforce insights.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| jobTitle | String | No | "" | Job title or keyword to search (for example: Software Engineer). |
| location | String | No | "" | City or region filter (for example: Berlin, Wien, Zürich). |
| countryCode | String | No | "de" | Country scope: de, at, or ch. |
| employmentType | String | No | "" | Employment type filter such as full-time, part-time, internship, and others. |
| careerLevel | String | No | "" | Experience-level filter (entry level, professional, executive, etc.). |
| results_wanted | Integer | No | 20 | Maximum number of jobs to collect. |
| max_pages | Integer | No | 10 | Safety cap for how many result pages to process. |
| proxyConfiguration | Object | No | Apify Proxy (Residential) | Proxy setup for higher reliability on larger runs. |
| includeRawData | Boolean | No | false | Include additional raw payload fields for advanced post-processing. |
Output Data
Each dataset item contains structured job data such as:
| Field | Type | Description |
|---|---|---|
| id | String | Unique job identifier. |
| title | String | Job title. |
| company | String | Company name. |
| company_url | String | Company profile URL. |
| location | String | Job location text. |
| city | String | City value when available. |
| region | String | Region/state value when available. |
| country_code | String | Country code (de, at, ch). |
| employment_type | String | Employment type summary. |
| salary | String | Parsed salary range text. |
| salary_lower_bound | Number | Lower bound salary value when available. |
| salary_upper_bound | Number | Upper bound salary value when available. |
| date_posted | String | Posting date. |
| valid_through | String | Valid-through date when provided. |
| description_type | String | Description format indicator. |
| description_html | String | Full rich-text job description. |
| description_text | String | Plain text description. |
| status | String | Job status when available. |
| url | String | Job detail URL. |
| search_page | Number | Result page number where the job was found. |
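For instance, the salary bound fields can drive a quick benchmark. A minimal Python sketch, assuming items is a list of dataset records in the shape above, that averages the salary midpoint per city:

```python
from collections import defaultdict

def average_salary_by_city(items):
    """Average the midpoint of salary_lower_bound/salary_upper_bound per city.

    Records missing either salary bound are skipped; `items` is assumed to be
    a list of dataset records with the fields documented above.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for job in items:
        lo, hi = job.get("salary_lower_bound"), job.get("salary_upper_bound")
        if lo is None or hi is None:
            continue
        city = job.get("city") or "unknown"
        sums[city] += (lo + hi) / 2
        counts[city] += 1
    return {city: sums[city] / counts[city] for city in sums}
```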
Usage Examples
Basic Search
Collect up to 20 software engineering jobs in Berlin:
{"jobTitle": "Software Engineer","location": "Berlin","countryCode": "de","results_wanted": 20}
Multi-Page Collection
Collect a larger dataset for analytics:
{"jobTitle": "Data Analyst","location": "Hamburg","countryCode": "de","results_wanted": 200,"max_pages": 30}
Filtered Search with Proxy
Narrow by employment and career level:
{"jobTitle": "Cloud Engineer","location": "München","countryCode": "de","employmentType": "vollzeit","careerLevel": "berufserfahren","results_wanted": 100,"proxyConfiguration": {"useApifyProxy": true,"apifyProxyGroups": ["RESIDENTIAL"]}}
Sample Output
{"id": "ac894946-c351-421b-b991-2069fc20576c","title": "Software Engineer (w|m|d)","company": "ADAC Zentrale","company_url": "https://www.kununu.com/de/adac","location": "München, Bayern","employment_type": "JOB_EMPLOYMENT_FULLTIME","salary": "46,600 - 94,300 EUR","date_posted": "2026-02-06","valid_through": "2026-03-06","description_type": "html","description_html": "<div><h1>...</h1></div>","description_text": "Full cleaned job description text...","url": "https://www.kununu.com/de/job/ac894946-c351-421b-b991-2069fc20576c","search_page": 1}
Tips for Best Results
Start with a Focused Query
Use specific keywords and a single location first. Broader searches can be expanded after a baseline run.
Scale Gradually
Start with results_wanted around 20-100 for validation, then increase for production jobs and reporting pipelines.
Use Residential Proxy for Stability
Enable residential proxy for higher consistency on larger runs and scheduled workloads.
Keep max_pages Practical
Use max_pages as a safety limit to control runtime and avoid unnecessary requests.
Use Scheduled Runs
Schedule daily or weekly runs to maintain a fresh job dataset and detect market changes early.
Integrations
Connect this actor with:
- Google Sheets — Track jobs in a collaborative spreadsheet.
- Airtable — Build searchable tables for sourcing and research.
- Slack — Notify channels when new matching jobs appear.
- Make — Create low-code automation pipelines.
- Zapier — Trigger downstream actions across business apps.
- Webhooks — Send results to custom APIs and internal systems.
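As one example of the webhook path, a minimal Python sketch that posts a short summary of matching jobs to a Slack incoming webhook; the webhook URL is a placeholder you create in Slack:

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_slack(jobs):
    """Post a short text summary of job records to a Slack incoming webhook."""
    if not jobs:
        return
    lines = [f'{j["title"]} at {j["company"]}: {j["url"]}' for j in jobs[:10]]
    resp = requests.post(SLACK_WEBHOOK_URL, json={"text": "\n".join(lines)}, timeout=10)
    resp.raise_for_status()
```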
Export Formats
- JSON — Best for APIs and custom data processing.
- CSV — Ideal for spreadsheet workflows.
- Excel — Business-friendly reporting format.
- XML — Useful for system integrations requiring XML feeds.
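Any of these formats can also be fetched directly from the Apify dataset API via the format query parameter. A minimal Python sketch that downloads a run's dataset as CSV; the dataset ID and token are placeholders:

```python
import requests

DATASET_ID = "YOUR_DATASET_ID"    # from the run's Storage tab in Apify Console
APIFY_TOKEN = "YOUR_APIFY_TOKEN"

url = f"https://api.apify.com/v2/datasets/{DATASET_ID}/items"
resp = requests.get(url, params={"format": "csv", "token": APIFY_TOKEN}, timeout=60)
resp.raise_for_status()

with open("kununu_jobs.csv", "wb") as f:
    f.write(resp.content)
```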
Frequently Asked Questions
How many jobs can I collect?
You can collect large datasets by increasing results_wanted and max_pages. Practical limits depend on query size and runtime constraints.
Can I scrape multiple countries?
Yes. Use countryCode with de, at, or ch and run separate jobs for each country when needed.
Will descriptions always be available?
Most jobs include descriptions, but some postings may provide partial fields depending on source availability.
Why are some fields null?
Some employers or postings do not publish every field (for example salary or exact address values).
Can I automate daily tracking?
Yes. Use Apify schedules to run this actor automatically and compare new records over time.
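One simple pattern for the comparison step, sketched here in Python, is to persist the job ids already seen and diff each new run against that snapshot; the file path is arbitrary:

```python
import json
from pathlib import Path

SEEN_PATH = Path("seen_job_ids.json")  # arbitrary local snapshot file

def diff_against_previous(items):
    """Return only jobs whose id has not appeared in an earlier run."""
    seen = set(json.loads(SEEN_PATH.read_text())) if SEEN_PATH.exists() else set()
    fresh = [job for job in items if job["id"] not in seen]
    SEEN_PATH.write_text(json.dumps(sorted(seen | {job["id"] for job in items})))
    return fresh
```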
How do I avoid blocked runs?
Use Apify residential proxies and keep filters focused for improved stability on high-volume runs.
Support
For issues or feature requests, open the actor in Apify Console and use the support/contact options available there.
Legal Notice
This actor is intended for legitimate data collection and analytics workflows. You are responsible for using it in compliance with applicable laws, platform terms, and data protection requirements.