Kununu Jobs Scraper

Kununu Jobs Scraper is a lightweight actor for efficiently scraping job listings from Kununu. It is fast and simple to use; for reliable data extraction without blocking, residential proxies are strongly advised.


Extract comprehensive job listings from Kununu with structured fields ready for analysis, monitoring, and lead generation workflows. Collect job titles, companies, locations, salary ranges, posting dates, and full descriptions at scale. Built for reliable job market research across Germany, Austria, and Switzerland.

Features

  • Comprehensive Job Data — Collect core listing fields plus rich detail content in one dataset.
  • Description Coverage — Includes both description_html and cleaned description_text.
  • Smart Pagination — Automatically continues through result pages until your target count is reached.
  • Flexible Filtering — Search by keyword, location, employment type, and career level.
  • Deduplicated Output — Prevents duplicate job entries in final results.
  • Production Ready — Works with Apify scheduling, datasets, integrations, and API access.

Use Cases

Job Market Research

Track demand for specific roles and locations over time. Build historical datasets to identify hiring trends and skill demand shifts.

Recruiting Intelligence

Monitor active roles from target employers and regions. Use structured outputs to support outreach planning and talent mapping.

Salary Benchmarking

Compare salary ranges across titles, locations, and companies. Feed compensation analytics dashboards with standardized salary fields.

Job Alert Automation

Run on schedule and notify teams when new roles match your criteria. Connect output to messaging or workflow tools.

BI and Reporting

Export normalized data into dashboards for weekly or monthly reporting. Combine with internal datasets for deeper workforce insights.


Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| jobTitle | String | No | "" | Job title or keyword to search (for example: Software Engineer). |
| location | String | No | "" | City or region filter (for example: Berlin, Wien, Zürich). |
| countryCode | String | No | "de" | Country scope: de, at, or ch. |
| employmentType | String | No | "" | Employment type filter such as full-time, part-time, internship, and others. |
| careerLevel | String | No | "" | Experience-level filter (entry level, professional, executive, etc.). |
| results_wanted | Integer | No | 20 | Maximum number of jobs to collect. |
| max_pages | Integer | No | 10 | Safety cap for how many result pages to process. |
| proxyConfiguration | Object | No | Apify Proxy (Residential) | Proxy setup for higher reliability on larger runs. |
| includeRawData | Boolean | No | false | Include additional raw payload fields for advanced post-processing. |
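
These parameters form the actor's run input object. As a minimal sketch of starting a run programmatically (assuming the apify-client Python package; the "<ACTOR_ID>" placeholder is hypothetical and should be replaced with this actor's actual ID or "username/actor-name" slug from Apify Console):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

# Run input mirroring the parameters documented above
run_input = {
    "jobTitle": "Software Engineer",
    "location": "Berlin",
    "countryCode": "de",
    "results_wanted": 50,
    "proxyConfiguration": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
}

# Start the run and wait for it to finish
run = client.actor("<ACTOR_ID>").call(run_input=run_input)

# Stream the scraped jobs from the run's default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], "-", item.get("company"))
```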

Output Data

Each dataset item contains structured job data such as:

| Field | Type | Description |
| --- | --- | --- |
| id | String | Unique job identifier. |
| title | String | Job title. |
| company | String | Company name. |
| company_url | String | Company profile URL. |
| location | String | Job location text. |
| city | String | City value when available. |
| region | String | Region/state value when available. |
| country_code | String | Country code (de, at, ch). |
| employment_type | String | Employment type summary. |
| salary | String | Parsed salary range text. |
| salary_lower_bound | Number | Lower bound salary value when available. |
| salary_upper_bound | Number | Upper bound salary value when available. |
| date_posted | String | Posting date. |
| valid_through | String | Valid-through date when provided. |
| description_type | String | Description format indicator. |
| description_html | String | Full rich-text job description. |
| description_text | String | Plain text description. |
| status | String | Job status when available. |
| url | String | Job detail URL. |
| search_page | Number | Result page number where the job was found. |
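
Fields such as salary_lower_bound or city can be null when a posting does not publish them (see the FAQ), so downstream code should treat them as optional. A minimal post-processing sketch over items shaped like the table above:

```python
# items: a list of dataset records as returned by this actor
def jobs_with_salary(items):
    """Keep only jobs where both salary bounds were parsed."""
    return [
        job for job in items
        if job.get("salary_lower_bound") is not None
        and job.get("salary_upper_bound") is not None
    ]

def average_midpoint(items):
    """Average of the salary-range midpoints, or None if no job has a range."""
    salaried = jobs_with_salary(items)
    if not salaried:
        return None
    midpoints = [
        (job["salary_lower_bound"] + job["salary_upper_bound"]) / 2
        for job in salaried
    ]
    return sum(midpoints) / len(midpoints)
```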

Usage Examples

Basic Search

Collect up to 20 software engineering jobs in Berlin:

{
  "jobTitle": "Software Engineer",
  "location": "Berlin",
  "countryCode": "de",
  "results_wanted": 20
}

Multi-Page Collection

Collect a larger dataset for analytics:

{
  "jobTitle": "Data Analyst",
  "location": "Hamburg",
  "countryCode": "de",
  "results_wanted": 200,
  "max_pages": 30
}

Filtered Search with Proxy

Narrow by employment and career level:

{
  "jobTitle": "Cloud Engineer",
  "location": "München",
  "countryCode": "de",
  "employmentType": "vollzeit",
  "careerLevel": "berufserfahren",
  "results_wanted": 100,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Sample Output

{
  "id": "ac894946-c351-421b-b991-2069fc20576c",
  "title": "Software Engineer (w|m|d)",
  "company": "ADAC Zentrale",
  "company_url": "https://www.kununu.com/de/adac",
  "location": "München, Bayern",
  "employment_type": "JOB_EMPLOYMENT_FULLTIME",
  "salary": "46,600 - 94,300 EUR",
  "date_posted": "2026-02-06",
  "valid_through": "2026-03-06",
  "description_type": "html",
  "description_html": "<div><h1>...</h1></div>",
  "description_text": "Full cleaned job description text...",
  "url": "https://www.kununu.com/de/job/ac894946-c351-421b-b991-2069fc20576c",
  "search_page": 1
}

Tips for Best Results

Start with a Focused Query

Use specific keywords and a single location first. Broader searches can be expanded after a baseline run.

Scale Gradually

Start with results_wanted around 20-100 for validation, then increase for production jobs and reporting pipelines.

Use Residential Proxy for Stability

Enable residential proxy for higher consistency on larger runs and scheduled workloads.

Keep max_pages Practical

Use max_pages as a safety limit to control runtime and avoid unnecessary requests.

Use Scheduled Runs

Schedule daily or weekly runs to maintain a fresh job dataset and detect market changes early.


Integrations

Connect this actor with:

  • Google Sheets — Track jobs in a collaborative spreadsheet.
  • Airtable — Build searchable tables for sourcing and research.
  • Slack — Notify channels when new matching jobs appear.
  • Make — Create low-code automation pipelines.
  • Zapier — Trigger downstream actions across business apps.
  • Webhooks — Send results to custom APIs and internal systems.
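
For the Webhooks route above, a minimal receiver sketch. It assumes Apify's run-succeeded webhook POSTs a JSON payload with the run object nested under a resource key (verify the exact payload shape in the Apify webhook documentation):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApifyWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON payload sent by the webhook
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Assumed shape: run details live under "resource"
        dataset_id = payload.get("resource", {}).get("defaultDatasetId")
        print("Actor run finished; dataset to fetch:", dataset_id)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Listen for webhook deliveries on port 8080
    HTTPServer(("0.0.0.0", 8080), ApifyWebhookHandler).serve_forever()
```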

Export Formats

  • JSON — Best for APIs and custom data processing.
  • CSV — Ideal for spreadsheet workflows.
  • Excel — Business-friendly reporting format.
  • XML — Useful for system integrations requiring XML feeds.
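
All of these formats are served by the Apify API's dataset items endpoint via its format query parameter. A sketch for downloading a finished run's dataset as CSV (the dataset ID and token are placeholders):

```python
import urllib.request

DATASET_ID = "<DATASET_ID>"  # defaultDatasetId of a finished run
TOKEN = "<YOUR_API_TOKEN>"

# The same endpoint also accepts format=json, xlsx, or xml
url = (
    f"https://api.apify.com/v2/datasets/{DATASET_ID}/items"
    f"?format=csv&token={TOKEN}"
)
with urllib.request.urlopen(url) as resp:
    with open("kununu_jobs.csv", "wb") as f:
        f.write(resp.read())
```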

Frequently Asked Questions

How many jobs can I collect?

You can collect large datasets by increasing results_wanted and max_pages. Practical limits depend on query size and runtime constraints.

Can I scrape multiple countries?

Yes. Use countryCode with de, at, or ch and run separate jobs for each country when needed.
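
Since each run targets one country, a short sketch that collects all three markets in sequence (assuming the same apify-client setup and "<ACTOR_ID>" placeholder as in the input example above):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_API_TOKEN>")

all_jobs = []
for country in ["de", "at", "ch"]:
    # One run per country, then merge the datasets locally
    run = client.actor("<ACTOR_ID>").call(run_input={
        "jobTitle": "Data Analyst",
        "countryCode": country,
        "results_wanted": 100,
    })
    all_jobs.extend(client.dataset(run["defaultDatasetId"]).iterate_items())

print(f"Collected {len(all_jobs)} jobs across de, at, and ch")
```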

Will descriptions always be available?

Most jobs include descriptions, but some postings may provide partial fields depending on source availability.

Why are some fields null?

Some employers or postings do not publish every field (for example salary or exact address values).

Can I automate daily tracking?

Yes. Use Apify schedules to run this actor automatically and compare new records over time.
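
One simple way to compare records over time is to diff job ids against a snapshot from the previous run. A sketch, assuming the latest items are already loaded and the snapshot lives in a local JSON file:

```python
import json

def new_jobs(latest_items, snapshot_path="previous_jobs.json"):
    """Return items whose id was not present in the previous snapshot."""
    try:
        with open(snapshot_path) as f:
            seen = {job["id"] for job in json.load(f)}
    except FileNotFoundError:
        seen = set()  # first run: every job counts as new
    fresh = [job for job in latest_items if job["id"] not in seen]
    # Persist the full latest list for the next comparison
    with open(snapshot_path, "w") as f:
        json.dump(latest_items, f)
    return fresh
```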

How do I avoid blocked runs?

Use Apify residential proxies and keep filters focused for improved stability on high-volume runs.


Support

For issues or feature requests, open the actor in Apify Console and use the support/contact options available there.



This actor is intended for legitimate data collection and analytics workflows. You are responsible for using it in compliance with applicable laws, platform terms, and data protection requirements.