Foerderdatenbank.de Scraper

Pricing: $39.00/month + usage


Extract, monitor, and enrich German funding programs with the Foerderdatenbank.de Scraper. Get structured, up-to-date data on grants, providers, contacts, and eligibility from federal and state levels for research, CRM, or dashboards.


Rating: 5.0 (1)

Developer: Lexis Solutions (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 5 days ago


Welcome to the Foerderdatenbank.de Scraper! This actor gathers structured funding program data from the official Foerderdatenbank, including program metadata, providers, contacts, and external resources. It is built with Apify Actors and Crawlee (Cheerio) to paginate search results and deliver clean JSON.

Introduction

The scraper uses the expert search form on Foerderdatenbank.de, follows listing pages, and extracts detail pages. It normalizes contact info, provider links, and descriptive sections so you can monitor German funding programs across federal and state levels.

Use Cases

  • Funding intelligence and monitoring for federal and state programs
  • CRM enrichment for grant advisory or consulting teams
  • Alerts and dashboards for newly published or updated programs
  • Research on eligibility, regions, and provider levels

Input

Provide either search parameters or direct URLs:

  • startUrls (array, optional): Listing or detail URLs on foerderdatenbank.de. Off-domain links are ignored.
  • query (string, optional): Search text for the expert form. Required if startUrls is empty.
  • maxItems (integer, optional): Maximum items to collect.
  • page (integer, optional): Starting search page (1-based).
  • sort (enum, optional): title or date.
  • sortType (enum, optional): asc or desc (default: desc).
  • fundingArea (array, optional): Region codes such as _bundesweit, berlin, bayern, baden_wuerttemberg, etc. Accepts multiple values.
  • grwProgram (array, optional): Values like grw_foerderung, keine_grw_foerderung.
  • supportArea (array, optional): Domain focus such as digitalisierung, forschung_innovation_themenoffen, energieeffizienz_erneuerbare_energien, etc.
  • eligibleEntity (array, optional): unternehmen, privatperson, kommune, hochschule, bildungseinrichtung, etc.
  • fundingType (array, optional): Options such as beteiligung+darlehen, beteiligung+zuschuss, beteiligung+buergschaft.
  • fundingProviderLevel (array, optional): bund, bund+eu, bund+land.
  • companySize (array, optional): Combinations like grosses_unternehmen, grosses_unternehmen+kleines_unternehmen, grosses_unternehmen+mittleres_unternehmen, etc.
  • documentType (array, optional): ContactPoint, ContactPoint+FundingProgram, ContactPoint+PressRelease, ContactPoint+Publication.
  • filterCategories (array, optional): Defaults are applied when omitted; accepts FundingProgram, FundingOrganisation, in_den_weiteren_inhalten.
  • proxyConfiguration (object, optional): Standard Apify proxy settings.

Notes:

  • If both query and startUrls are provided, the scraper queues all valid URLs and also runs the search.
  • Pagination is automatic; the crawl stops when maxItems is reached or when no more pages are found.
  • Proxy is strongly recommended: the site applies IP blocking, so use Apify Proxy or your own proxies to avoid interruptions.
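For a query-based run, the entry point is the expert-search form shown in the Start URLs example. As a minimal sketch, such a URL could be assembled like this; `templateQueryString` is the only query parameter documented in this README, and any further filter parameters the actor sends are not shown here:

```python
from urllib.parse import urlencode

# Expert-search endpoint as it appears in the Start URLs example.
BASE = (
    "https://www.foerderdatenbank.de/SiteGlobals/FDB/Forms/Suche/"
    "Expertensuche_Formular.html"
)

def build_search_url(query: str) -> str:
    """Build a query-based expert-search URL (illustrative sketch only)."""
    return f"{BASE}?{urlencode({'templateQueryString': query})}"
```

In practice you would pass `query` in the actor input and let the scraper build these URLs itself; the sketch only shows where the search text ends up.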

Input Examples

Query-based search

{
  "query": "digitalisierung",
  "page": 2,
  "maxItems": 25,
  "sort": "date",
  "sortType": "desc",
  "fundingArea": ["berlin", "bayern", "_bundesweit"],
  "grwProgram": ["grw_foerderung"],
  "supportArea": ["digitalisierung", "energieeffizienz_erneuerbare_energien"],
  "eligibleEntity": ["unternehmen", "kommune"],
  "fundingType": ["beteiligung+darlehen", "beteiligung+zuschuss"],
  "fundingProviderLevel": ["bund", "bund+land"],
  "companySize": ["grosses_unternehmen+mittleres_unternehmen"],
  "documentType": ["ContactPoint", "ContactPoint+FundingProgram"],
  "filterCategories": [
    "FundingProgram",
    "FundingOrganisation",
    "in_den_weiteren_inhalten"
  ],
  "proxyConfiguration": { "useApifyProxy": true }
}

Start URLs (listing + detail)

{
  "startUrls": [
    { "url": "https://www.foerderdatenbank.de/SiteGlobals/FDB/Forms/Suche/Expertensuche_Formular.html?templateQueryString=digital" }, // listing
    { "url": "https://www.foerderdatenbank.de/content/12345" } // detail
  ],
  "maxItems": 50,
  "proxyConfiguration": { "useApifyProxy": false }
}
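The input rules above (off-domain start URLs are ignored; a query is required when no usable start URL remains) can be mirrored in a small pre-flight check. `validate_input` is a hypothetical helper for your own pipeline, not part of the actor:

```python
from urllib.parse import urlparse

# Hosts the scraper stays on, per the "Off-domain links are ignored" rule.
ALLOWED_HOSTS = {"www.foerderdatenbank.de", "foerderdatenbank.de"}

def validate_input(actor_input: dict) -> list:
    """Return the on-domain start URLs; raise if neither a query nor any
    usable start URL is provided (hypothetical helper, not the actor's code)."""
    urls = [
        entry["url"]
        for entry in actor_input.get("startUrls", [])
        if urlparse(entry["url"]).hostname in ALLOWED_HOSTS
    ]
    if not urls and not actor_input.get("query"):
        raise ValueError(
            "Provide 'query' or at least one foerderdatenbank.de URL in 'startUrls'."
        )
    return urls
```

Running such a check locally before starting a run avoids paying for an actor run that would immediately fail input validation.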

Output

{
  "title": "Digitalisierung von Produktionsprozessen",
  "url": "https://www.foerderdatenbank.de/content/12345",
  "pageType": "Foerderprogramm",
  "websiteText": "Zur Programmseite",
  "websiteUrl": "https://example-provider.de/programme/xyz",
  "contactName": "Referat Digitalisierung",
  "address": "Invalidenstrasse 44",
  "locality": "10115 Berlin",
  "email": "mailto:info@example-provider.de",
  "telephone": "+49 30 1234567",
  "fax": "+49 30 1234568",
  "externalLinks": [
    {
      "name": "Programm PDF",
      "url": "https://example-provider.de/docs/xyz.pdf"
    }
  ],
  "fundingProvider": [
    {
      "name": "Bundesministerium Beispiel",
      "url": "https://www.foerderdatenbank.de/content/foerdergeber/1001"
    }
  ],
  "contactPoint": [
    {
      "name": "Projekttraeger Beispiel GmbH",
      "url": "https://www.foerderdatenbank.de/content/foerdergeber/2002"
    }
  ],
  "othersData": {
    "foerderart": "Zuschuss",
    "antragsteller": "Unternehmen",
    "foerdergebiet": "bundesweit"
  },
  "descriptions": {
    "beschreibung": "Foerderung der digitalen Transformation in Produktionsbetrieben.",
    "bedingungen": "Keine Rueckforderung bei fristgerechtem Nachweis."
  }
}
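Note that email keeps the page's raw mailto: target and related entities arrive as name/url objects, so a little post-processing is common before CRM import. A sketch over the sample record above (field choices here are illustrative):

```python
def clean_record(item: dict) -> dict:
    """Flatten a scraped record for downstream use (illustrative sketch)."""
    return {
        "title": item.get("title"),
        # The scraper emits the raw 'mailto:' link target; strip the scheme.
        "email": item.get("email", "").removeprefix("mailto:"),
        "providers": [p["name"] for p in item.get("fundingProvider", [])],
        "region": item.get("othersData", {}).get("foerdergebiet"),
    }
```

`othersData` and `descriptions` are dynamic, so guard any key lookup there with `.get()` as above rather than assuming a fixed schema.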

Why use the Foerderdatenbank.de Scraper?

  • Fast: Expert-search URL builder and automatic pagination
  • Structured: Normalized contacts, providers, and dynamic sections
  • Flexible: Works with query searches or direct URLs
  • Reliable: Built on Apify Actors and Crawlee with retries
  • Coverage: Supports federal and state program filters

How it works

  • Validates URLs to stay on foerderdatenbank.de and skips off-domain links.
  • Builds expert-search URLs with your filters (region, provider level, program type, etc.).
  • Paginates result lists automatically and respects maxItems.
  • Extracts detail pages into consistent objects, including dynamic sections stored in othersData and descriptions.
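The pagination step can be pictured as a loop that drains listing pages until maxItems is hit or no next page remains. `collect_items` is a simplified stand-in for the crawler's enqueue logic, not the actor's actual implementation:

```python
from typing import Iterable, List, Optional

def collect_items(pages: Iterable[list], max_items: Optional[int] = None) -> List[dict]:
    """Gather items page by page; stop at max_items or when pages run out."""
    collected: List[dict] = []
    for page in pages:          # each page is one listing of result items
        for item in page:
            collected.append(item)
            if max_items is not None and len(collected) >= max_items:
                return collected  # cap reached mid-page: stop immediately
    return collected              # iterator exhausted: no "next" link left
```

This is why a run with a small maxItems finishes quickly even against a broad query: the crawl stops as soon as the cap is reached rather than exhausting every results page.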

FAQ

  • How many items can it extract?
    Use maxItems to cap results. The crawler stops at the cap or when no more pages remain.
  • Can I start from specific program URLs?
    Yes. Add them to startUrls; listing pages enqueue detail pages automatically.
  • Do I need proxies?
    They are strongly recommended: the site applies IP blocking, so enable Apify Proxy (or your own proxies) via proxyConfiguration to avoid interruptions.

Need help or a custom data pipeline? Lexis Solutions is an Apify Partner. Contact us at scraping@lexis.solutions or via LinkedIn.

Support Our Work

If this scraper helps you, we'd appreciate a quick review on our Apify partner page and on the scrapers you use.