Arbeitsagentur Scraper

Extract job listings from arbeitsagentur.de — Germany's official public employment portal with 1M+ listings. Structured data with location, salary type, contract type, remote options.

Pricing: from $2.00 / 1,000 results
Developer: Black Falcon Data (Maintained by Community)

🔍 What is Arbeitsagentur Scraper?

Arbeitsagentur Scraper extracts structured job listings from arbeitsagentur.de, including salary information, contact and application details, company metadata, full descriptions, and geo-ready location fields. The input is built around keyword search, location filters, and controllable result limits, so you can rerun the same search consistently over time.

arbeitsagentur.de is a public job platform, but it does not provide the structured export most teams need for recurring data workflows. This actor bridges that gap by turning listings into clean JSON with direct API access and a schema that is easy to reuse in dashboards, enrichment pipelines, and agent workflows.

🎯 What you can do with this actor

  • Use geo-ready listing data for regional analysis, location clustering, or map-based downstream workflows.
  • Feed compact listing data into AI agents, MCP tools, and ranking workflows without carrying full raw payloads every time.
  • Start with lightweight search runs, then enable detail enrichment only when you need deeper company or listing context.

✨ Why choose this actor?

| Feature | This actor | Typical alternatives |
|---|---|---|
| Geo-ready output | Includes structured location fields for regional analysis | Often location text only |
| Collection strategy | Can stay lightweight or add enrichment only when needed | Often fixed to one scraping mode |
| AI-agent usability | Compact output mode for smaller, more controllable payloads | Often full payload only |
| Schema quality | Keeps salary, contact, company, description, and location fields in a consistent output shape | Often inconsistent across runs |

🚀 Quick start

Basic search:

```json
{
  "query": "Software Developer",
  "location": "Berlin",
  "maxResults": 50,
  "includeDetails": true,
  "mode": "full",
  "includeContact": false,
  "radius": 0,
  "contractType": "",
  "jobType": "",
  "workType": "",
  "bundesland": "",
  "remoteOnly": false,
  "publishedSince": 0,
  "includeTemporaryWork": true,
  "compact": false,
  "descriptionMaxLength": 0
}
```

With enrichment (as the basic search, but with contact enrichment enabled; unlisted fields keep their defaults):

```json
{
  "query": "Software Developer",
  "location": "Berlin",
  "maxResults": 50,
  "includeDetails": true,
  "includeContact": true
}
```

Incremental monitoring (only the mode changes; unlisted fields keep their defaults):

```json
{
  "query": "Software Developer",
  "location": "Berlin",
  "maxResults": 50,
  "includeDetails": true,
  "mode": "incremental"
}
```

📊 Sample output

The values below are schema placeholders showing the full output shape; real runs populate them from the listing.

```json
{
  "referenceId": null,
  "title": null,
  "employer": null,
  "occupation": null,
  "allOccupations": [],
  "location": null,
  "postalCode": null,
  "region": null,
  "country": null,
  "lat": 0,
  "lng": 0,
  "isFullTime": false,
  "isPartTime": false,
  "isPartTimeMorning": false,
  "isPartTimeAfternoon": false,
  "isPartTimeEvening": false,
  "isNightOrWeekendShift": false,
  "isMiniJob": false,
  "isRemote": false,
  "remoteType": null,
  "contractType": null,
  "contractDurationMonths": 0,
  "salary": null,
  "startDate": null,
  "publishedDate": null,
  "firstPublishedDate": null,
  "modifiedDate": null,
  "isCareerChange": false,
  "isTemporaryStaffing": false,
  "isDisabilityFriendly": false,
  "cipherNumber": null,
  "externalUrl": "https://arbeitsagentur.de",
  "portalUrl": "https://arbeitsagentur.de",
  "distanceKm": 0,
  "description": null,
  "allianzPartnerName": null,
  "allianzPartnerUrl": "https://arbeitsagentur.de",
  "employerDescription": null,
  "employerWebsite": null,
  "employerSize": null,
  "employerFoundedYear": 0,
  "employerHQ": null,
  "employerBenefits": [],
  "employerSocialMedia": [],
  "employerContactInfo": null,
  "contactName": null,
  "contactEmail": null,
  "contactPhone": null,
  "employerAddress": null,
  "applyUrl": "https://arbeitsagentur.de",
  "applyMethod": null,
  "scrapedAt": null
}
```
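Once exported, the dataset is plain JSON, so records with the field names shown above can be post-processed in a few lines of Python. A sketch using hypothetical sample records, not output from a real run:

```python
# Hypothetical sample records using field names from the schema above;
# a real run would populate these from arbeitsagentur.de listings.
records = [
    {"title": "Backend Developer", "region": "Berlin", "isRemote": True, "isFullTime": True},
    {"title": "Pflegekraft", "region": "Bayern", "isRemote": False, "isFullTime": True},
    {"title": "Werkstudent IT", "region": "Berlin", "isRemote": True, "isFullTime": False},
]

# Keep only remote-friendly listings, then count them per federal state.
remote = [r for r in records if r["isRemote"]]
per_region = {}
for r in remote:
    per_region[r["region"]] = per_region.get(r["region"], 0) + 1

print(per_region)  # {'Berlin': 2}
```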

⚙️ Input reference

Search

| Parameter | Type | Default | Description |
|---|---|---|---|
| query | string | – | Job title or keyword (e.g., 'Software Developer', 'Krankenpfleger') |
| location | string | – | City or region (e.g., 'Berlin', 'München', 'Hamburg') |
| maxResults | integer | 50 | Maximum number of job listings to return |

Mode

| Parameter | Type | Default | Description |
|---|---|---|---|
| mode | enum | "full" | Full mode returns all matching jobs. Incremental mode returns only new or modified jobs since the last run — ideal for scheduled monitoring. |
| includeContact | boolean | false | Fetch contact person name, email, phone, and application URL for each listing. Slower due to additional verification per job. Requires the CAPSOLVER_API_KEY environment variable. |
| includeDetails | boolean | false | Fetch full job description, employer profile, and benefits for each listing. Slower but richer data. |

Filters

| Parameter | Type | Default | Description |
|---|---|---|---|
| radius | integer | 0 | Radius around location in km |
| contractType | enum | "" | Filter by contract duration |
| jobType | enum | "" | Filter by listing type |
| workType | enum | "" | Filter by working hours |
| bundesland | enum | "" | Filter by German federal state (client-side post-fetch; may fetch many API pages for sparse state/broad query combinations, so combine with a location for best performance) |
| remoteOnly | boolean | false | Only show jobs with a home office option |
| publishedSince | integer | 0 | Only show jobs published within the last N days. 0 = any time, 1 = last 24 h, 7 = last week, 30 = last month. |
| includeTemporaryWork | boolean | true | Include jobs from temporary staffing agencies (Zeitarbeitsfirmen) |
| employer | string | – | Employer name (e.g., 'BMW', 'Deutsche Bahn', 'Robert Bosch GmbH'). Common brand names are resolved to the registered company name where possible. For guaranteed results, use the exact registered name (e.g., 'BMW AG'). |

AI / Agent Output

| Parameter | Type | Default | Description |
|---|---|---|---|
| compact | boolean | false | When true, each result contains only the 10 most essential fields: referenceId, title, employer, location, region, publishedDate, contractType, isRemote, portalUrl, and description. Use this in AI-agent and MCP workflows where token budgets matter — compact output is significantly smaller than full output, reducing context-window consumption and cost. |
| descriptionMaxLength | integer | 0 | Truncate the description field to this many characters, appending '...' if truncated. 0 means no truncation. Use in AI-agent workflows to control context-window usage — 200–500 characters is usually enough for an LLM to classify or summarise a job. Pairs well with compact mode. |

📦 Output fields

Each result can include salary, contact and application, company, description, and location fields, depending on listing content and the enrichment options enabled for the run.

Core fields

| Field | Type | Description |
|---|---|---|
| title | string | Job title |
| occupation | string | Occupation |
| allOccupations | array | All occupations |
| location | string | City |
| postalCode | string | Postal code |
| region | string | Federal state (Bundesland) |
| country | string | Country |
| lat | number | Latitude |
| lng | number | Longitude |
| isFullTime | boolean | Full-time |
| isPartTime | boolean | Part-time (flexible) |
| isPartTimeMorning | boolean | Part-time, morning |
| isPartTimeAfternoon | boolean | Part-time, afternoon |
| isPartTimeEvening | boolean | Part-time, evening |
| isNightOrWeekendShift | boolean | Night/weekend shift |
| isMiniJob | boolean | Mini job |
| isRemote | boolean | Remote/home office |
| remoteType | string | Remote work type |
| contractType | string | Contract type |
| contractDurationMonths | integer | Contract duration in months |
| salary | string | Salary |
| startDate | string | Start date |
| publishedDate | string | Published date |
| firstPublishedDate | string | First published date |
| modifiedDate | string | Last modified date |
| isTemporaryStaffing | boolean | Temporary staffing |
| isDisabilityFriendly | boolean | Disability friendly |
| cipherNumber | string | Cipher number |
| externalUrl | string | External listing URL |
| portalUrl | string | Portal URL |
| distanceKm | number | Distance from the search location in km |
| allianzPartnerName | string | Partner portal name |
| allianzPartnerUrl | string | Partner portal URL |

Detail and enrichment

| Field | Type | Description |
|---|---|---|
| description | string | Full job description (populated when includeDetails is enabled) |

Contact and company

| Field | Type | Description |
|---|---|---|
| employer | string | Employer name |
| employerDescription | string | Employer description |
| employerWebsite | string | Employer website |
| employerSize | string | Employer size |
| employerFoundedYear | integer | Employer founding year |
| employerHQ | string | Employer headquarters |
| employerBenefits | array | Employer benefits |
| employerSocialMedia | array | Employer social media profiles |
| employerContactInfo | string | Employer contact information |
| contactName | string | Contact person name |
| contactEmail | string | Contact email |
| contactPhone | string | Contact phone |
| employerAddress | string | Employer address |
| applyUrl | string | Application URL |
| applyMethod | string | Application method |

Operational fields

| Field | Type | Description |
|---|---|---|
| referenceId | string | Reference number |
| isCareerChange | boolean | Career-change suitable |
| scrapedAt | string | Scrape timestamp |

⚠️ Known limitations

  • Company profile fields depend on source availability and may be limited when the source does not expose employer metadata.
  • Field population rates always depend on the source site itself, so null values are normal for data points the source does not publish on every listing.

💰 How much does it cost to scrape arbeitsagentur.de?

This actor uses pay-per-event pricing, so you pay a small run-start fee and then only for results that are actually emitted.

| Event | Price | When |
|---|---|---|
| actor-start | $0.01 | Each run |
| result | $0.002 | Per emitted record |

Example costs:

| Scenario | Results | Cost |
|---|---|---|
| Quick test | 10 | $0.03 |
| Daily monitor | 50 | $0.11 |
| Full scrape | 500 | $1.01 |
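The example costs follow directly from the two events: one actor-start fee per run plus a per-result fee. A quick sanity check in Python:

```python
ACTOR_START_USD = 0.01   # charged once per run
RESULT_USD = 0.002       # charged per emitted record

def run_cost(results: int) -> float:
    """Estimated cost of a single run emitting `results` records."""
    return round(ACTOR_START_USD + RESULT_USD * results, 2)

for n in (10, 50, 500):
    print(n, run_cost(n))
# 10 -> 0.03, 50 -> 0.11, 500 -> 1.01
```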

💡 Use cases

Recruiting and sourcing

Pull arbeitsagentur.de listings into dashboards, triage queues, or recruiter workflows without re-normalizing the source on every run.

Recurring monitoring

Track only newly posted or changed listings on scheduled runs, which is better suited to alerts and daily pipeline jobs than repeated full exports.

Outreach and hiring-intent research

Use employer, contact, and apply fields to support account research, outreach queues, or company watchlists when the source provides those details.

Salary and market analysis

Track salary ranges, titles, and locations over time to build a more structured view of demand on arbeitsagentur.de.

Geo and regional analysis

Use coordinates, postal data, and structured addresses for regional reporting, mapping, or distance-based filtering in downstream tools.
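As one concrete geo workflow, the lat/lng fields can drive distance-based filtering downstream. A sketch using the standard haversine formula (the coordinates below are approximate city centers, not actor output):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two WGS84 points in kilometres."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

berlin = (52.52, 13.405)
hamburg = (53.55, 9.993)
d = haversine_km(*berlin, *hamburg)
# Berlin to Hamburg is roughly 255 km as the crow flies
```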

🤖 AI-agent and MCP usage

This actor is suitable for AI-agent workflows because the output is structured and the input can intentionally reduce payload size for downstream tools.

  • compact returns a smaller core schema for ranking, classification, and MCP tool calls.
  • Compact output focuses on referenceId, title, employer, location, region, publishedDate, contractType, isRemote, portalUrl, and description.
  • descriptionMaxLength lets you cap description size so larger batches stay practical in model context windows.
Example compact request (unlisted fields keep their defaults):

```json
{
  "query": "Software Developer",
  "location": "Berlin",
  "maxResults": 10,
  "includeDetails": true,
  "compact": true,
  "descriptionMaxLength": 300
}
```
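If you already hold full records, the same token-budget idea can be applied client-side. A sketch mirroring the compact and descriptionMaxLength behavior (field list taken from the input reference; this is an illustration, not the actor's code):

```python
# The 10 fields documented for compact mode in the input reference.
COMPACT_FIELDS = [
    "referenceId", "title", "employer", "location", "region",
    "publishedDate", "contractType", "isRemote", "portalUrl", "description",
]

def to_compact(record: dict, description_max_length: int = 300) -> dict:
    """Project a full record onto the compact field set and cap the description."""
    out = {k: record.get(k) for k in COMPACT_FIELDS}
    desc = out.get("description")
    if desc and description_max_length and len(desc) > description_max_length:
        out["description"] = desc[:description_max_length] + "..."
    return out

# Hypothetical full record; extra fields like salary are dropped.
job = {"referenceId": "123", "title": "DevOps Engineer",
       "description": "x" * 1000, "salary": "3500 EUR"}
slim = to_compact(job)
# slim keeps exactly the 10 compact keys; description is truncated to 303 chars
```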

🔄 Incremental mode

Incremental mode is intended for repeated monitoring runs where only new or changed listings should be emitted.

| Change type | Meaning |
|---|---|
| NEW | First time seen in the monitored result set |
| CHANGED | Previously seen listing with updated content |
| UNCHANGED | Same listing and content as a prior run (when unchanged emission is enabled) |
| EXPIRED | Listing disappeared from the monitored result set (when expired emission is enabled) |
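The semantics in the table can be reproduced with a snapshot diff keyed on referenceId plus a content hash; this is a client-side illustration of the idea, not the actor's implementation:

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Stable content hash of a listing, ignoring key order."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def classify(previous: dict, current: dict) -> dict:
    """Map referenceId -> NEW / CHANGED / UNCHANGED / EXPIRED.

    `previous` maps referenceId -> fingerprint from the last run;
    `current` maps referenceId -> full record from this run.
    """
    changes = {}
    for ref, rec in current.items():
        if ref not in previous:
            changes[ref] = "NEW"
        elif fingerprint(rec) != previous[ref]:
            changes[ref] = "CHANGED"
        else:
            changes[ref] = "UNCHANGED"
    for ref in previous:
        if ref not in current:
            changes[ref] = "EXPIRED"
    return changes

prev = {"a": fingerprint({"title": "Dev"}), "b": fingerprint({"title": "Ops"})}
curr = {"a": {"title": "Dev"}, "b": {"title": "Senior Ops"}, "c": {"title": "QA"}}
print(classify(prev, curr))  # {'a': 'UNCHANGED', 'b': 'CHANGED', 'c': 'NEW'}
```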

📖 How to scrape arbeitsagentur.de

  1. Open the actor in Apify Console and review the input schema.
  2. Enter your search query and location settings, then set maxResults for the amount of data you need.
  3. Enable optional enrichment fields only when you need richer output such as descriptions, contacts, or company data.
  4. Run the actor and export the dataset as JSON, CSV, or Excel for downstream analysis.

❓ FAQ

What data does this actor return from arbeitsagentur.de?

It returns structured listing records with salary, contact and application, company, description, and location fields, plus the core identifiers and metadata defined in the dataset schema.

Can I fetch full descriptions and detail fields?

Yes. Enable the detail-related input options when you need richer fields such as descriptions, employer metadata, or contact details from the listing detail pages.

Does it support recurring monitoring?

Yes. Incremental mode is built for recurring runs where you only want newly seen or changed listings instead of a full repeat dataset every time.

Is it suitable for AI agents or MCP workflows?

Yes. Compact mode and output-size controls make it easier to use the actor in AI-agent workflows where predictable fields matter more than raw page size.

Why use this actor instead of scraping the site ad hoc?

Because it already handles direct API access, keeps a stable schema, and exposes filters and enrichment options in a form that is easier to automate repeatedly.

This actor is intended for publicly accessible data workflows. Always review the target site terms and your own legal requirements for the way you plan to use the data.