Profession.hu Job Scraper

Scrape profession.hu — Hungary's largest job portal with 50K+ active listings. Extract salary, company ratings, tech stack details, structured benefits, requirements, and full job descriptions. Incremental mode detects new and changed listings.

Pricing: from $2.00 / 1,000 results
Rating: 0.0 (0 reviews)
Developer: Black Falcon Data (Maintained by Community)
Actor stats: 0 bookmarked · 3 total users · 1 monthly active user · last modified 7 hours ago

What does Profession.hu Job Scraper do?

Profession.hu Job Scraper extracts structured job data from profession.hu — Hungary's largest job portal. It pulls full job descriptions in three formats (HTML, Markdown, plain text), structured requirements, responsibilities and skills sections from detail pages, employer benefits, technology-stack tags for IT roles, and the canonical apply URL. It supports keyword search, location and category filters, and runs incremental monitoring across scheduled jobs so each run only emits jobs that are new, updated, or have come back after expiring.

New to Apify? Sign up free and use the included $5 monthly platform credit to test this actor.

Key features

  • 📋 Detail enrichment — every job is hydrated from its detail page with the full description, structured requirements, responsibilities, skills and benefits sections, education and experience requirements, work hours, remote/hybrid type, valid-through date, and the canonical apply URL.
  • 🛠️ Tech-stack extraction — for IT roles profession.hu lists the concrete technologies (e.g. JAVASCRIPT • REACT • POSTGRESQL • DOCKER); these are surfaced in a techStack field for skill-demand analysis and matching.
  • 📝 Triple-format description — description (plain text for AI prompts and search indexing), descriptionHtml (raw HTML preserved), descriptionMarkdown (paragraph-aware Markdown for rendering).
  • 📁 Category filter — narrow searches by Hungarian job category (e.g. it-programozas-fejlesztes, bank-biztositas-broker, mernok, marketing-media-pr); 23 categories supported, unknown slugs are warned and ignored.
  • ♻️ Incremental mode — recurring runs emit only NEW / UPDATED / REAPPEARED records — UNCHANGED and EXPIRED are opt-in. State is automatically scoped per query + filters + result-window so different monitoring profiles never collide. Repost detection flags jobs that come back with the same content under a new ID. Saves 80–95% on daily monitoring.
  • 📌 Change classification — every record carries a changeType of NEW / UPDATED / UNCHANGED / REAPPEARED / EXPIRED, plus firstSeenAt / lastSeenAt / previousSeenAt / expiredAt lifecycle timestamps so you can build dashboards on top of the dataset.
  • 🔔 Notifications — Telegram, Slack, Discord, WhatsApp Cloud API, generic webhook. Pair with incremental + notifyOnlyChanges for daily "new Profession jobs" pings to your hiring channel.
  • 📦 Compact mode — AI-agent and MCP-friendly compact payloads with core fields only — pipe straight into your ATS, salary-benchmarking tool, or LLM context without parsing extras.
  • ✂️ Description truncation — cap description length with descriptionMaxLength to control LLM prompt cost and dataset size.
  • 📤 Export anywhere — Download the dataset as JSON, CSV, or Excel from the Apify Console, or stream live via the Apify API and integrations (Make, Zapier, Google Sheets, n8n, …).
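Fields like techStack and requirements come back as single delimited strings (" • " between tech tags) rather than arrays. A minimal sketch of splitting them into lists for analysis; the helper name is illustrative, not part of the actor:

```python
def parse_tags(value, sep=" • "):
    """Split a delimited tag string (e.g. techStack) into a clean list.

    Unavailable fields arrive as None, so None maps to an empty list.
    """
    if not value:
        return []
    return [tag.strip() for tag in value.split(sep) if tag.strip()]

stack = parse_tags("JAVASCRIPT • REACT • POSTGRESQL • DOCKER")
# stack == ["JAVASCRIPT", "REACT", "POSTGRESQL", "DOCKER"]
```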

What data can you extract from profession.hu?

Each result includes core listing fields (jobId, numericId, title, url, portalUrl, companyName, location, postedAt, and more), structured detail-page sections (description, descriptionHtml, descriptionMarkdown, requirements, responsibilities, skills, benefits, techStack, educationRequirements, experienceRequired, workHours, remoteType, validThrough), apply information (applyUrl, applicationType), company metadata (companyId, companyLogoUrl, companyProfileUrl, companyRating), and incremental lifecycle fields (changeType, firstSeenAt, lastSeenAt, previousSeenAt, expiredAt, contentHash). Where profession.hu publishes a free-form salary string, it's surfaced in salaryText. All fields are always present — unavailable data points are returned as null, never omitted. In compact mode, only core fields are returned.
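Because every field is always present and unavailable values are null, downstream code can index records directly but must still handle None. A small illustrative sketch:

```python
def summarize(job):
    """Build a one-line summary of a job record.

    All fields are present in every record, so plain indexing is safe;
    unavailable values are None and need a fallback.
    """
    salary = job["salaryText"] or "salary not listed"
    return f'{job["title"]} @ {job["companyName"]} ({job["location"]}) - {salary}'

job = {
    "title": "Szoftverfejlesztő / .NET Developer",
    "companyName": "Quadro Byte zrt.",
    "location": "Budapest III.kerület",
    "salaryText": None,
}
print(summarize(job))
# Szoftverfejlesztő / .NET Developer @ Quadro Byte zrt. (Budapest III.kerület) - salary not listed
```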

Input

The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.

Key parameters:

  • query — Job search keywords (e.g. "developer", "marketing", "SAP").
  • location — City or county name (e.g. "Budapest", "Győr-Moson-Sopron"). Leave empty for all of Hungary.
  • locationId — Direct numeric location ID for profession.hu. Overrides the Location field when set. Find IDs from profession.hu URL segments.
  • category — Job category slug to filter by. Examples: "it-programozas-fejlesztes", "bank-biztositas-broker", "marketing-media-pr", "penzugy-konyveles", "mernok". Slugs match the category segment in profession.hu URLs. Unknown slugs are ignored with a warning.
  • employmentType — Filter by employment type.
  • remoteFilter — Filter by work arrangement.
  • experienceLevel — Filter by required experience.
  • maxResults — Maximum number of job listings to return. 0 = unlimited. (default: 25)
  • includeDetails — Fetch each job's detail page for full description, skills, benefits, and more. Slower but much richer data. (default: true)
  • descriptionMaxLength — Truncate description HTML to N characters. 0 = no truncation. (default: 0)
  • compact — Return only core fields (jobId, title, company, location, salary, employment type, URL, posted date). Ideal for AI-agent/MCP workflows. (default: false)
  • incrementalMode — Only output jobs that are new or changed since the previous run. (default: false)
  • ...and 15 more parameters

Input examples

Basic search — Keyword-driven search with a result cap.

→ Full payload per result — all standard fields populated where the source provides them.

{
  "query": "developer",
  "maxResults": 50
}

Filtered search — Narrow results with advanced filters; only matching jobs are returned.

→ Same field set as basic search; fewer, more relevant rows.

{
  "query": "developer",
  "employmentType": "",
  "experienceLevel": "",
  "category": "it-programozas-fejlesztes",
  "maxResults": 100
}

Incremental tracking — Only emit jobs that changed since the previous run with this stateKey.

→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set emitUnchanged: true to include unchanged records as well.

{
  "query": "developer",
  "maxResults": 200,
  "incrementalMode": true,
  "stateKey": "developer-tracker"
}

Compact filtered output — Combine filters with compact mode for a lightweight AI-agent or MCP data source.

→ Core fields only — ideal for piping into LLMs or downstream tools without token overhead.

{
  "query": "developer",
  "employmentType": "",
  "experienceLevel": "",
  "maxResults": 50,
  "compact": true
}

Output

Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.

Example job record

{
  "jobId": "c313a81daaed41a098189447ddf5c292fa23a7706824a1f7c1323df41a8c6fb4",
  "numericId": 2885799,
  "title": "Szoftverfejlesztő / .NET Developer",
  "url": "https://www.profession.hu/allas/szoftverfejleszto-net-developer-quadro-byte-zrt-budapest-2885799/pro",
  "portalUrl": "https://www.profession.hu/allas/szoftverfejleszto-net-developer-quadro-byte-zrt-budapest-2885799",
  "companyName": "Quadro Byte zrt.",
  "companyId": 8410,
  "companyLogoUrl": "https://www.profession.hu/images/logos/8/8410_1505134649.jpg",
  "companyProfileUrl": "https://www.profession.hu/allasok/quadro-byte-zrt/1,0,0,0,0,0,0,0,0,0,8410",
  "companyRating": null,
  "location": "Budapest III.kerület",
  "locationDetail": "Budapest III.kerület",
  "addressRegion": "Budapest",
  "postalCode": "1031",
  "country": "HU",
  "category": "IT programozás, Fejlesztés",
  "subcategory": "Programozó, Fejlesztő",
  "occupationalCategory": "IT programozás, Fejlesztés - Programozó, Fejlesztő",
  "employmentType": "Alkalmazotti jogviszony",
  "experienceRequired": "3-5 év tapasztalat",
  "educationRequirements": "Nem kell nyelvtudás, Egyetem",
  "workHours": "",
  "remoteType": null,
  "salaryText": null,
  "salaryMin": null,
  "salaryMax": null,
  "salaryCurrency": null,
  "salaryType": null,
  "benefits": "Cafeteria, Szakmai tréningek",
  "requirements": "3-5 év tapasztalat • Nem kell nyelvtudás",
  "description": "Főbb feladatok, munkák: Blazor alkalmazások fejlesztése és karbantartásaMeglévő ASP.NET MVC webalkalmazások támogatása, fejlesztéseLegacy rendszerek modernizálásaCsapatmunkában való részvétel Az állás...",
  "descriptionHtml": "<p>Főbb feladatok, munkák:</p>Blazor alkalmazások fejlesztése és karbantartásaMeglévő ASP.NET MVC webalkalmazások támogatása, fejlesztéseLegacy rendszerek modernizálásaCsapatmunkában való részvétel<p>...",
  "descriptionMarkdown": "Főbb feladatok, munkák:\n\nBlazor alkalmazások fejlesztése és karbantartásaMeglévő ASP.NET MVC webalkalmazások támogatása, fejlesztéseLegacy rendszerek modernizálásaCsapatmunkában való részvételAz állás...",
  "responsibilities": "Blazor alkalmazások fejlesztése és karbantartásaMeglévő ASP.NET MVC webalkalmazások támogatása, fejlesztéseLegacy rendszerek modernizálásaCsapatmunkában való részvétel",
  "skills": " ASP.NET MVC tapasztalatC# programozási nyelv magabiztos ismerete2+ év releváns tapasztalat.NET Framework / .NET Core / .NET 6+ ismereteMSSQL Server adatbázis kezelésT-SQL ismerete HTML, CSS, JavaScri...",
  "techStack": "HTML • CSS • AZURE • GIT • JAVASCRIPT • SQL • FRONTEND • ASP.NET MVC • DEVOPS • MSSQL • Blazor",
  "taskSnippet": "Főbb feladatok Blazor alkalmazások fejlesztése és karbantartása Meglévő ASP.NET MVC webalkalmazások támogatása, fejlesztése Legacy rendszerek modernizálása Csapatmunkában való részvétel",
  "postedAt": "2026-04-24T15:07:07+02:00",
  "validThrough": "2026-05-25T13:28:18",
  "applyUrl": "https://www.profession.hu/jelentkezes/2885799",
  "applicationType": "website",
  "contactName": null,
  "contactEmail": null,
  "contactPhone": null,
  "extractedEmails": [],
  "extractedPhones": [],
  "extractedUrls": [],
  "socialProfiles": {
    "linkedin": null,
    "twitter": null,
    "instagram": null,
    "facebook": null,
    "youtube": null,
    "tiktok": null,
    "github": null,
    "xing": null,
    "bluesky": null,
    "threads": null,
    "mastodon": null
  },
  "scrapedAt": "2026-05-04T11:28:16.061Z",
  "contentHash": "021f371b5fa82aef",
  "source": "profession.hu",
  "firstSeenAt": "2026-05-04T11:28:16.061Z",
  "lastSeenAt": "2026-05-04T11:28:16.061Z",
  "previousSeenAt": null,
  "expiredAt": null,
  "changeType": "NEW",
  "isRepost": null,
  "repostOfId": null,
  "repostDetectedAt": null
}

Incremental fields

When incrementalMode: true, each record also carries:

  • changeType — one of NEW, UPDATED, UNCHANGED, REAPPEARED, EXPIRED. Default output covers NEW / UPDATED / REAPPEARED; set emitUnchanged: true or emitExpired: true to opt into the others.
  • firstSeenAt, lastSeenAt — ISO-8601 timestamps tracking the listing across runs.
  • isRepost, repostOfId, repostDetectedAt — populated when a new listing matches the tracked content of a previously expired one. Set skipReposts: true to drop detected reposts from the output.
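For intuition, the classification can be reproduced downstream from contentHash values. This is an illustrative sketch, not the actor's internal logic, and it omits REAPPEARED and repost handling:

```python
def classify(previous_hashes, current_jobs):
    """Classify jobs against the previous run's contentHash map.

    previous_hashes: {jobId: contentHash} captured from the last run.
    Returns {jobId: changeType} for the current batch, plus EXPIRED
    entries for ids that disappeared since the last run.
    """
    changes = {}
    seen = set()
    for job in current_jobs:
        seen.add(job["jobId"])
        old = previous_hashes.get(job["jobId"])
        if old is None:
            changes[job["jobId"]] = "NEW"
        elif old != job["contentHash"]:
            changes[job["jobId"]] = "UPDATED"
        else:
            changes[job["jobId"]] = "UNCHANGED"
    for job_id in previous_hashes:
        if job_id not in seen:
            changes[job_id] = "EXPIRED"
    return changes
```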

How to scrape profession.hu

  1. Go to Profession.hu Job Scraper in Apify Console.
  2. Enter a search keyword and optional location filter.
  3. Set maxResults to control how many results you need.
  4. Keep includeDetails enabled if you need full descriptions and company data.
  5. Click Start and wait for the run to finish.
  6. Export the dataset as JSON, CSV, or Excel.

Use cases

  • Build a daily Hungarian-jobs feed by category (IT, banking, engineering, …) and stream NEW / UPDATED jobs to Slack or Telegram.
  • Monitor job postings on scheduled runs without re-processing the full dataset every time, using incremental mode.
  • Analyse skill demand across IT roles using structured techStack tags and skills sections.
  • Feed enriched job descriptions (Markdown or plain text) into AI agents, MCP tools, and LLM prompts using compact mode and descriptionMaxLength.
  • Auto-apply or feed apply URLs into your ATS / hiring pipeline.
  • Track which employers post most actively in a category, with per-job companyId and company profile URLs.
  • Detect reposts and lifecycle events (REAPPEARED, EXPIRED) for hiring-pattern research.
  • Export structured Hungarian job data to dashboards, spreadsheets, or data warehouses.
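As a sketch of the first use case, here is one way to format NEW / UPDATED records into a plain-text digest for a chat channel. The helper is illustrative; actual delivery can go through the built-in notification options or your own webhook:

```python
def digest(jobs):
    """Format NEW/UPDATED jobs into a short notification message.

    Returns None when there is nothing worth sending.
    """
    wanted = [j for j in jobs if j["changeType"] in ("NEW", "UPDATED")]
    if not wanted:
        return None
    lines = [f"{len(wanted)} job update(s) on profession.hu:"]
    for j in wanted:
        lines.append(f'- [{j["changeType"]}] {j["title"]} ({j["url"]})')
    return "\n".join(lines)
```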

How much does it cost to scrape profession.hu?

Profession.hu Job Scraper uses pay-per-event pricing. You pay a small fee when the run starts and then for each result that is actually produced.

  • Run start: $0.01 per run
  • Per result: $0.002 per job record

Example costs:

  • 10 results: $0.03
  • 100 results: $0.21
  • 500 results: $1.01
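The example costs follow directly from the two event prices; a quick sanity-check in Python:

```python
def run_cost(results, run_start=0.01, per_result=0.002):
    """Estimated cost of one run in USD: start fee plus per-result fee."""
    return round(run_start + per_result * results, 2)

for n in (10, 100, 500):
    print(n, run_cost(n))
# 10 -> 0.03, 100 -> 0.21, 500 -> 1.01
```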

Example: recurring monitoring savings

These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.

Example setup: 100 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.

Churn rate                     Full re-scrape run cost   Incremental run cost   Savings vs full re-scrape   Monthly cost after baseline
5% — stable niche query        $0.21                     $0.02                  $0.19 (90%)                 $0.60
15% — moderate broad query     $0.21                     $0.04                  $0.17 (81%)                 $1.20
30% — high-volume aggregator   $0.21                     $0.07                  $0.14 (67%)                 $2.10

Full re-scrape monthly cost at daily polling: $6.30. First month with incremental costs $0.79 / $1.37 / $2.24 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.
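The first-month figures can be reproduced with a short calculation, assuming 100 results per run and 30 daily runs as above (one full baseline run, then 29 incremental runs that only pay for churned results):

```python
def monthly_cost(results, churn, runs=30, run_start=0.01, per_result=0.002):
    """First-month USD cost with incremental mode enabled.

    Run 1 pays for all results (baseline); the remaining runs pay only
    for the churned share of listings.
    """
    baseline = round(run_start + per_result * results, 2)
    incremental = round(run_start + per_result * round(results * churn), 2)
    return round(baseline + (runs - 1) * incremental, 2)

for churn in (0.05, 0.15, 0.30):
    print(churn, monthly_cost(100, churn))
# 0.05 -> 0.79, 0.15 -> 1.37, 0.3 -> 2.24
```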

FAQ

How many results can I get from profession.hu?

The number of results depends on the search query and available listings on profession.hu. Use the maxResults parameter to control how many results are returned per run.

Does Profession.hu Job Scraper support recurring monitoring?

Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.

Can I integrate Profession.hu Job Scraper with other apps?

Yes. Profession.hu Job Scraper works with Apify's integrations to connect with tools like Zapier, Make, Google Sheets, Slack, and more. You can also use webhooks to trigger actions when a run completes.

Can I use Profession.hu Job Scraper with the Apify API?

Yes. You can start runs, manage inputs, and retrieve results programmatically through the Apify API. Client libraries are available for JavaScript, Python, and other languages.
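As a sketch, one way to call the actor over Apify's REST API using only the Python standard library. The actor path in the URL is an assumption here; check the real actor ID on the actor's page in Apify Console:

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"
# NOTE: illustrative actor path - replace with the actual actor ID
# shown in Apify Console (format: "username~actor-name").
ACTOR = "blackfalcondata~profession-hu-job-scraper"

def build_request(token, run_input):
    """Build (but don't send) a run-sync request that returns dataset items."""
    url = f"{API_BASE}/acts/{ACTOR}/run-sync-get-dataset-items?token={token}"
    return urllib.request.Request(
        url,
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("<APIFY_TOKEN>", {"query": "developer", "maxResults": 50})
# To execute:  items = json.load(urllib.request.urlopen(req))
```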

Can I use Profession.hu Job Scraper through an MCP Server?

Yes. Apify provides an MCP Server that lets AI assistants and agents call this actor directly. Use compact mode and descriptionMaxLength to keep payloads manageable for LLM context windows.

This actor extracts publicly available data from profession.hu. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.

Your feedback

If you have questions, need a feature, or found a bug, please open an issue on the actor's page in Apify Console. Your feedback helps us improve.

Getting started with Apify

New to Apify? Create a free account with $5 credit — no credit card required.

  1. Sign up — $5 platform credit included
  2. Open this actor and configure your input
  3. Click Start — export results as JSON, CSV, or Excel

Need more later? See Apify pricing.