Pracuj.pl Jobs Feed: Poland Job Listings with Salary & Filters
Extract structured job listings from pracuj.pl — Poland's #1 job board. Structured salary parsing, incremental mode, 30+ output fields.
Pricing: from $2.00 / 1,000 results
Developer: Black Falcon Data
🔍 What is Pracuj.pl Jobs Feed?
Extract structured job listings from pracuj.pl — Poland's largest job board with 79,000+ active listings.

pracuj.pl is a publicly accessible platform, but it does not provide the structured export most teams need for recurring data workflows. This actor bridges that gap by turning the site into clean JSON with salary fields, contact and apply details, company metadata, and full descriptions, behind a stable schema that is easy to reuse in dashboards, enrichment pipelines, and agent workflows.
🎯 What you can do with this actor
- Feed compact listing data into AI agents, MCP tools, and ranking workflows without carrying full raw payloads every time.
- Start with lightweight search runs, then enable detail enrichment only when you need deeper company or listing context.
✨ Why choose this actor?
| Feature | This actor | Typical alternatives |
|---|---|---|
| Collection strategy | Can stay lightweight or add enrichment only when needed | Often fixed to one scraping mode |
| AI-agent usability | Compact output mode for smaller, more controllable payloads | Often full payload only |
| Schema quality | Keeps salary fields, contact and apply details, company metadata, and full descriptions in a consistent output shape | Often inconsistent across runs |
| Stable downstream schema | Typed fields with nulls when unavailable | Often requires extra cleanup |
🚀 Quick start
Basic search:

```json
{
  "query": "software engineer",
  "contractType": "",
  "workMode": "",
  "positionLevel": "",
  "maxResults": 50,
  "includeDetails": true,
  "descriptionMaxLength": 0,
  "compact": false,
  "incrementalMode": false
}
```

Incremental monitoring:

```json
{
  "query": "software engineer",
  "contractType": "",
  "workMode": "",
  "positionLevel": "",
  "maxResults": 50,
  "includeDetails": true,
  "descriptionMaxLength": 0,
  "compact": false,
  "incrementalMode": true,
  "stateKey": "daily-monitor"
}
```
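If you prefer to trigger runs from code, the sketch below uses the official `apify-client` Python package. The actor ID and token are placeholders you would copy from the Apify Console; the default values baked into `build_input` come from the input reference table in this README.

```python
def build_input(query: str, **overrides) -> dict:
    """Merge the documented input defaults with caller overrides."""
    base = {
        "query": query,
        "contractType": "",
        "workMode": "",
        "positionLevel": "",
        "maxResults": 50,
        "includeDetails": True,
        "descriptionMaxLength": 0,
        "compact": False,
        "incrementalMode": False,
    }
    base.update(overrides)
    return base


def run_actor(token: str, actor_id: str, run_input: dict) -> list:
    """Run the actor and return its dataset items (requires apify-client)."""
    # Lazy import so build_input stays usable without the SDK installed.
    from apify_client import ApifyClient

    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

For a daily monitor you would call `build_input("software engineer", incrementalMode=True, stateKey="daily-monitor")` and pass the result to `run_actor`.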
📊 Sample output
```json
{
  "jobId": null,
  "offerId": 0,
  "title": null,
  "company": null,
  "companyId": 0,
  "companyProfileUrl": "https://pracuj.pl",
  "location": null,
  "region": null,
  "country": null,
  "salaryText": null,
  "salaryMin": 0,
  "salaryMax": 0,
  "salaryCurrency": null,
  "salaryPeriod": null,
  "employmentType": null,
  "contractTypes": [],
  "workModes": [],
  "workSchedules": [],
  "positionLevels": [],
  "isRemote": false,
  "description": null,
  "technologies": [],
  "responsibilities": null,
  "requirements": null,
  "offered": null,
  "aiSummary": null,
  "applyUrl": "https://pracuj.pl",
  "contactPhone": null,
  "remoteRecruitment": false,
  "isOneClickApply": false,
  "categories": [],
  "url": "https://pracuj.pl",
  "allOfferUrls": [],
  "postedDate": null,
  "expirationDate": null,
  "scrapedAt": null,
  "portalUrl": "https://pracuj.pl",
  "source": null,
  "changeType": null
}
```
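When consuming records downstream, note that missing salary data appears as `0` or `null` in the schema above, so naive arithmetic can silently produce wrong numbers. A small helper, treating both as absent, might look like this:

```python
def salary_midpoint(rec: dict):
    """Midpoint of salaryMin/salaryMax; None when the listing has no salary.
    The schema uses 0 / null for missing values, so treat both as absent."""
    lo = rec.get("salaryMin") or 0
    hi = rec.get("salaryMax") or 0
    if lo and hi:
        return (lo + hi) / 2
    return lo or hi or None
```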
⚙️ Input reference
| Parameter | Type | Default | Description |
|---|---|---|---|
| **Search** | | | |
| `query` | string | — | Job search keywords (e.g. 'software engineer', 'python developer'). |
| `location` | string | — | City or region (e.g. 'Warszawa', 'Kraków', 'Wrocław'). |
| `contractType` | enum | "" | Filter by contract type. |
| `workMode` | enum | "" | Filter by work mode. |
| `positionLevel` | enum | "" | Filter by seniority level. |
| **Limits** | | | |
| `maxResults` | integer | 50 | Maximum total results. 0 = unlimited. |
| **Enrichment** | | | |
| `includeDetails` | boolean | true | Fetch full job details (description, technologies, apply URL). Slower but richer data. |
| `descriptionMaxLength` | integer | 0 | Truncate description to N characters. 0 = no truncation. |
| **Output** | | | |
| `compact` | boolean | false | Return core fields only (for AI-agent/MCP workflows). |
| **Incremental tracking** | | | |
| `incrementalMode` | boolean | false | Only return new or changed listings since the last run. |
| `stateKey` | string | — | Unique identifier for this tracked universe. Different stateKeys maintain independent state. |
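If you generate inputs programmatically, a lightweight pre-flight check against the documented types can catch typos before a run is billed. The schema dictionary below is our own transcription of the table above, not an official validator:

```python
# Documented parameter names and types, transcribed from the input table.
SCHEMA = {
    "query": str, "location": str, "contractType": str, "workMode": str,
    "positionLevel": str, "maxResults": int, "includeDetails": bool,
    "descriptionMaxLength": int, "compact": bool, "incrementalMode": bool,
    "stateKey": str,
}


def validate_input(run_input: dict) -> list:
    """Return a list of problems; an empty list means the input matches."""
    errors = []
    for key, val in run_input.items():
        if key not in SCHEMA:
            errors.append("unknown parameter: " + key)
        # type() rather than isinstance() so bool is not accepted as int.
        elif type(val) is not SCHEMA[key]:
            errors.append(key + ": expected " + SCHEMA[key].__name__)
    return errors
```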
📦 Output fields
Each result can include salary fields, contact and apply details, company metadata, and full descriptions, depending on listing content and the enrichment options enabled for the run.
Core fields
| Field | Type | Description |
|---|---|---|
| title | string | Job title |
| location | string | Listing location |
| region | string | Region |
| country | string | Country |
| salaryText | string | Raw salary text |
| salaryMin | integer | Parsed minimum salary |
| salaryMax | integer | Parsed maximum salary |
| salaryCurrency | string | Salary currency |
| salaryPeriod | string | Salary period |
| employmentType | string | Employment type |
| contractTypes | array | Contract types |
| workModes | array | Work modes |
| workSchedules | array | Work schedules |
| positionLevels | array | Position levels |
| isRemote | boolean | Whether remote work is allowed |
| technologies | array | Technologies |
| responsibilities | string | Responsibilities |
| requirements | string | Requirements |
| offered | string | What the employer offers |
| remoteRecruitment | boolean | Whether recruitment is remote |
| categories | array | Categories |
| url | string | Listing URL |
| allOfferUrls | array | All offer URLs |
| postedDate | string | Posted date |
| expirationDate | string | Expiration date |
| portalUrl | string | Portal URL |
| changeType | string | Change type (incremental mode) |

Detail and enrichment
| Field | Type | Description |
|---|---|---|
| description | string | Full job description |
| aiSummary | string | AI summary |

Contact and company
| Field | Type | Description |
|---|---|---|
| company | string | Company name |
| companyProfileUrl | string | Company profile URL |
| applyUrl | string | Apply URL |
| contactPhone | string | Contact phone |
| isOneClickApply | boolean | Whether one-click apply is available |

Operational fields
| Field | Type | Description |
|---|---|---|
| jobId | string | Job ID |
| offerId | integer | Offer ID |
| companyId | integer | Company ID |
| scrapedAt | string | Scrape timestamp |
| source | string | Source |
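Records mix scalars with array fields (`contractTypes`, `workModes`, `technologies`, and so on), which trips up flat exports. One way to make a record fit a single CSV or spreadsheet row, shown here as an illustration rather than actor behavior, is to join arrays into delimited strings:

```python
def flatten(rec: dict) -> dict:
    """Join array fields into semicolon-separated strings so a record
    fits one flat row; all other values pass through unchanged."""
    return {
        k: "; ".join(map(str, v)) if isinstance(v, list) else v
        for k, v in rec.items()
    }
```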
⚠️ Known limitations
- Contact information is only returned when the source exposes it directly; many listings will still rely on apply URLs rather than named contacts.
- Company profile fields depend on source availability and may be limited for portals that do not expose employer metadata.
- Field population rates always depend on the source site itself, so null values are normal for data points the source does not publish on every listing.
💰 How much does it cost to scrape pracuj.pl jobs?
This actor uses pay-per-event pricing, so you pay a small run-start fee and then only for results that are actually emitted.
| Event | Price | When |
|---|---|---|
| actor-start | $0.01 | Each run |
| result | $0.002 | Per emitted record |
Example costs:
| Scenario | Results | Cost |
|---|---|---|
| Quick test | 10 | $0.03 |
| Daily monitor | 50 | $0.11 |
| Full scrape | 500 | $1.01 |
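The example figures above follow directly from the two event prices: a flat $0.01 per run plus $0.002 per emitted record. A small estimator makes it easy to budget scheduled jobs:

```python
ACTOR_START = 0.01   # flat fee charged once per run
PER_RESULT = 0.002   # charged per emitted record


def run_cost(results: int, runs: int = 1) -> float:
    """Estimated cost in USD for `runs` runs emitting `results` records total."""
    return round(runs * ACTOR_START + results * PER_RESULT, 2)
```

For instance, a daily monitor emitting 50 records per run costs about `run_cost(50 * 30, runs=30)` = $3.30 per month.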
💡 Use cases
Recruiting and sourcing
Pull pracuj.pl listings into dashboards, triage queues, or recruiter workflows without re-normalizing the source on every run.
Recurring monitoring
Track only newly posted or changed listings on scheduled runs, which is better suited to alerts and daily pipeline jobs than repeated full exports.
Outreach and hiring-intent research
Use employer, contact, and apply fields to support account research, outreach queues, or company watchlists when the source provides those details.
Salary and market analysis
Track salary ranges, titles, and locations over time to build a more structured view of demand on pracuj.pl.
AI-agent and MCP workflows
Feed compact listing data into ranking, summarization, classification, or agent pipelines without burning unnecessary context on large descriptions.
🤖 AI-agent and MCP usage
This actor is suitable for AI-agent workflows because the output is structured and the input can intentionally reduce payload size for downstream tools.
- `compact` returns a smaller core schema for ranking, classification, and MCP tool calls.
- `descriptionMaxLength` lets you cap description size so larger batches stay practical in model context windows.
```json
{
  "query": "software engineer",
  "contractType": "",
  "workMode": "",
  "positionLevel": "",
  "maxResults": 10,
  "includeDetails": true,
  "descriptionMaxLength": 300,
  "compact": true,
  "incrementalMode": false
}
```
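If you receive full records but still need to shrink them before handing them to a model, the same trimming can be done client-side. The field list below is our own choice of ranking-relevant fields, not the actor's exact compact schema:

```python
def compact_for_context(rec: dict, max_desc: int = 300) -> dict:
    """Client-side trim mirroring descriptionMaxLength: cap the description
    and keep only fields an agent typically ranks on."""
    keep = ("title", "company", "location", "salaryText", "workModes", "url")
    out = {k: rec.get(k) for k in keep}
    desc = rec.get("description") or ""
    out["description"] = desc[:max_desc]
    return out
```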
🔄 Incremental mode
Incremental mode is intended for repeated monitoring runs where only new or changed listings should be emitted.
| Change type | Meaning |
|---|---|
| NEW | First time seen in the monitored result set |
| CHANGED | Previously seen listing with updated content |
| UNCHANGED | Same listing and content as a prior run (only emitted when unchanged emission is enabled) |
| EXPIRED | Listing disappeared from the monitored result set (only emitted when expired emission is enabled) |
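Conceptually, this kind of change detection can be implemented by fingerprinting the content fields of each listing and comparing against the previous run's state. The sketch below illustrates the idea; the field selection and keying by `url` are our assumptions, not the actor's internal diffing logic:

```python
import hashlib
import json


def fingerprint(rec: dict) -> str:
    """Hash the content fields that matter for change detection."""
    payload = {k: rec.get(k)
               for k in ("title", "salaryText", "description", "expirationDate")}
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def classify(rec: dict, previous: dict) -> str:
    """Label a record against a {url: fingerprint} map from the prior run."""
    key = rec.get("url")
    if key not in previous:
        return "NEW"
    return "UNCHANGED" if previous[key] == fingerprint(rec) else "CHANGED"
```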
📖 How to scrape pracuj.pl jobs
- Open the actor in Apify Console and review the input schema.
- Enter your search query and location settings, then set `maxResults` for the amount of data you need.
- Enable optional enrichment fields only when you need richer output such as descriptions, contacts, or company data.
- Run the actor and export the dataset as JSON, CSV, or Excel for downstream analysis.
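The Console export already produces CSV and Excel; if you instead fetch the dataset as JSON and want CSV in your own pipeline, a minimal conversion using only the standard library could look like this:

```python
import csv
import io


def to_csv(records: list) -> str:
    """Render a list of flat record dicts as CSV text."""
    if not records:
        return ""
    buf = io.StringIO()
    # Union of keys across records, in first-seen order, so rows stay aligned
    # even when some listings lack optional fields.
    fields = list(dict.fromkeys(k for r in records for k in r))
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for r in records:
        writer.writerow({k: r.get(k, "") for k in fields})
    return buf.getvalue()
```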
❓ FAQ
What data does this actor return from pracuj.pl?
It returns structured listing records with salary fields, contact and apply details, company metadata, and full descriptions, plus the core identifiers and run metadata defined in the dataset schema.
Can I fetch full descriptions and detail fields?
Yes. Enable the detail-related input options when you need richer fields such as descriptions, employer metadata, or contact details from the listing detail pages.
Does it support recurring monitoring?
Yes. Incremental mode is built for recurring runs where you only want newly seen or changed listings instead of a full repeat dataset every time.
Is it suitable for AI agents or MCP workflows?
Yes. Compact mode and output-size controls make it easier to use the actor in AI-agent workflows where predictable fields matter more than raw page size.
Why use this actor instead of scraping the site ad hoc?
Because it already handles repeatable source access, keeps a stable schema, and exposes filters and enrichment options in a form that is easy to automate.
Is scraping pracuj.pl legal?
This actor is intended for publicly accessible data workflows. Always review the target site terms and your own legal requirements for the way you plan to use the data.
🔗 Related actors
- Arbeitsagentur Jobs Feed — German Federal Employment Agency — Alternative structured job-feed workflow
- Company Jobs Tracker — Alternative structured job-feed workflow
- Dice.com Jobs Feed — Tech Job Scraper with Salary Data — Alternative structured job-feed workflow
- Glassdoor Jobs Feed — Alternative structured job-feed workflow
- Indeed Jobs Feed — Alternative structured job-feed workflow