NoFluffJobs.com Job Listings Scraper

memo23/apify-nofluffjobs-cheerio-scraper

Try for free: 2 hours trial, then $10.00/month - no credit card required

Uncover hidden job opportunities across 5 European countries with one search! This NoFluffJobs scraper delivers comprehensive, real-time data on tech jobs, including salaries, skills, and company insights. Save time, expand your job search, and make informed career decisions with ease.

How it works

This actor allows you to scrape job listings from NoFluffJobs.com and extract comprehensive details about each job posting, including job title, company information, salary, location, required skills, benefits, recruitment process, and various other metadata. When you provide a search URL, the scraper automatically searches for matching jobs across all five regional versions of the site (Poland, Hungary, Czech Republic, Slovakia, and Netherlands). This ensures that you get a complete view of all relevant job listings, as different countries may have varying job opportunities.

Features

  • Multiple Search Queries: Supports scraping from multiple search URLs at once (just copy and paste search URLs from the nofluffjobs.com site).
  • Cross-Region Search: Any URL you enter is searched across all five regional versions (countries) of the site, since each country can have different job listings.
  • Detailed Job Information: Extracts comprehensive data about each job listing, including company details, job requirements, benefits, and more.
  • Multilingual Support: Capable of handling job postings in multiple languages.

How to Use

  1. Set Up: Ensure you have an Apify account and access to the Apify platform.
  2. Configure input parameters:
    • Start URLs: Paste the NoFluffJobs search URLs you want to scrape.
    • Max Items (optional): Limit the number of job listings to scrape.
    • Max Concurrency (optional): Set the maximum number of concurrent requests.
    • Min Concurrency (optional): Set the minimum number of concurrent requests.
    • Max Request Retries (optional): Set the maximum number of request retries.
  3. (Optional) Configure proxy settings for enhanced reliability.
  4. Run the actor and obtain the extracted data in your preferred format (JSON, CSV, Excel, etc.).
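Step 4 can also be done programmatically with Apify's `apify-client` Python package (`pip install apify-client`). The sketch below is illustrative: the actor ID comes from this page, the input fields mirror the "Input Data" example in this README, the search URL is a made-up example, and an `APIFY_TOKEN` environment variable is assumed.

```python
import os

# The Apify client is an optional dependency: `pip install apify-client`.
try:
    from apify_client import ApifyClient
except ImportError:
    ApifyClient = None

# Input mirroring the "Input Data" example, trimmed to the essentials.
run_input = {
    "startUrls": [
        {"url": "https://nofluffjobs.com/pl/backend?criteria=category%3Dbackend"}
    ],
    "maxItems": 50,
    "proxyConfiguration": {"useApifyProxy": True},
}

token = os.environ.get("APIFY_TOKEN")
if ApifyClient and token:
    client = ApifyClient(token)
    # Start the actor and wait for the run to finish.
    run = client.actor("memo23/apify-nofluffjobs-cheerio-scraper").call(run_input=run_input)
    # Stream the scraped job listings from the run's default dataset.
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["title"], "@", item["company"]["name"])
```

Results can equally be exported from the Apify console in JSON, CSV, or Excel once the run finishes.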

Input Data

Here's an example of how to set up the input for the NoFluffJobs scraper:

{
    "startUrls": [
        {
            "url": "https://nofluffjobs.com/sk/backend?criteria=category%3Dfrontend,fullstack,mobile,embedded"
        }
    ],
    "maxItems": 100,
    "maxConcurrency": 100,
    "minConcurrency": 1,
    "maxRequestRetries": 8,
    "proxyConfiguration": {
        "useApifyProxy": true,
        "apifyProxyGroups": [
            "RESIDENTIAL"
        ]
    }
}

Note: Even though you provide a URL for a specific region (e.g., 'sk' for Slovakia in the example above), the scraper will search for matching jobs across all regions: Poland (pl), Hungary (hu), Czech Republic (cz), Slovakia (sk), and Netherlands (nl).
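The cross-region behaviour can be pictured as rewriting the region segment of the URL path. The helper below is only an illustration of that idea (the actor's internal logic is not published); it assumes the region code is the first path segment, as in the example URL above.

```python
from urllib.parse import urlsplit, urlunsplit

# All regional versions of nofluffjobs.com covered by the scraper.
REGIONS = ["pl", "hu", "cz", "sk", "nl"]

def expand_regions(url: str) -> list[str]:
    """Rewrite the region segment of a NoFluffJobs search URL for every region."""
    parts = urlsplit(url)
    segments = parts.path.strip("/").split("/")
    # Drop the leading region code (e.g. "sk") if present, keep the rest.
    rest = segments[1:] if segments and segments[0] in REGIONS else segments
    return [
        urlunsplit((parts.scheme, parts.netloc, "/" + "/".join([region, *rest]), parts.query, ""))
        for region in REGIONS
    ]

urls = expand_regions("https://nofluffjobs.com/sk/backend?criteria=category%3Dfrontend")
# One search URL per region, same query string on each.
```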

Output Structure

The output data is highly detailed and includes the following main sections:

  1. Basic Job Information
  2. Company Details
  3. Job Description and Requirements
  4. Location Information
  5. Salary and Contract Details
  6. Benefits
  7. Application Process
  8. Recruitment Details
  9. Metadata and Analytics

Here's a comprehensive breakdown of the output structure:

{
  "id": "data-engineer-ework-group-remote-2",
  "title": "Data Engineer",
  "apply": {
    "option": "email",
    "leadCollection": false,
    "leadCollectionInfoClause": ""
  },
  "specs": {
    "details": {
      "custom": []
    },
    "help4Ua": false,
    "dailyTasks": [
      "Collaborate with cross-functional teams to design, develop and maintain data pipelines and analytics solutions.",
      "Design and build a foundational platform for a modern data lake architecture, optimizing it for scalability, flexibility, and performance.",
      "Develop automated test to ensure data accuracy and quality.",
      "You will assist with planning and maintaining the Azure architectural runway and pipeline for multiple products, ensuring their stability and efficient operation.",
      "Continuously secure improvement that can make developers on the platform work even more efficiently and act as a sparring partner on use of Azure services for the organisation.",
      "Leverage your expertise in cloud development to design and implement innovative digital solutions focused on delivering business insights and patient care in real time.",
      "Overall, our goal is to improve the clinical experience for patients, doctors and nurses world-wide, and your role will support this journey."
    ],
    "referral": {
      "allowed": true
    }
  },
  "basics": {
    "category": "data",
    "seniority": ["Senior"],
    "technology": "Python"
  },
  "company": {
    "url": "www.eworkgroup.com",
    "logo": {
      "original": "companies/logos/original/ework_group_20210531_122823.png",
      "jobs_details": "companies/logos/jobs_details/ework_group_20210531_122823.png",
      "jobs_listing": "companies/logos/jobs_listing/ework_group_20210531_122823.png"
    },
    "name": "Ework Group",
    "size": "100+",
    "video": ""
  },
  "details": {
    "quote": "",
    "position": "",
    "description": "<p>For our client - a company from pharmaceutical area, we are looking for Data Engineer.</p>\n<p><strong>What you will be doing</strong></p>\n<p>You will be close to the heart of our client's clinical operations where you will play a key role in shaping the future of clinical trials and patient care, by building scalable solutions in the cloud.</p>\n<p><br></p>",
    "quoteAuthor": ""
  },
  "benefits": {
    "benefits": [
      "Sport subscription",
      "Private healthcare",
      "International projects"
    ],
    "equipment": {
      "computer": "",
      "monitors": "",
      "operatingSystems": {
        "lin": false,
        "mac": false,
        "win": false
      }
    },
    "officePerks": []
  },
  "consents": {
    "infoClause": "The Controller of your personal data is Ework Group, with registered office at Plac Stanisława Małachowskiego 2, Warsaw. Your data is processed for the purpose of the current recruitment process. Providing data is voluntary but necessary for this purpose. Processing your data is lawful because it is necessary in order to take steps at the request of the data subject prior to entering into a contract (article 6 point 1b of Regulation EU 2016/679 - GDPR). Your personal data will be deleted when the current recruitment process is finished, unless a separate consent is provided below. You have the right to access, correct, modify, update, rectify, request for the transfer or deletion of data, withdrawal of consent or objection.",
    "personalDataRequestLink": "monika.jozwik@eworkgroup.com"
  },
  "location": {
    "places": [
      {
        "city": "Remote",
        "url": "data-engineer-ework-group-remote-2"
      },
      {
        "country": {
          "code": "POL",
          "name": "Poland"
        },
        "province": "opole",
        "url": "data-engineer-ework-group-opole-1",
        "provinceOnly": true
      }
    ],
    "remote": 5,
    "multicityCount": 100,
    "covidTimeRemotely": false,
    "remoteFlexible": false,
    "fieldwork": false,
    "defaultIndex": 1
  },
  "essentials": {
    "contract": {
      "start": "ASAP",
      "duration": {}
    },
    "originalSalary": {
      "currency": "PLN",
      "types": {
        "b2b": {
          "period": "Month",
          "range": [25716, 32146],
          "paidHoliday": false
        }
      },
      "disclosedAt": "VISIBLE"
    }
  },
  "methodology": [],
  "recruitment": {
    "languages": [
      {"code": "pl"},
      {"code": "en"}
    ],
    "onlineInterviewAvailable": true
  },
  "requirements": {
    "musts": [
      {"value": "Python", "type": "main"},
      {"value": "Azure", "type": "main"},
      {"value": "Azure Data Factory", "type": "main"},
      {"value": "Azure Databricks", "type": "main"},
      {"value": "Spark", "type": "main"}
    ],
    "nices": [
      {"value": "SQL", "type": "main"},
      {"value": "CI", "type": "main"},
      {"value": "CD pipelines", "type": "main"},
      {"value": "Azure DevOps", "type": "main"}
    ],
    "description": "<p>We are seeking a candidate with an educational background in Computer Science and Software Development , as well as experience in some of the following areas:</p>\n<ul>\n<li>Strong proficiency in Python programming</li>\n<li>Extensive experience with Azure, including Azure Data Factory and Azure Databricks, and a deep understanding of Azure architecture and services</li>\n<li>Experience in using Spark, including Spark SQL and understanding of how to optimize Spark performance.</li>\n<li>Automated unit testing and code quality inspection</li>\n<li>CI/CD Pipelines using Azure DevOps (or similar)</li>\n<li>Working in pharma domain or other regulated area is considered an advantage</li>\n</ul>",
    "languages": [
      {"type": "MUST", "code": "en", "level": "C1"},
      {"type": "MUST", "code": "pl", "level": "C1"}
    ]
  },
  "posted": 1725032841570,
  "postedOrRenewedDaysAgo": 0,
  "status": "PUBLISHED",
  "postingUrl": "data-engineer-ework-group-remote-2",
  "metadata": {
    "sectionLanguages": {
      "daily-tasks": "en",
      "description": "en",
      "requirements.description": "en"
    }
  },
  "regions": ["pl"],
  "reference": "WZOXW66Z",
  "meta": {
    "videosInCompanyProfileVisible": true
  },
  "companyUrl": "/company/ework-group-rlrciwbo",
  "seo": {
    "title": "Data Engineer @ Ework Group",
    "description": "Data Engineer @ Ework Group Fully remote job 25.7k-32.1k (B2B) PLN / month"
  },
  "analytics": {
    "lastBump": 0,
    "lastBumpType": "SYSTEM",
    "previousBumpCount": 0,
    "nextBump": 1,
    "nextBumpType": "SYSTEM",
    "nextBumpCount": 6,
    "emissionDay": 0,
    "productType": "EXPERT",
    "emissionBumps": 6,
    "emissionLength": 30,
    "emission": "R1461A",
    "addons": {
      "bump": false,
      "publication": true,
      "offerOfTheDay": false,
      "topInSearch": false,
      "highlighted": false
    },
    "topInSearchConfig": {
      "pairs": []
    }
  }
}

Output Fields Explanation

  • id: Unique identifier for the job listing
  • title: Job title
  • apply: Application method and related information
  • specs: Job specifications, including daily tasks
  • basics: Basic job information (category, seniority, main technology)
  • company: Detailed company information
  • details: Job description and position details
  • benefits: List of benefits and perks offered
  • consents: GDPR and data processing information
  • location: Detailed location information, including remote work options
  • essentials: Contract and salary information
  • recruitment: Recruitment process details, including required languages
  • requirements: Required and nice-to-have skills, and language requirements
  • posted: Timestamp of when the job was posted
  • status: Current status of the job listing
  • postingUrl: URL slug for the job posting
  • metadata: Additional metadata, including language information for different sections
  • regions: Regions where the job is available
  • reference: Reference code for the job
  • seo: SEO-related information for the job listing
  • analytics: Analytics data related to the job posting on the platform
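For downstream analysis, the nested records can be flattened into simple rows. The sketch below pulls a few commonly used fields from a record shaped like the sample output above; the field paths follow the documented structure, and the `.get` calls guard against sections that are absent on some listings (e.g. jobs without a disclosed B2B salary).

```python
def flatten_job(job: dict) -> dict:
    """Reduce a scraped NoFluffJobs record to a flat row of key fields."""
    salary = job.get("essentials", {}).get("originalSalary", {})
    b2b = salary.get("types", {}).get("b2b", {})
    salary_range = b2b.get("range") or [None, None]
    return {
        "id": job.get("id"),
        "title": job.get("title"),
        "company": job.get("company", {}).get("name"),
        "seniority": ", ".join(job.get("basics", {}).get("seniority", [])),
        "currency": salary.get("currency"),
        "salary_min": salary_range[0],
        "salary_max": salary_range[1],
        "must_have": [skill["value"] for skill in job.get("requirements", {}).get("musts", [])],
    }

# A trimmed record matching the documented output structure.
sample = {
    "id": "data-engineer-ework-group-remote-2",
    "title": "Data Engineer",
    "basics": {"seniority": ["Senior"]},
    "company": {"name": "Ework Group"},
    "essentials": {
        "originalSalary": {
            "currency": "PLN",
            "types": {"b2b": {"period": "Month", "range": [25716, 32146]}},
        }
    },
    "requirements": {"musts": [{"value": "Python", "type": "main"}, {"value": "Azure", "type": "main"}]},
}
row = flatten_job(sample)
```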

Developer
Maintained by Community
Actor metrics
  • 2 monthly users
  • 0 stars
  • 100.0% runs succeeded
  • Created in May 2024
  • Modified 3 months ago