Profesia.sk Scraper

jurooravec/profesia-sk-scraper

3 days trial then $25.00/month - No credit card required now
One-stop shop for all data on Profesia.sk. Extract job offers, lists of companies, positions, and locations. Job offers include salary, full job description, company details, and more.

The code example below shows how to run the Actor and get its results. To run the code, you need an Apify account. Replace <YOUR_API_TOKEN> in the code with your API token, which you can find under Settings > Integrations in the Apify Console.
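Once an input object like the `run_input` below is prepared, the usual `apify_client` pattern is to start the Actor with `.call()` (which waits for the run to finish) and then read the results from the run's default dataset. A minimal sketch of that step (the `run_scraper` helper name is our own):

```python
def run_scraper(client, run_input):
    """Start the Actor, wait for it to finish, and return the scraped items.

    `client` is an initialized ApifyClient instance.
    """
    # Start the Actor run and block until it completes
    run = client.actor("jurooravec/profesia-sk-scraper").call(run_input=run_input)
    # Read the scraped entries from the run's default dataset
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```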

Python

from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "datasetType": "jobOffers",
    "jobOfferFilterMinSalaryPeriod": "month",
    "inputExtendFromFunction": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Load Actor config from GitHub URL (public)
//   const config = await sendRequest.get('https://raw.githubusercontent.com/username/project/main/config.json').json();
//
//   // Increase concurrency during off-peak hours
//   // NOTE: Imagine we're targeting a small server that can be slower during the day
//   const hours = new Date().getUTCHours();
//   const isOffPeak = hours < 6 || hours > 20;
//   config.maxConcurrency = isOffPeak ? 8 : 3;
//
//   return config;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "startUrlsFromFunction": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Create and load URLs from a Dataset by combining multiple fields
//   const dataset = await io.openDataset(datasetNameOrId);
//   const data = await dataset.getData();
//   const urls = data.items.map((item) => `https://example.com/u/${item.userId}/list/${item.listId}`);
//   return urls;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "requestMaxEntries": 50,
    "requestTransform": """
/**
 * Inputs:
 * `request` - Request holding URL to be scraped.
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async (request, { io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Tag requests
//   // (maybe because we use RequestQueue that pools multiple scrapers)
//   request.userData.tag = \"VARIANT_A\";
//   return request;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "requestTransformBefore": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code BEFORE requests are processed.
//   state.categories = await sendRequest.get('https://example.com/my-categories').json();
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "requestTransformAfter": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code AFTER requests are processed.
//   delete state.categories;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "requestFilter": """
/**
 * Inputs:
 * `request` - Request holding URL to be scraped.
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async (request, { io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Filter requests based on their tag
//   // (maybe because we use RequestQueue that pools multiple scrapers)
//   return request.userData.tag === \"VARIANT_A\";
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "requestFilterBefore": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code BEFORE requests are processed.
//   state.categories = await sendRequest.get('https://example.com/my-categories').json();
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "requestFilterAfter": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code AFTER requests are processed.
//   delete state.categories;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "outputMaxEntries": 50,
    "outputTransform": """
/**
 * Inputs:
 * `entry` - Scraped entry.
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async (entry, { io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Add extra custom fields like aggregates
//   return {
//     ...entry,
//     imagesCount: entry.images.length,
//   };
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "outputTransformBefore": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code BEFORE entries are scraped.
//   state.categories = await sendRequest.get('https://example.com/my-categories').json();
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "outputTransformAfter": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code AFTER entries are scraped.
//   delete state.categories;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "outputFilter": """
/**
 * Inputs:
 * `entry` - Scraped entry.
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async (entry, { io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Filter entries based on number of images they have (more than 5)
//   return entry.images.length > 5;
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
    "outputFilterBefore": """
/**
 * Inputs:
 *
 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
 * `ctx.input` - The input object that was passed to this Actor.
 * `ctx.state` - An object you can use to persist state across all your custom functions.
 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
 *                       See https://crawlee.dev/docs/guides/got-scraping
 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
 *                        It takes the entry itself, and a list of properties to be used for hashing.
 *                        By default, you should pass `input.cachePrimaryKeys` to it.
 *
 */
// async ({ io, input, state, sendRequest, itemCacheKey }) => {
//   // Example: Fetch data or run code BEFORE entries are scraped.
//   state.categories = await sendRequest.get('https://example.com/my-categories').json();
//
//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
//
//   /**
//    * ======= ACCESSING DATASET ========
//    * To save/load/access entries in Dataset.
//    * Docs:
//    * - https://docs.apify.com/platform/storage/dataset
//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
//    */
//   // const dataset = await io.openDataset('MyDatasetId');
//   // const info = await dataset.getInfo();
//   // console.log(info.itemCount);
//   // // => 0
//
//   /**
//    * ======= ACCESSING REMOTE DATA ========
//    * Use `sendRequest` to get data from the internet:
//    * Docs:
//    * - https://github.com/apify/got-scraping
//    */
//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
//   // console.log(catFact.text);
//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\"
//
//   /**
//    * ======= USING CACHE ========
//    * To save the entry to the KeyValue cache (or retrieve it), you can use
//    * `itemCacheKey` to create the entry's ID for you:
//    */
//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
//   // const cache = await io.openKeyValueStore('MyStoreId');
//   // cache.setValue(cacheId, entry);
// };""",
705    "outputFilterAfter": """
706/**
707 * Inputs:
708 *
709 * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.
710 * `ctx.input` - The input object that was passed to this Actor.
711 * `ctx.state` - An object you can use to persist state across all your custom functions.
712 * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.
713 *                       See https://crawlee.dev/docs/guides/got-scraping
714 * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.
715 *                        It takes the entry itself, and a list of properties to be used for hashing.
716 *                        By default, you should pass `input.cachePrimaryKeys` to it.
717 *
718 */
719// async ({ io, input, state, sendRequest, itemCacheKey }) => {
720//   // Example: Fetch data or run code AFTER entries are scraped.
721//   delete state.categories;
722//
723//   /* ========== SEE BELOW FOR MORE EXAMPLES ========= */
724//
725//   /**
726//    * ======= ACCESSING DATASET ========
727//    * To save/load/access entries in Dataset.
728//    * Docs:
729//    * - https://docs.apify.com/platform/storage/dataset
730//    * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset
731//    * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce
732//    */
733//   // const dataset = await io.openDataset('MyDatasetId');
734//   // const info = await dataset.getInfo();
735//   // console.log(info.itemCount);
736//   // // => 0
737//
738//   /**
739//    * ======= ACCESSING REMOTE DATA ========
740//    * Use `sendRequest` to get data from the internet:
741//    * Docs:
742//    * - https://github.com/apify/got-scraping
743//    */
744//   // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();
745//   // console.log(catFact.text);
746//   // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",
747//
748//   /**
749//    * ======= USING CACHE ========
750//    * To save the entry to the KeyValue cache (or retrieve it), you can use
751//    * `itemCacheKey` to create the entry's ID for you:
752//    */
753//   // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);
754//   // const cache = await io.openKeyValueStore('MyStoreId');
755//   // cache.setValue(cacheId, entry);
756// };""",
757    "maxRequestRetries": 3,
758    "maxRequestsPerMinute": 120,
759    "minConcurrency": 1,
760    "requestHandlerTimeoutSecs": 180,
761    "logLevel": "info",
762    "errorReportingDatasetId": "REPORTING",
763}
764
765# Run the Actor and wait for it to finish
766run = client.actor("jurooravec/profesia-sk-scraper").call(run_input=run_input)
767
768# Fetch and print Actor results from the run's dataset (if there are any)
769for item in client.dataset(run["defaultDatasetId"]).iterate_items():
770    print(item)
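The `itemCacheKey` helper referenced in the comments above builds a stable cache ID by hashing selected properties of an entry (the ones you pass in, typically `input.cachePrimaryKeys`). As a rough illustration of that idea only — the Actor's real `itemCacheKey` runs in JavaScript and its hashing details may differ — the same scheme could be sketched in Python like this:

```python
import hashlib
import json

def item_cache_key(entry: dict, primary_keys: list) -> str:
    """Hypothetical sketch of an itemCacheKey-style helper.

    Serializes only the primary-key fields of the entry, in a deterministic
    order, and hashes the result, so the same key fields always yield the
    same cache ID regardless of other fields on the entry.
    """
    payload = json.dumps(
        {key: entry.get(key) for key in sorted(primary_keys)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical entry and key fields, for illustration only
offer = {"offerId": "12345", "employerName": "ACME", "salary": "1200 EUR"}
cache_id = item_cache_key(offer, ["offerId", "employerName"])
```

Because only the listed key fields feed the hash, an entry keeps the same cache ID even when its other fields (such as the salary above) change between runs.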
Developer
Maintained by Community
Actor metrics
  • 1 monthly user
  • 41.1% runs succeeded
  • 0.0 days response time
  • Created in Apr 2023
  • Modified 8 months ago