Profesia.sk Scraper
3 days trial then $25.00/month - No credit card required now
One-stop shop for all data on Profesia.sk. Extract job offers, lists of companies, positions, and locations. Job offers include salary, textual info, company, and more.
You can access the Profesia.sk Scraper programmatically from your own applications by using the Apify API. To use the Apify API, you’ll need an Apify account and your API token, found in the Integrations settings in Apify Console.
# Set API token
API_TOKEN=<YOUR_API_TOKEN>

# Prepare Actor input
cat > input.json << 'EOF'
{
  "datasetType": "jobOffers",
  "jobOfferFilterMinSalaryPeriod": "month",
  "inputExtendFromFunction": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Load Actor config from GitHub URL (public)\n// const config = await sendRequest.get('https://raw.githubusercontent.com/username/project/main/config.json').json();\n// \n// // Increase concurrency during off-peak hours\n// // NOTE: Imagine we're targeting a small server that can be slower during the day\n// const hours = new Date().getUTCHours();\n// const isOffPeak = hours < 6 || hours > 20;\n// config.maxConcurrency = isOffPeak ? 8 : 3;\n// \n// return config;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "startUrlsFromFunction": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Create and load URLs from a Dataset by combining multiple fields\n// const dataset = await io.openDataset(datasetNameOrId);\n// const data = await dataset.getData();\n// const urls = data.items.map((item) => `https://example.com/u/${item.userId}/list/${item.listId}`);\n// return urls;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "requestMaxEntries": 50,
  "requestTransform": "\n/**\n * Inputs:\n * `request` - Request holding URL to be scraped.\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async (request, { io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Tag requests\n// // (maybe because we use RequestQueue that pools multiple scrapers)\n// request.userData.tag = \"VARIANT_A\";\n// return request;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "requestTransformBefore": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code BEFORE requests are processed.\n// state.categories = await sendRequest.get('https://example.com/my-categories').json();\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "requestTransformAfter": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code AFTER requests are processed.\n// delete state.categories;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "requestFilter": "\n/**\n * Inputs:\n * `request` - Request holding URL to be scraped.\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async (request, { io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Filter requests based on their tag\n// // (maybe because we use RequestQueue that pools multiple scrapers)\n// return request.userData.tag === \"VARIANT_A\";\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "requestFilterBefore": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code BEFORE requests are processed.\n// state.categories = await sendRequest.get('https://example.com/my-categories').json();\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "requestFilterAfter": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code AFTER requests are processed.\n// delete state.categories;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "outputMaxEntries": 50,
  "outputTransform": "\n/**\n * Inputs:\n * `entry` - Scraped entry.\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async (entry, { io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Add extra custom fields like aggregates\n// return {\n// ...entry,\n// imagesCount: entry.images.length,\n// };\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "outputTransformBefore": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code BEFORE entries are scraped.\n// state.categories = await sendRequest.get('https://example.com/my-categories').json();\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "outputTransformAfter": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code AFTER entries are scraped.\n// delete state.categories;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "outputFilter": "\n/**\n * Inputs:\n * `entry` - Scraped entry.\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async (entry, { io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Filter entries based on number of images they have (at least 5)\n// return entry.images.length >= 5;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "outputFilterBefore": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code BEFORE entries are scraped.\n// state.categories = await sendRequest.get('https://example.com/my-categories').json();\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "outputFilterAfter": "\n/**\n * Inputs:\n *\n * `ctx.io` - Apify Actor class, see https://docs.apify.com/sdk/js/reference/class/Actor.\n * `ctx.input` - The input object that was passed to this Actor.\n * `ctx.state` - An object you can use to persist state across all your custom functions.\n * `ctx.sendRequest` - Fetch remote data. Uses 'got-scraping', same as Apify's `sendRequest`.\n * See https://crawlee.dev/docs/guides/got-scraping\n * `ctx.itemCacheKey` - A function you can use to get cacheID for current `entry`.\n * It takes the entry itself, and a list of properties to be used for hashing.\n * By default, you should pass `input.cachePrimaryKeys` to it.\n *\n */\n// async ({ io, input, state, sendRequest, itemCacheKey }) => {\n// // Example: Fetch data or run code AFTER entries are scraped.\n// delete state.categories;\n//\n// /* ========== SEE BELOW FOR MORE EXAMPLES ========= */\n//\n// /**\n// * ======= ACCESSING DATASET ========\n// * To save/load/access entries in Dataset.\n// * Docs:\n// * - https://docs.apify.com/platform/storage/dataset\n// * - https://docs.apify.com/sdk/js/docs/guides/result-storage#dataset\n// * - https://docs.apify.com/sdk/js/docs/examples/map-and-reduce\n// */\n// // const dataset = await io.openDataset('MyDatasetId');\n// // const info = await dataset.getInfo();\n// // console.log(info.itemCount);\n// // // => 0\n//\n// /**\n// * ======= ACCESSING REMOTE DATA ========\n// * Use `sendRequest` to get data from the internet:\n// * Docs:\n// * - https://github.com/apify/got-scraping\n// */\n// // const catFact = await sendRequest.get('https://cat-fact.herokuapp.com/facts/5887e1d85c873e0011036889').json();\n// // console.log(catFact.text);\n// // // => \"Cats make about 100 different sounds. Dogs make only about 10.\",\n//\n// /**\n// * ======= USING CACHE ========\n// * To save the entry to the KeyValue cache (or retrieve it), you can use\n// * `itemCacheKey` to create the entry's ID for you:\n// */\n// // const cacheId = itemCacheKey(item, input.cachePrimaryKeys);\n// // const cache = await io.openKeyValueStore('MyStoreId');\n// // cache.setValue(cacheId, entry);\n// };",
  "maxRequestRetries": 3,
  "maxRequestsPerMinute": 120,
  "minConcurrency": 1,
  "requestHandlerTimeoutSecs": 180,
  "logLevel": "info",
  "errorReportingDatasetId": "REPORTING"
}
EOF

# Run the Actor using an HTTP API
# See the full API reference at https://docs.apify.com/api/v2
curl "https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper/runs?token=$API_TOKEN" \
  -X POST \
  -d @input.json \
  -H 'Content-Type: application/json'
Profesia.sk Scraper API
Below you can find a list of relevant HTTP API endpoints for calling the Profesia.sk Scraper Actor. To use them, you’ll need an Apify account. Replace <YOUR_API_TOKEN> in the URLs with your Apify API token, which you can find under Integrations in Apify Console. For details, see the API reference.
Run Actor
https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper/runs?token=<YOUR_API_TOKEN>
Note: By adding the method=POST query parameter, this API endpoint can be called using a GET request and thus used in third-party webhooks. Please refer to our Run Actor API documentation.
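As a sketch, a webhook that can only issue GET requests could build and call the run URL like this (the token value below is a placeholder, not a real token):

```shell
# Build the Run Actor URL with method=POST appended, so the Actor
# can be started by a plain GET request (e.g. from a webhook).
API_TOKEN="<YOUR_API_TOKEN>"  # placeholder - substitute your real Apify API token
RUN_URL="https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper/runs?token=${API_TOKEN}&method=POST"
echo "$RUN_URL"
# With a real token, a GET request to this URL starts the Actor:
# curl "$RUN_URL"
```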
Run Actor synchronously and get dataset items
https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper/run-sync-get-dataset-items?token=<YOUR_API_TOKEN>
Note: This endpoint supports both POST and GET request methods. However, only the POST method allows you to pass input data. For more information, please refer to our Run Actor synchronously and get dataset items API documentation.
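For example, assuming the input.json prepared earlier, a synchronous run that returns the scraped items in a single call could look like this (sketch; the token is a placeholder):

```shell
# Synchronous run: POST the input, wait for the run to finish,
# and receive the resulting dataset items in the response.
# The input JSON must go in the POST body - a GET request cannot carry it.
API_TOKEN="<YOUR_API_TOKEN>"  # placeholder - substitute your real Apify API token
SYNC_URL="https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper/run-sync-get-dataset-items?token=${API_TOKEN}"
echo "$SYNC_URL"
# With a real token and input.json present:
# curl -X POST -H 'Content-Type: application/json' -d @input.json "$SYNC_URL"
```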
Get Actor
https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper?token=<YOUR_API_TOKEN>
For more information, please refer to our Get Actor API documentation.
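A minimal sketch of fetching the Actor object itself (metadata such as its name, description, and versions); Get Actor is a read-only call, so a plain GET suffices:

```shell
# Fetch the Actor object via a GET request.
API_TOKEN="<YOUR_API_TOKEN>"  # placeholder - substitute your real Apify API token
ACTOR_URL="https://api.apify.com/v2/acts/jurooravec~profesia-sk-scraper?token=${API_TOKEN}"
echo "$ACTOR_URL"
# With a real token:
# curl "$ACTOR_URL"
```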
Actors can be used to scrape web pages, extract data, or automate browser tasks. You can use the Profesia.sk Scraper programmatically via the Apify API.
You can start Profesia.sk Scraper with the Apify API by sending an HTTP POST request to the Run Actor endpoint. The Actor’s input and its content type are passed as the payload of the POST request, and additional options can be specified using URL query parameters. The Profesia.sk Scraper is identified within the API by its ID, which is the creator’s username combined with the Actor name (in this case, jurooravec~profesia-sk-scraper).
When the Profesia.sk Scraper run finishes, you can list the data from its default dataset (storage) via the API, or preview the data directly in Apify Console.
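As a sketch, once a run has finished you can read its results through the dataset items endpoint. The dataset ID below is a placeholder standing in for the defaultDatasetId field of the run object returned when the run was started:

```shell
# List items from the run's default dataset.
API_TOKEN="<YOUR_API_TOKEN>"   # placeholder - substitute your real Apify API token
DATASET_ID="<DATASET_ID>"      # placeholder - defaultDatasetId from the run object
ITEMS_URL="https://api.apify.com/v2/datasets/${DATASET_ID}/items?token=${API_TOKEN}&format=json"
echo "$ITEMS_URL"
# With real values:
# curl "$ITEMS_URL"
```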
- 3 monthly users
- 1 star
- 93.3% of runs succeeded
- Created in Apr 2023
- Modified about 1 year ago