
Profesia.sk Scraper

One-stop shop for all data on Profesia.sk. Extract job offers, lists of companies, positions, and locations. Job offers include salary, the full job text, company details, and more.

Pricing: $25.00/month + usage
Rating: 0.0 (0 reviews)
Monthly users: 1
Runs succeeded: >99%
Last modified: 2 years ago
You can access the Profesia.sk Scraper programmatically from your own applications using the Apify API. To do so, you'll need an Apify account and your API token, which you can find under Integrations settings in the Apify Console.
# Start a Server-Sent Events (SSE) session and keep it running
curl "https://actors-mcp-server.apify.actor/sse?token=<YOUR_API_TOKEN>&actors=jurooravec/profesia-sk-scraper"

# Example output with the session id:
# event: endpoint
# data: /message?sessionId=9d820491-38d4-4c7d-bb6a-3b7dc542f1fa
Using Profesia.sk Scraper via Model Context Protocol (MCP) server
The MCP server lets you use Profesia.sk Scraper within your AI workflows: send API requests to trigger actions and receive real-time results. Take the sessionId received above and use it to communicate with the MCP server. The following message starts the Profesia.sk Scraper Actor with the provided input:
curl -X POST "https://actors-mcp-server.apify.actor/message?token=<YOUR_API_TOKEN>&session_id=<SESSION_ID>" -H "Content-Type: application/json" -d '{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "arguments": {
      "datasetType": "jobOffers",
      "jobOfferFilterMinSalaryPeriod": "month",
      "requestMaxEntries": 50,
      "outputMaxEntries": 50,
      "maxRequestRetries": 3,
      "maxRequestsPerMinute": 120,
      "minConcurrency": 1,
      "requestHandlerTimeoutSecs": 180,
      "logLevel": "info",
      "errorReportingDatasetId": "REPORTING"
    },
    "name": "jurooravec/profesia-sk-scraper"
  }
}'
For brevity, the optional hook fields from the default input (inputExtendFromFunction, startUrlsFromFunction, requestTransform, requestTransformBefore, requestTransformAfter, requestFilter, requestFilterBefore, requestFilterAfter, outputTransform, outputTransformBefore, outputTransformAfter, outputFilter, outputFilterBefore, outputFilterAfter) are omitted above. Each takes a JavaScript function as a string and defaults to a commented-out template. The hooks receive a context with ctx.io (the Apify Actor class), ctx.input (the input passed to the Actor), ctx.state (an object persisted across all your custom functions), ctx.sendRequest (fetches remote data via got-scraping, like Apify's sendRequest), and ctx.itemCacheKey (derives a cache ID for an entry; by default, pass it input.cachePrimaryKeys). Transform and filter hooks additionally receive the current request or entry as their first argument.
The server should respond with Accepted. You should then receive the Actor's response via SSE as JSON:
event: message
data: {
  "result": {
    "content": [
      {
        "type": "text",
        "text": "ACTOR_RESPONSE"
      }
    ]
  }
}
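If you are scripting this flow yourself rather than using curl, the sketch below shows one way to handle the stream in Node.js (18+, run as an ES module): it opens the SSE connection, pulls the sessionId out of the endpoint event, and prints the text content of message events. The endpoint URL and event shapes come from the examples above; the file name, parsing helpers, and the assumption that events are separated by blank lines ("\n\n") are illustrative, not part of the Actor's documented API.

// sse-client.mjs - a minimal sketch, assuming Node.js 18+ (built-in fetch)
// and that APIFY_TOKEN is exported in your shell.
const url = "https://actors-mcp-server.apify.actor/sse"
  + `?token=${process.env.APIFY_TOKEN}`
  + "&actors=jurooravec/profesia-sk-scraper";

const res = await fetch(url);
const decoder = new TextDecoder();
let buffer = "";

for await (const chunk of res.body) {
  buffer += decoder.decode(chunk, { stream: true });
  const events = buffer.split("\n\n"); // SSE events end with a blank line
  buffer = events.pop();               // keep the trailing, possibly partial, event
  for (const event of events) {
    // Join the event's data lines into one payload string.
    const data = event
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trim())
      .join("");
    if (event.includes("event: endpoint")) {
      // data looks like: /message?sessionId=9d820491-...
      const sessionId = new URLSearchParams(data.split("?")[1]).get("sessionId");
      console.log("sessionId:", sessionId);
    } else if (event.includes("event: message")) {
      const { result } = JSON.parse(data);
      for (const item of result?.content ?? []) console.log(item.text);
    }
  }
}

Run it with node sse-client.mjs, then POST the tools/call message from a second terminal once the sessionId is printed.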
Configure local MCP Server via standard input/output for Profesia.sk Scraper
You can connect to the MCP server using clients like Claude Desktop and LibreChat, or build your own. The server can run both locally and remotely, giving you full flexibility. Set up the server in the client configuration as follows:
{
  "mcpServers": {
    "actors-mcp-server": {
      "command": "npx",
      "args": [
        "-y",
        "@apify/actors-mcp-server",
        "--actors",
        "jurooravec/profesia-sk-scraper"
      ],
      "env": {
        "APIFY_TOKEN": "<YOUR_API_TOKEN>"
      }
    }
  }
}
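If you would rather drive the same stdio server from your own code than from a chat client, here is a rough sketch using the official MCP JavaScript SDK (@modelcontextprotocol/sdk). The client name and version are arbitrary, and the tool name and arguments in the callTool step are assumptions: list the tools first and use whatever name and input schema the server actually reports.

// mcp-client.mjs - sketch, assuming Node.js 18+ and `npm install @modelcontextprotocol/sdk`.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the same server as in the config above, over stdio.
// APIFY_TOKEN must already be exported in your environment.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@apify/actors-mcp-server", "--actors", "jurooravec/profesia-sk-scraper"],
  env: { ...process.env },
});

const client = new Client({ name: "example-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Discover what the server exposes for the Actor...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...then call it (tool name and arguments below are assumptions).
const result = await client.callTool({
  name: tools[0].name,
  arguments: { datasetType: "jobOffers", outputMaxEntries: 10 },
});
console.log(result.content);

await client.close();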
You can also interact with the server through the Tester MCP Client, a chat user interface for talking to MCP servers.
To get started, check out the documentation and example clients. If you'd like to learn more about our MCP server, check out our blog post.
Pricing

Pricing model: Rental. To use this Actor, you pay a monthly rental fee to the developer. The rent is subtracted from your prepaid usage every month after the free trial period ends. You also pay for Apify platform usage.

Free trial: 3 days
Price: $25.00/month