Vanilla JS Scraper

mstephen190/vanilla-js-scraper
Scrape the web using familiar JavaScript methods! Crawls websites using raw HTTP requests, parses the HTML with the JSDOM package, and extracts data from the pages using Node.js code. Supports both recursive crawling and lists of URLs. This Actor is a non-jQuery alternative to CheerioScraper.
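As a rough illustration of the kind of extraction the Actor's page function performs, here is a plain-Node sketch that pulls a page's `<title>` out of raw HTML. This is a simplified stand-in: the real Actor parses the full HTML with JSDOM and hands your code a proper `document` object, whereas this sketch only handles the simplest case with a regex.

```javascript
// Simplified stand-in for pageFunction-style extraction.
// The real Actor gives you a JSDOM `document`; here we fake the
// simplest case (reading <title>) with a regular expression.
function extractTitle(html) {
    const match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
    return match ? match[1].trim() : null;
}

const html = '<html><head><title>Apify</title></head><body></body></html>';
console.log(extractTitle(html)); // "Apify"
```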

The code examples below show how to run the Actor and get its results. To run the code, you need to have an Apify account. Replace <YOUR_API_TOKEN> in the code with your API token, which you can find under Settings > Integrations in Apify Console. Learn more.

Node.js

import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "requests": [
        {
            "url": "https://apify.com"
        }
    ],
    "pseudoUrls": [
        {
            "purl": "https://apify.com[(/[\\w-]+)?]"
        }
    ],
    "linkSelector": "a[href]",
    "pageFunction": async function pageFunction(context) {
        const { window, document, crawler, enqueueRequest, request, response, userData, json, body, kvStore, customData } = context;

        const title = document.querySelector('title').textContent;

        const responseHeaders = response.headers;

        return {
            title,
            responseHeaders
        };
    },
    "preNavigationHooks": `// We need to return an array of (possibly async) functions here.
        // The functions accept two arguments: the "crawlingContext" object
        // and "requestAsBrowserOptions", which are passed to the "requestAsBrowser()"
        // function the crawler calls to navigate.
        [
            async (crawlingContext, requestAsBrowserOptions) => {
                // ...
            }
        ]`,
    "postNavigationHooks": `// We need to return an array of (possibly async) functions here.
        // The functions accept a single argument: the "crawlingContext" object.
        [
            async (crawlingContext) => {
                // ...
            },
        ]`,
    "proxy": {
        "useApifyProxy": true
    },
    "additionalMimeTypes": [],
    "customData": {}
};

(async () => {
    // Run the Actor and wait for it to finish
    const run = await client.actor("mstephen190/vanilla-js-scraper").call(input);

    // Fetch and print Actor results from the run's dataset (if any)
    console.log('Results from dataset');
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    items.forEach((item) => {
        console.dir(item);
    });
})();
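In the input above, the `purl` value uses Apify's pseudo-URL syntax, where text inside `[brackets]` is treated as a regular expression and everything outside matches literally. The following sketch approximates that matching outside Apify; it is not the library's actual parser, just an illustration of the idea (note the depth counting, since the regex fragment can itself contain brackets, as `[\w-]` does here).

```javascript
// Simplified sketch of how an Apify pseudo-URL (PURL) matches links.
// Text inside [brackets] is a raw regex fragment; the rest is literal.
// This is NOT the official parser, only an illustration.
function purlToRegExp(purl) {
    let pattern = '';
    for (let i = 0; i < purl.length; i++) {
        if (purl[i] === '[') {
            // Find the matching closing bracket, allowing nested [...] inside
            let depth = 1;
            let j = i + 1;
            while (j < purl.length && depth > 0) {
                if (purl[j] === '[') depth++;
                else if (purl[j] === ']') depth--;
                j++;
            }
            pattern += purl.slice(i + 1, j - 1); // raw regex fragment
            i = j - 1;
        } else {
            // Escape regex metacharacters in the literal part
            pattern += purl[i].replace(/[.*+?^${}()|\\/]/g, '\\$&');
        }
    }
    return new RegExp(`^${pattern}$`);
}

const re = purlToRegExp('https://apify.com[(/[\\w-]+)?]');
console.log(re.test('https://apify.com'));          // true
console.log(re.test('https://apify.com/pricing'));  // true
console.log(re.test('https://apify.com/a/b'));      // false (two path segments)
```

With this pattern, the crawler enqueues the homepage and its one-segment subpages but ignores deeper paths, which is what makes the recursive crawl bounded.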
Developer
Maintained by Community
Actor metrics
  • 17 monthly users
  • 99.7% runs succeeded
  • 0.0 days response time
  • Created in Mar 2022
  • Modified 6 months ago