Crawlee + Cheerio
A scraper example that uses Cheerio to parse HTML. It's fast, but it can't run the website's JavaScript or pass JS anti-scraping challenges.
src/main.js
// Apify SDK - toolkit for building Apify Actors (Read more at https://docs.apify.com/sdk/js/)
import { Actor } from 'apify';
// Crawlee - web scraping and browser automation library (Read more at https://crawlee.dev)
import { CheerioCrawler, Dataset } from 'crawlee';
// This is an ESM project, so relative imports must specify file extensions.
// Read more here: https://nodejs.org/docs/latest-v18.x/api/esm.html#mandatory-file-extensions
// import { router } from './routes.js';

// The init() call configures the Actor for its environment. It's recommended to start every Actor with an init().
await Actor.init();

// The structure of the input is defined in input_schema.json.
const {
    startUrls = ['https://crawlee.dev'],
    maxRequestsPerCrawl = 100,
} = await Actor.getInput() ?? {};

const proxyConfiguration = await Actor.createProxyConfiguration();

const crawler = new CheerioCrawler({
    proxyConfiguration,
    maxRequestsPerCrawl,
    async requestHandler({ enqueueLinks, request, $, log }) {
        log.info('enqueueing new URLs');
        await enqueueLinks();

        // Extract the title from the page.
        const title = $('title').text();
        log.info(`${title}`, { url: request.loadedUrl });

        // Save the URL and title to the Dataset - a table-like storage.
        await Dataset.pushData({ url: request.loadedUrl, title });
    },
});

await crawler.run(startUrls);

// Gracefully exit the Actor process. It's recommended to quit all Actors with an exit().
await Actor.exit();
JavaScript Crawlee & CheerioCrawler template
This template example was built with Crawlee to scrape data from a website using Cheerio, wrapped in a CheerioCrawler.
Included features
- Apify SDK - toolkit for building Actors
- Crawlee - web scraping and browser automation library
- Input schema - define and easily validate a schema for your Actor's input (a minimal sketch follows this list)
- Dataset - store structured data where each object stored has the same attributes
- Cheerio - a fast, flexible & elegant library for parsing and manipulating HTML and XML
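The Input schema feature refers to the input_schema.json file shipped with the template. Its exact contents are not reproduced here; as a rough sketch, assuming only the startUrls and maxRequestsPerCrawl fields that main.js reads, it could look like this:

{
    "title": "CheerioCrawler template",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs to start the crawl from.",
            "editor": "requestListSources",
            "prefill": [{ "url": "https://crawlee.dev" }]
        },
        "maxRequestsPerCrawl": {
            "title": "Max requests per crawl",
            "type": "integer",
            "description": "Maximum number of pages the crawler will open.",
            "default": 100
        }
    }
}

The platform validates the Actor's input against this schema before Actor.getInput() returns it, which is why main.js can simply destructure the fields with defaults.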
How it works
This code is a JavaScript script that uses Cheerio to scrape data from a website. It then stores the website titles in a dataset.
- The crawler starts with the URLs provided in the startUrls input field, defined by the input schema. The number of scraped pages is limited by the maxRequestsPerCrawl field from the input schema.
- The crawler runs requestHandler for each URL to extract data from the page with the Cheerio library and saves the title and URL of each page to the dataset. It also logs each result as it is saved (see the sketch after this list for one way to extend the handler).
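For illustration, here is a minimal standalone sketch of an extended requestHandler. It is not part of the template: the glob pattern and the extra heading field are assumptions added for the example.

import { CheerioCrawler, Dataset } from 'crawlee';

const crawler = new CheerioCrawler({
    maxRequestsPerCrawl: 50,
    async requestHandler({ enqueueLinks, request, $, log }) {
        // Only follow links that match a pattern (illustrative glob, not from the template).
        await enqueueLinks({ globs: ['https://crawlee.dev/docs/**'] });

        // Extract the title plus one extra field, the first <h1> on the page.
        const title = $('title').text();
        const heading = $('h1').first().text().trim();

        log.info(title, { url: request.loadedUrl });
        await Dataset.pushData({ url: request.loadedUrl, title, heading });
    },
});

await crawler.run(['https://crawlee.dev/docs']);

Because Crawlee also works outside the Apify platform, this snippet runs locally without the Actor.init()/Actor.exit() wrapper.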
Resources
- Video tutorial on building a scraper using CheerioCrawler
- Written tutorial on building a scraper using CheerioCrawler
- Web scraping with Cheerio in 2023
- How to scrape a dynamic page using Cheerio
- Integration with Zapier, Make, Google Drive and others
- Video guide on getting data using Apify API (see the example after this list)
- A short guide on how to create Actors using code templates
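The guide on getting data using the Apify API covers retrieving results programmatically. As a hedged sketch (the dataset ID and API token placeholders below are yours to supply, and Node 18+ is assumed for the built-in fetch), fetching the items from a run's dataset could look like this:

// Placeholders: supply your own dataset ID and Apify API token.
const datasetId = process.env.DATASET_ID;
const token = process.env.APIFY_TOKEN;

// Dataset items endpoint of the Apify API v2.
const response = await fetch(
    `https://api.apify.com/v2/datasets/${datasetId}/items?format=json&token=${token}`,
);
const items = await response.json();
console.log(`Fetched ${items.length} items`);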
Related templates
- A scraper that fetches a single page from a provided URL with Axios and extracts data from the page's HTML with Cheerio.
- Example of a Puppeteer and headless Chrome web scraper. Headless browsers render JavaScript and are harder to block, but they're slower than plain HTTP.
- Web scraper example with Crawlee, Playwright and headless Chrome. Playwright is more modern, user-friendly and harder to block than Puppeteer.
- Skeleton project that helps you quickly bootstrap `CheerioCrawler` in JavaScript. It's best for developers who already know the Apify SDK and Crawlee.
- Example of running Cypress tests and saving their results on the Apify platform. JSON results are saved to a Dataset, videos to a Key-value store.
- Empty template with the basic structure of an Actor built with the Apify SDK, allowing you to easily add your own functionality.