Crawlee + Cheerio

A scraper example that uses Cheerio to parse HTML. It's fast, but it can't run the website's JavaScript or pass JS anti-scraping challenges.

Language

typescript

Tools

nodejs

crawlee

cheerio

Use cases

Starter

Web scraping

src/main.ts

// Apify SDK - toolkit for building Apify Actors (Read more at https://docs.apify.com/sdk/js/)
import { Actor } from 'apify';
// Crawlee - web scraping and browser automation library (Read more at https://crawlee.dev)
import { CheerioCrawler, Dataset } from 'crawlee';

interface Input {
    startUrls: {
        url: string;
        method?: 'GET' | 'HEAD' | 'POST' | 'PUT' | 'DELETE' | 'TRACE' | 'OPTIONS' | 'CONNECT' | 'PATCH';
        headers?: Record<string, string>;
        userData: Record<string, unknown>;
    }[];
    maxRequestsPerCrawl: number;
}

// The init() call configures the Actor for its environment. It's recommended to start every Actor with an init()
await Actor.init();

// Structure of input is defined in input_schema.json
const { startUrls = ['https://apify.com'], maxRequestsPerCrawl = 100 } =
    (await Actor.getInput<Input>()) ?? ({} as Input);

const proxyConfiguration = await Actor.createProxyConfiguration();

const crawler = new CheerioCrawler({
    proxyConfiguration,
    maxRequestsPerCrawl,
    requestHandler: async ({ enqueueLinks, request, $, log }) => {
        log.info('enqueueing new URLs');
        await enqueueLinks();

        // Extract title from the page.
        const title = $('title').text();
        log.info(`${title}`, { url: request.loadedUrl });

        // Save url and title to Dataset - a table-like storage.
        await Dataset.pushData({ url: request.loadedUrl, title });
    },
});

await crawler.run(startUrls);

// Gracefully exit the Actor process. It's recommended to quit all Actors with an exit()
await Actor.exit();

TypeScript Crawlee & CheerioCrawler Actor Template

This template was built with Crawlee to scrape data from a website using Cheerio, wrapped in a CheerioCrawler.

Quick Start

Once you've installed the dependencies, start the Actor:

apify run

Once your Actor is ready, you can push it to the Apify Console:

apify login # first, you need to log in if you haven't already done so
apify push

Project Structure

.actor/
├── actor.json # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json # Structure and representation of data produced by an Actor
├── input_schema.json # Input validation & Console form definition
└── output_schema.json # Specifies where an Actor stores its output
src/
└── main.ts # Actor entry point and orchestrator
storage/ # Local storage (mirrors Cloud during development)
├── datasets/ # Output items (JSON objects)
├── key_value_stores/ # Files, config, INPUT
└── request_queues/ # Pending crawl requests
Dockerfile # Container image definition

For more information, see the Actor definition documentation.
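For orientation, a minimal .actor/actor.json ties these files together. The sketch below is illustrative, not this template's actual file; the name and version are placeholders:

```json
{
    "actorSpecification": 1,
    "name": "my-cheerio-actor",
    "version": "0.1",
    "input": "./input_schema.json",
    "dockerfile": "../Dockerfile"
}
```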

How it works

This code is a TypeScript script that uses Cheerio to scrape data from a website. It then stores the website titles in a dataset.

  • The crawler starts with the URLs provided in the input's startUrls field, which is defined by the input schema. The number of scraped pages is capped by the maxRequestsPerCrawl field from the input schema.
  • For each URL, the crawler runs requestHandler, which extracts data from the page with the Cheerio library and saves the title and URL of each page to the dataset. It also logs each result as it is saved.
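One detail worth noting in main.ts is the input-defaulting pattern: `(await Actor.getInput<Input>()) ?? ({} as Input)` combined with destructuring defaults means the Actor still runs when no input, or only partial input, is supplied. A minimal, self-contained sketch of that pattern (resolveInput is a hypothetical helper standing in for the real getInput call; startUrls is simplified to objects with a url field):

```typescript
interface Input {
    startUrls?: { url: string }[];
    maxRequestsPerCrawl?: number;
}

// Stand-in for `(await Actor.getInput<Input>()) ?? ({} as Input)`:
// missing fields fall back to their destructuring defaults.
function resolveInput(raw: Input | undefined) {
    const { startUrls = [{ url: 'https://apify.com' }], maxRequestsPerCrawl = 100 } =
        raw ?? ({} as Input);
    return { startUrls, maxRequestsPerCrawl };
}

// No input at all: both defaults apply.
console.log(resolveInput(undefined));
// Partial input: only the missing field gets its default.
console.log(resolveInput({ maxRequestsPerCrawl: 10 }));
```

This is why `apify run` works out of the box even before you define any INPUT in local storage.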

What's included

  • Apify SDK - toolkit for building Actors
  • Crawlee - web scraping and browser automation library
  • Input schema - define and easily validate a schema for your Actor's input
  • Dataset - store structured data where each object stored has the same attributes
  • Cheerio - a fast, flexible & elegant library for parsing and manipulating HTML and XML
  • Proxy configuration - rotate IP addresses to prevent blocking
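As a sketch of how the two input fields used in main.ts could be described by an input schema, an input_schema.json might look roughly like this (titles, descriptions, and prefills are illustrative, not the template's exact file):

```json
{
    "title": "Input schema",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs the crawler starts from",
            "editor": "requestListSources",
            "prefill": [{ "url": "https://apify.com" }]
        },
        "maxRequestsPerCrawl": {
            "title": "Max requests per crawl",
            "type": "integer",
            "description": "Upper bound on the number of pages the crawler opens",
            "default": 100
        }
    },
    "required": ["startUrls"]
}
```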

Resources

Creating Actors with templates

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.