
Crawlee + Puppeteer + Chrome

Example of a Puppeteer and headless Chrome web scraper. Headless browsers render JavaScript and are harder to block, but they're slower than plain HTTP.







Use cases

Web scraping



// Apify SDK - toolkit for building Apify Actors
import { Actor } from 'apify';
// Web scraping and browser automation library
import { PuppeteerCrawler, Request } from 'crawlee';
import { router } from './routes.js';

// The init() call configures the Actor for its environment. It's recommended to start every Actor with an init().
await Actor.init();

interface Input {
    startUrls: Request[];
}

// Define the URLs to start the crawler with - get them from the input of the Actor or use a default list.
const {
    startUrls = [''],
} = await Actor.getInput<Input>() ?? {};

// Create a proxy configuration that will rotate proxies from Apify Proxy.
const proxyConfiguration = await Actor.createProxyConfiguration();

// Create a PuppeteerCrawler that will use the proxy configuration and handle requests with the router from the routes.js file.
const crawler = new PuppeteerCrawler({
    proxyConfiguration,
    requestHandler: router,
});

// Run the crawler with the start URLs and wait for it to finish.
await crawler.run(startUrls);

// Gracefully exit the Actor process. It's recommended to quit all Actors with an exit().
await Actor.exit();

TypeScript PuppeteerCrawler Actor template

This template is a production-ready boilerplate for developing with PuppeteerCrawler. The PuppeteerCrawler provides a simple framework for parallel crawling of web pages using headless Chrome with Puppeteer. Since PuppeteerCrawler uses headless Chrome to download web pages and extract data, it is useful for crawling websites that require JavaScript execution.
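The input-handling step in the template code above can be sketched without the Apify dependency. In the snippet below, `getInput` is a hypothetical stand-in for `Actor.getInput()`, and the fallback URL is purely illustrative:

```typescript
interface Input {
    startUrls?: string[];
}

// Hypothetical stand-in for Actor.getInput<Input>(): resolves to null when no input is set.
const getInput = async (): Promise<Input | null> => null;

async function readStartUrls(): Promise<string[]> {
    // The same defaulting pattern as the template: if getInput() yields null,
    // destructure against an empty object and fall back to an illustrative default list.
    const { startUrls = ['https://example.com'] } = (await getInput()) ?? ({} as Input);
    return startUrls;
}

readStartUrls().then((urls) => console.log(urls));
```

This pattern keeps the Actor runnable even when no input is provided, which is handy during local development.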

If you're looking for examples or want to learn more, visit the Apify SDK and Crawlee documentation.

Included features

  • Puppeteer Crawler - simple framework for parallel crawling of web pages using headless Chrome with Puppeteer
  • Configurable Proxy - tool for working around IP blocking
  • Input schema - define and easily validate a schema for your Actor's input
  • Dataset - store structured data where each object stored has the same attributes
  • Apify SDK - toolkit for building Actors
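To illustrate the Dataset idea of uniform records, here is a tiny in-memory stand-in. `InMemoryDataset` and `PageRecord` are hypothetical names for this sketch, not Crawlee's actual API (the real Dataset persists items to storage):

```typescript
// Every item stored in a dataset shares the same attributes; here url and title,
// matching what the detail handler in routes.js pushes.
interface PageRecord {
    url: string;
    title: string;
}

// Hypothetical in-memory stand-in for Crawlee's Dataset;
// pushData mirrors the call used in the route handler.
class InMemoryDataset {
    private items: PageRecord[] = [];

    async pushData(item: PageRecord): Promise<void> {
        this.items.push(item);
    }

    getData(): PageRecord[] {
        return this.items;
    }
}

const dataset = new InMemoryDataset();
void dataset.pushData({ url: 'https://example.com', title: 'Example Domain' });
console.log(dataset.getData());
```

Because every record has the same attributes, the resulting dataset exports cleanly to tabular formats such as CSV.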

How it works

  1. Actor.getInput() gets the input from INPUT.json where the start URLs are defined.
  2. Create a configuration for the proxy servers to be used during the crawling with Actor.createProxyConfiguration() to work around IP blocking. Use Apify Proxy or your own proxy URLs, which are provided and rotated according to the configuration. You can read more about proxy configuration here.
  3. Create an instance of Crawlee's Puppeteer Crawler with new PuppeteerCrawler(). You can pass options to the crawler constructor, such as:
    • proxyConfiguration - provide the proxy configuration to the crawler
    • requestHandler - handle each request with custom router defined in the routes.js file.
  4. Handle requests with the custom router from the routes.js file. Read more about custom routing for the Puppeteer Crawler here
    • Create a new router instance with createPuppeteerRouter()
    • Define a default handler that will be called for all URLs not matched by other handlers by adding router.addDefaultHandler(() => { ... })
    • Define additional handlers - here you can add your own handling of the page
      router.addHandler('detail', async ({ request, page, log }) => {
          const title = await page.title();
          // You can add your own page handling here
          await Dataset.pushData({
              url: request.loadedUrl,
              title,
          });
      });
  5. Run the crawler with crawler.run(startUrls) and wait for it to finish.
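At heart, Crawlee's router is a map from request labels to handler functions. The stand-alone sketch below illustrates that dispatch idea under hypothetical names (MiniRouter is not Crawlee's implementation; the real one is created with createPuppeteerRouter() and its handlers receive the page and crawling context):

```typescript
// A request as this sketch sees it: a URL plus an optional label set when the link was enqueued.
interface SketchRequest {
    url: string;
    label?: string;
}

type Handler = (request: SketchRequest) => string;

// Hypothetical MiniRouter: named handlers are looked up by label first,
// and the default handler catches every request without a matching label.
class MiniRouter {
    private handlers = new Map<string, Handler>();
    private defaultHandler?: Handler;

    addHandler(label: string, handler: Handler): void {
        this.handlers.set(label, handler);
    }

    addDefaultHandler(handler: Handler): void {
        this.defaultHandler = handler;
    }

    route(request: SketchRequest): string {
        const handler = (request.label !== undefined ? this.handlers.get(request.label) : undefined)
            ?? this.defaultHandler;
        if (!handler) {
            throw new Error(`No handler for ${request.url}`);
        }
        return handler(request);
    }
}

const router = new MiniRouter();
router.addDefaultHandler((req) => `listing page: ${req.url}`);
router.addHandler('detail', (req) => `detail page: ${req.url}`);

console.log(router.route({ url: 'https://example.com/item/1', label: 'detail' }));
console.log(router.route({ url: 'https://example.com/' }));
```

Splitting handlers by label keeps per-page-type logic in one place, which is why the template routes everything through routes.js instead of one monolithic request handler.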



Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.