Puppeteer Scraper

  • apify/puppeteer-scraper
  • Users 1.9k
  • Runs 2.1M
  • Created by Apify

Crawls websites with headless Chrome and the Puppeteer library, using provided server-side Node.js code. This crawler is an alternative to apify/web-scraper that gives you finer control over the process. Supports both recursive crawling and lists of URLs. Supports login to websites.

Start URLs

startUrls

Required

array

URLs to start with
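
For example, a minimal `startUrls` value might look like the following sketch (the URLs are placeholders; the per-URL `userData` key is optional and only useful if your page function branches on it):

```js
// Hypothetical startUrls input value
[
    { "url": "https://example.com/" },
    { "url": "https://example.com/products", "userData": { "label": "LISTING" } }
]
```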

Glob Patterns

globs

Optional

array

Glob patterns to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Glob patterns will cause the scraper to enqueue all links matched by the Link selector.
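
As an illustration, patterns like the ones below would restrict enqueueing to product and category pages (the site structure is hypothetical; each entry here uses an object with a `glob` key, which may differ between actor versions):

```js
// Hypothetical globs input value
[
    { "glob": "https://example.com/products/**" },
    { "glob": "https://example.com/category/*" }
]
```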

Pseudo-URLs

pseudoUrls

Optional

array

Pseudo-URLs to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Pseudo-URLs will cause the scraper to enqueue all links matched by the Link selector.
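
A Pseudo-URL is a URL with regular-expression sections enclosed in `[]` brackets. A sketch, assuming a hypothetical site structure (entries here use a `purl` key):

```js
// Hypothetical pseudoUrls input value
[
    { "purl": "https://example.com/[(products|category)]/[.*]" }
]
```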

Link selector

linkSelector

Optional

string

CSS selector matching elements with 'href' attributes that should be enqueued. To enqueue URLs from `<div class="my-class" href="...">` tags, you would enter `div.my-class`. Leave empty to ignore all links.

Clickable elements selector

clickableElementsSelector

Optional

string

For pages where simple 'href' links are not available, this attribute allows you to specify a CSS selector matching elements that the scraper will mouse click after the page function finishes. Any triggered requests, navigations or open tabs will be intercepted and the target URLs will be filtered using Pseudo URLs and/or Glob patterns and subsequently added to the request queue. Leave empty to prevent the scraper from clicking in the page. Using this setting will have a performance impact.
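
For instance, on a site where pagination is driven by JavaScript rather than plain links, a selector along these lines could be used (the class name is hypothetical):

```js
// Hypothetical clickableElementsSelector input value
"button.pagination__next"
```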

Keep URL fragments

keepUrlFragments

Optional

boolean

URL fragments (the parts of a URL after a #) are not considered when the scraper determines whether a URL has already been visited. This means that when adding URLs such as https://example.com/#foo and https://example.com/#bar, only the first one will be visited. Turn this option on to tell the scraper to visit both.

Page function

pageFunction

Required

string

Function executed for each request
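
A minimal page function sketch, assuming the context object exposes `page`, `request` and `log` as in other Apify scrapers:

```js
async function pageFunction(context) {
    const { page, request, log } = context;

    // Extract data with standard Puppeteer calls on the page object
    const title = await page.title();
    log.info(`URL: ${request.url}, TITLE: ${title}`);

    // The returned object is saved to the dataset as one result record
    return {
        url: request.url,
        title,
    };
}
```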

Proxy configuration

proxyConfiguration

Required

object

Specifies proxy servers that will be used by the scraper in order to hide its origin. For details, see Proxy configuration in README.
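
A sketch of the input value, assuming the standard Apify Proxy input shape (the proxy group name is a placeholder):

```js
// Hypothetical proxyConfiguration input value
{
    "useApifyProxy": true,
    "apifyProxyGroups": ["SOME_GROUP"]
}
```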

Proxy rotation

proxyRotation

Optional

string

This property indicates the strategy of proxy rotation and can only be used in conjunction with Apify Proxy. The recommended setting automatically picks the best proxies from your available pool and rotates them evenly, discarding proxies that become blocked or unresponsive. If this strategy does not work for you for any reason, you may configure the scraper to either use a new proxy for each request, or to use one proxy as long as possible, until the proxy fails. IMPORTANT: This setting will only use your available Apify Proxy pool, so if you don't have enough proxies for a given task, no rotation setting will produce satisfactory results.

Options:

"RECOMMENDED", "PER_REQUEST", "UNTIL_FAILURE"

Session pool name

sessionPoolName

Optional

string

Use only English alphanumeric characters, dashes, and underscores. A session is a representation of a user. It has its own IP and cookies, which are then used together to emulate a real user. Usage of sessions is controlled by the Proxy rotation option. By providing a session pool name, you enable sharing of those sessions across multiple actor runs. This is very useful when you need specific cookies to access websites, or when many of your proxies are already blocked. Instead of trying at random, a list of working sessions will be saved, and a new actor run can reuse those sessions. Note that the IP lock on sessions expires after 24 hours, unless the session is used again within that window.

Initial cookies

initialCookies

Optional

array

The provided cookies will be pre-set to all pages the scraper opens.
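
Cookies use the standard Puppeteer shape, so an input could look like this (names and values are placeholders):

```js
// Hypothetical initialCookies input value
[
    { "name": "session_id", "value": "abc123", "domain": ".example.com", "path": "/" }
]
```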

Use Chrome

useChrome

Optional

boolean

The scraper will use a real Chrome browser instead of Chromium masquerading as Chrome. Using this option may help with bypassing certain anti-scraping protections, but it carries the risk that the scraper will be unstable or will not work at all.

Run browsers in headless mode

headless

Optional

boolean

By default, browsers run in headless mode. You can toggle this off to run them in headful mode, which can help with certain rare anti-scraping protections but is slower and more costly.

Ignore SSL errors

ignoreSslErrors

Optional

boolean

Scraper will ignore SSL certificate errors.

Ignore CORS and CSP

ignoreCorsAndCsp

Optional

boolean

Scraper will ignore CSP (content security policy) and CORS (cross origin resource sharing) settings of visited pages and requested domains. This enables you to freely use XHR/Fetch to make HTTP requests from the scraper.

Download media

downloadMedia

Optional

boolean

Scraper will download media such as images, fonts, videos and sounds. Disabling this may speed up the scrape, but certain websites could stop working correctly.

Download CSS

downloadCss

Optional

boolean

Scraper will download CSS stylesheets. Disabling this may speed up the scrape, but certain websites could stop working correctly.

Max request retries

maxRequestRetries

Optional

integer

Maximum number of times the request for the page will be retried in case of an error. Setting it to 0 means that the request will be attempted once and will not be retried if it fails.

Max pages per run

maxPagesPerCrawl

Optional

integer

Maximum number of pages that the scraper will open. 0 means unlimited.

Max result records

maxResultsPerCrawl

Optional

integer

Maximum number of results that will be saved to dataset. The scraper will terminate afterwards. 0 means unlimited.

Max crawling depth

maxCrawlingDepth

Optional

integer

Defines how many links away from the Start URLs the scraper will descend. 0 means unlimited.

Max concurrency

maxConcurrency

Optional

integer

Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.

Page load timeout

pageLoadTimeoutSecs

Optional

integer

Maximum time the scraper will allow a web page to load in seconds.

Page function timeout

pageFunctionTimeoutSecs

Optional

integer

Maximum time the scraper will wait for the page function to execute in seconds.

Navigation wait until

waitUntil

Optional

array

The scraper will wait until the selected events are triggered in the page before executing the page function. Available events are `domcontentloaded`, `load`, `networkidle2` and `networkidle0`. See the Puppeteer docs.
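
For example, to wait for the load event plus a mostly quiet network (at most two in-flight connections), the value would be:

```js
// waitUntil input value combining two Puppeteer events
["load", "networkidle2"]
```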

Pre-navigation hooks

preNavigationHooks

Optional

string

Async functions that are sequentially evaluated before the navigation. Good for setting additional cookies or browser properties before navigation. The function accepts two parameters, `crawlingContext` and `gotoOptions`, which are passed to the `page.goto()` function the crawler calls to navigate.
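
A sketch of a pre-navigation hook that tweaks the navigation options before `page.goto()` runs (the timeout value is arbitrary):

```js
// Hypothetical preNavigationHooks input value
[
    async (crawlingContext, gotoOptions) => {
        // Extend the navigation timeout and relax the wait condition
        gotoOptions.timeout = 60000;
        gotoOptions.waitUntil = 'domcontentloaded';
    }
]
```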

Post-navigation hooks

postNavigationHooks

Optional

string

Async functions that are sequentially evaluated after the navigation. Good for checking if the navigation was successful. The function accepts `crawlingContext` as the only parameter.
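
A sketch of a post-navigation hook, assuming `crawlingContext` exposes the Puppeteer `page` (the selector is a placeholder):

```js
// Hypothetical postNavigationHooks input value
[
    async (crawlingContext) => {
        const { page } = crawlingContext;
        // Fail fast (and trigger a retry) if an expected element never appears
        await page.waitForSelector('#content', { timeout: 10000 });
    }
]
```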

Debug log

debugLog

Optional

boolean

Debug messages will be included in the log. Use `context.log.debug('message')` to log your own debug messages.

Browser log

browserLog

Optional

boolean

Console messages from the browser will be included in the log. This may result in the log being flooded by error messages, warnings and other messages of little value, especially with high concurrency.

Custom data

customData

Optional

object

This object will be available in the `pageFunction`'s context as `customData`.
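
For instance, with a hypothetical input of `{ "label": "test-run" }`, the page function could read it back like this:

```js
// Reading the (hypothetical) customData object inside the page function
async function pageFunction(context) {
    const { customData, request } = context;
    return {
        url: request.url,
        label: customData.label, // "test-run"
    };
}
```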

Dataset name

datasetName

Optional

string

Name or ID of the dataset that will be used for storing results. If left empty, the default dataset of the run will be used.

Key-value store name

keyValueStoreName

Optional

string

Name or ID of the key-value store that will be used for storing records. If left empty, the default key-value store of the run will be used.

Request queue name

requestQueueName

Optional

string

Name of the request queue that will be used for storing requests. If left empty, the default request queue of the run will be used.