Puppeteer Scraper
Crawls websites with headless Chrome and the Puppeteer library, using provided server-side Node.js code. This crawler is an alternative to apify/web-scraper that gives you finer control over the process. Supports both recursive crawling and lists of URLs. Supports login to websites.
Glob Patterns
globs
array (optional)
Glob patterns to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Glob patterns will cause the scraper to enqueue all links matched by the Link selector.
Default value of this property is []
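For illustration, a typical value restricts the crawl to a subtree of the site; the URL below is a placeholder:

    [
        { "glob": "https://www.example.com/products/**" }
    ]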
Pseudo-URLs
pseudoUrls
array (optional)
Pseudo-URLs to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Pseudo-URLs will cause the scraper to enqueue all links matched by the Link selector.
Default value of this property is []
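A pseudo-URL is a URL with regular expressions enclosed in [] brackets. As a sketch, assuming a hypothetical products section:

    [
        { "purl": "https://www.example.com/products/[.*]" }
    ]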
Exclude Glob Patterns
excludes
array (optional)
Glob patterns to match links in the page that you want to exclude from being enqueued.
Default value of this property is []
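For example, to skip PDF files anywhere on a hypothetical site:

    [
        { "glob": "https://www.example.com/**/*.pdf" }
    ]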
Link selector
linkSelector
string (optional)
CSS selector matching elements with 'href' attributes that should be enqueued. To enqueue URLs from <div class="my-class" href=...> tags, you would enter div.my-class. Leave empty to ignore all links.
Clickable elements selector
clickableElementsSelector
string (optional)
For pages where simple 'href' links are not available, this attribute allows you to specify a CSS selector matching elements that the scraper will mouse click after the page function finishes. Any triggered requests, navigations or open tabs will be intercepted and the target URLs will be filtered using Pseudo URLs and/or Glob patterns and subsequently added to the request queue. Leave empty to prevent the scraper from clicking in the page. Using this setting will have a performance impact.
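For example, to have the scraper click pagination and lazy-loading buttons, a selector along these lines could be used (the class names are hypothetical):

    a.pagination__next, button.load-more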
Keep URL fragments
keepUrlFragments
boolean (optional)
URL fragments (the part of a URL after a #) are not considered when the scraper determines whether a URL has already been visited. This means that when adding URLs such as https://example.com/#foo and https://example.com/#bar, only the first will be visited. Turn this option on to tell the scraper to visit both.
Default value of this property is false
Proxy configuration
proxyConfiguration
object (required)
Specifies proxy servers that will be used by the scraper in order to hide its origin.
For details, see Proxy configuration in README.
Default value of this property is {"useApifyProxy":true}
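A minimal example that routes traffic through Apify Proxy; the proxy group below is illustrative, not a default:

    {
        "useApifyProxy": true,
        "apifyProxyGroups": ["RESIDENTIAL"]
    }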
Proxy rotation
proxyRotation
enum (optional)
This property indicates the strategy of proxy rotation and can only be used in conjunction with Apify Proxy. The recommended setting automatically picks the best proxies from your available pool and rotates them evenly, discarding proxies that become blocked or unresponsive. If this strategy does not work for you for any reason, you may configure the scraper to either use a new proxy for each request, or to use one proxy as long as possible, until the proxy fails. IMPORTANT: This setting will only use your available Apify Proxy pool, so if you don't have enough proxies for a given task, no rotation setting will produce satisfactory results.
Value options:
"RECOMMENDED": string"PER_REQUEST": string"UNTIL_FAILURE": string
Default value of this property is "RECOMMENDED"
Session pool name
sessionPoolName
string (optional)
Use only English alphanumeric characters, dashes and underscores. A session is a representation of a user. It has its own IP and cookies, which are then used together to emulate a real user. Usage of the sessions is controlled by the Proxy rotation option. By providing a session pool name, you enable sharing of those sessions across multiple Actor runs. This is very useful when you need specific cookies for accessing the websites, or when a lot of your proxies are already blocked. Instead of trying randomly, a list of working sessions will be saved and a new Actor run can reuse those sessions. Note that the IP lock on sessions expires after 24 hours, unless the session is used again in that window.
Initial cookies
initialCookies
array (optional)
The provided cookies will be pre-set to all pages the scraper opens.
Default value of this property is []
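Each cookie is an object in the format accepted by Puppeteer's page.setCookie(); the values below are placeholders:

    [
        {
            "name": "session_id",
            "value": "abc123",
            "domain": ".example.com",
            "path": "/"
        }
    ]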
Use Chrome
useChrome
boolean (optional)
The scraper will use a real Chrome browser instead of Chromium masquerading as Chrome. Using this option may help with bypassing certain anti-scraping protections, but it carries the risk that the scraper will be unstable or not work at all.
Default value of this property is false
Run browsers in headless mode
headless
boolean (optional)
By default, browsers run in headless mode. You can toggle this off to run them in headful mode, which can help with certain rare anti-scraping protections but is slower and more costly.
Default value of this property is true
Ignore SSL errors
ignoreSslErrors
boolean (optional)
Scraper will ignore SSL certificate errors.
Default value of this property is false
Ignore CORS and CSP
ignoreCorsAndCsp
boolean (optional)
Scraper will ignore CSP (content security policy) and CORS (cross origin resource sharing) settings of visited pages and requested domains. This enables you to freely use XHR/Fetch to make HTTP requests from the scraper.
Default value of this property is false
Download media
downloadMedia
boolean (optional)
Scraper will download media such as images, fonts, videos and sounds. Disabling this may speed up the scrape, but certain websites could stop working correctly.
Default value of this property is true
Download CSS
downloadCss
boolean (optional)
Scraper will download CSS stylesheets. Disabling this may speed up the scrape, but certain websites could stop working correctly.
Default value of this property is true
Max request retries
maxRequestRetries
integer (optional)
Maximum number of times the request for the page will be retried in case of an error. Setting it to 0 means that the request will be attempted once and will not be retried if it fails.
Default value of this property is 3
Max pages per run
maxPagesPerCrawl
integer (optional)
Maximum number of pages that the scraper will open. 0 means unlimited.
Default value of this property is 0
Max result records
maxResultsPerCrawl
integer (optional)
Maximum number of results that will be saved to the dataset. The scraper will terminate afterwards. 0 means unlimited.
Default value of this property is 0
Max crawling depth
maxCrawlingDepth
integer (optional)
Defines how many links away from the Start URLs the scraper will descend. 0 means unlimited.
Default value of this property is 0
Max concurrency
maxConcurrency
integer (optional)
Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.
Default value of this property is 50
Page load timeout
pageLoadTimeoutSecs
integer (optional)
Maximum time the scraper will allow a web page to load in seconds.
Default value of this property is 60
Page function timeout
pageFunctionTimeoutSecs
integer (optional)
Maximum time the scraper will wait for the page function to execute in seconds.
Default value of this property is 60
Navigation wait until
waitUntil
array (optional)
The scraper will wait until the selected events are triggered in the page before executing the page function. Available events are domcontentloaded, load, networkidle2 and networkidle0. See Puppeteer docs.
Default value of this property is ["networkidle2"]
Pre-navigation hooks
preNavigationHooks
string (optional)
Async functions that are sequentially evaluated before the navigation. Good for setting additional cookies or browser properties before navigation. The function accepts two parameters, crawlingContext and gotoOptions, which are passed to the page.goto() function the crawler calls to navigate.
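A minimal sketch of such hooks; the header and timeout values are illustrative, not defaults:

    [
        async (crawlingContext, gotoOptions) => {
            const { page } = crawlingContext;
            // Set an extra HTTP header before the crawler navigates.
            await page.setExtraHTTPHeaders({ 'accept-language': 'en-US' });
            // Extend the timeout that will be passed to page.goto().
            gotoOptions.timeout = 60000;
        },
    ]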
Post-navigation hooks
postNavigationHooks
string (optional)
Async functions that are sequentially evaluated after the navigation. Good for checking if the navigation was successful. The function accepts crawlingContext as the only parameter.
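A sketch of a hook that sanity-checks the navigation, assuming the crawling context exposes page and log as in Crawlee:

    [
        async (crawlingContext) => {
            const { page, log } = crawlingContext;
            // Log the document title as a quick check that the page loaded.
            const title = await page.title();
            log.info(`Loaded page titled: ${title}`);
        },
    ]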
Dismiss cookie modals
closeCookieModals
boolean (optional)
When on, the crawler will automatically try to dismiss cookie consent modals using the I don't care about cookies browser extension. This can be useful when crawling European websites that show cookie consent modals.
Default value of this property is false
Maximum scrolling distance in pixels
maxScrollHeightPixels
integer (optional)
The crawler will scroll down the page until all content is loaded or the maximum scrolling distance is reached. Setting this to 0 disables scrolling altogether.
Default value of this property is 5000
Debug log
debugLog
boolean (optional)
Debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.
Default value of this property is false
Browser log
browserLog
boolean (optional)
Console messages from the browser will be included in the log. This may result in the log being flooded by error messages, warnings and other messages of little value, especially with high concurrency.
Default value of this property is false
Custom data
customData
object (optional)
This object will be available in the pageFunction's context as customData.
Default value of this property is {}
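A sketch of reading it inside the page function, assuming customData was set to {"label": "test-run"}:

    async function pageFunction(context) {
        const { customData } = context;
        // Use the injected value to tag or branch the scrape.
        context.log.info(`Running with label: ${customData.label}`);
        return { url: context.request.url, label: customData.label };
    }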
Dataset name
datasetName
string (optional)
Name or ID of the dataset that will be used for storing results. If left empty, the default dataset of the run will be used.