Puppeteer Improved Inputs Scraper

barry8schneider/puppeteer20191228

This Actor is no longer available because the developer has deprecated it.

Adds the following options to the inputs: Use Live View, Inject jQuery, and Inject Underscore.js.

Start URLs

startUrls (array, required)

URLs to start with
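
As a sketch, Start URLs are given as an array of request objects with a url key (the usual Apify convention, assumed here); the addresses are placeholders:

```javascript
// Sketch of the startUrls part of the run input; the URLs are placeholders.
const input = {
    startUrls: [
        { url: 'https://example.com/' },
        { url: 'https://example.com/category/shoes' },
    ],
};
```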

Pseudo-URLs

pseudoUrls (array, optional)

Pseudo-URLs to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Pseudo-URLs will cause the scraper to enqueue all links matched by the Link selector.

Default value of this property is []
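
As a sketch, a pseudo-URL is usually given as an object with a purl key whose value may contain a regular expression in square brackets; the pattern below is a placeholder:

```javascript
// Sketch: enqueue only links whose URLs match the bracketed part of the
// pseudo-URL (placeholder pattern for a product listing).
const input = {
    pseudoUrls: [
        { purl: 'https://example.com/products/[.*]' },
    ],
};
```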

Link selector

linkSelector (string, optional)

CSS selector matching elements with 'href' attributes that should be enqueued. Combine with Pseudo-URLs to filter which of the matched links are actually followed.
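
A short sketch of a Link selector that restricts enqueuing to links inside one container; the selector is a placeholder for the target site's real markup:

```javascript
// Sketch: follow only links found inside the (hypothetical) product grid.
const input = {
    linkSelector: 'div.product-list a[href]',
};
```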

Clickable elements selector

clickableElementsSelector (string, optional)

For pages where simple 'href' links are not available, this attribute allows you to specify a CSS selector matching elements that the scraper will mouse click after the page function finishes. Any triggered requests, navigations or open tabs will be intercepted and the target URLs will be filtered using Pseudo URLs and subsequently added to the request queue. Leave empty to prevent the scraper from clicking in the page. Using this setting will have a performance impact.
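
A sketch of this option for a site whose result cards are opened by JavaScript click handlers rather than plain links; the selector is hypothetical:

```javascript
// Sketch: have the scraper click script-driven cards after the page function
// finishes; intercepted navigations are then filtered by the Pseudo-URLs.
const input = {
    clickableElementsSelector: 'div.result-card[data-url]',
};
```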

Keep URL fragments

keepUrlFragments (boolean, optional)

URL fragments (the parts of URL after a #) are not considered when the scraper determines whether a URL has already been visited. This means that when adding URLs such as https://example.com/#foo and https://example.com/#bar, only the first will be visited. Turn this option on to tell the scraper to visit both.

Default value of this property is false

Inject jQuery to work in page.evaluate()

injectJQuery (boolean, optional)

Indicates that the jQuery library should be injected into each page before the page function is invoked, so that jQuery can be used within page.evaluate(). Note that the jQuery object will not be registered into the global namespace, in order to avoid conflicts with libraries used by the web page. It can only be accessed through context.jQuery.

Default value of this property is true

Inject Underscore.js to work in page.evaluate()

injectUnderscoreJs (boolean, optional)

Indicates that the Underscore.js library should be injected into each page before the page function is invoked, so that Underscore.js can be used within page.evaluate(). Note that the Underscore object will not be registered into the global namespace, in order to avoid conflicts with libraries used by the web page. It can only be accessed through context.underscoreJs.

Default value of this property is false
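
A minimal sketch of the two injection toggles in the run input. Per the descriptions above, the injected libraries are reachable as context.jQuery and context.underscoreJs rather than on the page's global namespace:

```javascript
// Sketch: turn jQuery injection on and leave Underscore.js at its default.
const input = {
    injectJQuery: true,        // available as context.jQuery
    injectUnderscoreJs: false, // would be available as context.underscoreJs
};
```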

Proxy configuration

proxyConfiguration (object, optional)

Choose to use no proxy, Apify Proxy, or provide custom proxy URLs.

Default value of this property is {}
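
A sketch of a typical proxy configuration object. The keys shown (useApifyProxy, apifyProxyGroups, proxyUrls) follow the common Apify proxy-input convention and are assumptions for this Actor:

```javascript
// Sketch: route requests through Apify Proxy; the commented-out lines show
// the alternative of custom proxy URLs (placeholder credentials).
const input = {
    proxyConfiguration: {
        useApifyProxy: true,
        // apifyProxyGroups: ['RESIDENTIAL'],
        // proxyUrls: ['http://user:password@proxy.example.com:8000'],
    },
};
```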

Ignore SSL errors

ignoreSslErrors (boolean, optional)

Scraper will ignore SSL certificate errors.

Default value of this property is true

Debug log

debugLog (boolean, optional)

Debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.

Default value of this property is false

Custom data

customData (object, optional)

This object will be available on pageFunction's context as customData.

Default value of this property is {}
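
A brief sketch showing how custom data flows into the page function as context.customData; the keys are placeholders:

```javascript
// Sketch: pass run-specific values to the page function via customData.
const input = {
    customData: { label: 'CATEGORY', maxPrice: 100 },
};
// Inside the page function: const { label, maxPrice } = context.customData;
```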

Page function

pageFunction (string, required)

Function executed for each request
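
A sketch of a page function, assuming the usual Puppeteer Scraper context fields (page, request, log, customData); whatever object it returns is saved to the dataset. In the actual input the function is passed as a string:

```javascript
// Sketch of a page function; context fields other than log and customData
// (which are documented on this page) are assumptions.
async function pageFunction(context) {
    const { page, request, log, customData } = context;

    const title = await page.title();      // Puppeteer Page API
    log.debug(`Scraping ${request.url}`);   // visible when Debug log is enabled

    return {
        url: request.url,
        title,
        label: customData.label,            // value supplied via Custom data
    };
}
```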

Pre goto function

preGotoFunction (string, optional)

This function is executed before navigation to a given URL. It can be useful for pre-processing, for making changes to the page that help bypass anti-scraping protections, or simply for setting cookies.
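
A sketch of a pre goto function that sets an extra HTTP header before navigation; the destructured fields (page, request) are assumed to match the standard Puppeteer Scraper signature:

```javascript
// Sketch: adjust the browser before navigation; `request` describes the
// upcoming URL and is available for per-request logic.
async function preGotoFunction({ page, request }) {
    await page.setExtraHTTPHeaders({ 'Accept-Language': 'en-US' });
}
```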

Initial cookies

initialCookies (array, optional)

The provided cookies will be pre-set to all pages the scraper opens.

Default value of this property is []
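
A sketch of initial cookies in Puppeteer's cookie format (name, value, domain); the values are placeholders:

```javascript
// Sketch: pre-set a session cookie on every page the scraper opens.
const input = {
    initialCookies: [
        { name: 'sessionid', value: 'abc123', domain: '.example.com' },
    ],
};
```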

Use Chrome

useChrome (boolean, optional)

The scraper will use a real Chrome browser instead of Chromium masquerading as Chrome. Using this option may help with bypassing certain anti-scraping protections, but carries the risk that the scraper will be unstable or not work at all.

Default value of this property is false

Use Stealth

useStealth (boolean, optional)

The scraper will apply various browser emulation techniques to match a real user as closely as possible. This feature works best in conjunction with the Use Chrome option and also carries the risk of making the scraper unstable.

Default value of this property is false

Ignore CORS and CSP

ignoreCorsAndCsp (boolean, optional)

Scraper will ignore CSP (content security policy) and CORS (cross origin resource sharing) settings of visited pages and requested domains. This enables you to freely use XHR/Fetch to make HTTP requests from the scraper.

Default value of this property is false
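
A sketch of using this option: with CORS and CSP checks disabled, the page function can call a cross-origin API straight from the browser context via page.evaluate(). The API URL is a placeholder:

```javascript
// Sketch: fetch JSON from another origin inside the page; without the
// "Ignore CORS and CSP" option the browser would normally block this.
async function pageFunction(context) {
    const { page } = context;
    const data = await page.evaluate(async () => {
        const res = await fetch('https://api.example.com/items');
        return res.json();
    });
    return data;
}
```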

Download media

downloadMedia (boolean, optional)

Scraper will download media such as images, fonts, videos and sounds. Disabling this may speed up the scrape, but certain websites could stop working correctly.

Default value of this property is false

Download CSS

downloadCss (boolean, optional)

Scraper will download CSS stylesheets. Disabling this may speed up the scrape, but certain websites could stop working correctly.

Default value of this property is false

Use LiveView

useLiveView (boolean, optional)

Displays a Live View of the first open browser page while the scraper runs. Disabling this may speed up the scrape.

Default value of this property is false

Max request retries

maxRequestRetries (integer, optional)

Maximum number of times the request for the page will be retried in case of an error. Setting it to 0 means that the request will be attempted once and will not be retried if it fails.

Default value of this property is 3

Max pages per run

maxPagesPerCrawl (integer, optional)

Maximum number of pages that the scraper will open. 0 means unlimited.

Default value of this property is 0

Max result records

maxResultsPerCrawl (integer, optional)

Maximum number of results that will be saved to the dataset. The scraper will terminate afterwards. 0 means unlimited.

Default value of this property is 0

Max crawling depth

maxCrawlingDepth (integer, optional)

Defines how many links away from the Start URLs the scraper will descend. 0 means unlimited.

Default value of this property is 0

Max concurrency

maxConcurrency (integer, optional)

Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.

Default value of this property is 50

Page load timeout

pageLoadTimeoutSecs (integer, optional)

Maximum time, in seconds, that the scraper will allow a web page to load.

Default value of this property is 60

Page function timeout

pageFunctionTimeoutSecs (integer, optional)

Maximum time, in seconds, that the scraper will wait for the page function to execute.

Default value of this property is 60

Navigation wait until

waitUntil (array, optional)

The scraper will wait until the selected events are triggered in the page before executing the page function. Available events are domcontentloaded, load, networkidle2 and networkidle0. See Puppeteer docs.

Default value of this property is ["networkidle2"]
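
A short sketch of a stricter setting that waits for the full load event and for the network to become completely idle:

```javascript
// Sketch: wait for both events before the page function runs (slower but
// safer on script-heavy pages).
const input = {
    waitUntil: ['load', 'networkidle0'],
};
```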

Browser log

browserLog (boolean, optional)

Console messages from the browser will be included in the log. This may result in the log being flooded by error messages, warnings and other messages of little value, especially with high concurrency.

Default value of this property is false
