Web Scraper Experimental Debug
Experimental version of Apify Web Scraper with Chrome debugger integrated
Pricing: Pay per usage
Last modified: 5 years ago
useRequestQueue (boolean, optional)
The request queue enables recursive crawling and the use of Pseudo-URLs, the Link selector, and context.enqueueRequest().
Default value of this property is true
pseudoUrls (array, optional)
Pseudo-URLs to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Pseudo-URLs will cause the scraper to enqueue all links matched by the Link selector.
Default value of this property is []
linkSelector (string, optional)
CSS selector matching elements with 'href' attributes that should be enqueued. To enqueue URLs from
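In a Pseudo-URL, the parts enclosed in square brackets are regular expressions and everything outside them is matched literally. A minimal sketch of that matching rule (an illustrative re-implementation for clarity, not Apify's actual code):

```javascript
// Convert a Pseudo-URL such as "https://example.com/[(\w|-)+]" into a RegExp.
// Bracketed segments are raw regular expressions; the rest is escaped and
// matched literally.
function pseudoUrlToRegExp(pseudoUrl) {
  let pattern = '';
  for (const part of pseudoUrl.split(/(\[.*?\])/)) {
    if (part.startsWith('[') && part.endsWith(']')) {
      pattern += part.slice(1, -1); // raw regex from inside the brackets
    } else {
      pattern += part.replace(/[.*+?^${}()|\\]/g, '\\$&'); // escape literal text
    }
  }
  return new RegExp(`^${pattern}$`);
}

const re = pseudoUrlToRegExp('https://example.com/[(\\w|-)+]');
console.log(re.test('https://example.com/my-page')); // true
console.log(re.test('https://google.com/other'));    // false
```

Links matched by the Link selector are enqueued only if their URL matches one of the Pseudo-URLs (or if no Pseudo-URLs are given at all, as noted above).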
keepUrlFragments (boolean, optional)
URL fragments (the parts of a URL after #) are not considered when the scraper determines whether a URL has already been visited. This means that when adding URLs such as https://example.com/#foo and https://example.com/#bar, only the first will be visited. Turn this option on to tell the scraper to visit both.
Default value of this property is false
injectJQuery (boolean, optional)
The jQuery library will be injected into each page. If the page already uses jQuery, conflicts may arise.
Default value of this property is true
injectUnderscore (boolean, optional)
The Underscore.js library will be injected into each page. If the page already uses Underscore.js (or other libraries that attach to '_', such as Lodash), conflicts may arise.
Default value of this property is false
proxyConfiguration (object, optional)
Choose to use no proxy, Apify Proxy, or provide custom proxy URLs.
Default value of this property is {}
initialCookies (array, optional)
The provided cookies will be pre-set on all pages the scraper opens.
Default value of this property is []
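For illustration, a minimal input fragment combining the two options above. The proxyConfiguration keys follow Apify's usual useApifyProxy/proxyUrls shape, and the cookie objects use the fields Puppeteer's page.setCookie() accepts; the proxy URL and cookie values are made up:

```javascript
// Hypothetical input fragment; the proxy URL and cookie values are placeholders.
const input = {
  proxyConfiguration: {
    useApifyProxy: false,                                    // assumed key name
    proxyUrls: ['http://user:pass@proxy.example.com:8000'],  // custom proxy list
  },
  initialCookies: [
    // One object per cookie to pre-set on every page the scraper opens.
    { name: 'session', value: 'abc123', domain: '.example.com', path: '/' },
  ],
};

console.log(input.initialCookies[0].name); // session
```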
useChrome (boolean, optional)
The scraper will use a real Chrome browser instead of Chromium masquerading as Chrome. Using this option may help with bypassing certain anti-scraping protections, but risks that the scraper will be unstable or not work at all.
Default value of this property is false
useStealth (boolean, optional)
The scraper will apply various browser emulation techniques to match a real user as closely as possible. This feature works best in conjunction with the Use Chrome option and also carries the risk of making the scraper unstable.
Default value of this property is false
ignoreSslErrors (boolean, optional)
The scraper will ignore SSL certificate errors.
Default value of this property is false
ignoreCorsAndCsp (boolean, optional)
The scraper will ignore the CSP (Content Security Policy) and CORS (Cross-Origin Resource Sharing) settings of visited pages and requested domains. This enables you to freely use XHR/Fetch to make HTTP requests from the scraper.
Default value of this property is false
downloadMedia (boolean, optional)
The scraper will download media such as images, fonts, videos, and sounds. Disabling this may speed up the scrape, but certain websites could stop working correctly.
Default value of this property is true
downloadCss (boolean, optional)
The scraper will download CSS stylesheets. Disabling this may speed up the scrape, but certain websites could stop working correctly.
Default value of this property is true
maxRequestRetries (integer, optional)
Maximum number of times the request for the page will be retried in case of an error. Setting it to 0 means that the request will be attempted once and will not be retried if it fails.
Default value of this property is 3
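The retry arithmetic can be sketched as follows (an illustrative model, not the scraper's actual code): with maxRequestRetries set to N, a persistently failing request is attempted N + 1 times in total.

```javascript
// Model of maxRequestRetries: 1 initial attempt + up to N retries.
function runWithRetries(requestFn, maxRequestRetries) {
  let lastError;
  for (let attempt = 0; attempt <= maxRequestRetries; attempt++) {
    try {
      return requestFn(attempt);
    } catch (err) {
      lastError = err; // remember the failure and retry
    }
  }
  throw lastError; // all attempts failed
}

let attempts = 0;
try {
  // Simulated page load that always fails, to count the attempts made.
  runWithRetries(() => { attempts += 1; throw new Error('page load failed'); }, 3);
} catch (err) {
  console.log(`attempts: ${attempts}`); // attempts: 4 (with the default of 3 retries)
}
```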
maxPagesPerCrawl (integer, optional)
Maximum number of pages that the scraper will open. 0 means unlimited.
Default value of this property is 0
maxResultsPerCrawl (integer, optional)
Maximum number of results that will be saved to the dataset. The scraper terminates afterwards. 0 means unlimited.
Default value of this property is 0
maxCrawlingDepth (integer, optional)
Defines how many links away from the Start URLs the scraper will descend. 0 means unlimited.
Default value of this property is 0
maxConcurrency (integer, optional)
Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.
Default value of this property is 50
pageLoadTimeoutSecs (integer, optional)
Maximum time, in seconds, that the scraper will allow a web page to load.
Default value of this property is 60
pageFunctionTimeoutSecs (integer, optional)
Maximum time, in seconds, that the scraper will wait for the page function to execute.
Default value of this property is 60
waitUntil (array, optional)
The scraper will wait until the selected events are triggered in the page before executing the page function. Available events are domcontentloaded, load, networkidle2 and networkidle0. See Puppeteer docs.
Default value of this property is ["networkidle2"]
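The four event names form a rough "earliest to strictest" ordering: domcontentloaded fires first, load next, and the networkidle events wait for network activity to settle. A hypothetical validator for this field (validateWaitUntil is not part of the actor; only the event names and the ["networkidle2"] default come from the schema above):

```javascript
// Puppeteer lifecycle events accepted by waitUntil, roughly earliest first.
const WAIT_UNTIL_EVENTS = ['domcontentloaded', 'load', 'networkidle2', 'networkidle0'];

// Hypothetical helper: reject anything that is not a non-empty array of
// known event names, and fall back to the schema default otherwise.
function validateWaitUntil(value = ['networkidle2']) {
  if (!Array.isArray(value) || value.length === 0) {
    throw new TypeError('waitUntil must be a non-empty array');
  }
  for (const event of value) {
    if (!WAIT_UNTIL_EVENTS.includes(event)) {
      throw new TypeError(`Unknown waitUntil event: ${event}`);
    }
  }
  return value;
}

console.log(validateWaitUntil());         // [ 'networkidle2' ]
console.log(validateWaitUntil(['load'])); // [ 'load' ]
```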
debugLog (boolean, optional)
Debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.
Default value of this property is false
browserLog (boolean, optional)
Console messages from the browser will be included in the log. This may result in the log being flooded by error messages, warnings, and other messages of little value, especially with high concurrency.
Default value of this property is false
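To make the two logging switches concrete, here is a toy model of a logger gated the same way debugLog gates context.log.debug() (an illustrative re-implementation, not Apify's actual logger):

```javascript
// Minimal model of the debugLog option: debug() is a no-op unless enabled.
function createLog(debugLog) {
  const lines = [];
  return {
    lines,
    info: (msg) => lines.push(`INFO ${msg}`),
    debug: (msg) => { if (debugLog) lines.push(`DEBUG ${msg}`); },
  };
}

const quiet = createLog(false);
quiet.debug('will not appear');   // dropped: debugLog is off
quiet.info('always appears');
console.log(quiet.lines);         // [ 'INFO always appears' ]

const verbose = createLog(true);
verbose.debug('now visible');     // kept: debugLog is on
console.log(verbose.lines);       // [ 'DEBUG now visible' ]
```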