Web Scraper
Crawls arbitrary websites using the Chrome browser and extracts data from pages using JavaScript code. The Actor supports both recursive crawling and lists of URLs and automatically manages concurrency for maximum performance. This is Apify's basic tool for web crawling and scraping.
Run mode
runMode
Enum (optional)
This property indicates the scraper's mode of operation. In DEVELOPMENT mode, the scraper ignores page timeouts, doesn't use sessionPool, opens pages one by one and enables debugging via Chrome DevTools. Open the live view tab or the container URL to access the debugger. Further debugging options can be configured in the Advanced configuration section. PRODUCTION mode disables debugging and enables timeouts and concurrency.
For details, see Run mode in README.
Value options:
"PRODUCTION": string"DEVELOPMENT": string
Default value of this property is "PRODUCTION"
Start URLs
startUrls
array (required)
A static list of URLs to scrape.
For details, see Start URLs in README.
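For example, a startUrls value with two entries might look like this (the URLs are placeholders):

```js
[
    { "url": "https://www.example.com/" },
    { "url": "https://www.example.com/category/shoes" }
]
```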
URL #fragments identify unique pages
keepUrlFragments
boolean (optional)
Indicates that URL fragments (e.g. http://example.com#fragment) should be included when checking whether a URL has already been visited or not. Typically, URL fragments are used for page navigation only and therefore they should be ignored, as they don't identify separate pages. However, some single-page websites use URL fragments to display different pages; in such a case, this option should be enabled.
Default value of this property is false
Link selector
linkSelector
string (optional)
A CSS selector saying which links on the page (<a> elements with an href attribute) shall be followed and added to the request queue. To filter the links added to the queue, use the Pseudo-URLs and/or Glob patterns settings.
If Link selector is empty, the page links are ignored.
For details, see Link selector in README.
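As a sketch, a selector that follows only product and pagination links might look like this (the class names are hypothetical):

```js
// Value of the Link selector setting
"div.product-list a[href], nav.pagination a[href]"
```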
Glob Patterns
globs
array (optional)
Glob patterns to match links in the page that you want to enqueue. Combine with Link selector to tell the scraper where to find links. Omitting the Glob patterns will cause the scraper to enqueue all links matched by the Link selector.
Default value of this property is []
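A minimal globs value restricting the crawl to one section of a site, assuming the array-of-objects format used by the visual editor (the URL pattern is a placeholder):

```js
[
    { "glob": "https://www.example.com/products/**" }
]
```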
Pseudo-URLs
pseudoUrls
array (optional)
Specifies what kind of URLs found by Link selector should be added to the request queue. A pseudo-URL is a URL with regular expressions enclosed in [] brackets, e.g. http://www.example.com/[.*].
If Pseudo-URLs are omitted, the actor enqueues all links matched by the Link selector.
For details, see Pseudo-URLs in README.
Default value of this property is []
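A sketch of a pseudoUrls value that matches all product detail pages, again assuming the array-of-objects editor format (the URL pattern is a placeholder):

```js
[
    { "purl": "https://www.example.com/product/[.*]" }
]
```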
Exclude Glob Patterns
excludes
array (optional)
Glob patterns to match links in the page that you want to exclude from being enqueued.
Default value of this property is []
Page function
pageFunction
string (required)
JavaScript (ES6) function that is executed in the context of every page loaded in the Chrome browser. Use it to scrape data from the page, perform actions or add new URLs to the request queue.
For details, see Page function in README.
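As a minimal sketch, a Page function that scrapes the page title and URL might look like this (the h1 selector is an assumption about the target page):

```js
async function pageFunction(context) {
    const { request, log, jQuery } = context;
    const $ = jQuery; // available only when the Inject jQuery option is enabled

    log.info(`Scraping ${request.url}`);

    // The returned object is saved as one record in the dataset
    return {
        url: request.url,
        title: $('h1').first().text().trim(),
    };
}
```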
Inject jQuery
injectJQuery
boolean (optional)
If enabled, the scraper will inject the jQuery library into every web page loaded, before Page function is invoked. Note that the jQuery object ($) will not be registered into the global namespace, in order to avoid conflicts with libraries used by the web page. It can only be accessed through context.jQuery in Page function.
Default value of this property is true
Proxy configuration
proxyConfiguration
object (required)
Specifies proxy servers that will be used by the scraper in order to hide its origin.
For details, see Proxy configuration in README.
Default value of this property is {"useApifyProxy":true}
Proxy rotation
proxyRotation
Enum (optional)
This property indicates the strategy of proxy rotation and can only be used in conjunction with Apify Proxy. The recommended setting automatically picks the best proxies from your available pool and rotates them evenly, discarding proxies that become blocked or unresponsive. If this strategy does not work for you for any reason, you may configure the scraper to either use a new proxy for each request, or to use one proxy as long as possible, until the proxy fails. IMPORTANT: This setting will only use your available Apify Proxy pool, so if you don't have enough proxies for a given task, no rotation setting will produce satisfactory results.
Value options:
"RECOMMENDED": string"PER_REQUEST": string"UNTIL_FAILURE": string
Default value of this property is "RECOMMENDED"
Session pool name
sessionPoolName
string (optional)
Use only English alphanumeric characters, dashes, and underscores. A session is a representation of a user. It has its own IP and cookies, which are then used together to emulate a real user. Usage of the sessions is controlled by the Proxy rotation option. By providing a session pool name, you enable sharing of those sessions across multiple actor runs. This is very useful when you need specific cookies for accessing the websites, or when a lot of your proxies are already blocked. Instead of trying randomly, a list of working sessions will be saved and a new actor run can reuse those sessions. Note that the IP lock on sessions expires after 24 hours, unless the session is used again in that window.
Initial cookies
initialCookies
array (optional)
A JSON array with cookies that will be set to every Chrome browser tab opened before loading the page, in the format accepted by Puppeteer's Page.setCookie() function. This option is useful for transferring a logged-in session from an external web browser. For details on how to do this, read this help article.
Default value of this property is []
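A sketch of an initialCookies value carrying a single session cookie (all values are placeholders):

```js
[
    {
        "name": "sessionid",
        "value": "abc123",
        "domain": ".example.com",
        "path": "/"
    }
]
```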
Use Chrome
useChrome
boolean (optional)
If enabled, the scraper will use a real Chrome browser instead of Chromium bundled with Puppeteer. This option may help bypass certain anti-scraping protections, but might make the scraper unstable. Use at your own risk 🙂
Default value of this property is false
Run browsers in headless mode
headless
boolean (optional)
By default, browsers run in headless mode. You can toggle this off to run them in headful mode, which can help with certain rare anti-scraping protections but is slower and more costly.
Default value of this property is true
Ignore SSL errors
ignoreSslErrors
boolean (optional)
If enabled, the scraper will ignore SSL/TLS certificate errors. Use at your own risk.
Default value of this property is false
Ignore CORS and CSP
ignoreCorsAndCsp
boolean (optional)
If enabled, the scraper will ignore Content Security Policy (CSP) and Cross-Origin Resource Sharing (CORS) settings of visited pages and requested domains. This enables you to freely use XHR/Fetch to make HTTP requests from Page function.
Default value of this property is false
Download media files
downloadMedia
boolean (optional)
If enabled, the scraper will download media such as images, fonts, videos and sound files, as usual. Disabling this option might speed up the scrape, but certain websites could stop working correctly.
Default value of this property is true
Download CSS files
downloadCss
boolean (optional)
If enabled, the scraper will download CSS files with stylesheets, as usual. Disabling this option may speed up the scrape, but certain websites could stop working correctly, and the live view will not look as cool.
Default value of this property is true
Max page retries
maxRequestRetries
integer (optional)
The maximum number of times the scraper will retry loading each web page, in case of a page load error or an exception thrown by Page function.
If set to 0, the page will be considered failed right after the first error.
Default value of this property is 3
Max pages per run
maxPagesPerCrawl
integer (optional)
The maximum number of pages that the scraper will load. The scraper will stop when this limit is reached. It's always a good idea to set this limit in order to prevent excess platform usage for misconfigured scrapers. Note that the actual number of pages loaded might be slightly higher than this value.
If set to 0, there is no limit.
Default value of this property is 0
Max result records
maxResultsPerCrawl
integer (optional)
The maximum number of records that will be saved to the resulting dataset. The scraper will stop when this limit is reached.
If set to 0, there is no limit.
Default value of this property is 0
Max crawling depth
maxCrawlingDepth
integer (optional)
Specifies how many links away from Start URLs the scraper will descend. This value is a safeguard against infinite crawling depths for misconfigured scrapers. Note that pages added using context.enqueueRequest() in Page function are not subject to the maximum depth constraint.
If set to 0, there is no limit. To crawl only the pages specified by the Start URLs, leave linkSelector empty instead.
Default value of this property is 0
Max concurrency
maxConcurrency
integer (optional)
Specifies the maximum number of pages that can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. This option enables you to set an upper limit, for example to reduce the load on a target web server.
Default value of this property is 50
Page load timeout
pageLoadTimeoutSecs
integer (optional)
The maximum amount of time the scraper will wait for a web page to load, in seconds. If the web page does not load in this timeframe, it is considered to have failed and will be retried (subject to Max page retries), similarly to other page load errors.
Default value of this property is 60
Page function timeout
pageFunctionTimeoutSecs
integer (optional)
The maximum amount of time the scraper will wait for Page function to execute, in seconds. It's a good idea to set this limit, to ensure that unexpected behavior in Page function will not get the scraper stuck.
Default value of this property is 60
Navigation waits until
waitUntil
array (optional)
Contains a JSON array with names of page events to wait for before considering a web page fully loaded. The scraper will wait until all of the events are triggered in the web page before executing Page function. Available events are domcontentloaded, load, networkidle2 and networkidle0.
For details, see the waitUntil option in Puppeteer's Page.goto() function documentation.
Default value of this property is ["networkidle2"]
Pre-navigation hooks
preNavigationHooks
string (optional)
Async functions that are sequentially evaluated before the navigation. Good for setting additional cookies or browser properties before navigation. The function accepts two parameters, crawlingContext and gotoOptions, which are passed to the page.goto() function the crawler calls to navigate.
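A sketch of a hook that tweaks navigation options before each page load (the timeout value is arbitrary):

```js
[
    async (crawlingContext, gotoOptions) => {
        // Wait only for the DOM and fail faster on slow pages
        gotoOptions.waitUntil = 'domcontentloaded';
        gotoOptions.timeout = 30000;
    }
]
```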
Post-navigation hooks
postNavigationHooks
string (optional)
Async functions that are sequentially evaluated after the navigation. Good for checking if the navigation was successful. The function accepts crawlingContext as the only parameter.
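For instance, a hook that logs the HTTP status of each navigation might look like this (assuming crawlingContext exposes the Puppeteer response object):

```js
[
    async (crawlingContext) => {
        const { request, response, log } = crawlingContext;
        if (response) log.info(`${request.url} responded with status ${response.status()}`);
    }
]
```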
Insert breakpoint
breakpointLocation
Enum (optional)
This property has no effect if Run mode is set to PRODUCTION. When set to DEVELOPMENT, it inserts a breakpoint at the selected location in every page the scraper visits. Execution of code stops at the breakpoint until manually resumed in the DevTools window accessible via the Live View tab or Container URL. Additional breakpoints can be added by adding debugger; statements within your Page function.
See Run mode in README for details.
Value options:
"NONE": string"BEFORE_GOTO": string"BEFORE_PAGE_FUNCTION": string"AFTER_PAGE_FUNCTION": string
Default value of this property is "NONE"
Dismiss cookie modals
closeCookieModals
boolean (optional)
When enabled, the crawler will automatically try to dismiss cookie consent modals using the I don't care about cookies browser extension. This can be useful when crawling European websites that show cookie consent modals.
Default value of this property is false
Maximum scrolling distance in pixels
maxScrollHeightPixels
integer (optional)
The crawler will scroll down the page until all content is loaded or the maximum scrolling distance is reached. Setting this to 0 disables scrolling altogether.
Default value of this property is 5000
Debug log
debugLog
boolean (optional)
If enabled, the actor log will include debug messages. Beware that this can be quite verbose. Use context.log.debug('message') to log your own debug messages from Page function.
Default value of this property is false
Browser log
browserLog
boolean (optional)
If enabled, the actor log will include console messages produced by JavaScript executed by the web pages (e.g. using console.log()). Beware that this may result in the log being flooded by error messages, warnings and other messages of little value, especially with high concurrency.
Default value of this property is false
Custom data
customData
object (optional)
A custom JSON object that is passed to Page function as context.customData. This setting is useful when invoking the scraper via API, in order to pass some arbitrary parameters to your code.
Default value of this property is {}
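For example, passing a label through the input (the key name is arbitrary):

```js
// Input: "customData": { "label": "test-run" }
// Then, inside Page function:
const { label } = context.customData; // 'test-run'
```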
Dataset name
datasetName
string (optional)
Name or ID of the dataset that will be used for storing results. If left empty, the default dataset of the run will be used.