Website Backup
Creates a backup of any website by crawling it, so that you don't lose any content by accident. Ideal, for example, for your personal or company blog.

Apify Actor - Website Backup


The purpose of this actor is to create website backups by recursively crawling them. For example, we use it to make regular backups of our blog, so that we don't lose any content by accident. Although such a backup cannot be automatically restored, it's better than losing the data completely.

Given a list of URL entry points, the actor recursively crawls the links found on the pages using the provided CSS selector and creates a separate MHTML snapshot of each page. Each snapshot is taken after the page is fully rendered by a Puppeteer crawler and includes all of its content, such as images and CSS. It can therefore be used on any HTML/JS/WordPress website that doesn't require authentication.
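Under the hood, MHTML snapshots of this kind can be captured through the Chrome DevTools Protocol. The following is a minimal sketch of that technique using Puppeteer directly; the target URL is a placeholder, and this is not the actor's exact implementation:

```js
const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Wait until the page has fully rendered, including its resources.
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // Ask Chrome for an MHTML snapshot, which bundles the HTML together
  // with images, CSS, and other resources into a single file.
  const session = await page.target().createCDPSession();
  const { data } = await session.send('Page.captureSnapshot', { format: 'mhtml' });

  fs.writeFileSync('example.mhtml', data);
  await browser.close();
})();
```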

Input parameters

startURLs (array): List of URL entry points.
linkSelector (string): CSS selector matching elements with 'href' attributes that should be enqueued.
maxRequestsPerCrawl (integer): The maximum number of pages that the scraper will load. The scraper will stop when this limit is reached. It's always a good idea to set this limit in order to prevent excess platform usage by misconfigured scrapers. Note that the actual number of pages loaded might be slightly higher than this value. If set to 0, there is no limit.
maxCrawlingDepth (integer): Defines how many links away from the start URLs the scraper will descend. 0 means unlimited.
maxConcurrency (integer): Defines how many pages can be processed by the scraper in parallel. The scraper automatically increases and decreases concurrency based on available system resources. Use this option to set a hard limit.
customKeyValueStore (string): Use a custom named key-value store for saving results. If a key-value store with this name doesn't exist yet, it is created. The snapshots of the pages are saved in this key-value store.
customDataset (string): Use a custom named dataset for saving metadata. If a dataset with this name doesn't exist yet, it is created. The metadata about the page snapshots is saved in this dataset.
proxyConfiguration (object): Choose to use no proxy, Apify Proxy, or provide custom proxy URLs.
sameOrigin (boolean): Only back up URLs with the same origin as any of the start URL origins. For example, when turned on for a single start URL, only links sharing that URL's origin will be backed up recursively.
timeoutForSingleUrlInSeconds (integer): Timeout in seconds for backing up a single URL. Try increasing this timeout if you see the error "Error: handlePageFunction timed out after X seconds."
navigationTimeoutInSeconds (integer): Timeout in seconds in which the navigation needs to finish. Try increasing this if you see the error "Navigation timeout of XXX ms exceeded".
searchParamsToIgnore (array): Names of URL search parameters (such as 'source', 'sourceid', etc.) that should be ignored in URLs when crawling.
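For example, an input for backing up a small blog could look as follows. The values are illustrative, and the { "url": ... } object format for start URLs is the usual Apify convention, assumed here:

```json
{
  "startURLs": [{ "url": "https://example.com/blog" }],
  "linkSelector": "a[href]",
  "maxRequestsPerCrawl": 500,
  "maxCrawlingDepth": 0,
  "maxConcurrency": 10,
  "sameOrigin": true,
  "timeoutForSingleUrlInSeconds": 120,
  "navigationTimeoutInSeconds": 60,
  "searchParamsToIgnore": ["source", "sourceid"],
  "proxyConfiguration": { "useApifyProxy": false }
}
```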


Output

A single ZIP file containing the MHTML snapshot and its metadata is stored in a key-value store (default or named, depending on the input arguments) for each URL visited. The key of each ZIP file includes a timestamp, a URL hash, and the URL in human-readable form. Note that the Apify platform only supports certain characters in keys and limits their length to 256 characters (which is why e.g. '/' is removed). Apart from the key-value store, metadata about the crawled webpages is also stored in a dataset (default or named).
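As a sketch of how the stored backups could be downloaded afterwards with the apify-client package: the store and dataset names below are hypothetical and must match the customKeyValueStore and customDataset inputs (or the run's default storages), and the 'key' field on each dataset item is an assumption about the metadata layout, not a documented guarantee:

```js
const fs = require('fs');
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

(async () => {
  // List the metadata records the actor saved for each snapshot.
  const { items } = await client.dataset('my-backup-dataset').listItems();

  for (const item of items) {
    // Assumption: each metadata item references the key of its ZIP file
    // in the key-value store under a 'key' field.
    const record = await client
      .keyValueStore('my-backup-store')
      .getRecord(item.key, { buffer: true });
    fs.writeFileSync(`${item.key}.zip`, record.value);
  }
})();
```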

Compute unit consumption

An example run that backed up 323 webpages, configured with 8192 MB of memory and lasting 12 minutes, consumed 1.6617 compute units. This matches the expectation that a compute unit roughly corresponds to 1 GB of memory used for one hour (8 GB × 0.2 h ≈ 1.6 CU).
