Website Content Crawler

apify/website-content-crawler
Automatically crawl and extract text content from websites such as documentation sites, knowledge bases, help centers, or blogs. This Actor is designed to provide data to feed, fine-tune, or train large language models such as ChatGPT or LLaMA.

Unable to improve crawling speed with memory and concurrency

Closed

motivated_leaflet opened this issue a month ago

Use case: The entire list of required URLs for scraping is provided as input, i.e. no link discovery is required. We want to reduce the run time as much as possible.

Observations:

  • initial concurrency is set to 10; however, raising concurrency does not seem to lead to observable speed improvements
  • memory is set to 4 GB; current runs do not come close to the memory limit, and setting it to 8 GB also did not seem to improve speed

Questions: Any recommendations on what parameters or strategies we can deploy to enable true concurrency and reduce time to completion?

Hello again!

I cannot see the website you are scraping; would you mind sharing the Run ID? The optimization techniques depend a lot on the target website: whether it uses client-side JS rendering, advanced bot-protection services, and so on.

In general, you get the best performance from the Raw HTTP Client (Cheerio) crawler type. However, this one doesn't work with JS-rendered pages (if that's your case, you have to use one of the browser-based crawlers).
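To make the crawler-type choice concrete, here is a minimal sketch of a run input for a fixed URL list. The field names (`startUrls`, `maxCrawlDepth`, `crawlerType`) and the `cheerio` / `playwright:firefox` values are assumptions based on my reading of the Actor's input schema; please verify them against the schema in the Apify Console before relying on them.

```python
# Sketch: selecting the Raw HTTP (Cheerio) crawler type in the run input.
# Field names and values are assumptions from the Actor's input schema;
# verify them in the Apify Console.

def build_run_input(urls, js_rendered=False):
    """Build a Website Content Crawler run input for a fixed URL list."""
    return {
        "startUrls": [{"url": u} for u in urls],
        "maxCrawlDepth": 0,  # the URL list is complete; don't follow links
        # Raw HTTP (Cheerio) is fastest but can't handle JS-rendered pages;
        # fall back to a browser-based crawler when JS rendering is needed.
        "crawlerType": "playwright:firefox" if js_rendered else "cheerio",
    }

run_input = build_run_input(["https://docs.apify.com/"])
# With the apify-client package, the run could then be started roughly as:
# ApifyClient(token).actor("apify/website-content-crawler").call(run_input=run_input)
```

The `maxCrawlDepth: 0` setting matches your use case: every URL is already known, so the crawler only needs to fetch the listed pages.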

Recently, we ran experiments testing the Actor's performance with different memory settings. Larger available memory does lead to speed improvements, but they take time to materialize: the crawler scales its concurrency up and down based on the current system load, so the effect may not be noticeable in shorter runs. For the same reason, it's not advisable to raise the initialConcurrency option. If set too high, the Actor starts scraping too many webpages at once, overloads the system, slows everything down, and only then scales down, while the memory and CPU are still cluttered with all the open webpages. Keeping all these options at their default values seems to work best.

Last but not least: the Actor only consumes part of the memory because browser interaction is a CPU-bound task, so the Actor reaches the CPU-based limits before the memory limits.

As I mentioned above, we can help you more, but we'll need to know what website you're scraping. Thank you (and looking forward to hearing from you again soon)!

motivated_leaflet

a month ago

We have a pretty short retention period, so the previous run may have expired. Here is another Run ID: https://console.apify.com/actors/runs/WzLUSPcc7F93LQBV1.

The types of websites we target vary from run to run, but there will definitely be JS-rendered pages, so we are looking for the settings that give the shortest run times on average for 20-100 pages. All the pages are independent, so it would be good to be able to maximize concurrency.

Thank you for the explanation around memory and CPU!

Alright, thank you for the details! In that case, I would recommend the Adaptive crawler type. It automatically switches between Cheerio and browser-based crawling based on the page content and previously seen pages.

Scraping pages from multiple different websites might diminish the performance boost a bit (if all the pages were from the same website, the Adaptive Crawling could predict the crawler type better). There is also a small performance hit regarding content caching (if all the pages were from the same website, they would most likely share some scripts etc. that wouldn't have to be loaded for every request).
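For illustration, the earlier run input sketch only needs its crawler type swapped to use adaptive crawling. The `playwright:adaptive` value is an assumption based on my recollection of the Actor's input schema; confirm the exact value in the Apify Console.

```python
# Sketch: the same fixed-URL-list input, switched to the Adaptive crawler,
# which picks Cheerio or a browser per page. The "playwright:adaptive"
# value is an assumption from the input schema; verify it in the Console.

start_urls = [
    "https://docs.apify.com/",
    "https://blog.apify.com/",
]

run_input = {
    "startUrls": [{"url": u} for u in start_urls],
    "maxCrawlDepth": 0,  # scrape only the listed pages
    "crawlerType": "playwright:adaptive",
}
```

Because your URL lists mix JS-rendered and static pages, this lets fast HTTP fetching handle the static ones while browsers are reserved for pages that need rendering.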

Either way, running the Adaptive crawler type on your start URLs, I managed to get the results in approximately half the time(!!) (see my run here: https://console.apify.com/view/runs/ieMdtG2itV4gYPFv6). To be fair, some of this might also be caused by momentary latencies of the target servers, but the adaptive switching does quite a lot (see the logs).

Aside from this, I don't have many other ideas on how to speed up your crawl. As mentioned above, you can speed up longer runs by increasing the available memory / CPU - unfortunately, your use case of ~50 URLs per run falls exactly in the valley of "quite a lot of URLs"-"too little time to properly scale up".

We'll continue looking into possible performance boosts, but unfortunately, I'm afraid I cannot help you much more. I'll close this issue now, but feel free to ask any additional questions (or propose any ideas on how to make this Actor better / faster :)).

Cheers!

Developer
Maintained by Apify
Actor metrics
  • 2k monthly users
  • 99.9% runs succeeded
  • 2.9 days response time
  • Created in Mar 2023
  • Modified 3 days ago