Website Content Crawler

apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.


Poor CPU utilization due to low usage limit

Open · write2souvik opened this issue 3 months ago

I'm trying to optimize this Actor for crawling larger sites; we have cases averaging around 1500 distinct URLs (and we do actually want all of them). Presently, with memory set to 4 GB, these runs take several hours to complete successfully.

I noticed in the logs (run URL attached) that we're getting very poor concurrency - desired concurrency is at most 2, and often 1. Sample status log:

PlaywrightCrawler:AutoscaledPool: state {"currentConcurrency":1,"desiredConcurrency":1,"systemStatus":{"isSystemIdle":false,"memInfo":{"isOverloaded":false,"limitRatio":0.2,"actualRatio":0},"eventLoopInfo":{"isOverloaded":false,"limitRatio":0.6,"actualRatio":0.124},"cpuInfo":{"isOverloaded":true,"limitRatio":0.4,"actualRatio":0.404},"clientInfo":{"isOverloaded":false,"limitRatio":0.3,"actualRatio":0}}}

The one that catches my attention is the CPU limit ratio of 0.4. I recognize that there are two pools in play, so they can't both take up the full CPU allocation, but our workload barely uses the HttpCrawler at all, since that pool only handles PDF downloads (so far it has made 17 requests via HTTP versus 331 via Playwright). As a result, at least half of our CPU quota sits idle. The run also peaked at 2.6 GB of 4 GB memory usage (65%), so there's clearly headroom that we could be using but aren't.
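(Editorial note on reading the status line above: each subsystem reports a limitRatio, its throttling threshold, and an actualRatio, its measured load, and the pool treats a subsystem as overloaded when the actual ratio exceeds the limit. A minimal sketch of that check — the helper below is illustrative, not part of the Crawlee API:)

```typescript
// Illustrative sketch: mirrors how the AutoscaledPool status log can be
// read. A subsystem throttles the pool when its measured actualRatio
// exceeds its configured limitRatio. Field names match the log above;
// the helper itself is NOT a Crawlee API.
interface SubsystemInfo {
  isOverloaded: boolean;
  limitRatio: number;
  actualRatio: number;
}

interface SystemStatus {
  isSystemIdle: boolean;
  memInfo: SubsystemInfo;
  eventLoopInfo: SubsystemInfo;
  cpuInfo: SubsystemInfo;
  clientInfo: SubsystemInfo;
}

/** Return the names of subsystems whose actual load exceeds their limit. */
function overloadedSubsystems(status: SystemStatus): string[] {
  const subsystems: [string, SubsystemInfo][] = [
    ["memInfo", status.memInfo],
    ["eventLoopInfo", status.eventLoopInfo],
    ["cpuInfo", status.cpuInfo],
    ["clientInfo", status.clientInfo],
  ];
  return subsystems
    .filter(([, info]) => info.actualRatio > info.limitRatio)
    .map(([name]) => name);
}
```

Feeding the snapshot from the log above into this check flags only cpuInfo (0.404 > 0.4), which is exactly why the pool refuses to scale up despite idle memory.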

Before I crank up the resources allocated to the run, is there anything we can do to actually make better use of them? I'd prefer not to... [trimmed]

janbuchar

Hello, and thank you for your interest in Website Content Crawler! I recommend trying the Actor with 8 GB of RAM or more, which will also give the run more CPU capacity. The headless browser will likely find a way to use the extra memory to crawl faster, so don't worry about unused memory until you've tried it. Memory management in browsers is a very complex topic, and it's hard to predict exactly how a browser will behave when given more resources.
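(Editorial note on why more RAM also means more CPU: on the Apify platform, a run's CPU share scales with its memory allocation — as I read Apify's platform docs, roughly one full core per 4 GB, though this is an assumption worth verifying for your plan. A rough sketch of that relationship:)

```typescript
// Assumption: on the Apify platform, CPU share scales linearly with the
// run's memory allocation at roughly 1 full core per 4096 MB (per my
// reading of Apify's platform docs -- verify against current docs).
const MB_PER_CORE = 4096;

/** Estimate the CPU cores a run gets for a given memory allocation. */
function estimatedCores(memoryMb: number): number {
  return memoryMb / MB_PER_CORE;
}
```

Under that assumption, doubling a 4 GB run to 8 GB also doubles the CPU budget from one core to two, which is why the advice above can raise concurrency even when memory itself was never the bottleneck.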


write2souvik

3 months ago

I can appreciate that browser memory management is unpredictable, especially across a potentially broad range of web pages. My concern is that, empirically, it is not finding a way to use the memory or CPU available to it. The Actor is self-throttling at 40% CPU utilization and only using about 60% of memory. Should I expect those ratios to change at a larger total resource allocation?

janbuchar

Yes, it's possible that the browser will be able to work faster with a larger memory allocation. I understand it may be counter-intuitive, but it's a good idea to try it first.


write2souvik

3 months ago

Stepped away from this for a while, but re-ran yesterday with doubled RAM quota. I'm still seeing roughly the same behavior:

{"currentConcurrency":4,"desiredConcurrency":3,"systemStatus":{"isSystemIdle":false,"memInfo":{"isOverloaded":false,"limitRatio":0.2,"actualRatio":0},"eventLoopInfo":{"isOverloaded":false,"limitRatio":0.6,"actualRatio":0.04},"cpuInfo":{"isOverloaded":true,"limitRatio":0.4,"actualRatio":0.577},"clientInfo":{"isOverloaded":false,"limitRatio":0.3,"actualRatio":0}}}

Of course the absolute concurrency is higher, so it's making more progress in the same amount of time, but it's still getting poor utilization of the allocated resources. This doesn't feel like a cost-efficient solution if we only get to use about half of what we pay for.
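(Editorial note on why the pool stays throttled: Crawlee's AutoscaledPool nudges desired concurrency up only while no subsystem reports overload, and steps it down otherwise, so a CPU ratio persistently above its limit pins concurrency low even with plenty of free memory. The step size and clamping below are illustrative assumptions, not Crawlee's exact defaults:)

```typescript
// Simplified sketch of an autoscaling step, based on my reading of
// Crawlee's AutoscaledPool behavior. The step size and bounds here are
// illustrative assumptions, not Crawlee's actual defaults.
function nextDesiredConcurrency(
  desired: number,
  isOverloaded: boolean,
  minConcurrency = 1,
  maxConcurrency = 200,
  step = 1,
): number {
  // Overload from ANY subsystem (CPU in the logs above) forces a step
  // down; only a fully healthy snapshot allows a step up.
  const next = isOverloaded ? desired - step : desired + step;
  return Math.min(maxConcurrency, Math.max(minConcurrency, next));
}
```

With cpuInfo overloaded in nearly every snapshot, each scaling tick pushes desired concurrency back toward the minimum, which matches the desiredConcurrency of 1 and 3 seen in the two runs above.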


write2souvik

3 months ago

Here's a recent case in point: https://console.apify.com/organization/LXWdxWeT9jfpRfxQp/actors/runs/AMJqBQ6UD3X0zpgXi

100 minutes to crawl 168 pages, with 8GB of RAM and 2 CPU, seems completely unreasonable.
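(Editorial note, spelling out the arithmetic behind that complaint:)

```typescript
// Throughput arithmetic for the run cited above: 168 pages in 100 minutes.
function secondsPerPage(pages: number, minutes: number): number {
  return (minutes * 60) / pages;
}
```

168 pages in 100 minutes works out to roughly 36 seconds per page, which is indeed slow for a two-core browser crawl.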

Developer
Maintained by Apify

Actor Metrics

  • 3.9k monthly users

  • 712 stars

  • >99% runs succeeded

  • 2.2 days response time

  • Created in Mar 2023

  • Modified 12 hours ago