

Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
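For orientation, here is a minimal sketch of how the Actor is typically run from Python with the Apify client and how the extracted text can be read back. The Actor ID and the input fields (startUrls, crawlerType) are assumptions based on the description above, so check the Actor's input schema before relying on them.

```python
# Minimal sketch: run Website Content Crawler and read the cleaned page text.
# Requires the apify-client package and an Apify API token.
from apify_client import ApifyClient

client = ApifyClient("<APIFY_API_TOKEN>")

# Input field names here are illustrative, not confirmed from the Actor's schema.
run = client.actor("apify/website-content-crawler").call(
    run_input={
        "startUrls": [{"url": "https://docs.example.com"}],
        "crawlerType": "cheerio",  # the raw-HTTP crawler discussed in the issue below
    }
)

# Each dataset item holds the extracted text (e.g. Markdown) plus page metadata,
# ready to be chunked and pushed into a vector database or RAG pipeline.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("url"), len(item.get("text", "")))
```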
Rating: 3.6 (39)
Pricing: Pay per usage
Total users: 54K
Monthly users: 8K
Runs succeeded: >99%
Issues response: 7.6 days
Last modified: 2 days ago
Large number of requests fail
Open
Hello, lately many requests in runs using the 'cheerio' crawler time out. I am not sure how that can happen, since the crawler should dynamically adjust the number of requests and, as far as I can tell, the website does not block them. Is there some misconfiguration on my side? I am fine with the run taking longer; what matters is that requests don't time out without my having to set an especially high timeout value. In a test run I increased the memory and the timeout duration, and it crawled more pages, but still far from all of them, which surprises me. Any guidance would be very helpful. Thank you.

Hi,
I’m really sorry for the delayed response.
I reviewed the recent runs and noticed that the crawler is still experiencing timeouts — for example, when scraping this site: https://tourismus.reg****.de. It’s likely that the target website is slow to respond, causing the requests to fail due to the default 60-second timeout.
That said, all failed requests were eventually retried and the data was successfully scraped.
As you mentioned, the best way to address this is by increasing the timeout and the number of retries. By default, the timeout is set to 60 seconds and the retry limit is 5, which usually works well. However, if you want to be extra safe, you can increase the retries to 10.
Hope this helps! Jiri
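For readers hitting the same issue, a hedged sketch of what the suggested change could look like in the run input, assuming the timeout and retry settings are exposed as requestTimeoutSecs and maxRequestRetries (these field names are illustrative and should be verified against the Actor's input schema):

```python
# Sketch only: re-run the crawler with a longer per-request timeout and more
# retries, following the advice in the reply above. Field names are assumptions.
from apify_client import ApifyClient

client = ApifyClient("<APIFY_API_TOKEN>")

run_input = {
    "startUrls": [{"url": "https://tourismus.reg****.de"}],  # redacted site from the thread
    "crawlerType": "cheerio",
    "requestTimeoutSecs": 120,  # raise from the 60-second default for slow sites
    "maxRequestRetries": 10,    # raise from the default of 5, as suggested
}

run = client.actor("apify/website-content-crawler").call(run_input=run_input)
print("Run", run["id"], "finished with status", run["status"])
```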