
Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.0 (40)
Pricing: Pay per usage
Total users: 53K
Monthly users: 7.9K
Runs succeeded: >99%
Issues response: 6.8 days
Last modified: 4 days ago
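
For reference, a minimal sketch of running this Actor from Python with the official apify-client package; the API token and start URL below are placeholders:

```python
from apify_client import ApifyClient

# Placeholder API token; supply your own.
client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Start a crawl and wait for the run to finish.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://docs.apify.com/"}]}
)

# Each dataset item holds the cleaned text/Markdown of one crawled page,
# ready to feed into LangChain, LlamaIndex, or a vector database.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["url"])
```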
There was an uncaught exception during the run of the Actor and it was not handled
Closed
When the crawler reaches the max limit of 200, it throws an exception.
Autocom
I have had the same issue constantly - not sure where to go from here.
Hello, and thank you for your interest in this Actor!
This indeed seems like a bug in WCC's file download feature. While the results are correct (you have set maxResults: 200, so the crawler is expected to produce at most 200 dataset items), the Actor shouldn't spam the logs with error messages.
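For context, a minimal sketch of the setup described above, again using the Python apify-client; the token and start URL are placeholders, and maxResults caps the number of dataset items the crawler produces:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

# Reproduce the configuration from this issue: cap the crawl at 200 results.
run = client.actor("apify/website-content-crawler").call(
    run_input={
        "startUrls": [{"url": "https://docs.apify.com/"}],  # placeholder site
        "maxResults": 200,
    }
)

# As explained above, the dataset should contain at most 200 items; the
# error messages in the logs came from the file downloader, not from
# anything wrong with these results.
items = client.dataset(run["defaultDatasetId"]).list_items()
print(items.count)  # at most 200
```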
We are already working on a refactor of the file downloader component. The ETA for this fix is approximately a week.
We'll keep you updated here once we release a new version of this Actor.
Cheers!
formidable_quagmire
Thanks
Autocom
Thank you
Hello!
Just writing to let you know that today we released a new version of Website Content Crawler (0.3.66), including the aforementioned file download fixes. This should solve the problem you are mentioning here.
If you're using the version-0 build of the Actor (this is the default), you'll start using the patched builds automatically.
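For completeness, a minimal sketch of selecting a build explicitly through the build parameter of the Python apify-client's .call() method; the token and start URL are placeholders:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

# Request the "version-0" build explicitly; per the note above, this tag
# picks up the patched builds automatically (it is also the default).
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://docs.apify.com/"}]},  # placeholder
    build="version-0",
)
print(run["status"])  # e.g. "SUCCEEDED"
```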
I'll close this issue now, but feel free to let me know if you need any more help. Cheers!