Website Content Crawler

Pricing: Pay per usage

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 4.0 (40)

1392

Total users: 53K

Monthly users: 7.9K

Runs succeeded: >99%

Issues response: 6.8 days

Last modified: 4 days ago


There was an uncaught exception during the run of the Actor and it was not handled

Closed

formidable_quagmire opened this issue
19 days ago

When the crawler reached the max limit of 200, it threw an exception.


Autocom

18 days ago

I have had the same issue constantly - not sure where to go to from here.

jindrich.bar

Hello, and thank you for your interest in this Actor!

This indeed seems like a bug in WCC's file download feature. While the results are correct (you have set maxResults: 200, so the crawler is expected to produce at most 200 dataset items), the Actor shouldn't spam the logs with error messages.
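For context on the setting being discussed: `maxResults` is the run-input field that caps the number of dataset items the crawler produces. A minimal sketch of such a run input follows, using the Apify Python client; the start URL is a placeholder and the surrounding values are illustrative, not the reporter's actual configuration.

```python
# Sketch of a Website Content Crawler run input capped at 200 dataset
# items via maxResults, as described in this thread. The startUrls value
# is a placeholder; all other crawler options keep their defaults.
run_input = {
    "startUrls": [{"url": "https://example.com"}],  # placeholder URL
    "maxResults": 200,  # crawler stops after emitting 200 dataset items
}

# Running the Actor with this input would look roughly like:
#   from apify_client import ApifyClient
#   client = ApifyClient("<APIFY_TOKEN>")
#   run = client.actor("apify/website-content-crawler").call(run_input=run_input)
```

With this input, hitting the 200-item cap is expected behavior; the bug discussed here is only that the file-download component logged errors when the cap was reached.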

We are already working on a refactor of the file downloader component. The ETA for this fix is approximately a week.

We'll keep you updated here once we release a new version of this Actor.

Cheers!


formidable_quagmire

17 days ago

Thanks


Autocom

16 days ago

Thank you

jindrich.bar

Hello!

Just writing to let you know that we released a new version of Website Content Crawler (0.3.66) today, including the aforementioned file download fixes. This should solve the problem you mention here.

If you're using the version-0 build of the Actor (this is the default), you'll start using the patched builds automatically.

I'll close this issue now, but feel free to let me know if you need any more help. Cheers!