
Website Content Crawler
Pricing
Pay per usage

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.0 (40)
Total users: 53K
Monthly users: 7.9K
Runs succeeded: >99%
Issues response: 6.8 days
Last modified: 4 days ago
Crawling cannot be done with an Arabic website in English
Open
The Arabic website has an English version, but the crawler doesn't recognize it and continues crawling the Arabic content. Example: https://www.nbr.gov.bh/
Hello, and thank you for your interest in this Actor.
This website seems to switch the language with HTTP session cookies. This is quite an unusual and non-standard way of delivering localized content. By default, the server returns Arabic webpages.
You can load the website in your browser locally, switch to English, copy the cookies from your browser (e.g. using the EditThisCookie browser extension), and pass them to WCC's initialCookies input option.
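As a rough sketch of that hand-off, the snippet below converts a browser cookie export (EditThisCookie exports an array of cookie objects as JSON) into the minimal objects one would place in the Actor's initialCookies input. The cookie name "lang" and its domain are hypothetical placeholders; the actual cookie names set by the site would need to be read from your own browser session.

```python
import json

def to_initial_cookies(exported_json: str) -> list[dict]:
    """Reduce a browser cookie export to {name, value, domain} objects,
    the shape expected by the initialCookies input option."""
    cookies = json.loads(exported_json)
    return [
        {"name": c["name"], "value": c["value"], "domain": c.get("domain", "")}
        for c in cookies
    ]

# Hypothetical export: in practice, only the language/session cookies matter.
exported = '[{"name": "lang", "value": "en", "domain": "www.nbr.gov.bh", "path": "/"}]'

run_input = {
    "startUrls": [{"url": "https://www.nbr.gov.bh/"}],
    "initialCookies": to_initial_cookies(exported),
}
print(json.dumps(run_input["initialCookies"]))
```

The extra fields in the export (path, expiration, flags) are deliberately dropped here; only name, value, and domain are kept for the run input.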
This will unfortunately work only for a brief time, until the cookie expires - see my example run here.
I'll create a ticket for this, and we'll try to come up with better support for such use cases in WCC. Note that this is a medium-sized task, and it might take several weeks to solve properly. I'll keep you posted here in case of any updates.
Cheers!