Website Content Crawler


Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 3.7 (41)

Pricing: Pay per usage

Total users: 58K

Monthly users: 8.1K

Runs succeeded: >99%

Issues response: 7.6 days

Last modified: 3 hours ago


HTTP website inaccessible

Closed

souheil opened this issue a month ago

I'm unable to crawl and fetch content from HTTP-only websites, or from sites with missing or invalid SSL certificates.

jindrich.bar

Hello, and thank you for your interest in this Actor!

Can you please share the URL of the website you are trying to scrape? WCC should be able to connect to HTTP-only servers. Servers with invalid TLS certificates should produce an error for security reasons, but we might add an input option for turning this behaviour off.

Sharing the URL would definitely help us make that decision and provide better support.

Cheers!


souheil

22 days ago

Hi,

I tested with the following URLs:

http://cargomatrix.com http://cargomessenger.com

Let me know if you need further information.

Thanks


jindrich.bar

Thank you for your response. It seems that those two pages have expired TLS certificates.

In my opinion, the best approach here would be adding a new ignoreTlsErrors input option, which would allow the Actor to access those pages even if the TLS setup is faulty. Would that work for you?

I'll discuss the implementation with the rest of the team, and we'll let you know once there is any news. Cheers!


souheil

21 days ago

Thank you!


jindrich.bar

Hello again!

We're letting you know that the new beta build of Website Content Crawler (0.1.185) now includes a new input option. When you enable Crawler settings > Ignore HTTPS errors (or ignoreHttpsErrors via the API), the crawler won't fail on TLS certificate errors and will load the page and store its content even when TLS is misconfigured.

To switch to the beta branch, pick Run options > Build > Beta, save, and reload the page to get the new input schema.
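For reference, a minimal run input enabling the new option might look like the sketch below. It uses the ignoreHttpsErrors field named above together with the URLs from this thread; the startUrls shape shown here is an assumption based on the usual Apify Actor input format, so check the Actor's input schema on the beta build for the exact field names:

```json
{
  "startUrls": [
    { "url": "http://cargomatrix.com" },
    { "url": "http://cargomessenger.com" }
  ],
  "ignoreHttpsErrors": true
}
```

Passing this as the run input (with the build set to beta as described above) should let the crawl proceed despite the expired certificates on those two sites.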

I'll close this issue now, but feel free to ask additional questions if you have any. Cheers!