Website Content Crawler

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
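
For context, the Actor can be run programmatically through the Apify API client and the extracted content read from the run's dataset. The sketch below is illustrative only: the token handling and the startUrls and maxCrawlPages input fields are assumptions based on the public input schema, not a definitive recipe.

import { ApifyClient } from 'apify-client';

// Minimal sketch: start a crawl and read the cleaned page content from the dataset.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const run = await client.actor('apify/website-content-crawler').call({
    startUrls: [{ url: 'https://docs.apify.com/' }], // pages to start crawling from (assumed field name)
    maxCrawlPages: 10,                               // keep the example run small (assumed field name)
});

// Each dataset item holds the extracted text (e.g. Markdown) plus page metadata.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
    console.log(item.url);
}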

Rating: 3.7 (41)
Pricing: Pay per usage


Total users: 59K
Monthly users: 7.9K
Runs succeeded: >99%
Issues response: 7.6 days
Last modified: 5 days ago


Error: incorrect header check

Closed

MavenAGI opened this issue a year ago

A recent run failed multiple times with this error. Here's a relevant log snippet:

2024-05-13T18:32:30.260Z ACTOR: Pulling Docker image of build Q0D3SU0nCpLVGrtw4 from repository.
2024-05-13T18:32:30.383Z ACTOR: Creating Docker container.
2024-05-13T18:32:30.511Z ACTOR: Starting Docker container.
2024-05-13T18:32:31.818Z Starting X virtual framebuffer using: Xvfb :99 -ac -screen 0 1920x1080x24+32 -nolisten tcp
2024-05-13T18:32:31.826Z Executing main command
2024-05-13T18:32:35.912Z INFO System info {"apifyVersion":"3.1.16","apifyClientVersion":"2.9.0","crawleeVersion":"3.8.1","osType":"Linux","nodeVersion":"v18.19.1"}
2024-05-13T18:32:36.139Z INFO Discovering possible sitemap files from the start URLs...
2024-05-13T18:32:38.358Z node:events:495
2024-05-13T18:32:38.360Z throw er; // Unhandled 'error' event
2024-05-13T18:32:38.362Z ^
2024-05-13T18:32:38.363Z
2024-05-13T18:32:38.365Z Error: incorrect header check
2024-05-13T18:32:38.366Z at Zlib.zlibOnError [as onerror] (node:zlib:189:17)
2024-05-13T18:32:38.368Z at Zlib.callbackTrampoline (node:internal/async_hooks:128:17)
2024-05-13T18:32:38.370Z Emitted 'error' event on Gunzip instance at:
2024-05-13T18:32:38.371Z at Gunzip.onerror (node:internal/streams/readable:828:14)
2024-05-13T18:32:38.373Z at Gunzip.emit (node:events:517:28)
2024-05-13T18:32:38.375Z at emitErrorNT (node:internal/streams/destroy:151:8)
2024-05-13T18:32:38.376Z at emitErrorCloseNT (node:internal/stre... [trimmed]
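
For reference, Node.js raises exactly this error whenever a Gunzip stream receives data that does not start with a valid gzip header, for example a sitemap that is fetched as if it were gzip-compressed but is actually served as plain XML. A minimal sketch that reproduces the message, independent of this Actor:

import { gunzipSync } from 'node:zlib';

// Plain XML bytes, not gzip: the gzip magic bytes 0x1f 0x8b are missing,
// so zlib rejects the stream before any decompression happens.
try {
    gunzipSync(Buffer.from('<?xml version="1.0"?><urlset></urlset>'));
} catch (err) {
    console.error((err as Error).message); // "incorrect header check"
}
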
jindrich.bar

Hello and thank you for your interest in this Actor!

Our web-scraping library, Crawlee, is indeed having some issues processing the sitemap on this domain. I have already created a GitHub issue for this (see here), and our team will look into it soon.

In the meantime, you can simply disable sitemap discovery (the Consider URLs from Sitemaps option) for this run. That way, the Actor won't try to access the sitemap and won't fail. Keep in mind that sitemap discovery is only a supportive mechanism; in most cases, you should get the same results in the same amount of time with and without it.
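
In the Actor's JSON input, that corresponds to a single boolean toggle. The sketch below assumes the field is called useSitemaps; check the exact key against the current input schema.

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Workaround sketch: same crawl, but with sitemap discovery turned off,
// so the failing sitemap is never fetched.
await client.actor('apify/website-content-crawler').call({
    startUrls: [{ url: 'https://example.com/' }],
    useSitemaps: false, // the "Consider URLs from Sitemaps" toggle; field name assumed
});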

I'll keep you posted with any updates regarding this issue. Thank you! (and sorry for the inconvenience.)

jindrich.bar

Hello again! Just letting you know that this issue has been fixed in the latest release. This Actor should now be able to process sitemaps correctly regardless of compression.
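
One common way to make sitemap handling compression-agnostic (not necessarily how Crawlee implements it) is to sniff the gzip magic bytes and only decompress when they are present:

import { gunzipSync } from 'node:zlib';

// Sketch: decompress the body only when it actually starts with the gzip
// magic bytes (0x1f 0x8b); otherwise treat it as plain XML.
function sitemapToString(body: Buffer): string {
    const isGzip = body.length >= 2 && body[0] === 0x1f && body[1] === 0x8b;
    return (isGzip ? gunzipSync(body) : body).toString('utf-8');
}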

I'll close this issue now, but feel free to reopen it if the issue resurfaces (although it shouldn't :)). Thank you again for your patience!


apricot_orange

a year ago

I don't see how we can reopen the issue via the web, so I'm responding here. I'm not sure if it's related, but we had a sitemap-based crawl today that didn't seem to be making any progress:

https://console.apify.com/organization/5WhuE8XiPsnLiYsmv/actors/runs/0s7yHkSxax9vQOcIO#log

jindrich.bar

Hello, this is a known issue (https://console.apify.com/actors/aYG0l9s7dbB7j3gbS/issues/5AfOIAxLtcJYZnDNy) and we're currently working on a fix.

Right now, the Actor needs to parse the entire sitemap before it starts processing requests. If the sitemap is large enough, this takes a long time and can make the Actor look stuck. We're implementing a non-blocking sitemap parser (PR here) that will let us start processing requests concurrently, while the sitemap is still being parsed.
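
Conceptually, the change is to treat the sitemap as an asynchronous stream of URLs rather than a fully parsed list, so the crawler can enqueue and fetch pages while the rest of the sitemap is still being read. A rough sketch of the idea (not the actual Crawlee code; parseSitemap and enqueue are hypothetical helpers):

// Hypothetical streaming parser: yields each <loc> URL as soon as it is parsed.
async function* parseSitemap(sitemapUrl: string): AsyncGenerator<string> {
    // ...stream and parse the (possibly huge) XML here...
    yield sitemapUrl; // placeholder so the sketch is self-contained
}

async function crawlFromSitemap(sitemapUrl: string, enqueue: (url: string) => Promise<void>): Promise<void> {
    for await (const url of parseSitemap(sitemapUrl)) {
        // Requests are enqueued (and can be crawled concurrently) while the
        // sitemap parser is still running, instead of waiting for it to finish.
        await enqueue(url);
    }
}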

We'll keep you updated once we make some progress on this (hopefully this or next week). Thank you for your patience!