Website Content Crawler

apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗LangChain, LlamaIndex, and the wider LLM ecosystem.

Crawling does not work with some specific custom sitemaps

Closed

sai_sampath opened this issue
3 months ago

First of all, thank you for previously fixing the issue with custom sitemaps. That is now working fine, but when I tried to add the following sitemap, it's still failing. Can you please check and update? Thank you.

https://gist.githubusercontent.com/haneeshmvv/7b545a68bcd28f47a338bcda8d6383f6/raw/f2a5ad0e98e3234c0238d402eeffa0b61b2df8e8/circle-community.xml

jindrich.bar

Hello again!

The issue is caused by the way you're submitting the sitemap to the Actor. GitHub gists serve all content with the Content-Type: text/plain header. Since WCC already supports plain-text sitemaps (i.e., plain text files with newline-separated URLs), submitting an XML file with the text/plain content type causes WCC to try to parse it as a plain-text sitemap.
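To make the failure mode concrete, here is a minimal sketch (not WCC's actual implementation; the function name and dispatch rules are illustrative assumptions) of how content-type-based dispatch sends a text/plain response down the line-list path even when the body is really XML:

```python
import xml.etree.ElementTree as ET

def parse_sitemap(content_type: str, body: str) -> list[str]:
    """Hypothetical dispatch: only XML content types get XML parsing;
    everything else is treated as a newline-separated URL list."""
    if content_type in ("application/xml", "text/xml"):
        # XML path: collect the text of every <loc> element.
        return [el.text for el in ET.fromstring(body).iter()
                if el.tag.endswith("loc")]
    # Plain-text path: each non-empty line is assumed to be a URL.
    # An XML file served as text/plain lands here, and its lines
    # ("<?xml ...", "<urlset ...", ...) are not valid URLs.
    return [line.strip() for line in body.splitlines() if line.strip()]

xml_body = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/a</loc></url>
</urlset>"""

print(parse_sitemap("text/xml", xml_body))    # → ['https://example.com/a']
print(parse_sitemap("text/plain", xml_body))  # raw XML lines, not URLs
```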

There are two possible solutions - either:

  • upload your XML file to a server that will send it with the correct Content-Type header (e.g. GitHub Pages), or
  • transform your XML file into a newline-separated list of URLs.
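If you go with the second option, the transformation can be sketched in a few lines; this is an illustrative helper (the function name and sample sitemap are made up for the example), extracting every <loc> entry and joining them with newlines:

```python
import xml.etree.ElementTree as ET

def sitemap_to_url_list(xml_text: str) -> str:
    """Flatten an XML sitemap into a newline-separated URL list,
    which a plain-text sitemap parser accepts regardless of the
    Content-Type header it was served with."""
    root = ET.fromstring(xml_text)
    # <loc> tags carry the sitemap namespace, so match by suffix.
    urls = [el.text.strip() for el in root.iter() if el.tag.endswith("loc")]
    return "\n".join(urls)

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/page-1</loc></url>
  <url><loc>https://example.com/page-2</loc></url>
</urlset>"""

print(sitemap_to_url_list(sitemap))
```

Matching by the `loc` suffix also handles sitemap index files, whose entries live under `<sitemap><loc>` rather than `<url><loc>`.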

I'll close this issue now, but feel free to ask if you have any additional questions. Cheers!

Developer
Maintained by Apify
Actor metrics
  • 2.8k monthly users
  • 434 stars
  • 99.9% runs succeeded
  • 2.9 days response time
  • Created in Mar 2023
  • Modified 3 days ago