Website Content Crawler

Pricing: Pay per usage

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 3.9 (41)
Total users: 60K
Monthly users: 7.8K
Runs succeeded: >99%
Issues response: 7.8 days
Last modified: 5 days ago

Issue: robots.txt (Closed)

usr-F3gTCcLF opened this issue a year ago

Does this crawler respect the robots.txt file on the site that it crawls by default? If it doesn't do so by default, how do I activate that setting?

jindrich.bar

Hello and thank you for your interest in this Actor!

Website Content Crawler can parse the robots.txt file to find the site's sitemap. If you're asking about the Allow / Disallow directives, the Actor ignores those, and there is currently no plan to implement support for them.
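If you do need to honor Allow / Disallow yourself, you can filter URLs on your side before (or after) crawling. A minimal sketch using Python's standard-library `urllib.robotparser` — the robots.txt content and example.com URLs below are invented for illustration:

```python
# Sketch: client-side Allow/Disallow filtering with Python's standard
# library, since the Actor itself ignores these directives. The
# robots.txt content and example.com URLs are made up for illustration.
from urllib.robotparser import RobotFileParser

robots_txt = """\
Sitemap: https://example.com/sitemap.xml
User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The sitemap reference (what the Actor reads robots.txt for) is here too.
print(rp.site_maps())  # ['https://example.com/sitemap.xml']

# Allow/Disallow checks you could apply to URLs yourself:
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/docs/intro"))    # True
```

In practice you would fetch the live robots.txt with `rp.set_url(...)` and `rp.read()` instead of hard-coding it, then drop any crawled URL for which `can_fetch` returns `False`.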

Keep in mind that the robots.txt file is only a "suggestion" and is not legally binding. Accessing any publicly available page on the internet is inherently legal (but you should think twice before using the content elsewhere; always keep an eye on licensing information!).

Read our blog post on the legality of web scraping to get a better idea of the laws governing scraping the web. I'll close this issue now, but feel free to ask additional questions if you have any.

Cheers!