
Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.0 (41)
Pricing: Pay per usage
Total users: 62K
Monthly users: 8.2K
Runs succeeded: >99%
Issues response: 7.8 days
Last modified: 7 days ago
Getting duplicate URLs in web crawling
Closed
Hello, we're encountering duplicate URLs in our web crawling process. Our current setup, which uses LlamaIndex for web crawling, is producing duplicate URLs, wasting system resources and hurting performance. We need a URL deduplication strategy to filter out the duplicates and optimize resource usage.
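As a client-side workaround while the Actor's own deduplication is investigated, results can be filtered before they reach the pipeline. The sketch below is a minimal, hypothetical approach (not the Actor's internal logic): normalize each URL so trivially different forms compare equal, then keep only the first occurrence.

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Normalize a URL so trivially different forms compare equal."""
    parts = urlsplit(url.strip())
    scheme = parts.scheme.lower()
    netloc = parts.netloc.lower()
    # Treat "/a" and "/a/" as the same page; keep "/" for the site root.
    path = parts.path.rstrip("/") or "/"
    # Drop the fragment; keep the query string, since it may select
    # genuinely different content.
    return urlunsplit((scheme, netloc, path, parts.query, ""))

def deduplicate(urls):
    """Yield each URL once, keeping first-seen order."""
    seen = set()
    for url in urls:
        key = canonicalize(url)
        if key not in seen:
            seen.add(key)
            yield url
```

Whether trailing slashes, fragments, or query strings should be collapsed depends on the target site, so these normalization rules are assumptions to adjust per crawl.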

Hello, and thank you for your interest in Website Content Crawler! I looked into your recent runs, and the built-in deduplication does appear to be malfunctioning there. We will investigate and let you know.
simpleworks
Hello Jan Buchar, is there any update on the above issue?

Hello, unfortunately we haven't yet been able to look into this.

Hi,
I apologize for the delayed response. We are currently revisiting all open issues, attempting to reproduce them and provide answers.
Unfortunately, with the given information, I’m unable to reproduce the issue at this time.
I’m sorry for the inconvenience, but I’ll go ahead and close this issue for now. If you continue to face problems, please feel free to reopen it, and we’ll try to help you.