
Website Content Crawler
Pricing: Pay per usage
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 3.7 (41)
Total users: 59K
Monthly users: 7.9K
Runs succeeded: >99%
Issues response: 7.6 days
Last modified: 5 days ago
No URLs to crawl, runs forever
Closed
I found a bug in the Website Content Crawler. It looks like if I set exclude_url_globs and the crawler can't find any URLs from the start URL that are not covered by exclude_url_globs, it just runs forever.
If I make sure there are start URLs not matched by exclude_url_globs, the problem goes away.
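The failure mode described above can be sketched outside the Actor: if every link discovered from the start URL matches an exclude glob, the crawl frontier never grows. A minimal illustration using Python's `fnmatch` (the Actor's actual glob matching may differ; the URLs and globs here are hypothetical):

```python
from fnmatch import fnmatch

def crawlable(urls, exclude_globs):
    """Return only the URLs that do not match any exclude glob."""
    return [u for u in urls if not any(fnmatch(u, g) for g in exclude_globs)]

# Hypothetical start page whose discovered links ALL match an exclude glob.
discovered = [
    "https://example.com/blog/post-1",
    "https://example.com/blog/post-2",
]
excludes = ["https://example.com/blog/*"]

print(crawlable(discovered, excludes))  # → [] : nothing left to crawl
```

When that list is empty, a crawler that keeps waiting for new URLs instead of terminating would exhibit exactly the "runs forever" behavior reported here.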

Hi, thank you for trying the Website Content Crawler.
I’m not entirely sure what you’re trying to achieve. You’ve entered a startURL and then listed around 50 URLs to exclude.
Would it make more sense to include only the startURLs you want to scrape and set the maximum crawler depth to 0?
I’ve attempted to replicate your run, and it completed successfully. You can check it here for reference.
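The suggestion above (list only the pages you want and don't follow links) could look roughly like this as an Actor run input. This is a sketch: the field names `startUrls` and `maxCrawlDepth` are my reading of the Actor's input schema, and the URLs are placeholders — verify against the current Website Content Crawler documentation before use:

```python
# Sketch of a run input for the "crawl only these exact pages" approach.
run_input = {
    "startUrls": [
        {"url": "https://example.com/docs/page-1"},  # hypothetical pages
        {"url": "https://example.com/docs/page-2"},
    ],
    # Depth 0: fetch the start URLs themselves and follow no links,
    # so no exclude globs are needed at all.
    "maxCrawlDepth": 0,
}
```

This sidesteps the exclude-glob interaction entirely, since the crawler never has to decide which discovered links to skip.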

I’ll go ahead and close this issue now, but please feel free to ask additional questions or raise a new issue.