
Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
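For context, here is a minimal sketch of feeding the Actor's output into LangChain via the `ApifyWrapper` integration. It assumes the `langchain-community` and `apify-client` packages and an `APIFY_API_TOKEN`; the start URL is a placeholder.

```python
import os

from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

os.environ["APIFY_API_TOKEN"] = "<your Apify API token>"

apify = ApifyWrapper()

# Run Website Content Crawler and map each dataset item (one per crawled
# page) onto a LangChain Document; "text" and "url" are fields of the
# Actor's output items.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://docs.apify.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "",
        metadata={"source": item["url"]},
    ),
)
docs = loader.load()  # ready for a vector store or RAG pipeline
```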
Rating: 3.9 (41)
Pricing: Pay per usage
Total users: 60K
Monthly users: 7.7K
Runs succeeded: >99%
Issues response: 7.8 days
Last modified: 4 days ago
Scrape files that don't originate from start URLs
Closed
Is there any way to revert to the behavior from before issue I3vvgzAyxfAA39xE1 was implemented? That change made `saveFiles` respect "include globs", versus the old behavior of saving all files.

I would prefer the original behavior, but I don't see a way to get it. If I add `**/*` to "include globs", it correctly downloads the files I need, but then it also scrapes pages from other domains, which I don't want.

Could you introduce a setting that separates "include crawl globs" from "include file globs"?
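For illustration, a sketch of the workaround described above using the Python Apify client. `saveFiles` comes from the thread; the glob field's name and item shape are assumptions to verify against the Actor's current input schema.

```python
from apify_client import ApifyClient

client = ApifyClient("<your Apify API token>")

run_input = {
    "startUrls": [{"url": "https://example.com/"}],
    "saveFiles": True,
    # Assumed field name and shape: the broad include glob makes saveFiles
    # download every linked file, but it also widens the crawl itself, so
    # pages from other domains get scraped too.
    "includeUrlGlobs": [{"glob": "**/*"}],
}
run = client.actor("apify/website-content-crawler").call(run_input=run_input)
print("Results in dataset:", run["defaultDatasetId"])
```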
civic-roundtable
Another idea would be to list every extension I want scraped in "include globs":

`**/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\?*)`

The problems with this are:

- I would need to list a lot of extensions.
- It wouldn't match short URLs that link to files, e.g. example.com/keynote redirecting to files.example.com/a/b/c/keynote.docx (illustrated in the sketch below).

Are those problems correct? If `saveFiles` wouldn't handle them anyway, then this ticket is moot and the above glob is sufficient.
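A small self-contained sketch of the second problem, using a regex as a rough stand-in for the extglob above (the URLs are the hypothetical examples from this post): link filtering sees only the link URL, not the redirect target, so a short URL without an extension never matches.

```python
import re

# Rough regex equivalent of **/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\?*):
# path ends with one of the listed extensions, optionally followed by a query.
FILE_PATTERN = re.compile(r"\.(pdf|csv|pptx?|docx?|xlsx?)(\?.*)?$")

links = [
    "https://files.example.com/a/b/c/keynote.docx",  # matches
    "https://example.com/keynote",  # no match: the .docx only appears after
                                    # a redirect, which a glob cannot see
]
for url in links:
    print(url, "->", "saved" if FILE_PATTERN.search(url) else "skipped")
```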
civic-roundtable
It looks like if I remove all "include globs" entries, then this works the way I want! Great!
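Based on that resolution, the working configuration reduces to leaving the include globs empty. A sketch, with the same assumed field names as above:

```python
run_input = {
    "startUrls": [{"url": "https://example.com/"}],
    "saveFiles": True,      # still downloads linked files
    "includeUrlGlobs": [],  # empty: crawl scope stays at its default,
                            # so no pages from other domains are scraped
}
```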