Website Content Crawler
No credit card required
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Is there any way to revert to the behavior from before issue I3vvgzAyxfAA39xE1 was implemented?
That changed the `saveFiles` behavior to respect "include globs", vs. the old behavior of saving all files.
I would prefer the original behavior, but I don't see a way to get it back. If I add `**/*` to "include globs", it correctly downloads the files I need, but it then also scrapes pages from other domains, which I don't want.
Could you introduce a setting that separates "include crawl globs" from "include file globs"?
Another idea would be to list every extension that I want scraped in "include globs": `**/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\?*)`
The problems with this are:
- I would need to list a lot of extensions.
- It wouldn't match short URLs that link to files, e.g. `example.com/keynote` redirecting to `files.example.com/a/b/c/keynote.docx`.
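To illustrate the second problem, here is a rough regex translation of that extglob pattern (a sketch only; the Actor's actual glob engine may behave differently). An extension-based filter can match direct file links, with or without a query string, but it has no way to match a short URL with no extension, since the redirect target isn't known at filtering time:

```python
import re

# Approximate regex equivalent of the glob
#   **/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\?*)
# Illustrative only -- not how the crawler actually matches globs.
EXT_PATTERN = re.compile(r".*\.(pdf|csv|pptx?|docx?|xlsx?)(\?.*)?$")

urls = [
    "https://files.example.com/a/b/c/keynote.docx",  # direct file link: matches
    "https://example.com/report.pdf?download=1",     # with query string: matches
    "https://example.com/keynote",                   # short URL, no extension: no match
]
for url in urls:
    print(url, "->", bool(EXT_PATTERN.match(url)))
```

The short URL fails because the extension only appears after the redirect is followed, which is exactly why listing extensions in "include globs" can't cover that case.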
Are those problems correct? If `saveFiles` wouldn't handle them anyway, then this ticket is moot and the above glob is sufficient.
It looks like if I remove all "include globs" entries, then this works the way I want! Great.