Website Content Crawler

apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
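For context, here is a minimal sketch of running the Actor through the Apify API client and reading the extracted text from the run's default dataset. The input field saveMarkdown and the output fields url and text follow the Actor's schema as I understand it; verify against the current schema before relying on them.

```typescript
// Minimal sketch: run the Actor and read the cleaned page text from the
// default dataset. Field names other than startUrls are assumptions.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Start the Actor run and wait for it to finish.
const run = await client.actor('apify/website-content-crawler').call({
  startUrls: [{ url: 'https://docs.example.com' }],
  saveMarkdown: true, // assumed input field for Markdown output
});

// Each dataset item holds the extracted content for one crawled page.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
  console.log(item.url, String(item.text ?? '').slice(0, 80));
}
```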


Scrape files that don't originate from start URLs

Closed

civic-roundtable opened this issue 3 months ago

Is there any way to revert to the behavior from before the change in issue I3vvgzAyxfAA39xE1?

That change made saveFiles respect "include globs", whereas previously all linked files were saved.

I would prefer the original behavior, but I don't see a way to get it back. If I add **/* to "include globs", the files I need are downloaded correctly, but the crawler then also scrapes pages from other domains, which I don't want.

Could you introduce a setting that separates "include crawl globs" from "include file globs"?
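For illustration only, a hypothetical shape for that proposed split; neither field below exists in the Actor's input schema today.

```typescript
// Hypothetical input shape for the proposed setting -- includeCrawlGlobs and
// includeFileGlobs are invented names that only illustrate the request above.
const proposedInput = {
  startUrls: [{ url: 'https://example.com/' }],
  saveFiles: true,
  includeCrawlGlobs: ['https://example.com/**'], // limits which pages are crawled
  includeFileGlobs: ['**/*'],                    // limits which files are downloaded
};
```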

civic-roundtable commented 3 months ago

Another idea would be to list all extensions that I want scraped in "include globs":

  • **/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\?*)

The problems with this are:

  1. I would need to list a lot of extensions
  2. This wouldn't catch short URLs that link to files, e.g. example.com/keynote redirecting to files.example.com/a/b/c/keynote.docx

Are those problems real? If saveFiles wouldn't handle those cases anyway, then this ticket is moot and the glob above is sufficient.
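As a sanity check on problem 2, here is the glob evaluated with minimatch, which supports the @(...) and ?(...) extglob syntax used above. That the Actor matches globs against the full URL string with the same semantics is an assumption; verify before relying on this.

```typescript
// Evaluate the extension glob against example URLs (URLs are illustrative).
import { minimatch } from 'minimatch';

const pattern = '**/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\\?*)';

const urls = [
  'https://files.example.com/a/b/c/keynote.docx',     // direct file link
  'https://files.example.com/a/b/c/keynote.docx?v=2', // file link with a query string
  'https://example.com/keynote',                      // short URL from problem 2
];

for (const url of urls) {
  console.log(minimatch(url, pattern), url);
}
// Expected: true for the two .docx URLs, false for the extension-less short
// URL -- so a redirect target behind it would never be matched by the glob.
```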

civic-roundtable commented 3 months ago

It looks like if I remove all "include globs" entries, this works the way I want. Great!
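For the record, the resolution as an input sketch: with the include globs left empty, the crawl stays scoped by the start URLs while saveFiles still downloads linked files. The includeUrlGlobs field name is an assumption based on the thread's "include globs".

```typescript
// Working configuration as described in the resolution above; includeUrlGlobs
// is an assumed field name.
const input = {
  startUrls: [{ url: 'https://example.com/' }],
  includeUrlGlobs: [], // no include globs -> saveFiles saves all linked files again
  saveFiles: true,
};
```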

Maintained by Apify
Actor metrics
  • 3.8k monthly users
  • 544 stars
  • 99.9% runs succeeded
  • 3.4 days response time
  • Created in Mar 2023
  • Modified 1 day ago