Website Content Crawler

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
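As an example of that integration, a finished crawl's dataset can be loaded straight into LangChain Documents. A minimal sketch using @langchain/community's ApifyDatasetLoader; the dataset ID is a placeholder, and the text and url fields follow the Actor's documented output format:

```ts
import { ApifyDatasetLoader } from '@langchain/community/document_loaders/web/apify_dataset';
import { Document } from '@langchain/core/documents';

// Map each dataset item (one crawled page) to a LangChain Document.
// Website Content Crawler items expose the page text as `text`
// and the page address as `url`.
const loader = new ApifyDatasetLoader('YOUR-DATASET-ID', {
    datasetMappingFunction: (item) =>
        new Document({
            pageContent: (item.text as string) ?? '',
            metadata: { source: item.url as string },
        }),
});

const docs = await loader.load();
console.log(`Loaded ${docs.length} documents`);
```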

Rating: 3.9 (41)
Pricing: Pay per usage

Total users: 60K
Monthly users: 7.7K
Runs succeeded: >99%
Issues response: 7.8 days
Last modified: 4 days ago

Scrape files that don't originate from start URLs (Closed)

civic-roundtable opened this issue a year ago

Is there any way to revert to the behavior from before issue I3vvgzAyxfAA39xE1 was implemented?

That change made saveFiles respect the "include globs" setting, whereas the old behavior saved all files.

I would prefer the original behavior, but I don't see a way to do that. If I add **/* to "include globs", it correctly downloads the files I need, but then also scrapes pages from other domains, which I don't want.

Could you introduce a setting that separates "include crawl globs" from "include file globs"?
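For illustration, here is a minimal sketch of the trade-off being described, written against the apify-client package; the input field names includeUrlGlobs and saveFiles are assumptions based on the UI labels and the mention of saveFiles above, and may differ from the Actor's actual input schema:

```ts
// Sketch of the trade-off: a catch-all glob makes saveFiles download
// every linked file, but also widens the crawl to other domains.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

await client.actor('apify/website-content-crawler').call({
    startUrls: [{ url: 'https://example.com' }],
    saveFiles: true,
    // Assumed field name; "include globs" in the UI.
    includeUrlGlobs: [{ glob: '**/*' }],
});
```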

civic-roundtable commented a year ago

Another idea would be to list all the file extensions I want saved in "include globs":

  • **/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\?*)

The problems with this are:

  1. I would need to list a lot of extensions.
  2. It wouldn't handle short URLs that link to files, e.g. example.com/keynote redirecting to files.example.com/a/b/c/keynote.docx.

Are those assumptions correct? If saveFiles wouldn't handle them anyway, then this ticket is moot and the glob above is sufficient.
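For what it's worth, the pattern can be sanity-checked with minimatch, the glob library commonly used in the Apify/Crawlee ecosystem; whether this Actor matches URLs the same way is an assumption:

```ts
// Check what the proposed extension glob actually matches, assuming
// globs are matched against full URLs.
import { minimatch } from 'minimatch';

const pattern = '**/*.@(pdf|csv|ppt|pptx|doc|docx|xls|xlsx)?(\\?*)';

// Direct file URLs match, with or without a query string:
console.log(minimatch('https://files.example.com/a/b/c/keynote.docx', pattern)); // true
console.log(minimatch('https://example.com/report.pdf?v=2', pattern));           // true

// A short URL that only redirects to a file does not match (problem 2
// above): the glob sees the link, not the redirect target.
console.log(minimatch('https://example.com/keynote', pattern));                  // false
```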

civic-roundtable commented a year ago

It looks like if I remove all "include globs" entries, then this works the way I want. Great!
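So the resolution, assuming the same field names as above, amounts to enabling saveFiles and leaving the include-globs list empty, which keeps the default crawl scope while still downloading linked files:

```ts
// The configuration that ended up working: saveFiles on, include globs
// left empty so the crawl keeps its default same-domain scope.
// Field names are assumptions based on the UI labels.
const input = {
    startUrls: [{ url: 'https://example.com' }],
    saveFiles: true,
    includeUrlGlobs: [], // empty: files are saved, crawl scope unchanged
};
```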