Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Is there any way to configure the crawler to treat hash URLs as unique?
I am trying to crawl this microsite: https://www.azdhs.gov/preparedness/epidemiology-disease-control/extreme-weather/heat-safety/extreme-heat-preparedness/index.php
It has 3 child pages, with completely separate content, but unfortunately they are all hash URLs:
- https://www.azdhs.gov/preparedness/epidemiology-disease-control/extreme-weather/heat-safety/extreme-heat-preparedness/index.php#az-heat-network
- https://www.azdhs.gov/preparedness/epidemiology-disease-control/extreme-weather/heat-safety/extreme-heat-preparedness/index.php#az-heat-planning-summit
- https://www.azdhs.gov/preparedness/epidemiology-disease-control/extreme-weather/heat-safety/extreme-heat-preparedness/index.php#statewide-extreme-heat-planning-events
The sitemap.xml is not up to date on this website, and the canonicalUrl metadata is not set correctly.
What have I tried so far?
- Using includeUrlGlobs to explicitly include hash links (e.g. the glob @(#?)*), but that does not work - the log says no links were found (a sketch of this input follows the list)
- Explicitly setting these 4 startUrls, but the job de-dups them and only crawls the root page
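For reference, this is roughly what the attempted input looked like. It is a minimal sketch, not verified against the Actor's input schema: it assumes the standard startUrls and includeUrlGlobs fields and that glob entries take the {"glob": ...} shape; the glob pattern is the one quoted above.

```python
# Sketch of the attempted run input (assumptions: standard Website Content
# Crawler fields startUrls / includeUrlGlobs, glob entries shaped as {"glob": ...}).
BASE = ("https://www.azdhs.gov/preparedness/epidemiology-disease-control/"
        "extreme-weather/heat-safety/extreme-heat-preparedness/index.php")

run_input = {
    "startUrls": [
        {"url": BASE},
        {"url": BASE + "#az-heat-network"},
        {"url": BASE + "#az-heat-planning-summit"},
        {"url": BASE + "#statewide-extreme-heat-planning-events"},
    ],
    # Glob intended to match hash links; in practice the run logs "no links
    # found" and de-duplicates all four start URLs down to the root page.
    "includeUrlGlobs": [{"glob": "@(#?)*"}],
}
```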
Thank you for the help
My current solution is to have separate tasks for each child page. When a hash URL is the only URL in startUrls, it is crawled correctly.
This works, but is not ideal because:
- I need to manually list all pages, defeating a key benefit of a dynamic crawl
- I need to wait for multiple tasks to finish and then merge their results, instead of having all results in one dataset (see the sketch below this list)
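To cut down the manual work, the per-page runs and the merge can be scripted. Below is a minimal sketch using the Apify Python client (apify-client), assuming one Website Content Crawler run per hash URL with default options and a simple merge of each run's dataset items into one list; the API token is a placeholder.

```python
from apify_client import ApifyClient

client = ApifyClient("<APIFY_API_TOKEN>")  # placeholder token

BASE = ("https://www.azdhs.gov/preparedness/epidemiology-disease-control/"
        "extreme-weather/heat-safety/extreme-heat-preparedness/index.php")
FRAGMENTS = ["", "#az-heat-network", "#az-heat-planning-summit",
             "#statewide-extreme-heat-planning-events"]

merged = []
for fragment in FRAGMENTS:
    # One run per page: with a single hash URL in startUrls,
    # the page is crawled correctly and nothing is de-duplicated away.
    run = client.actor("apify/website-content-crawler").call(
        run_input={"startUrls": [{"url": BASE + fragment}]}
    )
    # Pull the run's dataset items and append them to the combined results.
    merged.extend(client.dataset(run["defaultDatasetId"]).list_items().items)

print(f"Merged {len(merged)} items from {len(FRAGMENTS)} runs")
```

The runs here are sequential for simplicity; they could also be started in parallel and collected afterwards.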
Any ideas for how to treat hashes as unique URLs?
Update - it looks like Web Scraper already has this setting: "URL #fragments identify unique pages"!
Can we make that setting available in this Actor too?
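For comparison, this is roughly how that option would be set on Web Scraper. A sketch under the assumption that the "URL #fragments identify unique pages" toggle corresponds to a boolean input field named keepUrlFragments (check Web Scraper's input schema to confirm); the linkSelector and pageFunction shown are the usual Web Scraper inputs, with a trivial page function for illustration only.

```python
# Sketch only - assumes the "URL #fragments identify unique pages" toggle
# maps to a boolean field named keepUrlFragments in Web Scraper's input.
web_scraper_input = {
    "startUrls": [{"url": "https://www.azdhs.gov/preparedness/"
                          "epidemiology-disease-control/extreme-weather/"
                          "heat-safety/extreme-heat-preparedness/index.php"}],
    "keepUrlFragments": True,  # treat #fragment URLs as distinct pages
    "linkSelector": "a[href]",
    # Trivial pageFunction for illustration; a real one would extract content.
    "pageFunction": "async function pageFunction(context) { return { url: context.request.url }; }",
}
```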
Hello and thank you for your interest in this Actor!
Thanks for the detailed report! Treating hash URLs as unique is a good feature idea. We'll discuss it with our team and determine the best way to incorporate it into the Actor.
In the meantime, your workaround of running separate tasks per page is the best approach, even though it's not ideal.
Appreciate your patience and suggestion! I'll keep you posted about the progress on this. Cheers!
Ok, thank you!