Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
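As an illustration of the LangChain integration mentioned above, here is a hedged sketch using the ApifyWrapper utility from langchain_community. Import paths and signatures vary between LangChain versions, and the start URL is just a placeholder, so treat this as an outline rather than the canonical integration.

```python
import os
from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

os.environ["APIFY_API_TOKEN"] = "<YOUR_APIFY_TOKEN>"  # placeholder token

apify = ApifyWrapper()

# Run Website Content Crawler and map each dataset item to a LangChain Document.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://docs.apify.com"}]},  # placeholder URL
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),
        metadata={"source": item.get("url", "")},
    ),
)

docs = loader.load()  # ready to feed into a vector store / RAG pipeline
```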
This Actor simply is not working. Whether it's 10 URLs or 500, no results were given, period.
Hi, thank you for your interest in the Website Content Crawler.
What's happening is that the crawler is attempting to fetch sitemaps for all the supplied URLs, so initially, it may seem like nothing is happening. However, the crawler is working on downloading all the sitemaps, which takes approximately 3 minutes for the URLs you provided. Once that's done, the crawling will begin.
If you want to start crawling immediately, set the Consider URLs from Sitemaps option to false, and the process will start right away.
Additionally, please increase the Max Pages setting from 3 to a larger value, as keeping it at 3 will limit the results.
Please, let me know if this helps.
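For reference, a minimal sketch of those two settings as run input. The field name useSitemaps for "Consider URLs from Sitemaps" is an assumption based on the option label (check the Actor's input schema); maxCrawlPages corresponds to the Max Pages setting.

```python
# Sketch of the relevant Website Content Crawler input fields.
run_input = {
    "startUrls": [{"url": "https://example.com"}],  # placeholder
    "useSitemaps": False,   # assumed field name: skip sitemap discovery, start crawling immediately
    "maxCrawlPages": 1000,  # raise from 3 so the results are not cut off
}
```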
I am trying to send 500 URLs at a time to the scraper, or 1k. Is that okay?
I only want the home page or about page content. What are your thoughts?
In that case, set maxCrawlDepth to 0 and only the 1k startUrls will be crawled. It is fine to upload them all at once. Also, make sure to set Consider URLs from Sitemaps to false, and remove the max pages limit.
This should work fine. Please let me know whether it works.
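As a sketch, the input for that kind of run could look like this (again, useSitemaps is an assumed field name for the "Consider URLs from Sitemaps" option, and the URLs are placeholders):

```python
# Sketch: crawl only the 1k start URLs themselves (home/about pages),
# without following links, expanding sitemaps, or capping the page count.
urls = ["https://example.com", "https://example.org"]  # ... up to ~1k URLs

run_input = {
    "startUrls": [{"url": u} for u in urls],
    "maxCrawlDepth": 0,     # do not follow links from the start URLs
    "useSitemaps": False,   # assumed field name for "Consider URLs from Sitemaps"
    # no "maxCrawlPages" key, so the number of results is not capped
}
```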
I am submitting via API. Is that okay?
Yeah, that's ok, I don't see any reason why it shouldn't work. If you can, try with 10-50 URLs first.
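For the API route, here is a hedged sketch using the official apify-client Python package; the token and URLs are placeholders, and useSitemaps is the assumed field name from above.

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

# Start small (10-50 URLs) before submitting the full batch of 500-1k.
urls = ["https://example.com", "https://example.org"]

run = client.actor("apify/website-content-crawler").call(
    run_input={
        "startUrls": [{"url": u} for u in urls],
        "maxCrawlDepth": 0,
        "useSitemaps": False,  # assumed field name, see above
    }
)

print(run["id"], run["defaultDatasetId"])  # run ID and the dataset holding the results
```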
What if a URL has bot protection, etc.?
Can you guys scrape sites that have safeguards?
It’s hard to say for certain—we’re aiming to make it work across all sites, and it does in the majority of cases. However, there are some websites we’re unable to scrape. I’m sorry I can’t provide a more definitive answer; it really depends on the specific site.
It's okay, thank you.
If I send a batch of 500 to the Actor, I know I get a run ID. I would like to know if each URL gets its own UUID/ID I can track, so that when I call back in via the API to grab the results, I can pull them one by one.
ok, let me close this issue for now.
did you see my question?
No, unfortunately, I closed the issue around the same time you commented. Apologies for the bad timing!
As for your question, I’m not sure I fully understand. I don’t believe you need a separate ID for a URL; the URL itself can serve as the ID. You can retrieve dataset items one by one, but it’s inefficient. It’s better to download everything at once or paginate through the dataset (for a very large one).
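A sketch of both approaches with the apify-client Python package; the dataset ID comes from the run object's defaultDatasetId, and the item fields (url, text) follow the crawler's typical output, so check your own dataset for the exact schema.

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token
dataset = client.dataset("<DEFAULT_DATASET_ID>")  # from run["defaultDatasetId"]

# Option 1: stream everything and key the results by URL.
results_by_url = {item["url"]: item for item in dataset.iterate_items()}

# Option 2: paginate through a very large dataset in chunks.
offset, limit = 0, 500
while True:
    page = dataset.list_items(offset=offset, limit=limit)
    if not page.items:
        break
    for item in page.items:
        ...  # process item["url"], item["text"], etc.
    offset += limit
```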
I am running into a problem.
I have input 50 domains and only got 6 outputs. It says 0 failed...
You need to remove the "maxCrawlPages": 5 input parameter, as this limits the number of pages that will be crawled.
There’s a log message in the run:
```
2024-10-26T17:29:49.383Z WARN Reached the maximum number of pages to enqueue (5), not enqueueing from 2
```
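If you want to catch this programmatically, one option is to pull the run log and look for that warning. This is a sketch with the apify-client Python package; the run ID is a placeholder.

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

# Fetch the log of a finished run and look for the enqueue-limit warning.
log_text = client.run("<RUN_ID>").log().get() or ""
if "Reached the maximum number of pages to enqueue" in log_text:
    print("maxCrawlPages is capping the crawl; remove or raise it in the input.")
```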