Website Content Crawler
No credit card required
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Hi, in this run 12 requests were handled, but the output only contains results for 2 URLs. I expected 12 outputs. Is this a bug, or do I misunderstand how it works? Thank you.
Hi, thank you for using Website Content Crawler.
I can see that you've configured almost everything correctly, but there is one small issue. Unfortunately, the crawler logs are a bit verbose in this case, so it's easy to miss.
Here's what's happening: initialConcurrency is set to 10, so the crawler tries to scrape 10 pages simultaneously. However, it won't save all of the output, because you've limited it with "maxResults": 1. In your case it saved 2 results, likely because several concurrent pages finished before the limit took effect.
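The interaction described above can be sketched with a toy concurrency model. This is a hypothetical simplification, not the Actor's real implementation; the constants simply mirror the "initialConcurrency" and "maxResults" values from the run input:

```python
import threading

MAX_RESULTS = 1      # the "maxResults" cap from the run input
CONCURRENCY = 10     # "initialConcurrency": pages scraped at once

results = []
lock = threading.Lock()

def scrape(url):
    # Each worker checks the cap before it starts. Several workers can
    # pass this check while the result list is still empty, so a few
    # extra results may be saved -- analogous to the run that stored
    # 2 results despite maxResults = 1.
    with lock:
        if len(results) >= MAX_RESULTS:
            return
    # ... fetching and parsing the page would happen here ...
    with lock:
        results.append(url)

threads = [threading.Thread(target=scrape, args=(f"page-{i}",))
           for i in range(CONCURRENCY)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # at least 1; often more than MAX_RESULTS
```

Because the cap check and the append happen in separate critical sections, the number of saved results can overshoot the cap slightly when many workers are in flight.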
To resolve this, remove the "maxResults": 1 setting. You'll also need to remove "maxCrawlPages": 20.
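For reference, an input along these lines should then behave as expected. This is only a sketch: the "startUrls" value is illustrative, and the two limit fields are simply omitted rather than set:

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "initialConcurrency": 10
}
```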
Please see my example run.
I hope this helps. Jiri
OK, that explains it. Thank you! Since I reused settings from an old run, I didn't realise that was set.
I suggest that when the settings boxes are collapsed, any optional values that are set be displayed on the collapsed row, like the run options section already does.
Interesting idea—thank you for sharing your feedback! I've passed it along internally.
Actor Metrics
4k monthly users
840 stars
>99% runs succeeded
1 day response time
Created in Mar 2023
Modified 21 hours ago