
Website Content Crawler
Pricing
Pay per usage

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.0 (41)
Total users: 62K
Monthly users: 8.2K
Runs succeeded: >99%
Issues response: 7.9 days
Last modified: 2 hours ago
This just kept going for no reason
Closed
This was supposed to be a limited search, but instead it just kept running and charged me. Please refund this.
Hello, and thank you for your interest in this Actor!
Based on the current configuration, the scraper was set up to run extensively. Here are some key settings that contributed to the long execution:
- "maxCrawlPages": 9999999 — Allows nearly unlimited total pages.
- "maxCrawlDepth": 2 — Enables crawling multiple levels deep.
- "maxResults": 9999999 — Permits an extremely large result set.
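For comparison, a bounded run would dial the same options down. The sketch below is illustrative only: the option names are the ones listed above, the values and the start URL are made-up examples, and the "startUrls" shape follows Apify's usual request-list format.

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "maxCrawlPages": 100,
  "maxCrawlDepth": 2,
  "maxResults": 100
}
```

With limits like these, the crawler stops after 100 pages (or 100 results) regardless of how many links it discovers.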
Your input also contains several unsupported options: "sameDomainDelay", "maxEmailsPerDomain", "emailPattern", and "excludePatterns". None of these are recognized by the Website Content Crawler, so submitting them has no effect on the run.
If you want to limit the total number of pages crawled, use one of the options mentioned above. By default, the crawler stays on the domains of the start URLs, so maxCrawlPages effectively limits the maximum pages per domain.
Regarding the unexpected charges, we understand your frustration. I've contacted our support team internally to refund the cost of this run for you.
I'll close this issue now, but feel free to ask additional questions if you have any. Cheers!