
Website Content Crawler
Pricing: Pay per usage

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.0 (41 reviews)
Total users: 62K
Monthly users: 8.2K
Runs succeeded: >99%
Issues response: 8.2 days
Last modified: 11 hours ago
Memory issue
Open
I am getting a memory error while scraping content from websites: the run says it requires 32 GB. The memory has been adjusted for the Actor, but I am still getting this error.
Hello, and thank you for your interest in this Actor.
To help us investigate this effectively, could you please provide either:
- The Run ID of the failed scraping task, or
- A reproducible example (e.g., the specific Actor settings, input URLs, and any other relevant configuration) that triggers this error.
Without the actual error logs or a concrete scenario, it is difficult for us to diagnose the root cause.
Looking forward to your details so we can assist further.
archflowai
Hi, we are working on an n8n agent, and in the workflow we are using the Website Content Crawler. Even though we configured the crawler to use 2 GB, the API calls show 8 GB, and the run eventually fails for exceeding memory. Here is the run ID: nTKYjZCBcDEypATFc
Hello @archflowai! From what we can see on the Apify Platform, the Actor has indeed been run with 8 GB of memory.
This is likely an issue in the n8n integration you are using. Would you mind sharing more information about how you're connecting your n8n workflow to Apify? Note that 8 GB is the default memory for this Actor, so it serves as a fallback value when no explicit limit reaches the platform.
Looking forward to hearing from you soon.
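As a workaround while the n8n integration is being investigated, the run's memory can be set explicitly when starting the Actor through the Apify REST API, which accepts a `memory` query parameter in megabytes on the run endpoint. This is a minimal sketch that only builds the request URL; the token is a placeholder, and the 2048 MB value mirrors the 2 GB limit mentioned above.

```python
# Sketch: passing an explicit memory limit (in MB) when starting an Actor run
# via the Apify REST API. If `memory` is omitted, the Actor's default
# (8 GB for this Actor) applies. The token below is a placeholder.
from urllib.parse import urlencode

APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"  # placeholder, not a real credential
ACTOR_ID = "apify~website-content-crawler"

def build_run_url(memory_mb: int = 2048) -> str:
    """Build the run-Actor URL with an explicit memory limit in megabytes."""
    params = urlencode({"token": APIFY_TOKEN, "memory": memory_mb})
    return f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs?{params}"

print(build_run_url(2048))
```

Sending a POST request to this URL (with the Actor input as the JSON body) should start a run capped at the requested memory, which makes it easy to confirm whether the 8 GB figure is coming from the integration rather than the platform.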