

Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 3.7 (41)
Pricing: Pay per usage
Total users: 59K
Monthly users: 7.9K
Runs succeeded: >99%
Issues response: 7.6 days
Last modified: 5 days ago
Memory limit control
Closed
I want to address a memory control issue with the LlamaIndex integration. I used the ACTOR_MEMORY_MBYTES parameter to control RAM usage, but it wasn't reflected in the Actor's console. However, when we set the memory limit in the Apify UI, it worked and we were able to control it. We'd like to know a workaround or solution for controlling memory usage through the API, which we're using via Python. A solution to this would be very helpful.
Hello, and thank you for your interest in this Actor!
Can you please share the code snippet you're using to call this Actor with LlamaIndex? Being able to reproduce this issue would greatly help us with assessing the source of the problem.
Cheers!
warmhearted_bank
Thank you for the response. Please find the code snippet below.

I double-checked this, and the input of Website Content Crawler does not contain environmentVariables - is this an experiment, or is there some misleading documentation that we should know of?
Also, if you want to control the memory usage of an Actor, you need to set it on the platform. The environment variable only tells the Actor how much memory is available; changing it doesn't actually change the limit.
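If you're starting the run from Python through the Apify API directly, the memory limit can be passed when you start the run. A minimal sketch, assuming the apify-client package (the token and start URL are placeholders):

from apify_client import ApifyClient

client = ApifyClient("<apify_api_token>")

# memory_mbytes sets the run's actual memory limit on the platform;
# the ACTOR_MEMORY_MBYTES environment variable only reports that limit
# to the running Actor, so changing it has no effect.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://example.com"}]},
    memory_mbytes=2048,
)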
intriguing_game
Hi, what should we do when we want to change the memory size from our code base? We are not using the console to crawl the URLs. We found the variable deep in one of the documentation pages and tried using it. Also, is there a cut-off parameter, other than the notifications, for when usage is going overboard?

I see. If you really just want to set the memory used by the Website Content Crawler that you're launching via reader.load_data, you can do so using the memory_mbytes parameter - see https://docs.llamaindex.ai/en/stable/api_reference/readers/apify/#llama_index.readers.apify.ApifyActor.load_data.
For example:
reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={...},
    dataset_mapping_function=...,
    memory_mbytes=2048,
)
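For context, a fuller self-contained sketch of the same call, assuming the llama-index-readers-apify package; the token, start URL, and mapping function are illustrative placeholders:

from llama_index.core import Document
from llama_index.readers.apify import ApifyActor

reader = ApifyActor("<apify_api_token>")

documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://docs.llamaindex.ai"}]},
    # Maps each crawled page (a dataset item) to a LlamaIndex Document.
    dataset_mapping_function=lambda item: Document(
        text=item.get("text"),
        metadata={"url": item.get("url")},
    ),
    # Caps the memory of the Actor run on the Apify platform.
    memory_mbytes=2048,
)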
intriguing_game
Thanks for the tip, we will try this out. As for the part where we want to set a cut-off limit - any inputs?

Hi, I apologize for the very late response. We are currently revisiting all open issues.
The solution provided by janbuchar is working fine. I’m sorry, but I don’t fully understand your question about the cut-off limit. Can you please clarify it?
I’ll go ahead and close this issue for now. But if you face problems, please feel free to reopen it, and I'll try to help you.