Website Content Crawler
apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗LangChain, LlamaIndex, and the wider LLM ecosystem.

Memory limit control

Open

vitthalrao.lavate opened this issue
a month ago

I want to address a memory control issue with the LlamaIndex integration. I used the ACTOR_MEMORY_MBYTES parameter to control RAM usage, but it wasn't reflected in the Actor's console. When we set the memory limit in the Apify UI instead, it worked and we were able to control the memory limit. We would like to know a workaround or solution for controlling memory usage through the API, which we call from Python. A solution to this would be very helpful.

jindrich.bar

Hello, and thank you for your interest in this Actor!

Can you please share the code snippet you're using to call this Actor with LlamaIndex? Being able to reproduce this issue would greatly help us assess the source of the problem.

Cheers!

warmhearted_bank

a month ago

Thank you for the response. Please find the code snippet below.

janbuchar

I double-checked this, and the input of Website Content Crawler does not contain environmentVariables. Is this an experiment, or is there some misleading documentation that we should know of?

Also, if you want to control the memory usage of an Actor, you need to set it on the platform. The environment variable only tells the Actor how much memory is available; changing it doesn't actually change the limit.
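
Since you mentioned calling the Actor through the API from Python: the memory limit is a run option, not part of the Actor input. A minimal sketch with the apify-client package, where the token placeholder and start URL are illustrative:

from apify_client import ApifyClient

# Illustrative token placeholder; use your own Apify API token.
client = ApifyClient("<APIFY_API_TOKEN>")

# memory_mbytes is a run option passed alongside run_input,
# not a field inside the Actor input itself.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://example.com"}]},
    memory_mbytes=2048,  # memory limit for this run, in MB
)
print(run["defaultDatasetId"])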

intriguing_game

a month ago

Hi, what should we do when we want to change the memory size from our code base? We are not using the console to crawl the URLs. We found the variable deep in the documentation and tried using it. Also, is there a cut-off parameter, other than the notifications, for when usage is going overboard?

janbuchar

I see. If you really just want to set the memory used by the Website Content Crawler run that you're launching via reader.load_data, you can do so using the memory_mbytes parameter; see https://docs.llamaindex.ai/en/stable/api_reference/readers/apify/#llama_index.readers.apify.ApifyActor.load_data.

For example:

reader.load_data(
    actor_id='apify/website-content-crawler',
    run_input={...},
    dataset_mapping_function=...,
    memory_mbytes=2048,
)
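
For reference, a more self-contained version of the same call might look like the sketch below, assuming the llama-index-readers-apify package and a recent llama-index layout; the token placeholder, start URL, and metadata fields are illustrative:

from llama_index.core import Document
from llama_index.readers.apify import ApifyActor

# Illustrative token placeholder; use your own Apify API token.
reader = ApifyActor("<APIFY_API_TOKEN>")

documents = reader.load_data(
    actor_id="apify/website-content-crawler",
    # Illustrative Actor input; replace with your own start URLs and options.
    run_input={"startUrls": [{"url": "https://example.com"}]},
    # Map each dataset item to a LlamaIndex Document.
    dataset_mapping_function=lambda item: Document(
        text=item.get("text", ""),
        metadata={"url": item.get("url", "")},
    ),
    memory_mbytes=2048,  # memory limit for the Actor run, in MB
)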

intriguing_game

a month ago

Thanks for the tip, we will try this out. As for the part where we want to set a cut-off limit, any inputs?

Developer: Maintained by Apify
Actor metrics
  • 2.8k monthly users
  • 434 stars
  • 99.9% runs succeeded
  • 2.9 days response time
  • Created in Mar 2023
  • Modified 3 days ago