
Website Content Crawler
Pricing
Pay per usage

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 3.6 (39)
Total users: 54K
Monthly users: 8K
Runs succeeded: >99%
Issues response: 7.6 days
Last modified: 2 days ago
Issue: Execution context was destroyed (Closed)
Hi, I have this error. How can I solve it?
2025-04-17T13:53:56.989Z ACTOR: Pulling Docker image of build 7TgcL2ANbS1UMWAzS from registry.
2025-04-17T13:53:57.102Z ACTOR: Creating Docker container.
2025-04-17T13:53:57.288Z ACTOR: Starting Docker container.
2025-04-17T13:53:57.504Z Starting X virtual framebuffer using: Xvfb :99 -ac -screen 0 1920x1080x24+32 -nolisten tcp
2025-04-17T13:53:57.506Z Executing main command
2025-04-17T13:53:58.793Z INFO System info {"apifyVersion":"3.2.6","apifyClientVersion":"2.9.5","crawleeVersion":"3.13.1","osType":"Linux","nodeVersion":"v22.9.0"}
2025-04-17T13:53:59.481Z INFO Crawling will be started using 1 start URLs and 0 sitemap URLs
2025-04-17T13:54:00.093Z INFO AdaptiveCrawler: Starting the crawler.
2025-04-17T13:54:00.182Z INFO AdaptiveCrawler: Running browser request handler for https://higea.fr/collections/nos-caramels/products/duo-de-caramels
2025-04-17T13:54:16.918Z WARN AdaptiveCrawler: Reclaiming failed request back to the list or queue. page.evaluate: Execution context was destroyed, most likely because of a navigation.
2025-04-17T13:54:16.921Z at expandClickableElements (/home/myuser/dist/utils.js:215:16) {"id":"Kt2MWbvmgocZeKc","url":"https://higea.fr/collections/nos-caramels/products/duo-de-caramels","retryCount":1}
2025-04-17T13:54:20.016Z INFO AdaptiveCrawler: Running browser request handler for https://higea.fr/collections/nos-caramels/products/duo-de-caramels
2025-04-17T13:54:32.026Z WARN AdaptiveCrawler: Reclaiming failed request back to the list or queue. page.evaluate: Execution context was destroyed, most likely because of a navigation.
2025-04-17T13:54:32.028Z at expandClickableElements (/home/myuser/dist/utils.js:215:16) {"id":"Kt2MWbvmgocZeKc","url":"https://higea.fr/collections/nos-caramels/products/duo-de-caramels","retryCount":2}
2025-04-17T13:54:35.369Z INFO AdaptiveCrawler: Running browser request handler for https://higea.fr/collections/nos-caramels/products/duo-de-caramels
2025-04-17T13:55:00.093Z INFO AdaptiveCrawler:Statistics: AdaptiveCrawler request statistics: {"requestAvgFailedDurationMillis":null,"requestAvgFinishedDurationMillis":null,"requestsFinishedPerMinute":0,"requestsFailedPerMinute":0,"requestTotalDurationMillis":0,"requestsTotal":0,"crawlerRuntimeMillis":60433,"retryHistogram":[]}
2025-04-17T13:55:00.122Z INFO AdaptiveCrawler:AutoscaledPool: state {"currentConcurrency":1,"desiredConcurrency":3,"systemStatus":{"isSystemIdle":true,"memInfo":{"isOverloaded":false,"limitRatio":0.2,"actualRatio":0},"eventLoopInfo":{"isOverloaded":false,"limitRatio":0.6,"actualRatio":0},"cpuInfo":{"isOverloaded":false,"limitRatio":0.4,"actualRatio":0},"clientInfo":{"isOverloaded":false,"limitRatio":0.3,"actualRatio":0}}}
2025-04-17T13:55:01.894Z WARN AdaptiveCrawler: Reclaiming failed request back to the list or queue. page.evaluate: Execution context was destroyed, most likely because of a navigation.
2025-04-17T13:55:01.897Z at expandClickableElements (/home/myuser/dist/utils.js:215:16) {"id":"Kt2MWbvmgocZeKc","url":"https://higea.fr/collections/nos-caramels/products/duo-de-caramels","retryCount":3}
2025-04-17T13:55:05.250Z INFO AdaptiveCrawler: Running browser request handler for https://higea.fr/collections/nos-caramels/products/duo-de-caramels
2025-04-17T13:55:14.929Z WARN AdaptiveCrawler: Reclaiming failed request back to the list or queue. page.evaluate: Execution context was destroyed, most likely because of a navigation.
2025-04-17T13:55:14.931Z at expandClickableElements (/home/myuser/dist/utils.js:215:16) {"id":"Kt2MWbvmgocZeKc","url":"https://higea.fr/collections/nos-caramels/products/duo-de-caramels","retryCount":4}
2025-04-17T13:55:18.030Z INFO AdaptiveCrawler: Running browser request handler for https://higea.fr/collections/nos-caramels/products/duo-de-caramels

Hi,
I'm really sorry for the delayed response.
This issue usually occurs when the crawler attempts to click an element—typically a button—that redirects to a different page.
I tried to reproduce the problem in this run, and everything seems to be working fine on my end.
Are you still experiencing the issue?
Best regards, Jiri
conv_ai_account
Hello! We encounter the same issue on some runs; I can provide some URLs if needed. This is quite critical for our use case.
Is there any configuration we can set on our side that would prevent the crawler from clicking on elements? I disabled clicking on elements and iframes, but I ended up with the same error.
Hello, and thank you for your interest in this Actor!
By default, the Actor attempts to click "expandable" elements on the page (e.g., accordions and similar), ensuring the result contains as much content as possible. On some pages, unfortunately, the elements marked as "expandable" actually cause navigation when clicked.
You can disable this feature by changing the clickElementsCssSelector input (HTML processing > Expand clickable elements in the web UI) to a selector that doesn't match anything on the page (we usually use something like .no-click). This way, the Actor won't find any elements matching the selector and won't click anything on the page. Note that leaving this input field empty in the web UI means the platform will replace it with the default value, which is why you should use a bogus selector instead.
See e.g. my example run for https://higea.fr/collections/nos-caramels/products/duo-de-caramels, which scrapes the page on the first try.
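As a rough illustration, the input override could be prepared like this (a minimal sketch: the Actor's input is plain JSON, the start URL is the one from the logs above, and clickElementsCssSelector is the only field assumed to change from the defaults):

```python
import json

# Input overrides for Website Content Crawler. ".no-click" is a bogus CSS
# selector that matches nothing, so the Actor finds no "expandable" elements
# and never clicks (and therefore never triggers an unwanted navigation).
run_input = {
    "startUrls": [
        {"url": "https://higea.fr/collections/nos-caramels/products/duo-de-caramels"}
    ],
    # An empty value in the web UI is replaced by the platform default,
    # so a non-matching selector must be supplied explicitly.
    "clickElementsCssSelector": ".no-click",
}

# This JSON can be pasted into the Actor's input editor, or passed as the
# run input when starting the Actor via the Apify API or client libraries.
print(json.dumps(run_input, indent=2))
```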
I'll close this issue now, but feel free to ask additional questions if you have any. Cheers!