Website Content Crawler

Pricing: Pay per usage

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 3.6 (39)

Total users: 54K

Monthly users: 8K

Runs succeeded: >99%

Issues response: 7.6 days

Last modified: 2 days ago

Issue: Is there a way to crawl URLs from the visible HTML after removing the removeElementsCssSelector elements?

Status: Open

formidable_quagmire opened this issue 25 days ago

Currently, when crawling a URL, the crawler uses the complete viewport to enqueue new links. Is there a way to enqueue URLs only after removing the removeElementsCssSelector elements from the HTML?

jakub.kopecky

Hey,

thank you for using Website Content Crawler!

Website Content Crawler should enqueue all links found on a page, including those within elements specified in removeElementsCssSelector. The crawler removes these elements during the HTML processing phase before saving the output.

Does this answer your question? I am not entirely sure if this is what you meant.

Jakub

pushpak

23 days ago

No, this only removes those elements when saving the output. We need a way to enqueue URLs after removing the elements matched by removeElementsCssSelector, so we don't enqueue links from the header or similar areas. Is that possible? If not, can you build it?


formidable_quagmire

3 days ago

Any update on this?

jindrich.bar

Hello all, and thank you for your interest in this feature.

As Jakub mentioned, removeElementsCssSelector only cleans the HTML output. You can filter the enqueued links using the Include URLs (globs) and Exclude URLs (globs) input options. If, for example, you only want to crawl subresources of https://example.com/blog/articles, you can set the Include URLs (globs) option to https://example.com/blog/articles/**/*. The same procedure works if you want to omit those pages from your crawl: submit the same glob into the Exclude URLs (globs) option instead.

This feature has been on our radar for some time, but we haven't explored it further, as the glob options seemed to cover all the use cases. Do you have an example page that cannot be scraped like this? If so, please share!
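To illustrate the include/exclude semantics described above, here is a rough Python sketch. It uses `fnmatch` as a stand-in for the crawler's actual glob matcher, and `should_enqueue` plus the URLs are illustrative names, not part of the Actor:

```python
from fnmatch import fnmatch

def should_enqueue(url, include_globs=(), exclude_globs=()):
    """Approximate include/exclude glob filtering of discovered links."""
    if any(fnmatch(url, g) for g in exclude_globs):
        return False  # an exclude glob always wins
    if include_globs:
        return any(fnmatch(url, g) for g in include_globs)
    return True  # no include globs: every non-excluded link passes

links = [
    "https://example.com/blog/articles/2024/intro",
    "https://example.com/about",
]

# Only crawl subresources of /blog/articles:
include = ["https://example.com/blog/articles/**/*"]
filtered = [u for u in links if should_enqueue(u, include_globs=include)]

# Or, inversely, drop those pages via an exclude glob:
kept = [u for u in links if should_enqueue(u, exclude_globs=include)]
```

With the include glob, only the article URL survives; with the same glob used as an exclude, only the /about page does.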

Cheers!

formidable_quagmire

3 days ago

Yes, some websites have all their resources directly under the root, e.g. xyz.com/news, xyz.com/blog-1, xyz.com/blog-2. In this case we want to crawl blog-1 and blog-2 but not the news page.

jindrich.bar

Thank you. While I understand the idea behind this, even those pages often have distinct patterns in their URLs that you can use to filter the enqueued links.

Would you mind sharing the actual website you're trying to scrape, with examples of the URLs you would (and wouldn't) like to scrape?
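For the flat-root example above, the slugs themselves share a pattern, so a single include glob can still separate the blog pages from /news. A quick sketch (again using `fnmatch` to approximate the Actor's glob matching; xyz.com and the paths are placeholders from the example):

```python
from fnmatch import fnmatch

# Hypothetical flat-root site from the example above.
discovered = [
    "https://xyz.com/news",
    "https://xyz.com/blog-1",
    "https://xyz.com/blog-2",
]

# Even without a shared path prefix, a glob on the slug pattern
# keeps the blog pages and drops everything else.
include_globs = ["https://xyz.com/blog-*"]
to_crawl = [u for u in discovered
            if any(fnmatch(u, g) for g in include_globs)]
```

Here `to_crawl` retains only the two blog URLs; /news never matches the include glob, so it is never enqueued.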