Website Content Crawler

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 3.9 (41)
Pricing: Pay per usage
Total users: 60K
Monthly users: 7.8K
Runs succeeded: >99%
Issues response: 7.9 days
Last modified: 4 days ago

Actor simply doesn't work

Closed

xylonic_gloves opened this issue
8 months ago

This actor simply is not working. Whether it's 10 URLs or 500, no results were given, period.

jiri.spilka

Hi, thank you for your interest in the Website Content Crawler.

The crawler is attempting to fetch sitemaps for all the supplied URLs, so initially it may seem like nothing is happening. Downloading all the sitemaps takes approximately 3 minutes for the URLs you provided; once that's done, the crawling will begin.

If you want to start crawling immediately, set the Consider URLs from Sitemaps option to false, and the process will start right away.

Additionally, please increase the Max Pages setting from 3 to a larger value, as keeping it at 3 will limit the results.

Please, let me know if this helps.
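For reference, the two settings mentioned above correspond to fields in the Actor's JSON input. A minimal sketch in Python (`maxCrawlPages` is the name referenced later in this thread; `useSitemaps` is assumed here to be the field behind the "Consider URLs from Sitemaps" option):

```python
# Minimal input sketch for the Website Content Crawler.
# "maxCrawlPages" is referenced later in this thread; "useSitemaps"
# is an assumed field name for the "Consider URLs from Sitemaps" option.
run_input = {
    "startUrls": [{"url": "https://example.com"}],
    "useSitemaps": False,   # skip sitemap discovery so crawling starts immediately
    "maxCrawlPages": 1000,  # raise the Max Pages limit from 3
}
```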

xylonic_gloves

8 months ago

I am trying to send 500 URLs at a time to the scraper, or 1k. Is that okay?

I only want the home page or about page content. What are your thoughts?

jiri.spilka

In that case, set maxCrawlDepth to 0 and only the 1k startUrls will be crawled. It is fine to upload them all at once.

Also, make sure to set Consider URLs from Sitemaps to false, and remove the limit on max pages.

This should work fine. Please let me know whether it works.
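A hedged sketch of that setup, reusing the assumed field names from above (`useSitemaps` stands in for the "Consider URLs from Sitemaps" toggle):

```python
# Crawl only the supplied start URLs (no link following):
# maxCrawlDepth 0 means just the startUrls themselves are fetched.
start_urls = ["https://example.com", "https://example.org"]  # up to ~1k URLs
run_input = {
    "startUrls": [{"url": u} for u in start_urls],
    "maxCrawlDepth": 0,
    "useSitemaps": False,  # assumed field name for "Consider URLs from Sitemaps"
    # no maxCrawlPages, so the run is not capped below the number of start URLs
}
```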

xylonic_gloves

8 months ago

I am submitting via the API. Is that okay?

jiri.spilka

Yeah, that's ok, I don't see any reason why it shouldn't work. If you can, try with 10-50 URLs first.
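One way to submit the run through the API is the apify-client Python package; a sketch, assuming the Actor is addressed as apify/website-content-crawler and reusing the assumed input fields from above:

```python
# Sketch: start a run via the Apify API (pip install apify-client).
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "startUrls": [{"url": "https://example.com"}],
    "maxCrawlDepth": 0,
    "useSitemaps": False,  # assumed field name for the sitemap toggle
}

# call() starts the Actor and waits for the run to finish; the returned
# run object includes the ID of the dataset holding the results.
run = client.actor("apify/website-content-crawler").call(run_input=run_input)
print(run["id"], run["defaultDatasetId"])
```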

xylonic_gloves

8 months ago

What if a URL has bot protection, etc.?

Can you guys scrape sites that have safeguards?

jiri.spilka

It’s hard to say for certain—we’re aiming to make it work across all sites, and it does in the majority of cases. However, there are some websites we’re unable to scrape. I’m sorry I can’t provide a more definitive answer; it really depends on the specific site.

xylonic_gloves

8 months ago

It's okay, thank you.

xylonic_gloves

8 months ago

If I send a batch of 500 to the Actor, I know I get a run ID. I would like to know whether each URL gets its own UUID/ID I can track, so that when I call the API to grab the results, I can pull them one by one.

jiri.spilka

ok, let me close this issue for now.

xylonic_gloves

8 months ago

Did you see my question?

jiri.spilka

No, unfortunately, I closed the issue around the same time you commented. Apologies for the bad timing!

As for your question, I'm not sure I fully understand. I don't believe you need a separate ID for a URL; the URL itself can serve as the ID. You can retrieve dataset items one by one, but it's inefficient. It's better to download everything at once or paginate through the dataset (for a very large one).
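A sketch of that approach with the apify-client package, assuming each dataset item carries a url field identifying the crawled page (so the URL doubles as the ID) and paginating with offset/limit for large datasets:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")
dataset = client.dataset("<RUN_DEFAULT_DATASET_ID>")

# Download results in pages rather than one request per item,
# and key them by URL so individual pages are easy to look up.
results_by_url = {}
offset, limit = 0, 1000
while True:
    page = dataset.list_items(offset=offset, limit=limit)
    if not page.items:
        break
    for item in page.items:
        # Assumption: each item has a "url" field for the crawled page.
        results_by_url[item["url"]] = item
    offset += limit
```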

xylonic_gloves

8 months ago

I am running into a problem.

I have input 50 domains and only got 6 outputs. It says 0 failed...

jiri.spilka

You need to remove the "maxCrawlPages": 5 input parameter, as it caps the number of pages the crawler will process at 5.

There’s a log message in the run:

```
2024-10-26T17:29:49.383Z WARN Reached the maximum number of pages to enqueue (5), not enqueueing from
```
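In other words, for a batch of start URLs the maxCrawlPages limit should be removed or set at least as high as the number of URLs; a sketch reusing the assumed field names from above:

```python
start_urls = ["https://example.com", "https://example.org"]  # e.g. the 50 domains
run_input = {
    "startUrls": [{"url": u} for u in start_urls],
    "maxCrawlDepth": 0,
    "useSitemaps": False,              # assumed field name for the sitemap toggle
    "maxCrawlPages": len(start_urls),  # or omit this field entirely
    # "maxCrawlPages": 5 was the setting that capped the run at 5 pages
}
```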