Website Content Crawler

apify/website-content-crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗LangChain, LlamaIndex, and the wider LLM ecosystem.
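For context on the LangChain integration mentioned above, here is a minimal sketch of loading a finished crawl's dataset into LangChain documents with ApifyDatasetLoader. The token and dataset ID are placeholders, and the "text" and "url" fields are the Actor's commonly used output fields:

```python
import os

from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_core.documents import Document

# Placeholder token -- the loader reads it from the environment.
os.environ["APIFY_API_TOKEN"] = "<YOUR_APIFY_TOKEN>"

# Map each dataset item (one crawled page) to a LangChain Document.
loader = ApifyDatasetLoader(
    dataset_id="<DATASET_ID_FROM_A_FINISHED_RUN>",
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),
        metadata={"source": item.get("url", "")},
    ),
)

docs = loader.load()
print(f"Loaded {len(docs)} documents from the crawl.")
```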


Actor sometimes grabs only one screenshot or repeated 'mismatched' screenshots in output

Closed

jjohnson-dev opened this issue
3 months ago

Over the last couple of days I've noticed I'm not getting as many screenshotUrls, or accurately matched screenshotUrls, in the dataset.

This run only grabbed one - https://console.apify.com/view/runs/Iyav2BXfpIAakad69.

This run has the SAME screenshot posted in the payload for multiple distinct websites.

The screenshotUrl https://api.apify.com/v2/key-value-stores/OkUy6AhFjH7Qlt0VO/records/SCREENSHOT--.jpg appears over 70 times in the results, across multiple different businesses. That can't be right?

https://console.apify.com/view/runs/RRsGOekCU3p5ggKMU
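
(If it helps reproduce this: one way to quantify the duplication is to pull the run's dataset with the Python apify-client and count the screenshotUrl values. A rough sketch, with the token and run ID as placeholders:)

```python
from collections import Counter

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

run_id = "<RUN_ID>"  # e.g. the ID from the run URL above
dataset_id = client.run(run_id).get()["defaultDatasetId"]

# Tally how many result items point at the same screenshotUrl.
items = client.dataset(dataset_id).list_items().items
counts = Counter(
    item.get("screenshotUrl") for item in items if item.get("screenshotUrl")
)

for url, count in counts.most_common(5):
    print(count, url)
```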

My goal is to grab a screenshot of each business's home page that I'm passing through the crawler. Should depth 0 be sufficient? Should the sitemap option be off or on?

Thanks for looking!

jindrich.bar

Hello again!

This is indeed an issue with Website Content Crawler - in certain cases, it stored the same screenshot for multiple crawled webpages. This is fixed in the latest version of this Actor (0.3.38).

Also make sure you're using the playwright:firefox crawler type - the Adaptive crawler cannot take website screenshots reliably, since it switches between real browsers and raw HTTP clients. This is now enforced in 0.3.38: the Actor prints a warning and turns the saveScreenshots option off if you try to use it with an unsupported crawler type.

Regarding your other questions - if you want to scrape only the URLs from the Start URLs input, then yes, Maximum crawl depth set to 0 should be enough. In that case, you can also keep the Use Sitemaps option set to false.
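
For reference, a run input along those lines might look like the sketch below (using the Python apify-client; the start URLs are placeholders, and the field names are taken from the Actor's input schema as I understand it, so double-check them in the input editor):

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # placeholder token

# One screenshot per home page: browser crawler, no link following,
# no sitemap expansion. Field names assumed from the input schema.
run_input = {
    "startUrls": [
        {"url": "https://example-business-1.com"},
        {"url": "https://example-business-2.com"},
    ],
    "crawlerType": "playwright:firefox",  # needed for reliable screenshots
    "saveScreenshots": True,
    "maxCrawlDepth": 0,                   # scrape only the start URLs
    "useSitemaps": False,
}

run = client.actor("apify/website-content-crawler").call(run_input=run_input)
print("Default dataset:", run["defaultDatasetId"])
```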

I'll close this issue now, but feel free to ask additional questions if you have any. Cheers!

Maintained by Apify
Actor metrics
  • 3.4k monthly users
  • 486 stars
  • 99.9% runs succeeded
  • 3.2 days response time
  • Created in Mar 2023
  • Modified 4 days ago