
Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.0 (41)
Pricing: Pay per usage
Total users: 63K
Monthly users: 8.1K
Runs succeeded: >99%
Issues response: 8 days
Last modified: 2 days ago
Add Time Range to Scraped Data
Closed
The scraper works well, but it would be even better if it included timestamps (from date to date) indicating the period during which blog content was posted on the portal.
For example, this blog post was published on June 1, 2023: https://business.amazon.com/en/blog/what-is-amazon-business
Hello, and thank you for your interest in this Actor!
The published date (from the JSON-LD elements) is already stored in the dataset records as part of the metadata
field. When downloading your dataset as a JSON file, make sure to pick "All fields", not only "Text" or "Markdown", so that the metadata field is included.
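Once the export is downloaded, the dates can be pulled out with a few lines of Python. This is a minimal sketch: the exact key that holds the date inside the metadata field ("publishedAt" below) is an assumption, so inspect one record in your own dataset to confirm the real field name.

```python
import json

def published_dates(path):
    """Map each record's URL to its published date from a dataset JSON export.

    Assumes each record has a "metadata" object with a "publishedAt" key;
    verify these key names against your own export before relying on them.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    dates = {}
    for record in records:
        metadata = record.get("metadata") or {}
        dates[record.get("url")] = metadata.get("publishedAt")
    return dates
```

Records without structured data simply map to None, so you can spot which pages lacked a detectable date.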
You can check out my Actor run here, or the linked dataset here.
I'll close this issue now, but feel free to ask additional questions if you have any. Cheers!
kristupas
Thanks!
It scrapes correctly for Amazon, but for other websites, such as this one (https://www.vilniausvystymas.lt/naujienos/statybu-bendrovems-paprastesnis-dalyvavimas-vilniaus-viesos-infrastrukturos-rangos-konkursuose/), it doesn't.
This is not a bug in the Actor: the website you're linking to does not contain JSON-LD elements with structured data (you can find more information on JSON-LD here, for example).
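For context, JSON-LD elements are `<script type="application/ld+json">` blocks embedded in a page's HTML; schema.org Article markup carries the date in the standard datePublished property. A rough way to spot-check whether a page exposes one (a regex-based sketch, fine for a quick check but not a substitute for a real HTML parser):

```python
import json
import re

# Matches the contents of <script type="application/ld+json"> blocks.
LD_JSON_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL,
)

def find_date_published(html):
    """Return the first datePublished value found in JSON-LD blocks, or None.

    Only handles top-level JSON objects; real pages may also wrap the data
    in a list or an @graph array, which this sketch ignores.
    """
    for raw in LD_JSON_RE.findall(html):
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict) and "datePublished" in data:
            return data["datePublished"]
    return None
```

Running this against the Amazon blog post's HTML would find a date, while the second site's HTML would return None, which is why the metadata field stays empty there.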
This Actor is primarily oriented towards scraping plain text or Markdown from the web pages. You can potentially parse the dates out of the scraped text, but there will always be some risk of error.
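As a rough illustration of that fallback, here is a best-effort sketch that pulls an English "June 1, 2023"-style date out of scraped text. It only handles this one format, and as noted above it is inherently error-prone: a page may contain several dates or none.

```python
import re
from datetime import datetime

# Matches dates like "June 1, 2023" in English-language text.
DATE_RE = re.compile(
    r"\b(January|February|March|April|May|June|July|August|September|"
    r"October|November|December)\s+(\d{1,2}),\s+(\d{4})\b"
)

def extract_first_date(text):
    """Return the first "Month D, YYYY" date found in text, or None.

    There is no guarantee the first date in the text is the publication
    date, so treat the result as a heuristic, not ground truth.
    """
    match = DATE_RE.search(text)
    if not match:
        return None
    return datetime.strptime(" ".join(match.groups()), "%B %d %Y").date()
```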
I'll re-close this issue, as I believe there is no problem with the Actor itself. If you're looking for a more programmable solution that would allow you to scrape parts of the page into JSON fields, check out our Web Scraper or Cheerio Scraper.