Website Content Crawler
Automatically crawl and extract text content from websites such as documentation sites, knowledge bases, help centers, or blogs. This Actor is designed to provide data to feed, fine-tune, or train large language models such as ChatGPT or LLaMA.
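For context, a minimal sketch of running this Actor programmatically with the official apify-client Python package; the token placeholder and start URL are illustrative assumptions, not values from this thread:

```python
from apify_client import ApifyClient

# Sketch only: "<APIFY_TOKEN>" and the start URL are placeholders.
client = ApifyClient("<APIFY_TOKEN>")

# Run the Actor and wait for it to finish.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://docs.example.com"}]},
)

# Each dataset item holds the extracted content for one crawled page.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("url"), (item.get("text") or "")[:200])
```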
Hi,
I have an issue with the Website Content Crawler. While the removedElementsHtmlUrl snapshot contains all the content I need, every extraction step after it misses most of the content. How can I tackle this? Here are the snapshot URLs:
"originalHtmlUrl": "https://api.apify.com/v2/key-value-stores/ommLFcytptMQ3mMIO/records/https---www-otto-de-shoppages-service-faq-originalHtmlUrl"
"removedElementsHtmlUrl": "https://api.apify.com/v2/key-value-stores/ommLFcytptMQ3mMIO/records/https---www-otto-de-shoppages-service-faq-removedElementsHtmlUrl"
"extractusHtmlUrl": "https://api.apify.com/v2/key-value-stores/ommLFcytptMQ3mMIO/records/https---www-otto-de-shoppages-service-faq-extractusHtmlUrl"
"readableTextHtmlUrl": "https://api.apify.com/v2/key-value-stores/ommLFcytptMQ3mMIO/records/https---www-otto-de-shoppages-service-faq-readableTextHtmlUrl"
"readableTextIfPossibleHtmlUrl": "https://api.apify.com/v2/key-value-stores/ommLFcytptMQ3mMIO/records/https---www-otto-de-shoppages-service-faq-readableTextIfPossibleHtmlUrl"
Hello and thank you for your interest in this Actor!
This is a prime example of the text extractors being too eager to clean the website contents.
To get the full webpage content into your dataset, you can simply switch HTML Processing > HTML Transformer to None. This yields results equivalent to the removedElementsHtmlUrl debug option. If you end up with extra content in your dataset that you don't want there, you can use the Remove HTML elements (CSS selector) option to remove those elements based on their CSS selectors.
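If you run the Actor via the API, the same settings map onto its run input. A minimal sketch; the field names reflect the Actor's input schema at the time of writing, and the start URL is inferred from your snapshot keys, so treat both as assumptions:

```python
# Run-input sketch for the fix described above.
run_input = {
    # Inferred from the snapshot keys; adjust to your actual start page.
    "startUrls": [{"url": "https://www.otto.de/shoppages/service/faq"}],
    # HTML Processing > HTML Transformer = None:
    "htmlTransformer": "none",
    # Optional cleanup of unwanted blocks by CSS selector:
    "removeElementsCssSelector": "nav, footer, script, style",
}
```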
I'll close this issue now, but feel free to ask additional questions if you have any. Cheers!
- 2k monthly users
- 99.9% runs succeeded
- 2.9 days response time
- Created in Mar 2023
- Modified 3 days ago