Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
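As an illustration of the LangChain integration mentioned above, here is a minimal sketch that loads a finished crawler run into LangChain documents using the langchain-community ApifyDatasetLoader. The dataset ID placeholder and the text/url field mapping are assumptions based on the crawler's typical output, and an Apify API token is expected in the environment:

```python
# Minimal sketch: turn a finished Website Content Crawler run into LangChain
# documents. Requires the langchain-community and apify-client packages and
# an APIFY_API_TOKEN environment variable; the dataset ID is a placeholder.
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_core.documents import Document

loader = ApifyDatasetLoader(
    dataset_id="<DATASET_ID>",
    # Map each crawled page to a Document: page text as content, URL as metadata.
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),
        metadata={"source": item.get("url", "")},
    ),
)

docs = loader.load()
```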
I am triggering my run using the API:
{ "startUrls": [ { "url": "https://www.bafa.de/DE/Wirtschaft/Handwerk_Industrie/Innovativer_Schiffbau/innovativer_schiffbau_node.html" } ], "maxCrawlDepth": 1, "useSitemaps": false, "saveFiles": true, "includeUrlGlobs": ["*.pdf"] }
The URL has two links leading to PDF documents, which are not being recognized. At least I don't see the links to the PDFs in the results table, nor do I see any files being downloaded. What could be the issue?
And: Does Apify have a native method of indexing the contents of these PDFs?
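A run with this input can also be started programmatically. A minimal sketch using the apify-client Python package (the API token placeholder is an assumption; the Actor ID and input mirror the setup described above):

```python
# Minimal sketch: start a Website Content Crawler run via the Apify API
# using the apify-client package. Replace the token placeholder with a
# real Apify API token.
from apify_client import ApifyClient

client = ApifyClient("<APIFY_API_TOKEN>")

# Input mirroring the JSON quoted above.
run_input = {
    "startUrls": [
        {"url": "https://www.bafa.de/DE/Wirtschaft/Handwerk_Industrie/Innovativer_Schiffbau/innovativer_schiffbau_node.html"}
    ],
    "maxCrawlDepth": 1,
    "useSitemaps": False,
    "saveFiles": True,
    "includeUrlGlobs": ["*.pdf"],
}

# Start the Actor and wait for it to finish.
run = client.actor("apify/website-content-crawler").call(run_input=run_input)

# Inspect the crawled pages in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("url"))
```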
Hi, thank you for using Website Content Crawler.
To address your case, you need to update the includeUrlGlobs to:

"includeUrlGlobs": [
  { "glob": "**/*.pdf**" }
]
This configuration instructs the crawler to include PDF files as well.
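Applied to the input from the question, only includeUrlGlobs changes. A minimal sketch of the updated run input as a Python dict (the other fields are copied from the original request):

```python
# Updated run input: includeUrlGlobs entries are objects with a "glob" key,
# and the pattern matches PDF URLs anywhere under the crawled site.
run_input = {
    "startUrls": [
        {"url": "https://www.bafa.de/DE/Wirtschaft/Handwerk_Industrie/Innovativer_Schiffbau/innovativer_schiffbau_node.html"}
    ],
    "maxCrawlDepth": 1,
    "useSitemaps": False,
    "saveFiles": True,
    "includeUrlGlobs": [{"glob": "**/*.pdf**"}],
}
```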
Unfortunately, it’s not working for this specific website. I’m unable to determine the cause at the moment.
I’ll keep this issue open, and we’ll try to investigate further. However, it might take some time before we can revisit it.
Jiri