
Website Content Crawler
Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.
Rating: 4.6 (38)
Pricing: Pay per usage
Monthly users: 1.1k
Runs succeeded: >99%
Response time: 2.3 days
Last modified: 6 days ago
Ability to Group a Crawled Page and Its Followed Link's Content in a Single Row
Hi team 👋
I’ve been using the Website Content Crawler and love how powerful it is. However, I’ve run into a limitation.
The issue:
Right now, each crawled page is output as a separate row in the dataset — which is perfect for most use cases. But in my case, I’m crawling a page (let’s call it a “job list page”), and then following a link on that page (e.g., an “apply” button).
What I need is the columns to be:
Start Page URL | Start Page Details | Followed Page URL | Followed Page Details
Rather than:
URL | Details
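In dataset terms, the shape I’m after would be one object per parent/child pair rather than one per page, something like this (field names are purely illustrative):

```json
{
  "startPageUrl": "https://example.com/jobs",
  "startPageDetails": "text of the job list page",
  "followedPageUrl": "https://example.com/jobs/apply",
  "followedPageDetails": "text of the apply page"
}
```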
Why this matters:
This structure would make it much easier to use the data downstream (e.g. for training, enrichment, integrations) without needing to do post-processing outside Apify. Right now, I’d have to stitch the records together manually in a second step, which is time-consuming and error-prone.
Possible solution:
Allow a way to:
- Track relationships between pages (parent/child or source/followed).
- Pass userData between requests and merge them into one final dataset row.
- Or expose a mode that pairs “origin page + clicked page” together in a single result.
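For context, the manual stitching I’d otherwise have to do as a second step looks roughly like this. It’s just a sketch: it assumes each followed page’s record exposes the URL it was reached from in a referrer-style field, and the field names (`referrerUrl`, `text`) are placeholders, not confirmed output fields of the Actor.

```python
# Sketch of post-processing the crawler's dataset rows into
# parent/child pairs. Assumes each followed page's row carries the
# URL it was reached from in a "referrerUrl" field (placeholder name).

def merge_parent_child(rows):
    """Pair each followed page with the page that linked to it."""
    by_url = {row["url"]: row for row in rows}
    merged = []
    for row in rows:
        parent = by_url.get(row.get("referrerUrl"))
        if parent is not None:
            merged.append({
                "startPageUrl": parent["url"],
                "startPageDetails": parent.get("text", ""),
                "followedPageUrl": row["url"],
                "followedPageDetails": row.get("text", ""),
            })
    return merged

rows = [
    {"url": "https://example.com/jobs", "text": "Job list"},
    {"url": "https://example.com/jobs/apply", "text": "Apply form",
     "referrerUrl": "https://example.com/jobs"},
]
print(merge_parent_child(rows))
```

This works, but it requires an extra run over the whole dataset and a field that reliably links child to parent, which is exactly the step I’d love the Actor to handle for me.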
Let me know if this is something that could be added or if there’s a workaround I’ve missed.
Thanks again for all the great work on Apify 🙌
Pricing
Pricing model
Pay per usage. This Actor is paid per platform usage: the Actor is free to use, and you only pay for the Apify platform usage.