Website Content Crawler

Developed and maintained by Apify

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.

Rating: 4.6 (38)
Pricing: Pay per usage
Total users: 49.4k
Monthly users: 6.9k
Runs succeeded: >99%
Issue response: 3.8 days
Last modified: 7 days ago
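For context, a programmatic run of the Actor looks roughly like the sketch below. It uses the apify-client npm package; the actor ID apify/website-content-crawler and the startUrls input field match the public listing, while the token variable and start URL are placeholders.

```typescript
import { ApifyClient } from 'apify-client';

// The token variable name is a placeholder; use whatever holds your Apify API token.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Start the crawler and wait for it to finish. Only startUrls is shown here;
// the Actor accepts many more input options.
const run = await client.actor('apify/website-content-crawler').call({
    startUrls: [{ url: 'https://docs.apify.com' }],
});

// Each crawled page ends up as one item in the run's default dataset,
// including the cleaned text/Markdown fields described above.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
    console.log(item.url, item.text);
}
```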


Ability to Group Crawled Page with Followed Link and Its Content in a Single Row

Closed

randomname1234 opened this issue 2 months ago

Hi team 👋

I’ve been using the Website Content Crawler and loving how powerful it is. However, I’ve run into a limitation.

The issue:

Right now, each crawled page is output as a separate row in the dataset — which is perfect for most use cases. But in my case, I’m crawling a page (let’s call it a “job list page”), and then following a link on that page (e.g., an “apply” button).

What I need is the columns to be:

Start Page URL | Start Page Details | Followed Page URL | Followed Page Details

Rather than:

URL | Details

Why this matters:

This structure would make it much easier to use the data downstream (e.g. for training, enrichment, integrations) without needing to do post-processing outside Apify. Right now, I’d have to stitch the records together manually in a second step, which is time-consuming and error-prone.

Possible solution:

Allow a way to:

  • Track relationships between pages (parent/child or source/followed).
  • Pass userData between requests and merge them into one final dataset row (see the sketch after this list).
  • Or expose a mode that pairs “origin page + clicked page” together in a single result.
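For illustration, the userData idea could look roughly like this in a custom Crawlee crawler (not Website Content Crawler itself); the selector, label, and field names are just placeholders:

```typescript
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, $, enqueueLinks, pushData }) {
        if (request.label === 'FOLLOWED') {
            // The followed page: merge the parent data carried in userData
            // with this page's own details into a single dataset row.
            await pushData({
                startPageUrl: request.userData.startPageUrl,
                startPageDetails: request.userData.startPageDetails,
                followedPageUrl: request.loadedUrl,
                followedPageDetails: $('body').text().trim(),
            });
            return;
        }

        // The job list page: enqueue the "apply" links and attach this page's
        // data so it travels along with each followed request.
        await enqueueLinks({
            selector: 'a.apply', // placeholder selector
            label: 'FOLLOWED',
            userData: {
                startPageUrl: request.loadedUrl,
                startPageDetails: $('h1').text().trim(),
            },
        });
    },
});

await crawler.run(['https://example.com/jobs']); // placeholder start URL
```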

Let me know if this is something that could be added or if there’s a workaround I’ve missed.

Thanks again for all the great work on Apify 🙌

jiri.spilka

Hi, thank you for using Website Content Crawler, and sorry for the delayed response. This doesn’t seem like something that can be easily supported, and currently we’re not planning to add this functionality. I also haven’t come across similar requests for this type of structured output. That said, I’ll raise it internally and check with the team.

Best, Jiri

jiri.spilka

Hi, I've checked it internally.

Here is the full answer from @barjir.

Unfortunately, we're not planning to support "combining" the parent and child page in one Dataset record. This could easily result in duplicate results and would further denormalize the dataset schema. However, we're already storing the information in the dataset. Under .crawl.referrerUrl, you get the URL of the resource that led to enqueuing the given dataset result. I believe this is all you need to recreate the parent-child relationship. Note that each resource only gets one referrerUrl, so you won't get the entire "website graph" from this, only a spanning tree of it.
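For illustration, a small post-processing sketch along these lines can rebuild the pairs from the dataset. It assumes the apify-client npm package and the crawler's default url/text output fields; the dataset ID is a placeholder.

```typescript
import { ApifyClient } from 'apify-client';

// Minimal shape of one Website Content Crawler result; crawl.referrerUrl is the
// field mentioned above, url/text are assumed to be the default output fields.
interface CrawlerItem {
    url: string;
    text?: string;
    crawl?: { referrerUrl?: string };
}

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// '<DATASET_ID>' is a placeholder for the run's default dataset ID.
const { items } = await client.dataset('<DATASET_ID>').listItems();
const pages = items as unknown as CrawlerItem[];

// Index pages by URL so each followed page can be joined to its parent.
const byUrl = new Map(pages.map((page) => [page.url, page]));

// Build "start page + followed page" rows from crawl.referrerUrl.
const pairs = pages.flatMap((page) => {
    const parent = page.crawl?.referrerUrl ? byUrl.get(page.crawl.referrerUrl) : undefined;
    if (!parent) return [];
    return [{
        startPageUrl: parent.url,
        startPageDetails: parent.text,
        followedPageUrl: page.url,
        followedPageDetails: page.text,
    }];
});

console.log(pairs);
```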

I'll go ahead and close this issue for now. Please feel free to ask a question or raise a new issue.

Jiri