Extended GPT Scraper

drobnikj/extended-gpt-scraper

Extract data from any website and feed it into GPT via the OpenAI API. Use ChatGPT to proofread content, analyze sentiment, summarize reviews, extract contact details, and much more.

Another problem!

cooperative_bureau opened this issue 2 months ago

Hi! Another problem! I am uploading a list of URLs and need to process exactly the pages I uploaded.

I have set the processing depth to 0.

But when the task is running, I see that the number of pages to be processed is increasing.

What am I setting wrong?

Thank you

lukas.prusa

Hi, thanks for opening this issue!

A max crawling depth of 0 means to crawl infinitely. Unfortunately, a value of 1 already means going one page deep, so there is currently no value that means "process only the start URLs". It's very unintuitive, so we will discuss it in our team and try to change it or make it clearer :)

In the meantime, to avoid enqueuing sub-pages, simply leave the linkSelector and includeUrlGlobs fields empty and the crawler will have nothing to enqueue.
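For illustration, a run input along these lines should process only the uploaded URLs (a minimal sketch using the apify-client npm package; linkSelector and includeUrlGlobs are the fields mentioned above, while the instructions and openaiApiKey field names and the example URLs are assumptions, not confirmed against the Actor's input schema):

```typescript
import { ApifyClient } from 'apify-client';

// Assumes APIFY_TOKEN and OPENAI_API_KEY are set in the environment.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Call the Actor with the uploaded URLs and nothing to enqueue.
const run = await client.actor('drobnikj/extended-gpt-scraper').call({
    startUrls: [
        { url: 'https://example.com/page-1' },
        { url: 'https://example.com/page-2' },
    ],
    linkSelector: '',    // no selector, so no links are matched on the pages
    includeUrlGlobs: [], // no globs, so nothing qualifies for enqueueing
    // The two field names below are assumptions for illustration only.
    instructions: 'Summarize the page content.',
    openaiApiKey: process.env.OPENAI_API_KEY,
});

// Read the scraped results from the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.dir(items);
```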

I will keep you updated here, thanks!

cooperative_bureau

2 months ago

Thank you!

Maintained by Apify

Actor Metrics

  • 72 monthly users

  • 52 stars

  • 98% runs succeeded

  • 5.8 days response time

  • Created in Jun 2023

  • Modified 9 days ago