🐺 Tripadvisor Reviews Scraper | Pay Per Result $0.5/1K reviews

thewolves/tripadvisor-reviews-scraper

Pay $0.50 for 1,000 reviews

The Wolves proudly presents the TripAdvisor Reviews Scraper, a fast, low-cost solution for TripAdvisor review extraction. It retrieves 100–200 reviews per second at a pay-per-result rate of $0.50 per 1,000 reviews, and can target any place listed on TripAdvisor. The cheapest option around!
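For orientation, here is a minimal sketch of starting a run through the Apify JavaScript client. The startUrls input key is a common Apify convention but an assumption here, so check the actor's input schema before relying on it:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Start a run and wait for it to finish. The input shape is assumed;
// consult the actor's input schema for the real field names.
const run = await client.actor('thewolves/tripadvisor-reviews-scraper').call({
    startUrls: [{ url: 'https://www.tripadvisor.com/Hotel_Review-...' }], // placeholder URL
});

// Scraped reviews land in the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} reviews at $0.50 per 1,000`);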

Developer
Maintained by Community

Actor Metrics

  • 47 Monthly users

  • No reviews yet

  • 13 bookmarks

  • 97% runs succeeded

  • 9.7 hours response time

  • Created in Apr 2024

  • Modified 14 hours ago

The problem persists; you should have access to the run.

Closed
humatics opened this issue
a month ago

I tried to see whether the problem had been solved. Could it be that the process is interrupted because new reviews are added during scraping? That could explain the non-deterministic nature of the problem. If you need more info, you can contact me on Discord. Thanks.

2025-02-04T11:45:06.721Z file:///usr/src/app/src/reseller.js:119
2025-02-04T11:45:06.724Z log.info(`Got ${json.data.reviews.length} results for place: ${id} with page: ${page}`);
2025-02-04T11:45:06.726Z                              ^
2025-02-04T11:45:06.727Z
2025-02-04T11:45:06.729Z TypeError: Cannot read properties of undefined (reading 'data')
2025-02-04T11:45:06.731Z     at Reseller.getReviews (file:///usr/src/app/src/reseller.js:119:30)
2025-02-04T11:45:06.733Z     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2025-02-04T11:45:06.735Z     at async file:///usr/src/app/node_modules/p-queue/dist/index.js:187:36
2025-02-04T11:45:06.738Z
2025-02-04T11:45:06.740Z Node.js v18.20.6
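For context, the TypeError means json itself was undefined when line 119 ran, likely because a response arrived empty, blocked, or not as JSON. A minimal defensive sketch of how that call site could be guarded; fetchReviewPage and the skip-on-failure behavior are assumptions, not the actor's real code:

import log from '@apify/log';

// Hypothetical guard around the call site that crashes at reseller.js:119.
async function getReviews(id, page, fetchReviewPage) {
    const response = await fetchReviewPage(id, page);
    const json = await response.json().catch(() => undefined); // tolerate non-JSON bodies

    // A blocked request, rate limit, or a place whose reviews changed
    // mid-scrape can yield a payload with no `data` key. Log and skip
    // the page instead of crashing the whole run.
    if (!json?.data?.reviews) {
        log.warning(`Unexpected payload for place ${id}, page ${page}; skipping`);
        return [];
    }

    log.info(`Got ${json.data.reviews.length} results for place: ${id} with page: ${page}`);
    return json.data.reviews;
}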

humatics

a month ago

Apify support told me that some scrapers can accept an existing storage as input instead of creating a new one for each run. Is that the case here? Is there a "custom_storage" parameter I can use? It would be useful to start every run from a single input and store the data in the same storage. As a temporary workaround, the newly added "start_page" parameter lets me easily resume the scraping process when a failure occurs. In the meantime, I hope you can solve it. Thanks.
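For anyone adopting the same workaround, here is a sketch of that resume-and-merge flow with the Apify JavaScript client. The startPage key is inferred from the "start_page" parameter mentioned above and may be named differently in the actor's input schema, and the named-dataset merge only approximates the "custom_storage" behavior the actor may not actually support:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Resume the scrape from the page where the previous run failed.
const run = await client.actor('thewolves/tripadvisor-reviews-scraper').call({
    startUrls: [{ url: 'https://www.tripadvisor.com/Hotel_Review-...' }], // placeholder URL
    startPage: 57, // resume point recovered from the failed run's last logged page
});

// Copy the new run's items into one long-lived, named dataset so every
// resumed run accumulates into the same storage.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
const target = await client.datasets().getOrCreate('tripadvisor-reviews');
await client.dataset(target.id).pushItems(items);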

thewolves

Hello,

Thanks a lot for your input. We are currently checking this and will get back to you soon.

Best

thewolves

Hello,

We still cannot reproduce the issue; however, we have made some more improvements that should lower the chances of it happening.

Cheers