Google Maps Scraper

compass/crawler-google-places

Extract data from hundreds of Google Maps locations and businesses. Get Google Maps data including reviews, images, contact info, opening hours, location, popular times, prices & more. Export scraped data, run the scraper via API, schedule and monitor runs, or integrate with other tools.


Why does Google Maps Scraper create so many requests?

Closed

billcawell opened this issue
3 months ago

In this search task, Google Maps Scraper created 3.8k requests but found only 63 results. For a small city, the number of search results seems reasonable, but the number of requests does not. So many requests seem to directly increase compute costs. How can we reduce the number of requests and save on compute while keeping the search comprehensive?

milunnn

Hi,

Sorry for the late response. The run had returned only 63 results at the point it was aborted simply because it did not find any more results that matched the search criteria.

You could optimize this process, although it could lead to losing a few results. You could do that by:

  1. Disabling Deeper City Scrape. This feature creates many more requests in areas with a high density of places, such as cities. If you know there is not a large number of places matching your criteria, you can disable it.
  2. Overriding the zoom level. The zoom level set for this run was 15. If you override it to 14, the scraper should generate fewer requests (but it could miss some places if set too low). Both options map to Actor input fields, as sketched below.
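
For reference, here is a minimal sketch of running the Actor with both of these cost-saving options via the apify-client package. The input field names (`zoom`, `deeperCityScrape`, `searchStringsArray`, `locationQuery`) and the example location are assumptions based on this thread, so verify them against the Actor's current input schema before running:

```typescript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

async function main() {
    // Call the Actor with a cheaper configuration. The input field names
    // below are assumptions taken from this thread -- check them against
    // the Actor's input tab before relying on them.
    const run = await client.actor('compass/crawler-google-places').call({
        searchStringsArray: ['restaurant'],
        locationQuery: 'Olomouc, Czechia', // hypothetical small city
        zoom: 14,                          // lower zoom => fewer map requests
        deeperCityScrape: false,           // skip the dense per-district crawl
    });

    // Fetch the scraped places from the run's default dataset.
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.log(`Scraped ${items.length} places`);
}

main().catch(console.error);
```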

If you have any more questions, please do not hesitate to ask.


billcawell

2 months ago

After trying different terms and options, I find that if the zoom level is increased (some places only appear at zoom level 17, or even 20), the amount of computation is almost unacceptable. The computational costs are staggering. My current approach to balancing coverage and compute is to reduce the search terms and focus on the few I care about most. I feel there is no better way, so do you have any better suggestions?

ondrejklinovsky

Hey,

I apologize for the late response. Looking at your search terms, what I would do is set up two runs based on the success rate of the search terms:

  1. one run would use Deeper City Scrape with the following search terms: restaurant, cafe, hotel, store
  2. another run would use a normal search with the rest of the search terms; e.g., an observation deck is not that common a place, so there is no need to use Deeper City Scrape for it

You can then use Merge, Dedup & Transform Datasets to merge the two runs into one dataset.

Of course, with this approach you're going to miss some places, but I believe it gives the best cost/results trade-off.
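
Here is a minimal sketch of that two-run approach with apify-client, deduplicating client-side instead of with the Merge, Dedup & Transform Datasets Actor. The `deeperCityScrape` input field, the `placeId` output field, and the location are assumptions, so check them against the Actor's input and output schemas:

```typescript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Run the Actor once with the given search terms and Deeper City Scrape
// toggled on or off, then return the dataset items.
async function scrape(searchStringsArray: string[], deeperCityScrape: boolean) {
    const run = await client.actor('compass/crawler-google-places').call({
        searchStringsArray,
        locationQuery: 'Olomouc, Czechia', // hypothetical small city
        deeperCityScrape,                  // assumed field name
    });
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    return items;
}

async function main() {
    // Deep crawl only for the high-density categories...
    const dense = await scrape(['restaurant', 'cafe', 'hotel', 'store'], true);
    // ...and a normal search for the rarer ones.
    const sparse = await scrape(['observation deck'], false);

    // Deduplicate by place ID (assuming each item carries a `placeId`).
    const merged = new Map<string, unknown>();
    for (const item of [...dense, ...sparse]) {
        merged.set(String((item as any).placeId), item);
    }
    console.log(`Merged dataset: ${merged.size} unique places`);
}

main().catch(console.error);
```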

I'm going to close this issue now, but feel free to reopen it if you have any more questions.


billcawell

2 months ago

Thanks, that's very thoughtful advice. I will try it right now.

Developer
Maintained by Apify

Actor Metrics

  • 3.4k monthly users

  • 613 stars

  • 98% runs succeeded

  • 5.7 days response time

  • Created in Nov 2018

  • Modified 3 hours ago