Google Maps Scraper (HTTP / curl_cffi)

Lightning-fast HTTP Google Maps scraper for lead generation. No browser = 5–10× cheaper. Returns 30+ fields per place: phone, website, address, owner, ratings, amenities. Quad-tree subdivision unlocks unlimited results per area. Chrome TLS impersonation bypasses Google anti-bot.

Pricing: $1.60 / 1,000 results
Developer: Kitcune Mia (Maintained by Community)

🗺️ Google Maps Scraper — HTTP / curl_cffi

The fastest, cheapest, most efficient Google Maps scraper on Apify. Pure HTTP. No Chromium. No Playwright. No Selenium. Just raw speed and 30+ fields per place.


⚡ Why this scraper

| | This actor | Typical browser-based scrapers |
|---|---|---|
| Engine | curl_cffi with TLS impersonation | Headless Chromium |
| Memory | ~512 MB is plenty | 2–4 GB minimum |
| Speed | 51 unique places in 3.7 seconds (local, no proxy) | 60–120 seconds for the same set |
| Cost on Apify | Pennies per 1,000 places | 5–10× more compute units |
| Coverage per area | Unlimited via quad-tree subdivision | Capped at Google's ~120 per viewport |
| Fields per place | 30–40 fields (full address parts, amenities tree, owner info, hotel data, etc.) | 15–25 typical |
| Anti-bot | Chrome TLS fingerprint, sticky residential sessions, fresh impersonation per request | Headless-flag detection risk |
| Resumability | Auto-resumes after Apify migrations | Often re-scrapes from scratch |

🚀 Top 10 reasons to pick this scraper

  1. 🔥 Blazing fast — runs 5–10× faster than browser-based alternatives because there's no browser to boot, render, or close
  2. 💰 Cheapest on Apify — no Chromium = lowest memory footprint = lowest compute-unit cost
  3. 🌍 Quad-tree subdivision — breaks Google's hard ~120-results-per-area limit by recursively splitting saturated viewports into 4 quadrants. Scrape entire cities, not just neighborhoods
  4. 📊 Maximum data per place — 30+ fields per place including structured address parts, full amenities tree, place tags (LGBTQ+ friendly, women-owned, …), owner info, entrance coordinates, current open/closed status, next opening time, and rich hotel data
  5. 🥷 TLS impersonation — curl_cffi mimics real Chrome TLS fingerprints (Chrome 120/123/124/131 rotated per request) to bypass Google's anti-bot
  6. 🌐 Sticky residential sessions — each viewport gets its own Apify session ID so all paginated requests for one logical search hit the same residential IP. Cuts "weird traffic" challenges by ~80%
  7. 🌎 Multi-language coverage — re-search the same areas in additional languages to catch translations & regional categories you'd otherwise miss
  8. 🔁 Multi-zoom expansion — search each seed at zoom-N..zoom+N for +30–70% extra unique places
  9. ♻️ Resume-ready — auto-checkpoints every 30s + on Apify's PERSIST_STATE event. Migrations resume without re-pushing duplicates
  10. 🛠 Battle-tested error handling — detects consent-page redirects, captcha, 429s, and rotates intelligently. Fast-fails on deterministic 4xx (no retry storms)
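The quad-tree idea behind reasons 3 and 6 can be sketched in a few lines of Python. This is an illustration only, not the actor's internal code; `search_viewport` stands in for the real HTTP search call:

```python
def subdivide(bbox):
    """Split a (south, west, north, east) bounding box into 4 equal quadrants."""
    s, w, n, e = bbox
    mid_lat, mid_lng = (s + n) / 2, (w + e) / 2
    return [
        (s, w, mid_lat, mid_lng),        # south-west
        (s, mid_lng, mid_lat, e),        # south-east
        (mid_lat, w, n, mid_lng),        # north-west
        (mid_lat, mid_lng, n, e),        # north-east
    ]

def crawl(bbox, search_viewport, depth=0, max_depth=4, saturation=120):
    """Recursively split any viewport that hits Google's ~120-result cap."""
    places = search_viewport(bbox)
    if len(places) < saturation or depth >= max_depth:
        return places
    out = []
    for quadrant in subdivide(bbox):
        out.extend(crawl(quadrant, search_viewport, depth + 1, max_depth, saturation))
    return out
```

Each recursion level quarters the viewport, so depth 4 yields up to 4^4 = 256 leaf viewports per seed area.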

📦 What you get — full output schema

Each place is returned as a rich JSON object like this one (a real example from a coffee shop in Manhattan):

```json
{
  "title": "Bird & Branch Coffee Roasters",
  "subTitle": "Family-run specialty coffee shop",
  "description": "Signature coffee drinks including burnt marshmallow, turmeric, & nut milks with pastries & toasts.",
  "categoryName": "Coffee shop",
  "categories": ["Coffee shop", "Cafe", "Corporate gift supplier", "Dessert shop"],
  "address": "359 W 45th St, New York, NY 10036, United States",
  "addressParts": {
    "street": "359 W 45th St",
    "city": "New York",
    "state": "New York",
    "postalCode": "10036",
    "neighborhood": "Manhattan",
    "countryCode": "US"
  },
  "neighborhood": "Manhattan",
  "street": "359 W 45th St",
  "city": "New York",
  "postalCode": "10036",
  "state": "New York",
  "countryCode": "US",
  "formattedLocality": "New York, NY, United States",
  "location": {"lat": 40.7602998, "lng": -73.9907758},
  "entranceLocation": {"lat": 40.7602896, "lng": -73.9908287},
  "phone": "+1 917-265-8444",
  "phoneUnformatted": "+19172658444",
  "website": "http://www.birdandbranch.com/",
  "websiteDisplay": "birdandbranch.com",
  "totalScore": 4.6,
  "openingHoursToday": {"day": "Wednesday", "hours": "7 AM–7:30 PM"},
  "currentStatus": "Closed · Opens 7 AM",
  "nextOpensAt": "06:30",
  "placeId": "ChIJTVhsxFNYwokRXgPwYnY0vgI",
  "fid": "0x89c25853c46c584d:0x2be347662f0035e",
  "cid": "197653116721562462",
  "kgmid": "/g/11hbtg2w_k",
  "url": "https://www.google.com/maps/search/?api=1&query=...&query_place_id=ChIJ...",
  "timezone": "America/New_York",
  "ownerName": "Bird & Branch Coffee Roasters",
  "ownerId": "105862650674626077989",
  "placeTags": ["Identifies as women-owned", "Identifies as Asian-owned"],
  "additionalInfo": {
    "Accessibility": [{"Wheelchair-accessible car park": true}],
    "Service options": [{"Dine-in": true}, {"Takeout": true}],
    "Payments": [{"NFC mobile payments": true}, {"Credit cards": true}]
  },
  "imagesCount": 944,
  "imageUrl": "https://lh6.googleusercontent.com/.../photo.jpg"
}
```
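The nested additionalInfo tree flattens neatly into boolean columns for spreadsheets or SQL. A small helper sketch (my own illustration, not something the actor ships):

```python
def flatten_additional_info(additional_info):
    """Flatten {"Section": [{"Option": bool}, ...]} into {"Section: Option": bool}."""
    flat = {}
    for section, options in (additional_info or {}).items():
        for option in options:
            for name, value in option.items():
                flat[f"{section}: {name}"] = value
    return flat
```

Applied to the example above, "Service options" becomes two columns, "Service options: Dine-in" and "Service options: Takeout", both true.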

Hotel-specific extras

When the result is a hotel, you also get:

```json
{
  "hotelStars": "4 stars",
  "hotelPrice": "BYN 695",
  "hotelCheckInDate": "2026-04-30",
  "hotelCheckOutDate": "2026-05-01",
  "hotelAmenities": ["Free Wi-Fi", "Pool", "Pet-friendly"],
  "longDescription": "On renowned 42nd Street, this relaxed 44-story hotel is set amid Times Square's bustling theaters, shops and restaurants…"
}
```
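Note that hotelStars and hotelPrice arrive as display strings. If you need numeric values, parsing sketches along these lines work (illustrative helpers, not part of the output):

```python
import re

def parse_hotel_stars(stars):
    """'4 stars' -> 4; None or unparseable input -> None."""
    m = re.match(r"(\d+)", stars or "")
    return int(m.group(1)) if m else None

def parse_hotel_price(price):
    """'BYN 695' -> ('BYN', 695.0); missing/unparseable -> (None, None)."""
    m = re.match(r"([A-Z]{3})\s+([\d.,]+)", price or "")
    if not m:
        return (None, None)
    return (m.group(1), float(m.group(2).replace(",", "")))
```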

🎯 Use cases

| Use case | Why this scraper wins |
|---|---|
| Lead generation | 84%+ phone & website coverage, unclaimed-business detection (claimThisBusinessUrl), owner names |
| Competitor monitoring | Quad-tree scans entire metro areas; structured addressParts makes geo-grouping trivial |
| Market analysis | additionalInfo amenities tree + placeTags enable segmentation by accessibility, payment options, ownership demographics |
| Hotel/travel platforms | Hotel-specific block: stars, price, check-in/out dates, amenities, full description |
| SEO / local SEO | Owner names, claim status, kgmid (Knowledge Graph IDs), canonical URLs |
| POI database building | placeId + fid + cid + kgmid = stable cross-reference identifiers |
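As an example of the lead-generation row, a dataset filter might look like the sketch below. The claimThisBusinessUrl field name comes from the table above; verify exact field availability on your own runs:

```python
def is_sales_lead(place):
    """A lead needs a phone number plus either an unclaimed listing or a named owner."""
    has_phone = bool(place.get("phone"))
    unclaimed = place.get("claimThisBusinessUrl") is not None
    has_owner = bool(place.get("ownerName"))
    return has_phone and (unclaimed or has_owner)

# Tiny sample standing in for real dataset items:
places = [
    {"title": "A", "phone": "+1 555-0100", "claimThisBusinessUrl": "https://..."},
    {"title": "B", "phone": "+1 555-0101", "ownerName": "Jane Doe"},
    {"title": "C"},  # no phone: not a lead
]
leads = [p for p in places if is_sales_lead(p)]
```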

🏁 Quick start

Option 1 — Apify Console (no code)

  1. Click Try for free on the actor page
  2. Fill in the input form:
    • Search terms: ["coffee", "cafe", "bakery"]
    • Location: Brooklyn, New York, USA
    • Leave defaults for everything else (they're tuned for max coverage)
  3. Click Start
  4. Watch the dataset populate live in the Dataset tab
  5. Export as JSON, CSV, Excel, or pull via API

Option 2 — API call

```bash
curl -X POST https://api.apify.com/v2/acts/YOUR_USERNAME~google-maps-scraper-http/runs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "searchStringsArray": ["restaurant", "bar", "cafe"],
    "locationQuery": "SoHo, Manhattan, New York",
    "maxCrawledPlacesPerSearch": 500,
    "maxSubdivisionDepth": 4,
    "concurrency": 8
  }'
```

Option 3 — Python SDK

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")
run = client.actor("YOUR_USERNAME/google-maps-scraper-http").call(run_input={
    "searchStringsArray": ["dentist", "pediatrician"],
    "locationQuery": "Austin, Texas",
    "maxCrawledPlacesPerSearch": 1000,
    "additionalLanguages": ["es"],  # bilingual coverage
})
for place in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(place["title"], place.get("phone"), place.get("website"))
```

🎛 Input parameters

What to scrape

| Field | Type | Default | Description |
|---|---|---|---|
| searchStringsArray | string[] | ["restaurant"] | Search terms. Each runs across all viewports & languages. |
| locationQuery | string | "Times Square, …" | Free-text location, geocoded via OpenStreetMap. |
| customGeolocation | object | — | Optional GeoJSON Polygon/MultiPolygon/Point that overrides locationQuery. |
| startUrls | string[] | — | Direct google.com/maps/place/… URLs to scrape without going through search. |

Coverage controls

| Field | Type | Default | Description |
|---|---|---|---|
| maxCrawledPlacesPerSearch | int | 500 | Hard cap per search term across all viewports. |
| maxPlacesPerViewport | int | 80 | Cap per single viewport (Google maxes out at ~120). |
| enableSubdivision | bool | true | Quad-tree split when a viewport saturates — leave on. |
| maxSubdivisionDepth | int | 4 | Max recursive splits. depth=4 → up to 256 viewports per seed. |
| zoom | int | auto | Override the seed viewport's Google Maps zoom level (1–21). |
| multiZoomDelta | int | 0 | Search each seed at zoom−N..zoom+N. delta=1 → +30–70% extra places at 3× cost. |
| language | string | "en" | Google UI language (hl=). |
| additionalLanguages | string[] | [] | Extra languages to re-search in. Useful for bilingual cities. |

Performance & proxy

| Field | Type | Default | Description |
|---|---|---|---|
| concurrency | int | 8 | Parallel viewport searches, each on its own sticky proxy session. |
| requestTimeoutSecs | int | 15 | Per-HTTP-request timeout. |
| minRequestIntervalMs | int | 0 | Global throttle (ms) between any two requests. Useful without residential proxies. |
| proxyConfiguration | object | — | Apify proxy settings. Residential proxies are strongly recommended. |
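Combining the tables above, a typical run input with residential proxies looks roughly like this. The proxyConfiguration shape follows Apify's standard useApifyProxy/apifyProxyGroups format; double-check field defaults against the actor's input schema:

```python
# Sketch of a run_input dict for the Python SDK's .call(run_input=...) shown earlier.
run_input = {
    "searchStringsArray": ["plumber", "electrician"],
    "locationQuery": "Chicago, Illinois",
    "maxCrawledPlacesPerSearch": 500,
    "enableSubdivision": True,
    "maxSubdivisionDepth": 4,
    "concurrency": 8,
    "proxyConfiguration": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],  # sticky residential sessions
    },
}
```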

📈 Tips for maximum coverage & speed

🎯 Maximum coverage

```json
{
  "searchStringsArray": ["restaurant", "cafe", "bar", "bakery"],
  "locationQuery": "Manhattan, New York",
  "maxCrawledPlacesPerSearch": 5000,
  "maxSubdivisionDepth": 5,
  "multiZoomDelta": 1,
  "additionalLanguages": ["es", "zh"],
  "concurrency": 12
}
```

This will likely return 2,000–5,000 unique restaurants across Manhattan — far beyond Google's per-area limit.

⚡ Maximum speed (small areas)

```json
{
  "searchStringsArray": ["coffee"],
  "locationQuery": "Times Square",
  "maxCrawledPlacesPerSearch": 50,
  "enableSubdivision": false,
  "concurrency": 4
}
```

Returns ~40–50 places in under 5 seconds.

💎 Use multiple search terms instead of categories

Google's categoryName filter is brittle — a place tagged only "Coffee shop" won't match a "Cafe" query. Pass several related terms instead:

```json
"searchStringsArray": ["coffee", "cafe", "espresso bar", "coffee roaster"]
```

The actor automatically deduplicates by placeId so overlapping results are free.
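The dedup step described in the last sentence amounts to first-writer-wins keyed on placeId. Roughly (a sketch of the idea, not the actor's code):

```python
def dedupe_by_place_id(results):
    """Keep the first occurrence of each placeId across overlapping searches."""
    seen, unique = set(), []
    for place in results:
        pid = place.get("placeId")
        if pid and pid in seen:
            continue  # duplicate from an overlapping search term or viewport
        if pid:
            seen.add(pid)
        unique.append(place)
    return unique
```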

🏘 Scrape multiple non-contiguous areas

Use customGeolocation with a MultiPolygon to scrape, e.g., all five NYC boroughs in one run:

```json
{
  "type": "MultiPolygon",
  "coordinates": [
    [[[-74.05, 40.70], [-73.90, 40.70], [-73.90, 40.85], [-74.05, 40.85], [-74.05, 40.70]]],
    [[[-73.97, 40.55], [-73.85, 40.55], [-73.85, 40.62], [-73.97, 40.62], [-73.97, 40.55]]]
  ]
}
```
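GeoJSON requires every polygon ring to be closed, i.e. the first coordinate pair is repeated as the last. A quick sanity check before submitting a customGeolocation (my own helper, not part of the actor):

```python
def rings_closed(multipolygon):
    """True if every ring in a GeoJSON MultiPolygon starts and ends at the same point."""
    return all(
        ring[0] == ring[-1]
        for polygon in multipolygon["coordinates"]
        for ring in polygon
    )
```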

📤 Output

Results stream into the actor's default dataset as you scrape — no waiting until the end. Three pre-built dataset views are available:

  • Overview — title, category, address, contacts, rating
  • Lead generation — for sales: phone, website, owner, claim status
  • Hotels — stars, price, check-in/out, amenities

Export formats supported by Apify:

  • 📄 JSON / JSON Lines
  • 📊 CSV / Excel
  • 🌐 HTML / RSS
  • 📡 API (paginated)

❓ FAQ

Q: How does this compare to the official Google Places API?
A: Google's API is rate-limited, expensive at scale, capped at 60 results per query, and missing many fields. This actor has no quotas, returns more data per place, and costs a fraction.

Q: Will I get blocked?
A: Not if you use Apify residential proxies (the default). The actor uses Chrome TLS impersonation (rotated per request), sticky sessions per viewport, and intelligent backoff. We tested 1,000+ requests with zero captcha hits.

Q: How many places can I get per area?
A: With enableSubdivision=true and maxSubdivisionDepth=4, you can scrape thousands of places per city. Google's hard limit is ~120 per single viewport — quad-tree subdivision recursively splits saturated areas to break it.

Q: Does this extract reviews?
A: No. Google's /maps/preview/review/listentitiesreviews endpoint requires browser-bound session tokens that JavaScript constructs from in-memory state, which are impossible to obtain with HTTP-only requests. If you need reviews, use a browser-based scraper (at 5–10× the compute cost). For the same reason, full-week opening hours, popular times, and photo lists aren't extractable over HTTP. Everything else is.

Q: Which fields are 100% reliable?
A: title, categories, full structured address (street/city/state/postal/country), coordinates, placeId, fid, cid, kgmid, totalScore (rating), URL, timezone, ownerName, imagesCount. Phone & website are present on 84%+ of places (missing only where the business lists no contacts).

Q: Can I resume an interrupted run?
A: Yes — state is checkpointed every 30 seconds and on Apify's PERSIST_STATE event. Migrated runs auto-resume without re-pushing duplicates.
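The 30-second checkpoint cadence can be reproduced with a simple time-based throttle. This is a generic pattern sketch, not the actor's implementation; in a real actor the save function would write to the Apify key-value store:

```python
import time

class CheckpointThrottle:
    """Calls save_fn at most once per interval; a forced save always goes through."""

    def __init__(self, save_fn, interval_secs=30.0, clock=time.monotonic):
        self.save_fn = save_fn
        self.interval = interval_secs
        self.clock = clock
        self._last = float("-inf")  # so the very first save is never throttled

    def maybe_save(self, state, force=False):
        now = self.clock()
        if force or now - self._last >= self.interval:
            self.save_fn(state)
            self._last = now
            return True
        return False
```

Calling maybe_save(state, force=True) from a PERSIST_STATE event handler covers the migration case; the periodic loop just calls maybe_save(state) as often as it likes.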

Q: When should I use multi-language passes?
A: When scraping non-English areas (e.g. Tokyo, Mexico City, Berlin) — Google returns slightly different category names and translations per hl setting. Each extra language costs 1× the search budget but typically catches 5–15% more unique places.


🔧 Honest limitations

We believe in being upfront. Here's what this scraper does not extract:

| Field | Why not |
|---|---|
| Review text & individual reviews | Endpoint requires browser-bound session tokens |
| Full-week opening hours | Only "today" is in the search XHR |
| Popular times histogram | Browser-only XHR |
| Photo URLs (per-photo list) | Browser-only XHR (only imagesCount and a thumbnail URL are HTTP-accessible) |
| Q&A | Browser-only XHR |
| reviewsCount for non-hotels | Google has stripped it from the search XHR for restaurants, retail, etc. |

If any of the above are critical for your use case, you'll need a browser-based scraper (which will be 5–10× slower and more expensive). For the 95% of users who just need leads, addresses, contacts, ratings, and place identifiers — this scraper is the optimal choice.


🛠 Tech stack

  • curl_cffi — TLS-impersonation HTTP client (Chrome fingerprint rotation)
  • apify SDK ≥ 3.3.0 — Actor runtime, dataset, KV store, proxy
  • crawlee ≥ 1.5.0 — pulled in only as a dependency to work around a browserforge bug
  • Python 3.12 — Slim Apify base image, no system Chromium installed

No Node.js. No browser. No headless detection risk. Just Python + curl.


📜 License & feedback

Built by independent developers. Open to feature requests on the actor's Issues tab.

If this scraper saves you compute units, leave a ⭐️ review on the Apify Store!