Google Maps Lead Monitor

Collect ONLY NEW Google Maps business leads on every run - no duplicates, no waste, just high-value prospects you can use immediately for outreach or CRM workflows.

Pricing: Pay per event
Developer: Hayder Al-Khalissi (Maintained by Community)
Actor stats: 0 bookmarks | 5 total users | 3 monthly active users | last modified 2 days ago

Google Maps Lead Monitor - Stateful Lead Generation with Change Detection

Monitor local markets. Detect new and updated listings. Prioritize outreach automatically.

Turn Google Maps into a recurring lead feed - without duplicates.


What is the Google Maps Lead Monitor?

Google Maps Lead Monitor is an Apify Actor that extracts Google Maps business listings and (optionally) enriches them with contact data for outreach.

The key difference: it's stateful. It remembers what it has already seen - and only outputs what changed. Built for recurring lead monitoring, not one-time scraping.

What you get

  • Finds businesses + phone + website from Google Maps
  • Attempts to find public contact channels (emails + contact forms) from business websites (best-effort)
  • Runs on a schedule (weekly/daily) and outputs only new/updated listings (incremental mode)
  • Exports as CSV/Excel for outreach
  • Includes leadPriority (1-5) and opportunityScore (0-7) for prioritization

Sample row: leadPriority: 4 | opportunityScore: 6 | title: "Joe's Pizza" | primaryEmail: "hello@joespizza.com" | phone: "+1 212-555-0100" | bestContact: "email" | changeType: "new"


Quickstart (30 seconds)

Mode A - First run (baseline)

Use the first time you scrape an area, or when you want a full list from scratch.

{
  "searchQueries": ["restaurant"],
  "location": "Berlin",
  "maxPlaces": 50,
  "leadMode": true,
  "runMode": "fresh",
  "resetState": true,
  "useProxy": true
}

Mode B - Recurring monitor (only new/updated)

Use for recurring runs so you only get new or updated leads.

{
  "searchQueries": ["restaurant"],
  "location": "Berlin",
  "maxPlaces": 200,
  "leadMode": true,
  "runMode": "incremental",
  "detectUpdates": true,
  "useProxy": true
}

Who is this for?

  • Lead generation agencies
  • Local SEO consultants
  • Sales teams monitoring new and updated local businesses
  • Market intelligence & competitor tracking
  • n8n / automation builders who need structured place data

Why not just use a regular Google Maps scraper?

Most Google Maps scrapers are built for one-time extraction. This one is built for monitoring.

Regular scrapers:

  • Return the same places every run
  • Create duplicate rows
  • Waste compute on already-seen listings

Google Maps Lead Monitor:

  • Remembers previous runs
  • Outputs only new/updated listings
  • Designed for recurring monitoring workflows

Example: Weekly restaurant monitoring

  • Week 1: 180 restaurants scraped -> baseline created
  • Week 2: 12 new listings -> output = 12 rows
  • Week 3: 7 new + 3 updated -> output = 10 rows

No duplicates. No manual filtering.


What makes this different

  • Lead scoring built-in - leadPriority, opportunityScore, and opportunityReason so you can sort and filter the best leads first.
  • Incremental mode - only new and changed places are output; no duplicate rows across runs.
  • Enrichment - attempts to find public contact channels (emails + contact forms) from business websites (best-effort).

Scores are based on signals like missing website/phone, high rating with low reviews, and other outreach-friendly patterns.

Built for agencies and automation workflows that run weekly or daily.
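As an illustration of how such signal-based scoring could work, here is a sketch in Node.js. The weights and thresholds below are assumptions for illustration only; the actor's actual scoring logic is not documented here, beyond the signals named above.

```javascript
// Illustrative lead-scoring sketch. Signals follow the README
// (missing website/phone, high rating with few reviews); the
// weights and thresholds here are made up for illustration.
function scoreOpportunity(place) {
  const reasons = [];
  if (!place.website) reasons.push('no_website');
  if (!place.phone) reasons.push('no_phone');
  if ((place.rating ?? 0) >= 4.5 && (place.reviewsCount ?? 0) < 100) {
    reasons.push('high_rating');
    reasons.push('low_reviews');
  }
  // Cap at the documented 0-7 opportunityScore range.
  const opportunityScore = Math.min(reasons.length * 2, 7);
  // Map onto the documented 1-5 leadPriority range.
  const leadPriority = Math.max(1, Math.min(5, Math.round((opportunityScore * 5) / 7)));
  return { opportunityScore, opportunityReason: reasons, leadPriority };
}
```

Run against the sample row above (website missing, rating 4.5, 72 reviews), this sketch reproduces an opportunityScore of 6 and leadPriority of 4.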


Typical workflow

  1. Run on a schedule in incremental mode.
  2. Sort by leadPriority -> filter primaryEmail not empty (or hasContactForm=true).
  3. Export CSV -> send to your CRM / n8n outreach workflow.

For the best experience in the Apify dataset table, pin these columns in the default dataset view:

leadPriority, opportunityScore, title, categoryName, rating, reviewsCount, primaryEmail, phone, bestContact, hasContactForm, website, city, state, changeType, scrapedAt

Tip: Sort by leadPriority desc -> filter where primaryEmail is not empty -> export CSV and start outreach.
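Outside the Apify UI, the same sort-and-filter step can be sketched in a few lines of Node.js (field names follow the dataset columns listed above):

```javascript
// Keep only rows with an email or a contact form, then sort by
// leadPriority descending - ready for CSV export and outreach.
function selectOutreachLeads(items) {
  return items
    .filter((p) => (p.primaryEmail && p.primaryEmail.length > 0) || p.hasContactForm === true)
    .sort((a, b) => (b.leadPriority ?? 0) - (a.leadPriority ?? 0));
}
```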


Key features

Stateful scraping & efficiency

| Feature | Description |
| --- | --- |
| Smart state | Stores seen places in Apify Key-Value Store; skips duplicates across runs. |
| Incremental mode | Outputs only new and updated places; never charged for skipped duplicates. |
| Change detection | Compares key fields (rating, phone, website, etc.) to detect updates. |
| Caching | Optional in-run cache to avoid re-processing unchanged data. |
| Streaming | Optional real-time streaming of results as they are collected. |

Scraping & reliability

| Feature | Description |
| --- | --- |
| Playwright (Chromium) | Full JavaScript rendering for Google Maps search and place pages. |
| Consent handling | Automatically accepts consent dialogs (EN, DE, FR, ES, IT). |
| Proxy support | Use Apify Proxy to reduce blocks and consent redirects (useProxy: true). |
| Navigation timeout | Configurable navigationTimeout (seconds); increase if pages load slowly. |
| Page detection | Detects consent/captcha/blocked pages and logs clear warnings. |
| Rate limiting | Modes: adaptive, conservative, moderate, aggressive. |

Data you get

From search results:

  • title, address, rating, reviewsCount, categoryName, price, url, placeId

With place details on (leadMode: true or includePlaceDetails: true):

  • phone, website, coordinates, openingHours

With contacts enrichment (includeContacts: true):

  • Attempts to find public emails and contact forms from the business website (best-effort)

Optional extras:

  • Reviews (includeReviews: true, requires place details): review text, author, date, rating -> datasets: reviews, reviews-flat
  • Images (includeImages: true): optional image extraction

Download as JSON, CSV, or Excel from the run's Storage tab.


How it works

When place details are off (list-only):

  • The actor opens each search URL, scrolls the results, extracts list cards, and pushes one row per place.

When place details are on (leadMode or includePlaceDetails: true):

  • It uses a two-phase crawl:

    • LIST - extract places from search and enqueue each as a DETAIL request (uniqueKey = placeId)
    • DETAIL - open each place URL, extract phone/website/openingHours, merge with list data, push one merged row per place

The queue drains with no idle wait; the run log reports detailPagesEnqueued, detailPagesSuccess, and detailPagesFailed, and requestsTotal equals 1 + the number of DETAIL requests.

Use detailsConcurrency (default 2) and maxDetailsToFetch (default = maxPlaces) to limit parallel detail fetches and avoid CPU overload.
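The LIST -> DETAIL enqueue step can be sketched as a simplified model (not the actor's actual code). Using placeId as the uniqueKey means each place's detail page is requested at most once, even if the same place appears on several search result cards:

```javascript
// Simplified model of the two-phase crawl's enqueue logic.
// Real crawlers (e.g. Crawlee) dedupe via the request queue's uniqueKey;
// here a Set stands in for the queue's dedupe behavior.
function enqueueDetailRequests(listPlaces, queue = { seen: new Set(), requests: [] }) {
  for (const place of listPlaces) {
    const uniqueKey = place.placeId; // one DETAIL request per place
    if (queue.seen.has(uniqueKey)) continue; // duplicate list card - skip
    queue.seen.add(uniqueKey);
    queue.requests.push({ url: place.url, uniqueKey, label: 'DETAIL', userData: place });
  }
  return queue;
}
```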

Run modes

Fresh mode (runMode: "fresh"):

  • Full scan; output all places; update state.

Incremental mode (runMode: "incremental"):

  • Skip already-seen places; output only new/updated; charge only for written items.

Run 1 (fresh): Scrape -> store state -> output all
Run 2 (incremental): Scrape -> compare to state -> output new/updated only
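The comparison step can be sketched like this (illustrative; the actor's real state format in the Key-Value Store may differ). Each scraped place is classified as new, updated, or unchanged against the stored state, using the updateFields list:

```javascript
// Classify a scraped place against previously stored state.
// state shape (assumed for this sketch): { [placeId]: previousPlaceObject }
const UPDATE_FIELDS = ['title', 'address', 'rating', 'reviewsCount',
  'phone', 'website', 'categoryName', 'openingHours'];

function classifyPlace(place, state, updateFields = UPDATE_FIELDS) {
  const prev = state[place.placeId];
  if (!prev) return 'new';
  const changed = updateFields.some(
    (f) => JSON.stringify(place[f]) !== JSON.stringify(prev[f]),
  );
  return changed ? 'updated' : 'unchanged';
}
// In incremental mode only 'new' and 'updated' places reach the dataset;
// in fresh mode everything is output and the state is rebuilt.
```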

How to use

  1. Open the Actor on Apify Store -> Input tab.
  2. Paste one of the Quickstart inputs above (Mode A or B), or set searchQueries, location, maxPlaces, leadMode: true, useProxy: true.
  3. Click Start. Results go to the run's default dataset.
  4. Open Storage -> Datasets -> default -> sort by leadPriority desc, filter primaryEmail not empty if you want email leads -> export CSV or Excel.

Cost

Apify bills by compute (time and resources used per run), not by number of rows.

Incremental mode reduces compute by skipping duplicates and outputting only new/updated places. This actor does not add any extra per-row fees.

Use maxPlaces and navigationTimeout to keep runs predictable.


Input options

Full schema: .actor/input_schema.json. Below is a reference of the main options, grouped for clarity.

Search & location

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| searchQueries | array of strings | ["restaurant"] | Search terms (e.g. ["restaurant", "cafe"]). |
| location | string | - | Location string (e.g. "New York, NY"). |
| categories | array of strings | - | Google Maps categories to filter results. |
| customGeolocation | object | - | Custom area: Polygon, MultiPolygon, or Point with coordinates. |
| placeIds | array of strings | - | Google Maps Place IDs to scrape directly (no search). |
| placeUrls | array of strings | - | Full Google Maps place URLs to scrape directly. |
| targetCategory | string | - | Optional. When set, places whose category contains this text get +1 on opportunityScore. |

Lead mode & data

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| leadMode | boolean | true | One toggle: enables includePlaceDetails, includeContacts, includeEnrichment and keeps includeReviews off. |
| includePlaceDetails | boolean | false (or from leadMode) | Two-phase crawl: LIST then DETAIL. Extracts phone/website/openingHours and merges into output. |
| includeContacts | boolean | false (or from leadMode) | Enrich with contact info from the website (best-effort). |
| includeReviews | boolean | false | Extract reviews; requires includePlaceDetails: true. Off by default to save cost. |
| detailsConcurrency | integer | 2 | When place details are on: max concurrent detail-page requests. |
| maxDetailsToFetch | integer | = maxPlaces | When place details are on: cap how many place detail pages to fetch. |

Limits

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| maxPlaces | integer | 100 | Stop after this many valid places (1-100000). |
| maxResults | integer | 100 | Alias for maxPlaces; if both are set, the lower value is used. |
| maxCrawledPlacesPerSearch | integer | - | Cap places crawled per search query (optional). |

Data & enrichment

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| includeImages | boolean | false | Extract images for each place. |
| includeEnrichment | boolean | false | Enable additional data enrichment. |
| includeReviewerNames | boolean | true | If false, reviewer names are omitted/anonymized in the reviews views. |
| minimalLogging | boolean | false | Avoid logging place titles/addresses/URLs at info level (privacy-friendly). |

Performance & reliability

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| navigationTimeout | integer | 90 | Max seconds to wait for a Google Maps page to load (10-300). |
| useProxy | boolean | false | Use Apify Proxy; recommended to reduce blocks/consent pages. |
| maxConcurrency | integer | 10 | Max concurrent requests (1-50). |
| rateLimit | string | "adaptive" | adaptive, conservative, moderate, or aggressive. |
| enableCaching | boolean | true | Enable in-run caching to avoid re-processing unchanged data. |
| streamResults | boolean | false | Stream results to the dataset in real time. |

State & incremental mode

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| runMode | string | "fresh" | "fresh" = full scan; "incremental" = output only new/updated. |
| stateKey | string | "places-state" | Key-Value Store key where the actor stores seen places between runs. |
| resetState | boolean | false | Clear stored state before running (fresh baseline). |
| dedupeStrategy | string | "placeId" | placeId or a hash of key fields. |
| detectUpdates | boolean | true | In incremental mode, emit places whose key fields changed as updated. |
| updateFields | array of strings | see code | Fields used to detect updates (default: title, address, rating, reviewsCount, phone, website, categoryName, openingHours). |

Export & output

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| exportFormat | string | "json" | json, csv, or excel. |
| compressOutput | boolean | false | Compress output files. |
| saveSearchResults | boolean | false | Save all extracted list items to dataset search-results for follow-up runs. |

n8n integration

This actor works with the Apify integration for n8n. Install the Apify community node (@apify/n8n-nodes-apify), add credentials, then use Run Actor.

Workflow shape

  • Trigger: Webhook / Schedule / Manual
  • Apify: Run Actor (this actor)
  • Optional: Get Dataset Items to process results (use defaultDatasetId)

Canonical input (Apify Console style)

{
  "searchQueries": ["restaurant"],
  "location": "New York, NY",
  "maxPlaces": 20,
  "useProxy": true
}

n8n-style input (short keys)

{
  "query": "restaurants",
  "city": "Berlin",
  "limit": 20,
  "workflowId": "{{ $workflow.id }}",
  "executionId": "{{ $execution.id }}",
  "nodeName": "Run Actor"
}

The actor maps:

  • query -> searchQueries
  • city -> location
  • limit -> maxPlaces
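That key mapping can be sketched as a small normalization step (illustrative; the actor performs the equivalent internally):

```javascript
// Normalize n8n-style short keys onto the canonical input schema.
// Canonical keys win if both forms are present.
function normalizeInput(input) {
  const out = { ...input };
  if (out.query && !out.searchQueries) out.searchQueries = [out.query];
  if (out.city && !out.location) out.location = out.city;
  if (out.limit && !out.maxPlaces) out.maxPlaces = out.limit;
  return out;
}
```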

OpenClaw integration (optional)

Send scraped places to an OpenClaw gateway so an agent can use the data. Uses bounded concurrency so outbound calls don't block scraping.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| openClaw.enabled | boolean | false | Enable sending places to OpenClaw. |
| openClaw.gatewayUrl | string | - | OpenClaw gateway base URL. |
| openClaw.token | string | - | Bearer token; prefer the OPENCLAW_GATEWAY_TOKEN environment variable. |
| openClaw.agentId | string | "main" | Target agent ID header. |
| openClaw.concurrency | integer | 5 | Max concurrent outbound requests (1-20). |
| openClaw.sendMode | string | "perPlace" | perPlace or batch. |
| openClaw.batchSize | integer | 10 | Places per batch if sendMode is batch. |
| openClaw.api | string | "responses" | responses or chatCompletions. |

Example:

{
  "searchQueries": ["cafe"],
  "location": "Berlin",
  "maxPlaces": 20,
  "openClaw": {
    "enabled": true,
    "gatewayUrl": "https://gateway.example.com:18789",
    "agentId": "main",
    "concurrency": 5,
    "sendMode": "perPlace",
    "api": "responses"
  }
}
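When sendMode is batch, places are grouped into chunks of batchSize before being sent, and each chunk becomes one outbound request. A sketch of that chunking step (illustrative, not the actor's code):

```javascript
// Group places into batches of `batchSize` for sendMode: "batch".
// Each batch becomes one outbound gateway request; the actor issues
// these with bounded concurrency so they don't block scraping.
function chunkPlaces(places, batchSize = 10) {
  const batches = [];
  for (let i = 0; i < places.length; i += batchSize) {
    batches.push(places.slice(i, i + batchSize));
  }
  return batches;
}
```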

Usage recommendations

  • Use Apify Proxy: set useProxy: true to reduce consent pages, blocks, and bot detection.
  • Navigation timeout: if you see "Navigation timed out", increase navigationTimeout (e.g. 120-180).
  • Memory: Playwright needs headroom; use 2-4 GB for larger runs.
  • Incremental runs: use runMode: "incremental" for monitoring so you only pay for new/updated places.

Example inputs

Basic search (proxy + higher timeout):

{
  "searchQueries": ["restaurant", "cafe"],
  "location": "New York, NY",
  "maxResults": 50,
  "useProxy": true,
  "navigationTimeout": 120
}

Incremental run (only new/updated):

{
  "searchQueries": ["plumber"],
  "location": "London, UK",
  "maxPlaces": 200,
  "runMode": "incremental",
  "useProxy": true
}

Scrape by place URLs:

{
  "placeUrls": ["https://www.google.com/maps/place/My+Business/..."],
  "includePlaceDetails": true,
  "navigationTimeout": 90
}

Custom geolocation (polygon):

{
  "searchQueries": ["restaurant"],
  "customGeolocation": {
    "type": "Polygon",
    "coordinates": [[
      [-0.322813, 51.597165],
      [-0.31499, 51.388023],
      [0.060493, 51.389199],
      [0.051936, 51.60036],
      [-0.322813, 51.597165]
    ]]
  }
}
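A Polygon area constrains results to places inside the ring (note the GeoJSON-style [longitude, latitude] coordinate order). One common way such a containment check works is ray casting; this is a sketch of the general technique, not necessarily how the actor filters internally:

```javascript
// Ray-casting point-in-polygon test for a GeoJSON-style ring
// (array of [lng, lat] pairs, with the first point repeated at the end).
function pointInRing(lng, lat, ring) {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    // Count edges that cross a horizontal ray extending right from the point.
    const crosses = (yi > lat) !== (yj > lat)
      && lng < ((xj - xi) * (lat - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```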

Output

Results are stored in the default dataset (one object per place). Download as JSON, CSV, or Excel from the run's Storage tab.

Example JSON

[
  {
    "title": "Joe's Pizza",
    "address": "123 Main St, New York, NY 10001",
    "phone": "+1 212-555-0100",
    "website": "https://joespizza.com",
    "rating": 4.5,
    "reviewsCount": 72,
    "categoryName": "Pizza restaurant",
    "price": "$$",
    "url": "https://www.google.com/maps/place/Joes+Pizza/...",
    "placeId": "ChIJ...",
    "scrapedAt": "2026-02-23T14:30:00.000Z",
    "changeType": "new",
    "opportunityScore": 6,
    "opportunityReason": ["no_website", "high_rating", "low_reviews"]
  }
]

Dataset views (when enabled)

| Dataset | When | Contents |
| --- | --- | --- |
| default | always | Places (one row per place) |
| reviews | includeReviews | One row per review (full fields) |
| reviews-flat | includeReviews | Flat review rows for CSV/automation |
| place-details | includePlaceDetails | Flattened place detail rows |
| list-vs-detail-mismatch | includePlaceDetails | Places where the list differs from the detail page |
| changes | incremental run | New/updated places (optional view) |

Data & privacy

This actor may collect data that qualifies as personal data under GDPR and similar laws, including:

  • Place/business data: addresses, phone numbers, websites, business names
  • Review data (when includeReviews: true): reviewer names, review text, review URLs, reviewer photos

You should have a legitimate purpose for processing this data and comply with GDPR, Google's Terms of Service, and any other applicable law. Limit retention and access (e.g. via Apify retention settings and downstream controls). Enabling reviews or contacts increases the amount of personal data collected; use includeReviewerNames: false for more privacy-friendly output.


Development

  • Node.js 20+ required.
  • Setup: npm install then npx playwright install chromium
  • Local test: npm run test:local [input-file.json]
  • Scripts: npm start (Apify), npm test, npm run lint

See FEATURES.md for architecture and technical details.


FAQ

Is it legal to scrape Google Maps data?

This actor extracts publicly visible data from Google Maps (business names, addresses, ratings, etc.). You are responsible for complying with Google's Terms of Service, GDPR, and other applicable laws. Do not scrape personal data without a legitimate purpose.

Why am I seeing "Navigation timed out"?

Google Maps can be slow to load. Increase navigationTimeout (e.g. 90-180 seconds) and ensure the run has enough memory (2-4 GB). Using useProxy: true can also improve reliability.

Where do I get reviews data?

Enable includePlaceDetails: true and includeReviews: true. Reviews appear in the reviews and reviews-flat datasets for that run (Storage -> Datasets).

Support and feedback

  • Use the Issues tab on the Actor Store page (or repository) for bugs and feature requests.
  • You can run this actor via the Apify API, schedule it, and connect results to Zapier/Make/n8n.
  • For custom scraping needs or enterprise use, contact the maintainer via the Actor Store page.

License

Apache-2.0