Google Maps Lead Monitor
Collect ONLY NEW Google Maps business leads on every run - no duplicates, no waste, just high-value prospects you can use immediately for outreach or CRM workflows.
Pricing: Pay per event
Developer: Hayder Al-Khalissi
Google Maps Lead Monitor - Stateful Lead Generation with Change Detection
Monitor local markets. Detect new and updated listings. Prioritize outreach automatically.
Turn Google Maps into a recurring lead feed - without duplicates.
What is the Google Maps Lead Monitor?
Google Maps Lead Monitor is an Apify Actor that extracts Google Maps business listings and (optionally) enriches them with contact data for outreach.
The key difference: it's stateful. It remembers what it has already seen - and only outputs what changed. Built for recurring lead monitoring, not one-time scraping.
What you get
- Finds businesses + phone + website from Google Maps
- Attempts to find public contact channels (emails + contact forms) from business websites (best-effort)
- Runs on a schedule (weekly/daily) and outputs only new/updated listings (incremental mode)
- Exports as CSV/Excel for outreach
- Includes leadPriority (1-5) and opportunityScore (0-7) for prioritization
Sample row: leadPriority: 4 | opportunityScore: 6 | title: "Joe's Pizza" | primaryEmail: "hello@joespizza.com" | phone: "+1 212-555-0100" | bestContact: "email" | changeType: "new"
Quickstart (30 seconds)
Mode A - First run (baseline)
Use the first time you scrape an area, or when you want a full list from scratch.
```json
{
  "searchQueries": ["restaurant"],
  "location": "Berlin",
  "maxPlaces": 50,
  "leadMode": true,
  "runMode": "fresh",
  "resetState": true,
  "useProxy": true
}
```
Mode B - Recurring monitor (only new/updated)
Use for recurring runs so you only get new or updated leads.
```json
{
  "searchQueries": ["restaurant"],
  "location": "Berlin",
  "maxPlaces": 200,
  "leadMode": true,
  "runMode": "incremental",
  "detectUpdates": true,
  "useProxy": true
}
```
Who is this for?
- Lead generation agencies
- Local SEO consultants
- Sales teams monitoring new and updated local businesses
- Market intelligence & competitor tracking
- n8n / automation builders who need structured place data
Why not just use a regular Google Maps scraper?
Most Google Maps scrapers are built for one-time extraction. This one is built for monitoring.
Regular scrapers:
- Return the same places every run
- Create duplicate rows
- Waste compute on already-seen listings
Google Maps Lead Monitor:
- Remembers previous runs
- Outputs only new/updated listings
- Designed for recurring monitoring workflows
Example: Weekly restaurant monitoring
- Week 1: 180 restaurants scraped -> baseline created
- Week 2: 12 new listings -> output = 12 rows
- Week 3: 7 new + 3 updated -> output = 10 rows
No duplicates. No manual filtering.
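The weekly example above boils down to a set diff against stored state. A minimal sketch of the idea (illustrative only; `diffAgainstState` is a hypothetical helper, not the actor's code):

```javascript
// Illustrative sketch of incremental monitoring: keep a set of previously
// seen placeIds and emit only unseen places, tagging them as "new".
function diffAgainstState(seenIds, scrapedPlaces) {
  const output = [];
  for (const place of scrapedPlaces) {
    if (!seenIds.has(place.placeId)) {
      seenIds.add(place.placeId); // remember for the next run
      output.push({ ...place, changeType: "new" });
    }
  }
  return output;
}

// Week 1: baseline of two places; Week 2: one repeat + one new listing.
const seen = new Set();
const week1 = diffAgainstState(seen, [
  { placeId: "a", title: "Joe's Pizza" },
  { placeId: "b", title: "Luigi's" },
]);
const week2 = diffAgainstState(seen, [
  { placeId: "a", title: "Joe's Pizza" },   // already seen -> skipped
  { placeId: "c", title: "New Trattoria" }, // new -> emitted
]);
```

The actor persists this state in the Apify Key-Value Store between runs, so the diff survives across scheduled runs.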
What makes this different
- Lead scoring built-in: `leadPriority`, `opportunityScore`, and `opportunityReason` so you can sort and filter the best leads first.
- Incremental mode: only new and changed places are output; no duplicate rows across runs.
- Enrichment: attempts to find public contact channels (emails and contact forms) from business websites (best-effort).
Scores are based on signals like missing website/phone, high rating with low reviews, and other outreach-friendly patterns.
Built for agencies and automation workflows that run weekly or daily.
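As a rough illustration of that kind of signal-based scoring (the actor's actual weights and signals may differ; this `scoreOpportunity` helper is hypothetical):

```javascript
// Hypothetical scoring sketch: one point per outreach-friendly signal,
// capped at the documented 0-7 range.
function scoreOpportunity(place) {
  const reasons = [];
  if (!place.website) reasons.push("no_website");
  if (!place.phone) reasons.push("no_phone");
  // A well-rated place with few reviews is often newer and more receptive.
  if ((place.rating ?? 0) >= 4.5 && (place.reviewsCount ?? 0) < 100) {
    reasons.push("high_rating", "low_reviews");
  }
  return {
    opportunityScore: Math.min(reasons.length, 7),
    opportunityReason: reasons,
  };
}

// Mirrors the sample row: no website, high rating, low review count.
const scored = scoreOpportunity({
  title: "Joe's Pizza",
  rating: 4.5,
  reviewsCount: 72,
  phone: "+1 212-555-0100",
});
```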
Typical workflow
- Run on a schedule in incremental mode.
- Sort by `leadPriority` -> filter `primaryEmail` not empty (or `hasContactForm` = true).
- Export CSV -> send to your CRM / n8n outreach workflow.
Recommended dataset columns (CRM-style)
For the best experience in the Apify dataset table, pin these columns in the default dataset view:
leadPriority, opportunityScore, title, categoryName, rating, reviewsCount, primaryEmail, phone, bestContact, hasContactForm, website, city, state, changeType, scrapedAt
Tip: Sort by leadPriority desc -> filter where primaryEmail is not empty -> export CSV and start outreach.
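Outside the Console UI, the same sort-and-filter step takes a few lines over exported dataset items (a sketch using the documented field names; the rows here are made up):

```javascript
// Hypothetical exported rows with the documented columns.
const items = [
  { title: "A", leadPriority: 5, primaryEmail: "a@example.com" },
  { title: "B", leadPriority: 3, primaryEmail: "" },
  { title: "C", leadPriority: 4, primaryEmail: "c@example.com" },
];

// Keep rows that have an email, best leads first.
const emailLeads = items
  .filter((p) => p.primaryEmail)
  .sort((a, b) => b.leadPriority - a.leadPriority);
```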
Key features
Stateful scraping & efficiency
| Feature | Description |
|---|---|
| Smart state | Stores seen places in Apify Key-Value Store; skip duplicates across runs. |
| Incremental mode | Output only new and updated places; never charged for skipped duplicates. |
| Change detection | Compare key fields (rating, phone, website, etc.) to detect updates. |
| Caching | Optional in-run cache to avoid re-processing unchanged data. |
| Streaming | Optional real-time streaming of results as they are collected. |
Scraping & reliability
| Feature | Description |
|---|---|
| Playwright (Chromium) | Full JavaScript rendering for Google Maps search and place pages. |
| Consent handling | Automatically accepts consent dialogs (EN, DE, FR, ES, IT). |
| Proxy support | Use Apify Proxy to reduce blocks and consent redirects (useProxy: true). |
| Navigation timeout | Configurable navigationTimeout (seconds); increase if pages load slowly. |
| Page detection | Detects consent/captcha/blocked pages and logs clear warnings. |
| Rate limiting | Modes: adaptive, conservative, moderate, aggressive. |
Data you get
From search results:
`title`, `address`, `rating`, `reviewsCount`, `categoryName`, `price`, `url`, `placeId`
With place details on (`leadMode: true` or `includePlaceDetails: true`):
`phone`, `website`, coordinates, `openingHours`
With contacts enrichment (`includeContacts: true`):
- Attempts to find public emails and contact forms from the business website (best-effort)
Optional extras:
- Reviews (`includeReviews: true`, requires place details): review text, author, date, rating -> datasets `reviews`, `reviews-flat`
- Images (`includeImages: true`): optional image extraction
Download as JSON, CSV, or Excel from the run's Storage tab.
How it works
When place details are off (list-only):
- The actor opens each search URL, scrolls the results, extracts list cards, and pushes one row per place.
When place details are on (leadMode or includePlaceDetails: true):
It uses a two-phase crawl:
- LIST: extract places from search and enqueue each as a DETAIL request (`uniqueKey = placeId`)
- DETAIL: open each place URL, extract phone/website/openingHours, merge with the list data, and push one merged row per place
The queue drains with no idle wait; the run log shows `detailPagesEnqueued`, `detailPagesSuccess`, `detailPagesFailed`, and `requestsTotal = 1 + DETAIL count`.
Use `detailsConcurrency` (default 2) and `maxDetailsToFetch` (default = `maxPlaces`) to limit parallel detail fetches and avoid CPU overload.
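The concurrency cap behaves like a small worker pool over the DETAIL queue. A generic sketch of the pattern (hypothetical helper, not the actor's crawler code):

```javascript
// Process `items` with at most `limit` async tasks in flight at once.
// Each runner pulls the next index synchronously, so no item is fetched twice.
async function mapWithLimit(items, limit, worker) {
  const results = [];
  let next = 0;
  async function runner() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  const workers = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: workers }, runner));
  return results;
}

// Example: "fetch" 5 hypothetical detail pages, at most 2 at a time.
const demo = mapWithLimit(
  ["p1", "p2", "p3", "p4", "p5"],
  2,
  async (id) => ({ placeId: id, detailFetched: true })
);
```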
Run modes
Fresh mode (runMode: "fresh"):
- Full scan; output all places; update state.
Incremental mode (runMode: "incremental"):
- Skip already-seen places; output only new/updated; charge only for written items.
Run 1 (fresh): scrape -> store state -> output all
Run 2 (incremental): scrape -> compare to state -> output new/updated only
How to use
- Open the Actor on Apify Store -> Input tab.
- Paste one of the Quickstart inputs above (Mode A or B), or set `searchQueries`, `location`, `maxPlaces`, `leadMode: true`, `useProxy: true`.
- Click Start. Results go to the run's default dataset.
- Open Storage -> Datasets -> default -> sort by `leadPriority` desc, filter `primaryEmail` not empty if you want email leads -> export CSV or Excel.
Cost
Apify bills by compute (time and resources used per run), not by number of rows.
Incremental mode reduces compute by skipping duplicates and outputting only new/updated places. This actor does not add any extra per-row fees.
Use maxPlaces and navigationTimeout to keep runs predictable.
Input options
Full schema: .actor/input_schema.json. Below is a reference of the main options, grouped for clarity.
Search & location
| Option | Type | Default | Description |
|---|---|---|---|
| searchQueries | array of strings | ["restaurant"] | Search terms (e.g. ["restaurant", "cafe"]). |
| location | string | - | Location string (e.g. "New York, NY"). |
| categories | array of strings | - | Google Maps categories to filter results. |
| customGeolocation | object | - | Custom area: Polygon, MultiPolygon, or Point with coordinates. |
| placeIds | array of strings | - | Google Maps Place IDs to scrape directly (no search). |
| placeUrls | array of strings | - | Full Google Maps place URLs to scrape directly. |
| targetCategory | string | - | Optional. When set, places whose category contains this text get +1 on opportunityScore. |
Lead mode & data
| Option | Type | Default | Description |
|---|---|---|---|
| leadMode | boolean | true | One toggle: enables includePlaceDetails, includeContacts, includeEnrichment and keeps includeReviews off. |
| includePlaceDetails | boolean | false (or from leadMode) | Two-phase crawl: LIST then DETAIL. Extract phone/website/openingHours and merge into output. |
| includeContacts | boolean | false (or from leadMode) | Enrich with contact info from the website (best-effort). |
| includeReviews | boolean | false | Extract reviews; requires includePlaceDetails: true. Off by default to save cost. |
| detailsConcurrency | integer | 2 | When place details are on: max concurrent detail-page requests. |
| maxDetailsToFetch | integer | = maxPlaces | When place details are on: cap how many place detail pages to fetch. |
Limits
| Option | Type | Default | Description |
|---|---|---|---|
| maxPlaces | integer | 100 | Stop after this many valid places (1-100000). |
| maxResults | integer | 100 | Alias for maxPlaces; if both set, the lower value is used. |
| maxCrawledPlacesPerSearch | integer | - | Cap places crawled per search query (optional). |
Data & enrichment
| Option | Type | Default | Description |
|---|---|---|---|
| includeImages | boolean | false | Extract images for each place. |
| includeEnrichment | boolean | false | Enable additional data enrichment. |
| includeReviewerNames | boolean | true | If false, reviewer names are omitted/anonymized (reviews views). |
| minimalLogging | boolean | false | Avoid logging place titles/addresses/URLs at info level (privacy-friendly). |
Performance & reliability
| Option | Type | Default | Description |
|---|---|---|---|
| navigationTimeout | integer | 90 | Max seconds to wait for a Google Maps page to load (10-300). |
| useProxy | boolean | false | Use Apify Proxy; recommended to reduce blocks/consent pages. |
| maxConcurrency | integer | 10 | Max concurrent requests (1-50). |
| rateLimit | string | "adaptive" | adaptive, conservative, moderate, aggressive. |
| enableCaching | boolean | true | Enable in-run caching to avoid re-processing unchanged data. |
| streamResults | boolean | false | Stream results to the dataset in real time. |
State & incremental mode
| Option | Type | Default | Description |
|---|---|---|---|
| runMode | string | "fresh" | "fresh" = full scan; "incremental" = output only new/updated. |
| stateKey | string | "places-state" | KVS key where the actor stores seen places between runs. |
| resetState | boolean | false | Clear stored state before running (fresh baseline). |
| dedupeStrategy | string | "placeId" | placeId or a hash of key fields. |
| detectUpdates | boolean | true | In incremental mode, emit places whose key fields changed as updated. |
| updateFields | array of strings | see code | Fields used to detect updates (default: title, address, rating, reviewsCount, phone, website, categoryName, openingHours). |
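Update detection compares the configured `updateFields` between the stored and freshly scraped record. A sketch of that comparison (the field list mirrors the documented default, but `classifyChange` itself is a hypothetical helper):

```javascript
// Documented default updateFields.
const UPDATE_FIELDS = [
  "title", "address", "rating", "reviewsCount",
  "phone", "website", "categoryName", "openingHours",
];

// "new" if never seen, "updated" if any tracked field differs, else "unchanged".
function classifyChange(previous, current) {
  if (!previous) return "new";
  const changed = UPDATE_FIELDS.some(
    (f) => JSON.stringify(previous[f]) !== JSON.stringify(current[f])
  );
  return changed ? "updated" : "unchanged";
}

const stored = { placeId: "a", title: "Joe's Pizza", rating: 4.5 };
```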
Export & output
| Option | Type | Default | Description |
|---|---|---|---|
| exportFormat | string | "json" | json, csv, or excel. |
| compressOutput | boolean | false | Compress output files. |
| saveSearchResults | boolean | false | Save all extracted list items to dataset search-results for follow-up runs. |
n8n integration
This actor works with the Apify integration for n8n. Install the Apify community node (@apify/n8n-nodes-apify), add credentials, then use Run Actor.
Workflow shape
- Trigger: Webhook / Schedule / Manual
- Apify: Run Actor (this actor)
- Optional: Get Dataset Items to process results (use `defaultDatasetId`)
Canonical input (Apify Console style)
```json
{
  "searchQueries": ["restaurant"],
  "location": "New York, NY",
  "maxPlaces": 20,
  "useProxy": true
}
```
n8n-style input (short keys)
```json
{
  "query": "restaurants",
  "city": "Berlin",
  "limit": 20,
  "workflowId": "{{ $workflow.id }}",
  "executionId": "{{ $execution.id }}",
  "nodeName": "Run Actor"
}
```
The actor maps:
- `query` -> `searchQueries`
- `city` -> `location`
- `limit` -> `maxPlaces`
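The documented key mapping can be pictured as a small normalization step (the actor does this internally; `normalizeInput` here is a hypothetical illustration):

```javascript
// Map n8n-style short keys onto the canonical input, never overwriting
// canonical keys that are already present.
function normalizeInput(input) {
  const out = { ...input };
  if (out.query && !out.searchQueries) out.searchQueries = [out.query];
  if (out.city && !out.location) out.location = out.city;
  if (out.limit && !out.maxPlaces) out.maxPlaces = out.limit;
  return out;
}

const mapped = normalizeInput({ query: "restaurants", city: "Berlin", limit: 20 });
```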
OpenClaw integration (optional)
Send scraped places to an OpenClaw gateway so an agent can use the data. Uses bounded concurrency so outbound calls don't block scraping.
| Option | Type | Default | Description |
|---|---|---|---|
| openClaw.enabled | boolean | false | Enable sending places to OpenClaw. |
| openClaw.gatewayUrl | string | - | OpenClaw gateway base URL. |
| openClaw.token | string | - | Bearer token; prefer env OPENCLAW_GATEWAY_TOKEN. |
| openClaw.agentId | string | "main" | Target agent id header. |
| openClaw.concurrency | integer | 5 | Max concurrent outbound requests (1-20). |
| openClaw.sendMode | string | "perPlace" | perPlace or batch. |
| openClaw.batchSize | integer | 10 | Places per batch if sendMode is batch. |
| openClaw.api | string | "responses" | responses or chatCompletions. |
Example:
```json
{
  "searchQueries": ["cafe"],
  "location": "Berlin",
  "maxPlaces": 20,
  "openClaw": {
    "enabled": true,
    "gatewayUrl": "https://gateway.example.com:18789",
    "agentId": "main",
    "concurrency": 5,
    "sendMode": "perPlace",
    "api": "responses"
  }
}
```
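With `sendMode: "batch"`, places are grouped into chunks of `batchSize` before being sent. A sketch of that grouping (the `chunk` helper is hypothetical, and the gateway payload shape is not specified here):

```javascript
// Split places into batches of at most `batchSize` for outbound requests.
function chunk(places, batchSize) {
  const batches = [];
  for (let i = 0; i < places.length; i += batchSize) {
    batches.push(places.slice(i, i + batchSize));
  }
  return batches;
}

// 25 places with the default batchSize of 10 -> 3 outbound requests.
const batches = chunk(
  Array.from({ length: 25 }, (_, i) => ({ placeId: `p${i}` })),
  10
);
```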
Usage recommendations
- Use Apify Proxy: set `useProxy: true` to reduce consent pages, blocks, and bot detection.
- Navigation timeout: if you see "Navigation timed out", increase `navigationTimeout` (e.g. 120-180).
- Memory: Playwright needs headroom; use 2-4 GB for larger runs.
- Incremental runs: use `runMode: "incremental"` for monitoring so you only pay for new/updated places.
Example inputs
Basic search (proxy + higher timeout):
```json
{
  "searchQueries": ["restaurant", "cafe"],
  "location": "New York, NY",
  "maxResults": 50,
  "useProxy": true,
  "navigationTimeout": 120
}
```
Incremental run (only new/updated):
```json
{
  "searchQueries": ["plumber"],
  "location": "London, UK",
  "maxPlaces": 200,
  "runMode": "incremental",
  "useProxy": true
}
```
Scrape by place URLs:
```json
{
  "placeUrls": ["https://www.google.com/maps/place/My+Business/..."],
  "includePlaceDetails": true,
  "navigationTimeout": 90
}
```
Custom geolocation (polygon):
```json
{
  "searchQueries": ["restaurant"],
  "customGeolocation": {
    "type": "Polygon",
    "coordinates": [[
      [-0.322813, 51.597165],
      [-0.31499, 51.388023],
      [0.060493, 51.389199],
      [0.051936, 51.60036],
      [-0.322813, 51.597165]
    ]]
  }
}
```
Output
Results are stored in the default dataset (one object per place). Download as JSON, CSV, or Excel from the run's Storage tab.
Example JSON
```json
[
  {
    "title": "Joe's Pizza",
    "address": "123 Main St, New York, NY 10001",
    "phone": "+1 212-555-0100",
    "website": "https://joespizza.com",
    "rating": 4.5,
    "reviewsCount": 72,
    "categoryName": "Pizza restaurant",
    "price": "$$",
    "url": "https://www.google.com/maps/place/Joes+Pizza/...",
    "placeId": "ChIJ...",
    "scrapedAt": "2026-02-23T14:30:00.000Z",
    "changeType": "new",
    "opportunityScore": 6,
    "opportunityReason": ["no_website", "high_rating", "low_reviews"]
  }
]
```
Dataset views (when enabled)
| Dataset | When | Contents |
|---|---|---|
| default | always | Places (one row per place) |
| reviews | includeReviews | One row per review (full fields) |
| reviews-flat | includeReviews | Flat review rows for CSV/automation |
| place-details | includePlaceDetails | Flattened place detail rows |
| list-vs-detail-mismatch | includePlaceDetails | Places where list differs from detail page |
| changes | incremental run | New/updated places (optional view) |
Data & privacy
This actor may collect data that qualifies as personal data under GDPR and similar laws, including:
- Place/business data: addresses, phone numbers, websites, business names
- Review data (when `includeReviews: true`): reviewer names, review text, review URLs, reviewer photos
You should have a legitimate purpose for processing this data and comply with GDPR, Google's Terms of Service, and any other applicable law. Limit retention and access (e.g. via Apify retention settings and downstream controls). Enabling reviews or contacts increases the amount of personal data collected; use includeReviewerNames: false for more privacy-friendly output.
Development
- Node.js 20+ required.
- Setup: `npm install`, then `npx playwright install chromium`
- Local test: `npm run test:local [input-file.json]`
- Scripts: `npm start` (Apify), `npm test`, `npm run lint`
See FEATURES.md for architecture and technical details.
FAQ
Is it legal to scrape Google Maps?
This actor extracts publicly visible data from Google Maps (business names, addresses, ratings, etc.). You are responsible for complying with Google's Terms of Service, GDPR, and other applicable laws. Do not scrape personal data without a legitimate purpose.
Why am I seeing "Navigation timed out"?
Google Maps can be slow to load. Increase navigationTimeout (e.g. 90-180 seconds) and ensure the run has enough memory (2-4 GB). Using useProxy: true can also improve reliability.
Where do I get reviews data?
Enable includePlaceDetails: true and includeReviews: true. Reviews appear in the reviews and reviews-flat datasets for that run (Storage -> Datasets).
Support and feedback
- Use the Issues tab on the Actor Store page (or repository) for bugs and feature requests.
- You can run this actor via the Apify API, schedule it, and connect results to Zapier/Make/n8n.
- For custom scraping needs or enterprise use, contact the maintainer via the Actor Store page.
License
Apache-2.0
