Google Search Scraper

Extract structured Google results across Search, Images, Videos, Places, Maps, News, Shopping, Scholar, Autocomplete, and Patents. Supports query/category batching, localization, pagination, retries, and clean dataset output.

Pricing: from $1.00 / 1,000 results
Rating: 5.0 (1 review)
Developer: API ninja
Actor stats: 1 bookmarked · 5 total users · 4 monthly active users
Last modified: 2 days ago
What does Google Search Scraper do?
Google Search Scraper lets you scrape Google search results and other Google data into a structured dataset on Apify. It supports Search, Images, Videos, Places, Maps, News, Shopping, Scholar, Autocomplete, and Patents, and works as a Google Search API alternative for extracting public Google data.
You provide queries, categories, localization settings (`geo`, `lang`, `location`), and result limits; the Actor handles pagination, retries, and dataset delivery, extracting structured data from Google pages and endpoints in a single workflow.
This Actor works for both no-code users (run in Apify UI and download CSV/Excel) and developers (use Apify API/SDK in Node.js or Python).
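As a sketch of the developer path: the snippet below uses the official `apify-client` Python package to start a run and read its dataset. The Actor ID and API token are placeholders you must replace, and the input fields mirror the input example later in this document.

```python
def scrape(actor_id: str, token: str, run_input: dict) -> list[dict]:
    """Start an Actor run via the official apify-client package and
    return all dataset items once the run finishes."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

run_input = {
    "queries": ["coffee", "plumber in ny"],
    "categories": ["search", "news"],
    "resultsPerCategory": 50,
    "lang": "en",
    "geo": "US",
}

# items = scrape("YOUR_ACTOR_ID", "MY_APIFY_TOKEN", run_input)  # network call
```

The same flow is available in Node.js via the `apify-client` npm package, or with plain HTTP calls to the Apify API.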
Why use this Google Search Scraper?
- SEO monitoring and Google SERP tracking across organic, images, videos, and news
- Lead generation from Google Maps / Places business discovery
- Market and content research from Google News, Scholar, and web results
- Competitive analysis and catalog monitoring from Google Shopping data
What can this Actor do?
- 🔍 Run many queries in one run (`["coffee", "plumber in ny", "best crm"]`)
- 🗂️ Fetch multiple categories per query in one workflow
- 🌍 Localize results with country/language parameters
- ♻️ Automatically retry failed requests (up to 3 times)
- 📄 Paginate until result target is met (or all available results if enabled)
- 📦 Push normalized output records into Apify dataset
- ⏱️ Use scheduling, API access, webhooks, and integrations through the Apify platform
Run it on the Apify platform
On Apify, you can:
- Schedule Google Search Scraper runs automatically (daily/weekly/custom)
- Access extracted Google SERP data through API and Apify SDKs
- Export datasets as JSON, CSV, Excel, or XML
- Connect runs to Make, Zapier, webhooks, and other integrations
What Google search results and data can this scraper extract?
The Actor stores category results as raw result objects with metadata (query, category, page).
| Field | Type | Description |
|---|---|---|
| query | string | Original query provided by the user |
| category | string | Data source category (search, images, maps, etc.) |
| page | number | Page number used for this batch |
| raw | object | Full raw item returned by the category endpoint |
Typical raw data includes titles, links, snippets, thumbnails, source metadata, ranking positions, and category-specific fields.
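Since category-specific fields live inside `raw`, a small post-processing step is usually needed to get a flat view. This is an illustrative sketch, not part of the Actor: `flatten_result` is a hypothetical helper, and the exact keys inside `raw` vary by category, so missing ones default to `None`.

```python
def flatten_result(record: dict) -> dict:
    """Pull common fields out of a dataset record's nested `raw` object.
    Keys inside `raw` vary by category, so absent ones become None."""
    raw = record.get("raw", {})
    return {
        "query": record.get("query"),
        "category": record.get("category"),
        "page": record.get("page"),
        "title": raw.get("title"),
        "link": raw.get("link"),
        "snippet": raw.get("snippet"),
    }

# Record shaped like the output example in this document.
sample = {
    "query": "Coffee",
    "category": "search",
    "page": 1,
    "raw": {"title": "Coffee - Wikipedia",
            "link": "https://en.wikipedia.org/wiki/Coffee"},
}
flat = flatten_result(sample)
```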
How to scrape Google data with this Actor
- Open the Actor in Apify Store and go to the Input tab.
- Add one or more `queries`.
- Select one or more `categories`.
- Set localization (`lang`, `geo`, optional `location`).
- Choose either `resultsPerCategory` for capped scraping, or `parseAllResults: true` to collect all available pages.
- Run the Actor.
- Download output from the dataset as JSON, CSV, Excel, XML, or via API.
For a step-by-step Google SERP scraping tutorial with Apify, see: https://blog.apify.com/unofficial-google-search-api-from-apify-22a20537a951/
Pricing and usage expectations
Cost depends on:
- Number of queries
- Number of selected categories
- Pagination depth per category
- Retry frequency caused by transient errors
For lower cost, start with a small query list and a limited `resultsPerCategory`. For maximum coverage, enable `parseAllResults`, which may increase run time and compute usage. Apify provides full run logs, usage tracking, and scheduling so you can control and optimize spend over time.
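Those cost drivers can be turned into a rough upper bound. This back-of-the-envelope sketch assumes the listed $1.00 per 1,000 results rate, and ignores retries and platform compute, so treat it as a ceiling on result-based charges only.

```python
def estimate_cost(num_queries: int, num_categories: int,
                  results_per_category: int,
                  price_per_1000: float = 1.00) -> float:
    """Upper-bound result charge, assuming every query x category
    pair returns its full resultsPerCategory quota."""
    max_results = num_queries * num_categories * results_per_category
    return max_results / 1000 * price_per_1000

# 2 queries x 4 categories x 100 results = 800 results -> at most $0.80
cost = estimate_cost(2, 4, 100)
```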
Input example
```json
{
  "queries": ["Coffee", "Plumber in NY"],
  "categories": ["search", "places", "maps", "news"],
  "resultsPerCategory": 100,
  "parseAllResults": false,
  "autocorrect": true,
  "timeFrame": "qdr:w",
  "lang": "en",
  "geo": "US",
  "location": "New York, New York, United States",
  "ll": "@40.7128,-74.0060,11z",
  "placeId": "",
  "cid": ""
}
```
Output
Google Search Scraper saves results into an Apify dataset. You can download the dataset in JSON, CSV, Excel, or XML, or read it programmatically via Apify API.
Output example
```json
[
  {
    "query": "Coffee",
    "category": "search",
    "page": 1,
    "raw": {
      "title": "Coffee - Wikipedia",
      "link": "https://en.wikipedia.org/wiki/Coffee",
      "snippet": "Coffee is a beverage..."
    }
  },
  {
    "query": "Coffee",
    "category": "news",
    "page": 1,
    "raw": {
      "title": "Coffee prices rise globally",
      "source": "Example News",
      "link": "https://example.com/article"
    }
  }
]
```
Related Google Actors
If you also work with local business intelligence, use these related tools:
FAQ
Is this Actor good for no-code users?
Yes. Most users can run it with only queries + categories + localization settings. Advanced map parameters (`ll`, `placeId`, `cid`) are optional.
How does pagination work?
If `parseAllResults` is `false`, pagination stops after `resultsPerCategory` items (or earlier if no more results exist). If `parseAllResults` is `true`, pagination continues until the endpoint returns an empty result list.
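That stopping rule can be sketched as a loop. Here `fetch_page` is a stand-in for the Actor's internal per-page request, not a real function, so this only illustrates the control flow.

```python
def paginate(fetch_page, results_per_category: int,
             parse_all_results: bool) -> list:
    """Collect pages until the target count is reached (capped mode)
    or until a page comes back empty (parse-all mode)."""
    items, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:               # no more results -> stop in either mode
            break
        items.extend(batch)
        if not parse_all_results and len(items) >= results_per_category:
            return items[:results_per_category]  # capped mode: trim to target
        page += 1
    return items

# Fake endpoint with 25 total items, 10 per page, for illustration.
pages = {1: list(range(10)), 2: list(range(10)), 3: list(range(5))}
capped = paginate(lambda p: pages.get(p, []), 15, False)
full = paginate(lambda p: pages.get(p, []), 15, True)
```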
Is this suitable for production pipelines?
Yes. The Actor includes retries, structured logs, and predictable output format, and it can be scheduled or triggered via API/webhooks.
Is scraping legal?
Our scrapers are intended to collect publicly available data only. Results may still contain personal data, which can be regulated (for example by GDPR and similar laws). Do not scrape or use personal data without a legitimate reason, and always review the target website terms plus applicable laws in your jurisdiction.
Support
- Check run logs first (request params, retries, and the pagination stop reason are logged).
- If you need custom fields or workflow extensions, open an issue in the Actor repository or contact the maintainer through the Apify Store profile.
- You can connect this Actor output to other services via Apify integrations and API.