Google Search Results Scraper
Pricing: $19.99/month + usage
Extract Google search results for any keyword using this scraper. Collect titles, URLs, descriptions, rankings, and featured snippets from SERPs. Useful for SEO research, keyword tracking, competitor analysis, and monitoring search result changes across Google queries.
Developer: ScrapAPI
Last modified: 8 days ago
Google Search Results Scraper
The Google Search Results Scraper is a production-ready Google SERP scraper that extracts structured search engine results pages (SERPs) at scale, including organic listings, paid ads, shopping products, People Also Ask, related queries, and optional AI Overviews, in clean JSON. It turns live Google results into analyzable data for SEO monitoring, keyword tracking, and competitive research. Built for marketers, developers, data analysts, and researchers, it is a Google SERP data extractor that you can automate, integrate, and export to CSV/JSON for downstream analysis.
What data / output can you get?
Below are the primary fields the actor pushes to the dataset on each page. Field names match the output exactly as pushed during the run.
| Data type | Description | Example value |
|---|---|---|
| searchQuery.term | Full search term sent to Google (with applied filters) | `python intitle:"tutorial"` |
| url | Final SERP URL for the page | "https://www.google.com/search?q=python&hl=en&gl=us" |
| resultsTotal | Parsed “About X results” count (if detected) | 123000000 |
| hasNextPage | Whether another results page is available and allowed | true |
| organicResults[].title | Organic result title | "Welcome to Python.org" |
| organicResults[].url | Cleaned target URL | "https://www.python.org" |
| organicResults[].displayedUrl | URL displayed without protocol | "www.python.org" |
| organicResults[].description | Snippet extracted from the result | "Python is a programming language..." |
| paidResults[].url | Google Ads final URL (when present) | "https://example.com/offer" |
| paidProducts[].prices | Detected product price texts | ["$19.99", "$24.99"] |
| relatedQueries[].title | Titles of related/suggested queries | "learn python" |
| peopleAlsoAsk[].question | PAA question text (with optional link) | "What is Python used for?" |
| aiOverview.text | AI Overview summary text (when enabled) | "Python is a popular programming language..." |
Bonus fields:
- suggestedResults: SERP suggestions derived from related queries (with type and position)
- customData: Add-on settings echo (Perplexity/ChatGPT toggles, leads enrichment parameters)
- html / htmlSnapshotUrl / _htmlPayloads: Full HTML snapshot in dataset and/or links saved to key-value store
- In organicResults, an optional icon field may appear when includeIcons is enabled
You can export the dataset in JSON, CSV, or Excel from Apify to extract Google search results to CSV for SEO workflows.
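As a quick illustration of working with an exported item, here is a minimal Python sketch that flattens organicResults into CSV rows; the sample item below is abridged from the example output later in this document, and the chosen column names are our own:

```python
import csv
import io

# Abridged sample dataset item, shaped like the actor's example output.
item = {
    "searchQuery": {"term": "python"},
    "organicResults": [
        {"title": "Welcome to Python.org", "url": "https://www.python.org",
         "description": "Python is a programming language...", "position": 1},
    ],
}

def organic_rows(item):
    """Yield one flat row per organic result, keyed by the search term."""
    term = item["searchQuery"]["term"]
    for r in item.get("organicResults", []):
        yield {"term": term, "position": r.get("position"),
               "title": r.get("title"), "url": r.get("url")}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["term", "position", "title", "url"])
writer.writeheader()
writer.writerows(organic_rows(item))
print(buf.getvalue())
```

The same loop works unchanged over every item in a multi-query dataset, since each item carries its own searchQuery.term.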
Key features
- ⚡ Real-time SERP streaming: Pushes results page-by-page with each iteration of the crawl. This enables real-time pipelines for dashboards and monitoring.
- 🧠 Optional AI Overview (via SerpApi): Turn on aiMode and provide serpApiKey to retrieve aiOverview { text, references } alongside results for AEO/GEO tracking.
- 🌍 Location & language targeting: Control gl (countryCode), hl (languageCode), lr (searchLanguage), and exact location using UULE (locationUule) to build a local Google search results scraper.
- 🎯 Advanced filters: Apply forceExactMatch, site, relatedToSite, wordsInTitle, wordsInText, wordsInUrl, fileTypes, and the date filters quickDateRange / beforeDate / afterDate.
- 📱 Mobile or desktop modes: Choose mobileResults for mobile SERPs or stick with desktop to emulate different devices.
- 🧹 Include unfiltered results: includeUnfilteredResults adds results Google typically filters out to expand coverage.
- 🛡️ Robust proxy handling: Always-on proxies with automatic fallback. Starts with the GOOGLE_SERP proxy group and, if blocked, switches to RESIDENTIAL with retries and persistence for reliable real-time Google SERP scraping.
- 🧩 Add-on toggles in output: customData echoes Perplexity and ChatGPT add-on settings as part of each item for cross-platform analysis pipelines.
- 💾 HTML snapshots for debugging: Save HTML directly in the dataset (html) and/or to the key-value store with links in htmlSnapshotUrl for easy browser review.
- 👩‍💻 Developer-friendly and scalable: Works great as a Google search results scraper API within the Apify platform, integrates with Python workflows, and supports bulk runs.
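To see how the filter options relate to standard Google search operators, here is a small illustrative Python helper (not the actor's internal code) that composes a query string the way site, wordsInTitle, and fileTypes would:

```python
def compose_query(base, site=None, words_in_title=(), file_types=()):
    """Append standard Google search operators to a base query.

    Illustrative only: shows the public operator syntax (site:, intitle:,
    filetype:) that the corresponding input fields map onto.
    """
    parts = [base]
    if site:
        parts.append(f"site:{site}")
    parts += [f'intitle:"{w}"' for w in words_in_title]
    parts += [f"filetype:{t}" for t in file_types]
    return " ".join(parts)

print(compose_query("python", site="apify.com",
                    words_in_title=["tutorial"], file_types=["pdf"]))
# python site:apify.com intitle:"tutorial" filetype:pdf
```

You could also paste the resulting operator string directly into the queries field, since the actor accepts advanced operators verbatim.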
How to use Google Search Results Scraper - step by step
1. Sign in to Apify: Create or log into your Apify account at console.apify.com.
2. Open the actor: Find “Google Search Results Scraper” in the Apify Store or your workspace and open it.
3. Enter input data: In the Input tab, paste your queries into “queries”, one search term or full Google Search URL per line. You can use advanced operators (site:, OR, etc.) directly.
4. Configure scope & limits:
   - resultsPerPage (1–100) and maxPagesPerQuery control volume (≈10 organic per page).
   - Adjust countryCode, languageCode, searchLanguage, and locationUule for local targeting.
   - Set quickDateRange, beforeDate, and/or afterDate to filter by time.
5. Fine-tune filters & settings: Toggle forceExactMatch, site/relatedToSite, wordsInTitle/Text/Url, fileTypes, and includeUnfilteredResults. Choose mobileResults if needed.
6. Optional add-ons:
   - aiMode + serpApiKey to capture AI Overview text and references.
   - focusOnPaidAds to emphasize ad retrieval attempts (with retry logic).
   - Save HTML to dataset (html) and/or key-value store (htmlSnapshotUrl).
7. Start the run: Click Start. The actor will use proxies automatically (GOOGLE_SERP → RESIDENTIAL fallback on block). Watch logs for progress.
8. Export results: Open the Output tab to download your dataset as JSON/CSV/Excel. Use it in SEO analysis, dashboards, or data pipelines.
Pro Tip: Use the Apify API to schedule runs, stream results into your data warehouse, or connect this Google search results crawler to Make.com/n8n workflows for automation.
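A minimal sketch of driving the actor programmatically, using only the Python standard library and Apify's generic v2 run endpoint. The actor ID and token placeholder are assumptions you must replace with your own values; the input field names come from the input reference below.

```python
import json
import urllib.request

APIFY_TOKEN = "YOUR_APIFY_TOKEN"  # assumption: supply your own API token
ACTOR_ID = "username~google-search-results-scraper"  # hypothetical actor ID

def build_run_input(queries, results_per_page=10, max_pages=1,
                    country="us", language="en"):
    """Assemble the actor input payload (field names per the input reference)."""
    return {
        "queries": "\n".join(queries),   # one query per line
        "resultsPerPage": results_per_page,
        "maxPagesPerQuery": max_pages,
        "countryCode": country,
        "languageCode": language,
    }

def start_run(run_input):
    """POST the input to the Apify v2 actor-run endpoint (sketch, not called here)."""
    url = f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs?token={APIFY_TOKEN}"
    req = urllib.request.Request(
        url,
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # response body contains run metadata

if __name__ == "__main__":
    payload = build_run_input(["python tutorial", "site:apify.com google scraper"])
    print(json.dumps(payload, indent=2))
```

In practice the official apify-client package wraps this endpoint and also handles waiting for the run to finish and fetching the dataset.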
Use cases
| Use case name | Description |
|---|---|
| SEO teams – ranking & SERP tracking | Monitor organic listings, paid ads, and PAA for keywords at scale using a Google search ranking data scraper. |
| Content strategy – keyword discovery | Analyze related queries and PAA to uncover content gaps with a Google search results parser. |
| Competitor monitoring – ads & offers | Track paidResults and paidProducts to see messaging and price points over time. |
| Local SEO – geo-targeted audits | Emulate precise locations via locationUule for a local Google search results scraper that reflects on-the-ground SERPs. |
| Market research – product pricing | Extract paidProducts pricing signals for pricing intelligence and product monitoring. |
| Data engineering – bulk pipelines | Use as a bulk Google search scraper to build datasets and export to CSV/JSON for analytics. |
| Academic & trend analysis | Track query ecosystems and result shifts over time, including peopleAlsoAsk and relatedQueries. |
| Developer workflows – API ingestion | Integrate this Google search results scraper API into Python or ETL for real-time dashboards. |
Why choose Google Search Results Scraper?
- 🎯 Precision SERP JSON: Structured fields for organicResults, paidResults, paidProducts, peopleAlsoAsk, relatedQueries, and suggestedResults, ready for analysis.
- 🌐 True geo & language control: Target gl/hl/lr and exact-location UULE to reflect local SERPs globally.
- 📈 Built for scale: Bulk input with one query per line and configurable pagination for large keyword sets.
- 🔌 Developer-friendly: Works seamlessly with the Apify API and Python-based pipelines for programmatic runs.
- 🧪 Debuggability: Save HTML to dataset and/or key-value store for quick inspection and reproducibility.
- 🛡️ Reliable infrastructure: Always-on Google SERP proxy with automatic fallback to RESIDENTIAL when blocked (with retries), then persistent use for the rest of the run.
Compared to brittle browser extensions or ad-hoc scripts, this Google search scraping software is a stable, production-grade Google SERP scraper for repeatable, automated workflows.
Is it legal / ethical to use Google Search Results Scraper?
Yes — when done responsibly. This actor collects data from publicly available Google Search result pages.
Guidelines to follow:
- Only scrape publicly available data and respect platform terms and applicable laws (e.g., GDPR/CCPA).
- Avoid using scraped data for spam or unethical purposes.
- Configure location/language responsibly and do not attempt to access private or authenticated content.
- Consult your legal team for jurisdiction-specific compliance questions.
Input parameters & output format
Example JSON input
```json
{
  "queries": "python\nsite:apify.com google scraper",
  "resultsPerPage": 50,
  "maxPagesPerQuery": 2,
  "countryCode": "us",
  "languageCode": "en",
  "searchLanguage": "en",
  "forceExactMatch": false,
  "site": null,
  "relatedToSite": null,
  "wordsInTitle": ["tutorial"],
  "wordsInText": [],
  "wordsInUrl": [],
  "fileTypes": ["pdf"],
  "quickDateRange": "m1",
  "beforeDate": null,
  "afterDate": null,
  "mobileResults": false,
  "includeUnfilteredResults": false,
  "aiMode": "aiModeOff",
  "serpApiKey": null,
  "perplexitySearch": {
    "enablePerplexity": false,
    "searchRecency": "",
    "returnImages": false,
    "returnRelatedQuestions": false
  },
  "chatGptSearch": { "enableChatGpt": false },
  "maximumLeadsEnrichmentRecords": 0,
  "leadsEnrichmentDepartments": [],
  "focusOnPaidAds": false,
  "locationUule": null,
  "saveHtml": false,
  "saveHtmlToKeyValueStore": true,
  "includeIcons": false,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["GOOGLE_SERP"]
  }
}
```
Input parameter reference
- queries (string, required): Use regular search words or enter Google Search URLs. Advanced search operators supported. Ensure queries do not exceed 32 words. Default: none.
- resultsPerPage (integer): Desired results per page (num parameter). Actual count may vary; enable Unfiltered results to include more. Default: 100. Min: 1, Max: 100.
- maxPagesPerQuery (integer): Maximum number of pages to scrape per query (≈10 results per page). Default: 1. Min: 1.
- aiMode (string): Enable scraping of Google's AI Mode for AEO/GEO tracking and competitor analysis. Default: "aiModeOff". Enum: "aiModeOff", "aiModeWithSearchResults", "aiModeOnly".
- serpApiKey (string, nullable, secret): Optional. When set and AI Mode is not off, fetches AI Overview via SerpApi. Default: null.
- perplexitySearch (object): Add-on settings for Perplexity AI. Default: { "enablePerplexity": false, "searchRecency": "", "returnImages": false, "returnRelatedQuestions": false }.
  - enablePerplexity (boolean): Default false.
  - searchRecency (string, nullable): Enum "", "day", "week", "month", "year".
  - returnImages (boolean): Default false.
  - returnRelatedQuestions (boolean): Default false.
- chatGptSearch (object): Add-on settings for ChatGPT Search. Default: { "enableChatGpt": false }.
  - enableChatGpt (boolean): Default false.
- maximumLeadsEnrichmentRecords (integer): Maximum number of business leads per domain to enrich. Default: 0 (disabled). Min: 0.
- leadsEnrichmentDepartments (array): Filter leads by departments (works only if maximumLeadsEnrichmentRecords > 0). Default: []. Allowed values include: "c-suite", "product", "engineering-technical", "design", "education", "finance", "human-resources", "information-technology", "legal", "marketing", "medical-health", "operations", "sales", "consulting".
- focusOnPaidAds (boolean): Enable paid results (Google Ads) extraction with enhanced retry strategy. Default: false.
- countryCode (string): Country for the search and domain (e.g., google.es for Spain). Default: "us". Enum includes global country codes.
- searchLanguage (string, nullable): Restrict results to a specific language (lr parameter). Default: null. Enum includes "", "ar", "de", "en", etc.
- languageCode (string): Interface language for Google (hl parameter). Default: "en". Enum includes "en", "de", "es", etc.
- locationUule (string, nullable): Exact location (UULE parameter). Default: null.
- forceExactMatch (boolean): Wraps the query in quotes to enforce exact match. Default: false.
- site (string, nullable): Limits search to a specific site (site:). Default: null.
- relatedToSite (string, nullable): Filters pages related to a site (related:). Ignored if site is set. Default: null.
- wordsInTitle (array): intitle:"..." filters (multiple allowed). Default: [].
- wordsInText (array): intext:"..." filters (multiple allowed). Default: [].
- wordsInUrl (array): inurl:"..." filters (multiple allowed). Default: [].
- quickDateRange (string, nullable): Quick date range (qdr: d/h/w/m/y). Default: null.
- beforeDate (string, nullable): Filter results before a date (YYYY-MM-DD) or relative (e.g., "8 days", "3 months"). Default: null.
- afterDate (string, nullable): Filter results after a date (YYYY-MM-DD) or relative. Default: null.
- fileTypes (array): filetype: filters (supports multiple, max 10). Default: [].
- mobileResults (boolean): Return mobile SERPs if true; otherwise desktop. Default: false.
- includeUnfilteredResults (boolean): Include lower-quality results Google normally filters out. Default: false.
- saveHtml (boolean): Save page HTML directly to the dataset under html (increases dataset size). Default: false.
- saveHtmlToKeyValueStore (boolean): Save page HTML to the key-value store and link via htmlSnapshotUrl (may slow the run). Default: true.
- includeIcons (boolean): Include icon image data if found. Default: false.
- proxyConfiguration (object): Always uses proxies. Starts with GOOGLE_SERP and falls back to RESIDENTIAL (with 3 retries); persists on residential if used. Prefill: { "useApifyProxy": true, "apifyProxyGroups": ["GOOGLE_SERP"] }.
Example JSON output
```json
{
  "searchQuery": {
    "term": "python",
    "url": "https://www.google.com/search?q=python&hl=en&gl=us",
    "device": "DESKTOP",
    "page": 1,
    "type": "SEARCH",
    "domain": "google.com",
    "countryCode": "US",
    "languageCode": "en",
    "locationUule": null,
    "resultsPerPage": 10
  },
  "searchQueryTerm": "python",
  "url": "https://www.google.com/search?q=python&hl=en&gl=us",
  "hasNextPage": true,
  "serpProviderCode": "O",
  "resultsTotal": 123000000,
  "relatedQueries": [
    {
      "title": "learn python",
      "url": "https://www.google.com/search?q=learn+python&hl=en&gl=us"
    }
  ],
  "paidResults": [
    {
      "title": "Start Coding with Python",
      "url": "https://example.com/python-course",
      "displayedUrl": "example.com/python-course",
      "description": "",
      "emphasizedKeywords": ["python"],
      "siteLinks": [],
      "productInfo": {},
      "type": "paid",
      "position": 1
    }
  ],
  "paidProducts": [
    {
      "title": "Python Programming Book",
      "displayedUrl": "store.example.com/python-book",
      "prices": ["$19.99"]
    }
  ],
  "aiOverview": {
    "text": "Python is a popular, high-level programming language...",
    "references": [
      { "title": "Python.org", "url": "https://www.python.org" }
    ]
  },
  "organicResults": [
    {
      "title": "Welcome to Python.org",
      "url": "https://www.python.org",
      "displayedUrl": "www.python.org",
      "description": "Python is a programming language that lets you work quickly...",
      "emphasizedKeywords": ["python"],
      "siteLinks": [],
      "productInfo": {},
      "type": "organic",
      "position": 1,
      "icon": "https://www.google.com/s2/favicons?domain=python.org"
    }
  ],
  "suggestedResults": [
    {
      "title": "Welcome to Python.org",
      "url": "https://www.google.com/search?q=Welcome+to+Python.org&hl=en&gl=us",
      "type": "organic",
      "position": 1
    }
  ],
  "peopleAlsoAsk": [
    {
      "answer": null,
      "question": "What is Python used for?",
      "title": "What is Python used for?",
      "url": "https://example.com/what-is-python-used-for",
      "date": null
    }
  ],
  "customData": {
    "perplexitySearch": {
      "enablePerplexity": false,
      "searchRecency": null,
      "returnImages": false,
      "returnRelatedQuestions": false
    },
    "chatGptSearch": { "enableChatGpt": false },
    "maximumLeadsEnrichmentRecords": 0,
    "leadsEnrichmentDepartments": []
  },
  "htmlSnapshotUrl": "python_20260416_120000_p1.html",
  "html": "<!-- PAGE 1 --> ...",
  "_htmlPayloads": [["python_20260416_120000_p1.html", "<html>...</html>"]]
}
```
Note:
- aiOverview appears only when aiMode is not off and serpApiKey is provided.
- html appears only when saveHtml is true; htmlSnapshotUrl/_htmlPayloads appear only when saveHtmlToKeyValueStore is true.
- An icon field may be present on organicResults when includeIcons is true.
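Because these fields are conditional, downstream code should not assume they exist on every item. A minimal defensive-access sketch in Python (the summary dict keys are our own naming):

```python
def summarize_item(item):
    """Read optional output fields safely; disabled toggles yield None/empty."""
    overview = (item.get("aiOverview") or {}).get("text")
    icons = [r["icon"] for r in item.get("organicResults", []) if "icon" in r]
    snapshot = item.get("htmlSnapshotUrl")
    return {"aiOverviewText": overview, "icons": icons, "htmlSnapshot": snapshot}

# Item from a run with AI Overview off and icons not requested:
print(summarize_item({"organicResults": [{"title": "t", "url": "u"}]}))
# {'aiOverviewText': None, 'icons': [], 'htmlSnapshot': None}
```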
FAQ
Is there a free tier or trial?
Yes. You can run the actor on Apify and use trial minutes to evaluate performance and output. Billing is based on platform usage and the number of pages navigated.
Do I need to log in to Google or use cookies?
No. The actor scrapes publicly available Google Search result pages and does not require Google login or cookies.
Can I use it with Python or via API?
Yes. You can control this Google search results scraper API through Apify’s REST API, integrate into Python pipelines, and automate exports to JSON/CSV/Excel.
How many results can I collect per query?
Set resultsPerPage (1–100) and maxPagesPerQuery (e.g., 10 for ≈100 organic results). Actual per-page counts may vary due to Google filtering; enable includeUnfilteredResults to include more items Google normally filters out.
How does proxy handling work?
The actor always uses proxies for reliability. It starts with the GOOGLE_SERP proxy group and, if blocked, switches to RESIDENTIAL with retries and persists with residential for subsequent requests.
Can I target specific countries or languages?
Yes. Use countryCode (gl), languageCode (hl), and searchLanguage (lr). For hyperlocal results, set locationUule to emulate precise location.
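For example, a hypothetical input fragment targeting Spanish-language results in Spain could combine these parameters like this (the locationUule value is a placeholder, not a real encoded location):

```json
{
  "queries": "zapatillas running",
  "countryCode": "es",
  "languageCode": "es",
  "searchLanguage": "es",
  "locationUule": "PASTE_ENCODED_UULE_HERE"
}
```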
Does it capture AI Overviews?
Yes, optionally. Set aiMode to a non-off value and provide serpApiKey to include aiOverview { text, references } in the output for cross-engine AEO/GEO analysis.
Can I extract paid ads and shopping products?
Yes. The output includes paidResults (ads) and paidProducts (pricing information) when present on the page. You can also enable focusOnPaidAds to emphasize ad retrieval attempts with retry logic.
How do I export results to CSV or Excel?
Open the Output tab of your run on Apify and choose your preferred export format (JSON, CSV, or Excel). This makes it easy to extract Google search results to CSV for analysis.
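The same export is available over the API via the dataset items endpoint. A small Python sketch, assuming Apify's documented download URL format (the dataset ID and token below are placeholders):

```python
def dataset_items_url(dataset_id, fmt="csv", token=None, clean=True):
    """Build a download URL for Apify dataset items (fmt: json, csv, xlsx, ...)."""
    url = f"https://api.apify.com/v2/datasets/{dataset_id}/items?format={fmt}"
    if clean:
        url += "&clean=true"  # strip hidden/internal fields from the export
    if token:
        url += f"&token={token}"
    return url

print(dataset_items_url("abc123DatasetId", fmt="csv"))
# https://api.apify.com/v2/datasets/abc123DatasetId/items?format=csv&clean=true
```

Fetching that URL with any HTTP client (curl, requests, a BI tool's web connector) streams the dataset in the chosen format.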
What data does customData contain?
customData echoes your add-on settings and enrichment preferences: perplexitySearch, chatGptSearch, maximumLeadsEnrichmentRecords, and leadsEnrichmentDepartments. This helps downstream systems understand contextual run settings.
Final thoughts
The Google Search Results Scraper is built for accurate, scalable extraction of Google SERPs into structured, analytics-ready data. With robust proxy handling, advanced filters, optional AI Overviews, and real-time streaming, it’s a dependable Google search results crawler for marketers, developers, analysts, and researchers. Run it from the Apify console, integrate via API into Python workflows, and automate bulk runs for ongoing SERP intelligence. Start extracting smarter SERP data today and turn Google into a reliable data source for your SEO and research pipelines.
