Google Search Results Scraper

Extract Google search results data: organic listings, People Also Ask questions, related searches, and featured snippets. Supports pagination, country and language targeting, and advanced search operators. Export structured SERP data to JSON or CSV.

Pricing: Pay per event | Developer: Stas Persiianenko | Maintained by Community

Extract structured data from Google Search results pages at scale. Get organic results, People Also Ask questions, and related searches in one clean dataset, ready for SEO analysis, keyword research, and competitive intelligence. No API key required, no browser overhead, just fast and reliable SERP data.

🔍 What does Google Search Results Scraper do?

Google Search Results Scraper pulls structured SERP data from Google Search for any list of queries you provide. Enter your keywords, set your target country and language, and receive a clean dataset with every organic result, PAA question, and related search suggestion, all timestamped and ready to export.

Under the hood it uses CheerioCrawler (pure HTTP, no browser) paired with Apify's dedicated GOOGLE_SERP proxy. That combination means near-instant page loads with no CAPTCHA friction: the same reliability you'd get from a paid SERP API, but at a fraction of the cost.

Key capabilities at a glance:

  • Scrape organic results with title, URL, display URL, snippet, and rank position
  • Extract People Also Ask questions and answers for content ideation
  • Capture related search suggestions for keyword expansion
  • Target any country and language with standard Google locale parameters
  • Paginate automatically: set maxResultsPerQuery up to 100 and the scraper fetches as many pages as needed
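The pagination rule in the last bullet is a simple ceiling division. A quick sketch of the arithmetic (the `pages_needed` helper is illustrative, not part of the actor):

```python
import math

def pages_needed(max_results_per_query: int) -> int:
    """SERP pages fetched for one query: Google serves 10 organic
    results per page, and the input is clamped to the 1-100 range."""
    clamped = min(max(max_results_per_query, 1), 100)
    return math.ceil(clamped / 10)

print(pages_needed(10))   # 1 page
print(pages_needed(25))   # 3 pages
print(pages_needed(100))  # 10 pages
```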

👥 Who is Google Search Results Scraper for?

SEO specialists and agencies

  • Track keyword rankings across dozens of markets without paying for a monthly SERP API subscription
  • Audit SERP features (PAA boxes, related searches) for a full client keyword set
  • Pull competitive share-of-voice data at scale by checking which domains appear for target keywords

Content marketers and strategists

  • Mine People Also Ask questions to discover what your audience is actually searching for
  • Build topic cluster maps from related search suggestions
  • Identify content gaps by comparing SERP results against your own published pages

Competitive analysts and market researchers

  • Map competitor URL appearances across hundreds of industry keywords
  • Track brand visibility changes over time with scheduled daily runs
  • Compare SERP results between countries to understand localization opportunities

Lead generation and growth teams

  • Find directories, review sites, and aggregators ranking for your target keywords
  • Identify guest posting and link-building opportunities from organic results
  • Build prospecting lists of domains competing in your niche

✅ Why use Google Search Results Scraper?

  • No API key or login required: access public Google Search data directly, nothing to set up
  • Pure HTTP, no browser: CheerioCrawler runs on 256 MB memory, starts in seconds, and costs far less than Playwright-based alternatives
  • GOOGLE_SERP proxy included: Apify's dedicated proxy handles geo-targeting and anti-bot evasion automatically; no proxy configuration needed
  • Three result types in one run: organic results, PAA questions, and related searches are all scraped in the same request
  • Multi-market support: set country and language independently to target any locale Google supports
  • Pay per event: you pay only for what you scrape, not a flat monthly fee
  • Schedule and monitor: run on a cron schedule to track rankings over time; Apify handles retries, alerts, and dataset storage
  • Export anywhere: JSON, CSV, Excel, or connect via API, webhooks, Zapier, or Make
📊 Output fields

Field      | Type    | Description
query      | string  | The search query that produced this result
position   | integer | Rank position within the result type (1-based)
title      | string  | Page title for organic results, question text for PAA, suggestion text for related searches
url        | string  | Full destination URL (empty for some PAA and related search items)
displayUrl | string  | Breadcrumb URL shown in the SERP (e.g. www.example.com › blog › post)
snippet    | string  | Meta description or extracted page summary
resultType | string  | One of organic, people_also_ask, or related_search
scrapedAt  | string  | ISO 8601 timestamp of when the result was collected

Every field is returned for every item. Fields that Google does not supply (e.g. url on some related searches) are returned as empty strings rather than null, keeping downstream parsing simple.
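Because every field is always present as at least an empty string, splitting a downloaded dataset by type needs no null checks. A minimal sketch with fabricated sample items (field names from the table above):

```python
# Two fabricated dataset items in the documented shape.
items = [
    {"query": "web scraping tools", "position": 1, "title": "Best Tools 2026",
     "url": "https://example.com/tools", "displayUrl": "example.com › tools",
     "snippet": "Top tools compared.", "resultType": "organic",
     "scrapedAt": "2026-03-23T00:42:43.299Z"},
    {"query": "web scraping tools", "position": 1, "title": "What is web scraping?",
     "url": "", "displayUrl": "", "snippet": "",
     "resultType": "people_also_ask", "scrapedAt": "2026-03-23T00:42:43.316Z"},
]

# Group items by their resultType field.
by_type: dict[str, list[dict]] = {}
for item in items:
    by_type.setdefault(item["resultType"], []).append(item)

organic = by_type.get("organic", [])
paa = by_type.get("people_also_ask", [])
print(len(organic), len(paa))  # 1 1
```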

💰 How much does it cost to scrape Google search results?

This actor uses pay-per-event pricing: you pay only for what you scrape. No monthly subscription. All proxy and compute costs are included.

                | Free    | Starter ($29/mo) | Scale ($199/mo) | Business ($999/mo)
Per search page | $0.0069 | $0.006           | $0.00468        | $0.0036
100 pages       | $0.69   | $0.60            | $0.47           | $0.36
1,000 pages     | $6.90   | $6.00            | $4.68           | $3.60

Each run also has a flat start fee of $0.005 (one-time, regardless of how many queries you run).
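Putting those two numbers together: a run costs pages times the per-page rate, plus the flat start fee. A quick sanity check against the table (`run_cost` is a hypothetical helper; rates copied from above):

```python
# Per-page rates by plan, copied from the pricing table.
RATES = {"free": 0.0069, "starter": 0.006, "scale": 0.00468, "business": 0.0036}
START_FEE = 0.005  # flat per-run fee

def run_cost(pages: int, plan: str = "free") -> float:
    """Estimated cost of one run in USD."""
    return round(pages * RATES[plan] + START_FEE, 4)

print(run_cost(5))                 # 0.0395 -- the 'quick keyword check' below rounds this to ~$0.04
print(run_cost(1000, "business"))  # 3.605
```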

Real-world cost examples (Free tier):

Job                     | Queries                  | Pages     | Cost
Quick keyword check     | 5 queries × 10 results   | 5 pages   | ~$0.04
Weekly SEO audit        | 50 queries × 10 results  | 50 pages  | ~$0.35
Keyword research sprint | 100 queries × 20 results | 200 pages | ~$1.38
Agency monthly report   | 500 queries × 10 results | 500 pages | ~$3.45

Free plan estimate: Apify's free plan includes $5 of monthly credits. That covers roughly 720 search pages (about 7,200 organic results) before you pay anything. Scale up by upgrading your plan for volume discounts shown in the table above.

🚀 How to scrape Google Search results

  1. Go to Google Search Results Scraper on the Apify Store
  2. Click Try for free (no credit card required to start)
  3. In the Search section, enter your queries (one per line). Example: best CRM software 2026
  4. In the Output section, set Max Results per Query (default 10 = one SERP page per query)
  5. Set Country and Language to target your market (e.g. gb / en for UK English)
  6. Enable or disable People Also Ask and Related Searches to include those result types
  7. Click Start and wait for the run to complete (typically a few seconds per query)
  8. Download your results as JSON, CSV, or Excel, or connect via the API

Input example - basic keyword tracking:

{
  "queries": ["project management software", "best CRM for startups"],
  "maxResultsPerQuery": 10,
  "country": "us",
  "language": "en",
  "includePeopleAlsoAsk": true,
  "includeRelatedSearches": true
}

Input example - multi-market competitive analysis:

{
  "queries": ["web scraping tools", "data extraction software"],
  "maxResultsPerQuery": 20,
  "country": "gb",
  "language": "en",
  "includePeopleAlsoAsk": false,
  "includeRelatedSearches": false
}

Input example - deep keyword research with PAA mining:

{
  "queries": ["how to learn python", "python for beginners", "python tutorial"],
  "maxResultsPerQuery": 10,
  "country": "us",
  "language": "en",
  "includePeopleAlsoAsk": true,
  "includeRelatedSearches": true
}

โš™๏ธ Input parameters

Parameter              | Type             | Default | Required | Description
queries                | array of strings | (none)  | Yes      | Google search queries to scrape. One SERP page is fetched per query per 10 results. Supports Google operators: "exact phrase", site:domain.com, -exclude.
maxResultsPerQuery     | integer          | 10      | No       | Maximum organic results to extract per query. Google shows 10 per page; values above 10 trigger pagination (e.g. 20 → 2 pages, 100 → 10 pages). Range: 1-100.
country                | string           | us      | No       | Two-letter country code for the gl (geolocation) parameter. Examples: us, gb, de, fr, jp, br, in, au. Controls which Google data center serves results.
language               | string           | en      | No       | Language code for the hl (host language) parameter. Examples: en, de, fr, es, ja, pt. Controls the language of the SERP interface and result language preference.
includePeopleAlsoAsk   | boolean          | true    | No       | When enabled, extracts People Also Ask questions (and answers when Google includes them) as separate items with resultType: "people_also_ask".
includeRelatedSearches | boolean          | true    | No       | When enabled, extracts related search suggestions from the bottom of the SERP as separate items with resultType: "related_search".
maxRequestRetries      | integer          | 3       | No       | Number of retry attempts for failed HTTP requests. Increase to 5 or 10 if you encounter frequent timeouts on large batches. Range: 1-10.
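Client-side, the defaults above can be merged with your overrides before calling the actor. A hypothetical `build_input` helper (parameter names and defaults taken from the table; the validation logic is illustrative, not the actor's own):

```python
# Documented defaults for the optional parameters.
DEFAULTS = {
    "maxResultsPerQuery": 10,
    "country": "us",
    "language": "en",
    "includePeopleAlsoAsk": True,
    "includeRelatedSearches": True,
    "maxRequestRetries": 3,
}

def build_input(queries: list[str], **overrides) -> dict:
    """Merge overrides onto the documented defaults; queries is required."""
    if not queries:
        raise ValueError("queries is required and must be non-empty")
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {"queries": list(queries), **DEFAULTS, **overrides}

run_input = build_input(["best crm software"], country="gb", maxResultsPerQuery=20)
print(run_input["country"], run_input["maxResultsPerQuery"])  # gb 20
```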

📋 Output examples

Organic result:

{
  "query": "web scraping tools",
  "position": 1,
  "title": "Best Web Scraping Tools Tested & Ranked for 2026",
  "url": "https://www.scrapingbee.com/blog/web-scraping-tools/",
  "displayUrl": "www.scrapingbee.com › blog › web-scraping-tools",
  "snippet": "Compare the top web scraping tools by features, pricing, and ease of use. We tested 20+ tools so you don't have to.",
  "resultType": "organic",
  "scrapedAt": "2026-03-23T00:42:43.299Z"
}

People Also Ask result:

{
  "query": "web scraping tools",
  "position": 1,
  "title": "What is the best AI tool for web scraping?",
  "url": "",
  "displayUrl": "",
  "snippet": "",
  "resultType": "people_also_ask",
  "scrapedAt": "2026-03-23T00:42:43.316Z"
}

Related search result:

{
  "query": "web scraping tools",
  "position": 1,
  "title": "web scraping tools python",
  "url": "",
  "displayUrl": "",
  "snippet": "",
  "resultType": "related_search",
  "scrapedAt": "2026-03-23T00:42:50.100Z"
}

💡 Tips for best results

  • Start with 10 results per query: that is one SERP page and the most cost-efficient batch size. Increase to 20-50 only when you need deeper rankings.
  • Use specific, long-tail queries: broad single-word queries often return noisy results. Phrases like "best CRM for small business 2026" produce more actionable data.
  • Always set country and language together: a query like "best pizza" in us/en vs it/it returns completely different results. Both parameters are needed for accurate geo-targeting.
  • Use Google search operators: the scraper supports site:, "exact match", -minus, and other operators. Example: site:reddit.com best CRM to pull only Reddit results.
  • Mine PAA for content outlines: run 10-20 seed keywords with People Also Ask enabled, then group the questions by theme. Each cluster is a content brief.
  • Schedule weekly for rank tracking: set up a scheduled run in the Apify Console to track the same keyword set every week. Use the dataset diff to spot position changes.
  • Batch queries efficiently: one run handles hundreds of queries. Group all your keywords in a single run rather than making multiple small runs to save on start fees.
  • Filter by resultType in post-processing: the dataset mixes organic, PAA, and related search items. Filter on resultType === 'organic' when you only need rank data.
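The dataset-diff tip can be sketched in a few lines. This assumes two exported item lists in the shape shown under Output examples above; the sample data here is fabricated:

```python
def position_changes(old_items: list[dict], new_items: list[dict]) -> dict:
    """Map (query, url) -> (old_position, new_position) for organic
    results whose rank moved between two runs."""
    def index(items):
        return {(i["query"], i["url"]): i["position"]
                for i in items if i["resultType"] == "organic"}
    old, new = index(old_items), index(new_items)
    # Only keys present in both runs, and only where the position changed.
    return {k: (old[k], new[k]) for k in old.keys() & new.keys() if old[k] != new[k]}

last_week = [{"query": "crm", "url": "https://a.example", "position": 3, "resultType": "organic"}]
this_week = [{"query": "crm", "url": "https://a.example", "position": 1, "resultType": "organic"}]
print(position_changes(last_week, this_week))  # {('crm', 'https://a.example'): (3, 1)}
```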

🔗 Integrations

Google Search Results Scraper → Google Sheets (rank tracking dashboard) Connect via Apify's Google Sheets integration or export CSV. Schedule a weekly run for your keyword set and append results to a tracking sheet. Use a pivot table to compare position 1 ownership over time across your target keywords.

Google Search Results Scraper → Slack (brand mention alerts) Use a Zapier or Make scenario: trigger on new dataset items where url contains your competitor's domain, then post a Slack message with the query and position. Get notified whenever a competitor enters your tracked SERPs.

Google Search Results Scraper → Make (content brief automation) Build a Make workflow that runs the scraper on new keywords from an Airtable base, filters for resultType: "people_also_ask", and appends questions to a Notion content brief template. Fully automated topic research.

Google Search Results Scraper → Zapier (CMS integration) Trigger a Zapier zap when a scraper run completes, filter for position 1-3 organic results, and create cards in Trello or tasks in Asana to review each top-ranking competitor page.

Scheduled monitoring In the Apify Console, go to your actor run → Schedule → set a daily or weekly cron. Apify stores all historical datasets so you can query position changes via API over time.

Webhook processing Add a webhook URL to your run configuration to receive a POST with the dataset ID as soon as the scraper finishes. Pull the results immediately into your own pipeline for real-time processing.
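A minimal receiver for that webhook POST could look like the stdlib-only sketch below. The `resource.defaultDatasetId` field name is an assumption based on Apify's run-webhook payload; verify the exact shape against the Apify docs before relying on it:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_dataset_id(body: bytes):
    """Pull the dataset ID from a webhook payload.
    Field names are assumed, not guaranteed -- check Apify's webhook docs."""
    payload = json.loads(body or b"{}")
    return payload.get("resource", {}).get("defaultDatasetId")

class ApifyWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        dataset_id = extract_dataset_id(body)
        print("run finished, dataset:", dataset_id)  # fetch items from the API here
        self.send_response(200)
        self.end_headers()

# To listen on port 8080:
# HTTPServer(("", 8080), ApifyWebhook).serve_forever()
```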

๐Ÿ–ฅ๏ธ Using the Apify API

Node.js

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/google-search-scraper').call({
    queries: ['best project management software', 'CRM for startups'],
    maxResultsPerQuery: 10,
    country: 'us',
    language: 'en',
    includePeopleAlsoAsk: true,
    includeRelatedSearches: true,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();

// Filter organic results only
const organic = items.filter(item => item.resultType === 'organic');
console.log(`Found ${organic.length} organic results`);
organic.forEach(r => console.log(`#${r.position} ${r.title} - ${r.url}`));

Python

from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/google-search-scraper').call(run_input={
    'queries': ['best project management software', 'CRM for startups'],
    'maxResultsPerQuery': 10,
    'country': 'us',
    'language': 'en',
    'includePeopleAlsoAsk': True,
    'includeRelatedSearches': True,
})

items = client.dataset(run['defaultDatasetId']).list_items().items

# Separate result types
organic = [i for i in items if i['resultType'] == 'organic']
paa = [i for i in items if i['resultType'] == 'people_also_ask']
print(f"Organic: {len(organic)} | PAA questions: {len(paa)}")
for r in organic:
    print(f"  #{r['position']} {r['title']}")

cURL

curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~google-search-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "queries": ["best project management software"],
    "maxResultsPerQuery": 10,
    "country": "us",
    "language": "en",
    "includePeopleAlsoAsk": true,
    "includeRelatedSearches": true
  }'

Retrieve the results once the run finishes:

$ curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"

🤖 Use with AI agents via MCP

Google Search Results Scraper is available as a tool for AI assistants that support the Model Context Protocol (MCP).

Add the Apify MCP server to your AI client; this gives you access to all Apify actors, including this one:

Setup for Claude Code

$ claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/google-search-scraper"

Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com?tools=automation-lab/google-search-scraper"
    }
  }
}

Your AI assistant will use OAuth to authenticate with your Apify account on first use.

Example prompts

Once connected, try asking your AI assistant:

  • "Use automation-lab/google-search-scraper to find the top 10 Google results for 'best email marketing tools' in the US and summarize what types of sites are ranking"
  • "Scrape People Also Ask questions for 'how to start a podcast' and group them into topic clusters for a content strategy"
  • "Compare Google SERP results for 'CRM software' in the US vs Germany and tell me which domains appear in both markets"

Learn more in the Apify MCP documentation.

Web scraping of publicly available data is generally considered lawful. The landmark 2022 US Ninth Circuit ruling in hiQ Labs v. LinkedIn affirmed that accessing publicly available information does not constitute unauthorized computer access. Google Search results are publicly accessible to any user without login or authentication.

This scraper:

  • Only accesses data that is publicly visible to any browser user
  • Does not bypass authentication or access private data
  • Uses Apify's GOOGLE_SERP proxy with reasonable request rates built in
  • Does not store or redistribute personal data

You are responsible for:

  • Complying with your local data protection laws (GDPR, CCPA, etc.)
  • Ensuring your use of scraped data does not violate Google's Terms of Service
  • Using the data only for lawful purposes

For a comprehensive overview of web scraping legality, see Apify's guide to ethical web scraping.

โ“ FAQ

How many Google results can I scrape per query? Up to 100 organic results per query. Google shows 10 results per page, so 100 results = 10 SERP pages fetched. For most SEO use cases the top 10-20 results cover everything that matters. Higher maxResultsPerQuery values are useful for comprehensive competitive audits.

How fast is the scraper? Each SERP page fetches in 1-3 seconds with Apify's GOOGLE_SERP proxy. A batch of 50 queries at 10 results each (50 pages) typically completes in under 2 minutes. Pure HTTP with no browser means near-zero startup overhead.

How does this compare to the official Google Search API? Google's Custom Search JSON API costs $5 per 1,000 queries with a hard limit of 100 results per query and 10,000 queries per day. This scraper has no hard daily limits, returns more result types (PAA, related searches), and is significantly cheaper, especially for Scale and Business plan users. The trade-off is that it scrapes the public SERP rather than using an official API, so result formatting may occasionally change when Google updates its HTML.

Why are some url and snippet fields empty? People Also Ask items and related searches do not always include a destination URL or snippet in Google's HTML. The scraper returns all available fields; empty strings indicate Google did not provide that data for that result type. Organic results always include title, url, and snippet.

Why did my run return fewer results than maxResultsPerQuery? Google returns fewer than 10 results for some niche or low-volume queries; this is expected behavior from Google, not a scraper issue. The scraper also deduplicates results by URL, so if Google repeats a result across pages, the count may be lower than requested.

What should I do if results look wrong or positions seem off? First verify your country and language settings; SERP results vary significantly by locale. If the issue persists, try reducing your batch size or adding a delay by splitting your queries into smaller runs. Google occasionally rate-limits high-volume scrapers even with proxy rotation; the scraper retries automatically, but reducing throughput helps for very large batches.
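Splitting a large keyword set into smaller runs, as suggested above, is straightforward with a small hypothetical helper:

```python
def chunk_queries(queries: list[str], batch_size: int = 50):
    """Yield successive batches so each run stays small enough to
    avoid throughput-related rate limiting on very large jobs."""
    for start in range(0, len(queries), batch_size):
        yield queries[start:start + batch_size]

keywords = [f"keyword {n}" for n in range(120)]
batches = list(chunk_queries(keywords))
print([len(b) for b in batches])  # [50, 50, 20]
```

Each batch then becomes the queries array of its own run. Remember that every run adds the $0.005 start fee, so keep batches as large as reliability allows.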

🔗 Other Google and SEO scrapers