Google Search Scraper — SERP Results, 2x Cheaper
Scrape Google search results. Get organic results, People Also Ask, related searches, featured snippets. Perfect for SEO and keyword research. 2x cheaper than alternatives.
Pricing: Pay per usage
Developer: Ken Digital
🔍 Google Search Scraper — SERP Results
Scrape Google Search Engine Results Pages (SERPs) for any query. Extract organic results, People Also Ask, related searches, featured snippets, and knowledge panels.
2x cheaper than competitors. Pay just $0.003 per search — most alternatives charge $0.006–$0.012+.
✨ What You Get
| Result Type | Extracted Fields |
|---|---|
| Organic Results | Title, URL, description, position |
| People Also Ask | Questions from the PAA box |
| Related Searches | Suggested searches at bottom of SERP |
| Featured Snippets | Answer box content + source URL |
| Knowledge Panel | Entity title + description |
🚀 Use Cases
- SEO Analysis — Track keyword rankings across countries and languages
- Keyword Research — Discover related searches and PAA questions to target
- Competitor Monitoring — See who ranks for your target keywords
- Content Strategy — Find featured snippet opportunities
- Market Research — Analyze search landscapes for any industry
- Lead Generation — Find businesses ranking for specific services
💰 Pricing Comparison
| Actor | Price per Search |
|---|---|
| This actor | $0.003 |
| Competitor A | $0.006 |
| Competitor B | $0.008 |
| Competitor C | $0.012 |
Save 50%+ on your SERP data costs.
📥 Input
```json
{
  "queries": ["best CRM software", "project management tools"],
  "resultsPerPage": 10,
  "language": "en",
  "country": "us",
  "maxPages": 1
}
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `queries` | array | required | Search queries to scrape |
| `resultsPerPage` | number | `10` | Results per page (1–100) |
| `language` | string | `"en"` | Google interface language (`en`, `fr`, `de`, `es`…) |
| `country` | string | `"us"` | Country for localized results (`us`, `uk`, `fr`, `de`…) |
| `maxPages` | number | `1` | SERP pages per query (1–10) |
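For instance, a localized run collecting two pages of French results might look like this (query and values are illustrative):

```json
{
  "queries": ["meilleur logiciel CRM"],
  "resultsPerPage": 10,
  "language": "fr",
  "country": "fr",
  "maxPages": 2
}
```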
📤 Output Example
```json
[
  {
    "searchQuery": "best CRM software",
    "position": 1,
    "title": "The 10 Best CRM Software for 2026 - Forbes",
    "url": "https://www.forbes.com/advisor/business/software/best-crm-software/",
    "description": "We evaluated dozens of CRM platforms to find the best options for small businesses, enterprises, and every team in between.",
    "type": "organic",
    "page": 1
  },
  {
    "searchQuery": "best CRM software",
    "position": 0,
    "title": "What is the best CRM for small businesses?",
    "url": "",
    "description": "",
    "type": "people_also_ask",
    "page": 1
  },
  {
    "searchQuery": "best CRM software",
    "position": 0,
    "title": "best free crm software",
    "url": "",
    "description": "",
    "type": "related_search",
    "page": 1
  }
]
```
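Each dataset item carries a `type` field, so downstream code can split a mixed result stream into organic results, PAA questions, and related searches. A minimal sketch (the `items` literal mirrors the example above, with descriptions trimmed):

```python
from collections import defaultdict

# Dataset items, shaped like the output example above.
items = [
    {"searchQuery": "best CRM software", "position": 1,
     "title": "The 10 Best CRM Software for 2026 - Forbes",
     "type": "organic", "page": 1},
    {"searchQuery": "best CRM software", "position": 0,
     "title": "What is the best CRM for small businesses?",
     "type": "people_also_ask", "page": 1},
    {"searchQuery": "best CRM software", "position": 0,
     "title": "best free crm software",
     "type": "related_search", "page": 1},
]

# Group by result type, then order organic hits by page and position.
by_type = defaultdict(list)
for item in items:
    by_type[item["type"]].append(item)

organic = sorted(by_type["organic"], key=lambda i: (i["page"], i["position"]))
paa_questions = [i["title"] for i in by_type["people_also_ask"]]
related = [i["title"] for i in by_type["related_search"]]
```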
🌍 Supported Countries & Languages
Works with any Google-supported country code (gl parameter) and language code (hl parameter):
- Countries: us, uk, ca, au, de, fr, es, it, br, in, jp, kr, and 100+ more
- Languages: en, fr, de, es, pt, it, nl, ja, ko, zh, ar, and 50+ more
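These map to Google's standard query parameters. A sketch of how `gl`/`hl` (plus `start` for pagination) fit into a search URL — `build_search_url` is illustrative, not a helper exported by this actor:

```python
from urllib.parse import urlencode

def build_search_url(query: str, country: str = "us",
                     language: str = "en", start: int = 0) -> str:
    """Build a Google search URL with localization parameters.

    gl = country for localized results, hl = interface language,
    start = result offset for pagination (10 results per page by default).
    """
    params = {"q": query, "gl": country, "hl": language, "start": start}
    return "https://www.google.com/search?" + urlencode(params)

# Second page of French-localized results:
url = build_search_url("best CRM software", country="fr", language="fr", start=10)
```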
⚡ Features
- GDPR bypass — Automatically handles EU consent screens
- Anti-blocking — Rotating user agents, random delays, exponential backoff on rate limits
- Pagination — Scrape multiple SERP pages per query
- Lightweight — Minimal dependencies (httpx only, no browser needed)
- Fast — HTTP-based scraping, no headless browser overhead
- Pay per event — Only pay for successful searches, not compute time
🔧 Technical Details
- Uses direct HTTP requests (no Playwright/Puppeteer)
- Parses raw HTML with regex, so no HTML-parser dependency is needed
- Handles HTTP 429 with exponential backoff (up to 3 retries)
- Random delays between requests (1–4s) to avoid detection
- Supports Google's various HTML layouts and class names
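The retry behavior can be sketched as follows — `fetch_with_backoff` is an illustrative stand-in, not the actor's actual code. The `fetch` callable can be anything that returns an object with a `.status_code` attribute (e.g. an `httpx.Response`):

```python
import time

def fetch_with_backoff(fetch, url, max_retries=3, base_delay=2.0):
    """Retry HTTP 429 responses with exponential backoff.

    `fetch(url)` must return an object with a `.status_code` attribute.
    Delays grow as base_delay * 2**attempt (2s, 4s, 8s by default).
    """
    response = fetch(url)
    for attempt in range(max_retries):
        if response.status_code != 429:
            break
        time.sleep(base_delay * 2 ** attempt)
        response = fetch(url)
    return response
```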
📋 Tips
- Start small — Test with 1 query before running bulk jobs
- Use pagination wisely — Most useful results are on page 1
- Localize — Set country + language to get region-specific results
- Batch queries — More efficient than running separate tasks per query
- Monitor results — Google's HTML structure changes occasionally; report issues
🐛 Limitations
- Google may serve CAPTCHAs or JS-only pages under heavy use or from datacenter IPs — the actor retries but can't solve them. On Apify's infrastructure with residential proxies, this works reliably.
- HTML class names change — we use multiple fallback patterns to cover various layouts, but Google's markup evolves.
- Knowledge panel extraction may be incomplete for complex entities.
- Featured snippets vary widely in format; we extract the visible text but structured data is not guaranteed.
- No guaranteed scraping — Use at your own risk; Google's TOS applies.
🧪 Local Testing
Important: Google often serves JavaScript-only pages to datacenter IPs, which will result in no parseable results when running locally on a VPS. The actor is designed to run on Apify's platform with residential proxies, where it works reliably.
To test the parser locally with a saved HTML file:
```shell
# Save a real Google SERP HTML to a file (e.g., using a browser's "Save As"),
# then run the parser against it:
python3 -m tests.run sample.html
```
Or write a quick script:
```python
from src.main import parse_organic_results, parse_people_also_ask, parse_related_searches

with open('sample.html') as f:
    html = f.read()

organic = parse_organic_results(html, 'test query', 1)
print(f'Organic: {len(organic)} results')
```
📦 What's Inside
```
google-serp-scraper/
├── .actor/
│   ├── actor.json           # Actor metadata
│   ├── input_schema.json    # Input configuration UI
│   └── pay_per_event.json   # $0.003 per search event
├── src/
│   ├── __init__.py
│   ├── __main__.py
│   └── main.py              # Scraper logic
├── requirements.txt         # Dependencies: apify, httpx
├── Dockerfile               # Apify Python 3.12 base
└── README.md                # This file
```
🔒 Disclaimer
This actor is provided as-is. Use responsibly and respect Google's Terms of Service. The author is not responsible for any blocks, CAPTCHAs, or legal issues arising from misuse.
📄 License
MIT
🔗 Integration Examples
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("joyouscam35875/google-serp-scraper").call(
    run_input={"queries": ["best tools 2026"], "resultsPerPage": 10}
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('joyouscam35875/google-serp-scraper').call({
    queries: ['best tools 2026'],
    resultsPerPage: 10,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Make / Zapier / n8n
Use the Apify integration — search for this actor by name in the Apify app connector. No code needed.