Google Search Scraper - Free, No API Key Required

DEPRECATED — Google's anti-scraping measures render this actor unreliable. Use SERP API services or other Web Data Labs scrapers instead.

Pricing
$5.00 / 1,000 results scraped
Rating
0.0 (0)

Developer
Web Data Labs

Actor stats
- 0 bookmarked
- 10 total users
- 5 monthly active users
- Last modified 4 days ago
Google Search Scraper — SERP Results From Multiple Engines, No API Key
Scrape organic search results from major search engines without an API key, OAuth flow, or paid subscription. Returns ranked results with URL, title, and snippet from DuckDuckGo, Bing, Brave, or Google — with automatic fallback when one engine throttles you.
Pay-per-result pricing means you only pay for results you actually receive. Output ships as JSON, CSV, or Excel.
Why a Multi-Engine Search Scraper?
Search-engine APIs are either expensive, restrictive, or both. Google's official Custom Search JSON API caps free use at 100 queries per day and starts at $5 per 1,000 queries beyond that. SerpAPI and Bright Data charge $50–$300 per month for modest volume. Independent developers, SEO consultants, and content teams often need just clean SERP data, fast — without committing to a monthly subscription.
This actor takes a different approach. It queries publicly visible search-engine result pages — the same pages anyone can view in a browser — across multiple engines, with automatic fallback. If DuckDuckGo throttles, it falls back to Bing. If Bing fails, it tries Brave or Google.
The pain points it solves:
- No public free API for bulk SERP data on most engines
- Paid SERP APIs charge per query whether the result is useful or not
- Single-engine reliance is fragile — if the one engine you depend on changes its anti-bot stance, you're stuck
- Localized results (German, Japanese, Brazilian-Portuguese) require careful country/language configuration that most APIs make awkward
This actor handles all of it. Submit a query, optionally pick an engine and locale, and get back clean, structured search results.
What You Get Per Result
Each result record includes:
- `position` — ranking position in search results (1-based)
- `title` — page title as shown in the SERP
- `url` — full URL of the result page
- `description` — snippet/description text from the search engine
- `source` — which search engine returned this result (`duckduckgo`, `bing`, `brave`, `google`)
Output is delivered as a structured Apify dataset — exportable to JSON, CSV, Excel, or XML, or fetchable via the Apify API.
Use Cases
1. SEO rank tracking
Track your website's ranking for target keywords across markets. Schedule daily runs for your top 50 keywords and store results in a database. Visualize ranking shifts over time, correlate with content updates, and demonstrate ROI to clients.
2. Content research and editorial planning
Before writing on a topic, pull the top 30 search results to understand what's already ranked. Map content gaps, identify ranking opportunities, and avoid wasting effort on keywords already saturated by stronger sites.
3. Competitor SERP analysis
For every keyword you care about, who outranks you? Pull the top 10 results, extract the URL hostnames, and you have a competitive map. Useful for SEO consultants reporting to clients and growth teams sizing the opportunity in a new content vertical.
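The hostname-counting step can be sketched in a few lines of Python. The records below are hypothetical, shaped like the actor's output:

```python
from collections import Counter
from urllib.parse import urlparse

def top_competitors(results):
    """Count how often each hostname appears in a list of SERP result records."""
    hosts = [urlparse(r["url"]).netloc.removeprefix("www.") for r in results]
    return Counter(hosts).most_common()

# Hypothetical records shaped like the actor's output
sample = [
    {"position": 1, "url": "https://www.hubspot.com/products/crm"},
    {"position": 2, "url": "https://www.salesforce.com/crm/"},
    {"position": 3, "url": "https://hubspot.com/comparisons/best-crm"},
]
print(top_competitors(sample))  # [('hubspot.com', 2), ('salesforce.com', 1)]
```

Aggregating these counts across all tracked keywords yields the competitive map described above.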
4. Local and international SEO research
Need to know what ranks for "best CRM" in Germany vs the UK vs the US? Set the country code and language code, run the same query three times, and compare. Localized SERPs differ enormously — relying on US-only data while marketing globally leaves money on the table.
5. Lead generation by search query
Search for "directory of [niche] in [city]" or "[product type] suppliers" and you have a starting list of leads. The actor returns clean, deduplicated URLs that drop directly into prospecting workflows.
6. Brand monitoring
Schedule a daily run on your brand name and key product names. Get alerts the moment a new page indexes (a customer review, a forum thread, a competitor's comparison post). Faster awareness = faster response.
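The "new page" detection is just a set difference against URLs collected in earlier runs. A minimal sketch, assuming you persist the seen-URL set yourself between scheduled runs (sample data is hypothetical):

```python
def new_urls(todays_results, seen_urls):
    """Return result records whose URL has not appeared in any earlier run."""
    return [r for r in todays_results if r["url"] not in seen_urls]

# Hypothetical state: URLs already seen in previous scheduled runs
seen = {"https://example.com/review", "https://example.org/thread"}
today = [
    {"position": 1, "url": "https://example.com/review"},
    {"position": 2, "url": "https://newsite.example/comparison-post"},
]
fresh = new_urls(today, seen)
print([r["url"] for r in fresh])  # ['https://newsite.example/comparison-post']
```

Anything in `fresh` is a candidate alert; after alerting, add its URLs to the stored set.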
7. Academic and policy research
Researchers studying online discourse, misinformation, search-result diversity, or platform behavior need bulk SERP data. This actor produces citation-ready datasets without per-query subscription costs.
8. SERP feature analysis
Cross-engine comparison reveals where each engine excels. DuckDuckGo and Brave often surface independent, less-SEO'd content. Bing and Google reward authoritative domains more aggressively. Pull the same query across all four and the diversity tells a story about each engine's algorithm.
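One rough way to put a number on that diversity is the Jaccard overlap between the URL sets two engines return for the same query. A minimal sketch with hypothetical records:

```python
def serp_overlap(results_a, results_b):
    """Jaccard overlap between the URL sets two engines returned for one query."""
    urls_a = {r["url"] for r in results_a}
    urls_b = {r["url"] for r in results_b}
    union = urls_a | urls_b
    return len(urls_a & urls_b) / len(union) if union else 0.0

# Hypothetical top-3 results from two engines for the same query
ddg = [{"url": "https://a.example"}, {"url": "https://b.example"}, {"url": "https://c.example"}]
goog = [{"url": "https://b.example"}, {"url": "https://c.example"}, {"url": "https://d.example"}]
print(serp_overlap(ddg, goog))  # 2 shared of 4 distinct URLs -> 0.5
```

Low overlap between two engines suggests their indexes or ranking algorithms diverge strongly for that query.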
Supported Engines
The actor supports four search engines plus an `auto` mode, each with its own characteristics:
| Engine | Best for | Notes |
|---|---|---|
| `duckduckgo` | Most reliable from datacenter IPs | Default. No tracking, no personalization. |
| `bing` | Localized results | Strong country and language support |
| `brave` | Privacy-focused queries | Independent search index — different result mix |
| `google` | Highest result quality | Most aggressive bot-detection — residential proxies recommended |
| `auto` | Best reliability | Tries DuckDuckGo first, falls back to Bing/Brave/Google |

Use `auto` for the most reliable runs — the actor handles fallback for you.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | — | The search query to look up |
| `maxResults` | integer | No | 10 | Maximum results to return (1–100) |
| `searchEngine` | string | No | `duckduckgo` | Engine to use: `auto`, `duckduckgo`, `bing`, `brave`, `google` |
| `language` | string | No | `en` | Language code for results (e.g. `en`, `de`, `fr`, `ja`) |
| `countryCode` | string | No | `us` | Country code for localized results (e.g. `us`, `uk`, `de`, `jp`) |
Example — basic search
```json
{
  "query": "best python web framework 2026",
  "maxResults": 20
}
```
Example — localized German search via Bing
```json
{
  "query": "beste web scraping tools",
  "maxResults": 15,
  "searchEngine": "bing",
  "language": "de",
  "countryCode": "de"
}
```
Example — auto-fallback for resilience
```json
{
  "query": "ergonomic mechanical keyboard reviews",
  "maxResults": 30,
  "searchEngine": "auto"
}
```
Example — site-restricted operator on Bing
```json
{
  "query": "site:github.com web scraper",
  "maxResults": 50,
  "searchEngine": "bing"
}
```
Output Example
Each result is a JSON object:
```json
{
  "position": 1,
  "title": "FastAPI — Modern, Fast (high-performance), web framework for Python",
  "url": "https://fastapi.tiangolo.com/",
  "description": "FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.7+ based on standard Python type hints. Key features: Fast, easy to code...",
  "source": "duckduckgo"
}
```
A typical run with maxResults: 20 returns 20 such records. Datasets can be exported as JSON, CSV, XML, or Excel from the Apify console, or fetched programmatically via the dataset API.
Calling the Actor Programmatically
Python — apify-client
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("cryptosignals/google-search-scraper").call(run_input={
    "query": "best static site generator 2026",
    "maxResults": 20,
    "searchEngine": "auto",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"#{item['position']} {item['title']}")
    print(f"   {item['url']}")
```
Python — track keyword rankings
```python
from apify_client import ApifyClient
import pandas as pd
from datetime import datetime

client = ApifyClient("YOUR_APIFY_TOKEN")

keywords = ["data engineer salary", "python tutorial", "best crm software"]
results = []

for kw in keywords:
    run = client.actor("cryptosignals/google-search-scraper").call(run_input={
        "query": kw,
        "maxResults": 10,
        "searchEngine": "auto",
    })
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        item["keyword"] = kw
        item["scraped_at"] = datetime.utcnow().isoformat()
        results.append(item)

df = pd.DataFrame(results)
print(df[["keyword", "position", "url", "title"]].head(30))
```
Node.js — apify-client
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('cryptosignals/google-search-scraper').call({
    query: 'site:medium.com web scraping',
    maxResults: 50,
    searchEngine: 'bing',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(r => console.log(`#${r.position} ${r.title} — ${r.url}`));
```
cURL — direct API call
```bash
curl -X POST \
  "https://api.apify.com/v2/acts/cryptosignals~google-search-scraper/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "best home espresso machine",
    "maxResults": 25,
    "searchEngine": "auto"
  }'
```
Tips for Best Results
- Use `auto` mode for the highest reliability without proxy configuration
- For SEO research, run the same query across multiple engines (`bing`, `brave`, `duckduckgo`) and compare rankings — different engines surface different result mixes
- Advanced operators like `site:`, `intitle:`, and `filetype:` work best with Google and Bing
- Set `language` and `countryCode` together for accurate localized results — passing only one can produce inconsistent geos
- For bulk keyword tracking, schedule recurring runs and store results in your own database for time-series analysis
- Keep `maxResults` reasonable (10–50 per query) — beyond 50 results, most queries don't have meaningful tail content anyway
Pricing
This actor uses Pay-per-event pricing — you only pay for results you actually receive. No charges for failed runs, empty searches, or compute time.
- Cost: per-result pricing as listed on the actor's pricing page
- Free tier: Apify's free plan includes $5/month of platform credits — at $5.00 per 1,000 results, that covers roughly 1,000 results (about 100 ten-result queries)
- Cost examples:
| Use case | Queries × Results | Approximate cost |
|---|---|---|
| Quick research | 5 queries × 10 results | < $0.50 |
| Daily rank check | 50 queries × 10 results | ~$2.50/day |
| Weekly SEO sweep | 100 queries × 20 results | ~$10/week |
| Research dataset | 1,000 queries × 25 results | ~$125 |
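These figures follow directly from the per-result rate. A quick sanity check of the arithmetic, assuming the listed $5.00 per 1,000 results:

```python
PRICE_PER_RESULT = 5.00 / 1000  # listed rate: $5.00 per 1,000 results

def estimated_cost(queries: int, results_per_query: int) -> float:
    """Approximate run cost: you pay only per result actually returned."""
    return queries * results_per_query * PRICE_PER_RESULT

print(f"${estimated_cost(50, 10):.2f}")    # daily rank check: $2.50
print(f"${estimated_cost(1000, 25):.2f}")  # research dataset: $125.00
```

Because pricing is per result received, runs that return fewer results than `maxResults` cost proportionally less.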
FAQ
Is scraping search engines legal?
Scraping publicly visible data — pages anyone can view without logging in — is generally considered lawful in many jurisdictions. In hiQ v. LinkedIn (2022), the U.S. Ninth Circuit held that scraping publicly accessible data likely does not violate the Computer Fraud and Abuse Act. This actor only collects data from publicly visible search-engine result pages. Always consult your own legal counsel for your specific use case.
Do I need a search-engine API key?
No. The actor doesn't use any search-engine API. It works with the publicly visible web pages — the same pages anyone can view in a browser.
Why DuckDuckGo as the default?
DuckDuckGo is the most lenient toward datacenter IPs and has no result personalization based on user history — which means more consistent, comparable results across runs. Use it as the default unless you specifically need Google-quality results or a particular engine's index.
What about Google's result quality vs the others?
Google has the largest index and tends to surface the most authoritative results — but it also has the most aggressive anti-bot detection. For Google-only runs, use Apify residential proxies. For a balance of quality and reliability, use auto mode.
How fast is it?
A query for 10 results typically completes in 5–15 seconds. Larger runs (50–100 results) usually finish within 30–60 seconds. Multiple queries can execute in parallel using Apify's concurrent-run feature.
Can I schedule recurring scrapes?
Yes. Apify's built-in scheduler runs any cron expression — hourly, daily, weekly. Pipe results to a webhook, Google Sheets, Airtable, Slack, or your own database. Combine with Apify's Zapier and Make integrations for downstream rank-tracking dashboards.
What output formats are supported?
JSON, CSV, Excel, and XML are all supported natively in the Apify console. You can also fetch results programmatically via the dataset API.
Will personalization affect my results?
The actor uses neutral, unauthenticated requests with rotating proxy IPs — there's no logged-in profile, browsing history, or location bias affecting results. If you set a country code, the results reflect that country's default SERP for an anonymous user.
Why This Actor vs Alternatives?
- No API key, no subscription. Most SERP APIs require a monthly commitment. This actor is pay-per-result.
- Multi-engine fallback. Other tools rely on a single engine — when it changes its anti-bot stance, you're stuck. This one falls back automatically across four engines.
- Pay-per-result. You only pay for results you actually receive. Failed runs cost zero.
- Localized results. Built-in country and language parameters work across all four engines without per-engine configuration.
- Apify-native integrations. Pipe results into Zapier, Make, Google Sheets, Airtable, Slack, or any of Apify's 50+ integrations.
- Maintained. The actor is updated as engines evolve their layouts and anti-bot stances.
Limitations
- Google results from datacenter IPs are unreliable — use residential proxies or `auto` fallback for Google-quality data
- Advanced search operators behave differently across engines; for `site:`, `inurl:`, and `filetype:`, Bing and Google are most consistent
- Maximum 100 results per query — for larger datasets, run multiple queries with refined keywords
- Search engines may return slightly different results on each run due to ranking-algorithm freshness and anti-bot rotation
Related Actors From Web Data Labs
- YouTube Search Scraper — Search YouTube and pull bulk video results
- Google Maps Scraper — Local business listings, reviews, ratings
- Bing Image Search Scraper — Image search results from Bing
- Crunchbase Scraper — Company data, funding, and startup research
About Web Data Labs
This actor is built and maintained by Web Data Labs — a team focused on production-grade web data extraction across jobs, e-commerce, social media, software reviews, and search data. We publish 100+ public actors on the Apify platform, all pay-per-result.
Need a custom build, enterprise SLA, or private actor for your team? Reach out via web-data-labs.com or the Apify contact form.