Free Google Trends API — Interest Over Time + Related
Free Google Trends API Scraper — Interest Over Time, Related Queries, Regional Breakdown
A free Google Trends API alternative that pulls keyword interest over time, related queries, related topics, and regional interest for any search term in any country. Hosted Apify Actor — no captchas to solve, no IPs to rotate, no pytrends rate limits to debug. Pay only for the trend queries you run.
What you get
For every keyword you submit:
- Interest over time — full timeseries, 0-100 normalized, with an `is_partial` flag so you can drop the unfinished current period
- Related queries (top + rising) — the queries Google associates with your keyword, ranked by search interest, plus a `rising` list with `extracted_value` percentages
- Related topics (top + rising) — entity-level associations (Knowledge Graph topics), useful for topical clustering and content strategy
- Regional breakdown — interest by country/region (`geo_code`, `geo_name`, `value`, `formatted_value`, `has_data`) when you toggle the flag on
- Configurable timeframe — pytrends-format strings (`today 12-m`, `today 5-y`, `now 7-d`) or explicit `YYYY-MM-DD YYYY-MM-DD` ranges
- 50+ countries and languages — pass `geo=NL`, `hl=nl-NL`, etc.; leave `geo` blank for worldwide
- Built-in retry-on-429 — rotates a fresh Google session from the cookie pool on every rate-limited widget call (up to 8 retries)
- Optional Apify residential proxy — opt-in for the related-topics widget, which is consistently rate-limited from datacenter IPs
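The retry-on-429 behavior can be sketched in a few lines. Here `fetch_widget` and `new_session` are hypothetical hooks standing in for the Actor's internals, not its real API:

```python
import random
import time

class RateLimitedError(Exception):
    """Raised when Google answers a widget call with HTTP 429."""

def fetch_with_retry(fetch_widget, new_session, max_retries=8):
    """Retry a rate-limited widget call, rotating to a fresh session each time.

    fetch_widget(session) returns the widget payload or raises
    RateLimitedError on a 429; new_session() yields a fresh cookie jar
    from the pool. Both are illustrative stand-ins.
    """
    session = new_session()
    for attempt in range(max_retries + 1):
        try:
            return fetch_widget(session)
        except RateLimitedError:
            if attempt == max_retries:
                raise  # pool exhausted: surface the 429 to the caller
            session = new_session()  # rotate cookies before retrying
            time.sleep(min(2 ** attempt, 30) + random.random())  # jittered backoff
```

The key point is that backoff alone isn't enough: Google throttles per (cookie, IP) pair, so each retry also swaps in a fresh session.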
Why scrape Google Trends instead of using pytrends
The obvious free alternative is pytrends, the unofficial Python library
that's been the de facto Google Trends client since 2016. It works —
until it doesn't.
Rate limits. Google's /trends/api/widgetdata/* endpoints throttle
per (cookie, IP) pair. A bare pytrends client starts eating 429s within
a few dozen requests. The fix is maintaining a pool of fresh Google
sessions, rotating proxies, and implementing backoff — and then updating
that infrastructure every time Google tweaks the protocol. A side
project, not a weekend script.
The related-topics widget specifically. RELATED_TOPICS is the
strictest of all the Trends widgets and most pytrends installs get
empty data on the first call. This Actor handles it: synthetic GA/utm
cookies, Chrome 131 TLS impersonation via curl_cffi, optional Apify
residential proxy for the hardest queries.
The Apify cloud also frees you from running a long-lived scraper on your laptop. Submit 100 keywords, walk away, come back to a clean JSON dataset. No Heroku worker, no cron tab, no captcha-solving service.
Input
| Field | Default | Description |
|---|---|---|
| `keywords` | *required* | Array of search terms — one trend query per keyword |
| `geo` | `US` | Two-letter country code (`US`, `GB`, `NL`, `DE`); empty = worldwide |
| `hl` | `en-US` | Language code (`en-US`, `nl-NL`, `de-DE`) |
| `timeframe` | `today 12-m` | pytrends format: `today 12-m`, `today 5-y`, `now 7-d`, or `YYYY-MM-DD YYYY-MM-DD` |
| `include_related_queries` | `true` | Top + rising related search terms |
| `include_related_topics` | `true` | Top + rising related topics (entities) |
| `include_regional_interest` | `false` | Country/region breakdown (slower) |
| `use_cookies` | `true` | Rotate a fresh session per widget call |
| `use_proxy` | `false` | Route through the Tim IPv6 proxy (default off — direct usually works better) |
| `use_residential_proxy` | `false` | Use Apify residential proxy for the hardest related-topics calls (incurs Apify proxy CUs) |
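As a sketch, the input above can be assembled and sanity-checked client-side before starting a run. The field names mirror the table; the timeframe pattern is an assumption inferred from the formats listed, not the Actor's actual validator:

```python
import re

# Accepts the documented pytrends-style strings plus explicit date ranges.
TIMEFRAME_RE = re.compile(
    r"^(today \d+-[my]|now \d+-[dH]|all|\d{4}-\d{2}-\d{2} \d{4}-\d{2}-\d{2})$"
)

def build_run_input(keywords, geo="US", hl="en-US", timeframe="today 12-m",
                    include_regional_interest=False,
                    use_residential_proxy=False):
    """Build the Actor's run input with the documented defaults."""
    if not keywords:
        raise ValueError("at least one keyword is required")
    if not TIMEFRAME_RE.match(timeframe):
        raise ValueError(f"unrecognized timeframe: {timeframe!r}")
    return {
        "keywords": list(keywords),
        "geo": geo,
        "hl": hl,
        "timeframe": timeframe,
        "include_related_queries": True,
        "include_related_topics": True,
        "include_regional_interest": include_regional_interest,
        "use_cookies": True,
        "use_proxy": False,
        "use_residential_proxy": use_residential_proxy,
    }
```

With the official `apify-client` package, the resulting dict goes straight into `client.actor(...).call(run_input=...)`, and the results land in the run's default dataset.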
Output
```json
{
  "keyword": "pizza",
  "geo": "US",
  "hl": "en-US",
  "timeframe": "today 12-m",
  "interest_over_time": [
    {"date": "May 5, 2025", "value": 78, "is_partial": false},
    {"date": "May 12, 2025", "value": 81, "is_partial": false}
  ],
  "related_queries_top": [
    {"query": "pizza near me", "value": 100, "extracted_value": 100,
     "link": "/trends/explore?q=pizza+near+me", "formatted_value": "100"}
  ],
  "related_queries_rising": [
    {"query": "detroit style pizza", "value": 250, "extracted_value": 250,
     "link": "/trends/explore?q=detroit+style+pizza", "formatted_value": "+250%"}
  ],
  "related_topics_top": [...],
  "related_topics_rising": [...],
  "interest_by_region": [
    {"geo_code": "US-NY", "geo_name": "New York", "value": 100,
     "formatted_value": "100", "has_data": true}
  ]
}
```
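A typical first step with this result object is dropping the provisional row before computing statistics. The third data point below is an invented `is_partial` example added for illustration, not part of the sample output:

```python
# One result object, trimmed to the timeseries for this example.
result = {
    "keyword": "pizza",
    "interest_over_time": [
        {"date": "May 5, 2025", "value": 78, "is_partial": False},
        {"date": "May 12, 2025", "value": 81, "is_partial": False},
        {"date": "May 19, 2025", "value": 64, "is_partial": True},  # unfinished week
    ],
}

# Keep only complete periods, then summarize.
complete = [p for p in result["interest_over_time"] if not p["is_partial"]]
mean_interest = sum(p["value"] for p in complete) / len(complete)
print(f"{result['keyword']}: {mean_interest:.1f} mean interest "
      f"over {len(complete)} complete periods")
```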
Use cases
SEO content strategy — pull rising related queries for your money
keywords, sort by extracted_value, and build a content calendar around
gaining queries before competitors notice. Cheaper than Ahrefs Trending
Topics and updates daily.
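For example, sorting a rising-queries list by `extracted_value` (the rows here are illustrative; "Breakout" is the Trends label for queries whose growth is too large to express as a percentage, and it comes with a correspondingly large `extracted_value`):

```python
rising = [
    {"query": "detroit style pizza", "extracted_value": 250, "formatted_value": "+250%"},
    {"query": "pizza oven outdoor", "extracted_value": 120, "formatted_value": "+120%"},
    {"query": "gluten free pizza", "extracted_value": 4750, "formatted_value": "Breakout"},
]

# Breakout queries carry very large extracted_value numbers, so a plain
# descending sort surfaces them ahead of the percentage gainers.
ranked = sorted(rising, key=lambda q: q["extracted_value"], reverse=True)
for q in ranked:
    print(f"{q['formatted_value']:>8}  {q['query']}")
```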
Ecommerce demand forecasting — track interest over time for product
categories across countries to time inventory buys and plan launch
windows. Combine interest_over_time with regional breakdown to spot
the country where a product is taking off 12-16 weeks before peak.
Content strategist — feed related_topics_rising into a
topical-cluster brief so writers cover the entities Google's Knowledge
Graph associates with the seed keyword. The 5-year timeframe doubles as
a seasonality detector.
Marketing analyst / agency — run nightly trend tracking across a 50-200 keyword client portfolio and surface anomalies. The cookie pool plus residential-proxy fallback sustains batch jobs that pytrends can't.
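An anomaly check over the `interest_over_time` output can be as small as a z-score filter; a minimal sketch, assuming weekly buckets and a roughly stationary baseline:

```python
import statistics

def flag_anomalies(series, z_threshold=2.5):
    """Return the dates of points far from the series mean.

    `series` is an interest_over_time list; provisional (is_partial)
    points are excluded from both the baseline and the flags.
    """
    values = [p["value"] for p in series if not p["is_partial"]]
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    if stdev == 0:
        return []  # flat series: nothing to flag
    return [p["date"] for p in series
            if not p["is_partial"] and abs(p["value"] - mean) / stdev > z_threshold]
```

For long portfolios, a rolling window baseline would be more robust than the global mean used here, but the idea is the same.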
How it compares
| Tool | Pricing model | Maintenance | Hosted? |
|---|---|---|---|
| pytrends (Python lib) | Free | You debug rate limits, captchas, cookie expiry | No — runs on your box |
| Google Trends API Alpha | Closed alpha, application-gated | Google manages | Yes |
| DataForSEO Trends API | $0.0006-$0.002/req minimums + plan | Vendor handles | Yes |
| Bright Data SERP / Trends | $1.50-$3.50/1k requests + proxy | Vendor handles | Yes |
| SerpApi Google Trends | $50-$250/mo plans | Vendor handles | Yes |
| This Actor | $0.005/trend_query, no plan | Apify handles | Yes |
DataForSEO, Bright Data, and SerpApi all work — they're just on subscription plans that don't suit ad-hoc use. This Actor is pay-per-event on Apify: pay for the queries you run, walk away when you stop.
Pricing
$0.005 per trend_query (one keyword = one event). Includes interest over time, related queries, related topics, and optional regional breakdown in the same charge.
No actor-start fee. No compute units billed separately. The optional
residential proxy adds Apify proxy compute units at $1.20/GB only when you
explicitly toggle use_residential_proxy=true.
Limits and gotchas
- Timeframe normalization — values are normalized 0-100 within the timeframe. Two timeframes for the same keyword aren't directly comparable; use a fixed window when benchmarking
- Sparse keywords return zeros — under ~50 monthly searches in your geo, Trends returns an all-zero timeseries (Google policy, not a scraper bug)
- Related topics can be flaky — toggle `use_residential_proxy=true` for the highest reliability on the topics widget specifically
- Regional breakdown adds latency — each country pull is a separate widget call; expect ~3-5x runtime when enabled
- `is_partial: true` rows — the most recent period is provisional; drop or annotate it for analysis
- One keyword per request — Google's `/explore` accepts up to 5 keywords per comparisonItem, but the values rescale; we send one keyword per request for consistent cross-keyword scaling
- No sub-7-day granularity beyond `now` ranges — Google returns hourly data only for `now` timeframes; older ranges get weekly/monthly buckets
FAQ
Does this use the official Google Trends API Alpha?
No — that API is in closed alpha and gated behind an application. This
Actor scrapes the same /trends/api/widgetdata/* endpoints that the
trends.google.com web UI uses, with Chrome 131 TLS impersonation and a
rotating cookie pool to avoid rate limits.
Can I get hourly trend data?
Yes, for the now 7-d, now 4-H, and now 1-H timeframes. Anything
older than 7 days returns weekly buckets (timeframes up to 5 years) or
monthly buckets (all and longer ranges) — that's a Google constraint,
not a scraper limitation.
How do I compare interest across multiple countries?
Run the same keyword with different geo codes in separate calls. Each
result is normalized 0-100 within its own geo, so for true cross-country
comparison either (a) use the regional breakdown of one global call, or
(b) check the extracted_value fields rather than the raw value.
Why does the related-topics widget sometimes return empty?
The topics widget has the strictest rate limits on Trends. We retry up to
8 times with fresh cookies, but if your keyword has very low volume or
Trends is heavily throttling that day, the array will be empty. Toggle
use_residential_proxy=true for the highest success rate.
Is the data the same as what trends.google.com shows?
Yes — same widget endpoints, same req payloads. The only difference is
that this scraper bundles all four widgets (timeseries, related queries,
related topics, regional) into a single result object per keyword.
Related Actors
- Free Keyword Research Tool — monthly search volume, CPC, SEO difficulty, and intent for any seed keyword across 50+ countries. Pair with this Actor: use Trends to spot rising queries, then run them through keyword research for volume + difficulty.
- Free Google News API — pull news articles for any keyword. Pair with Trends to correlate news cycles with search interest spikes.
- Free Google AI Overview Scraper — track which queries surface AI Overviews and which domains Google's AI cites. Combine with Trends to find rising queries where AI Overview cannibalisation is most acute.