
Google AI Overview Tracker & AI Visibility API

Pricing

from $7.00 / 1,000 prompt checks


Google AI Overview tracker and AI search visibility API for paid weekly SEO, GEO, AEO, and LLM visibility reports. Check prompt packs, extract AIO status. Guide: https://konabayev.com/tools/google-ai-overview-tracker/?utm_source=apify_info&utm_medium=referral&utm_campaign=google-ai-overview-tracker



Developer

Tugelbay Konabayev

Maintained by Community

Actor stats

  • Bookmarked: 0
  • Total users: 9
  • Monthly active users: 7
  • Last modified: 2 hours ago


Screenshots: AI visibility prompt tracking, prompt input to AI visibility dataset, and source-domain dataset preview.

Track whether Google shows an AI Overview for your SEO, GEO, AEO, AI search visibility, and LLM visibility prompts; which external source domains appear on the rendered SERP; and whether your brand, domain, or competitors appear in the generated answer.

Use it as a lightweight Google AI Overview visibility tracker: prompt in, AIO status out, with answer text, source domains, brand mentions, domain citation checks, competitor mentions, and a clean aiOverviewAvailable=false result when Google does not show an AI Overview.

This actor is intentionally narrow. It supports Google AI Overview only. It does not claim ChatGPT, Gemini, Perplexity, Claude, Copilot, Grok, or Meta AI coverage.

Best fit: paid weekly AI visibility monitoring for commercial prompts, client reporting, SEO/GEO dashboards, and source-domain gap analysis. Default run: a 5-prompt monitoring pack that produces one row per prompt, with visibility status and score fields ready for dashboards. Pricing: pay per prompt checked. Each prompt produces one dataset row and one prompt-check billing event.

Search Demand This Targets

Local DataForSEO baseline generated on 2026-05-08 shows commercial demand around this workflow:

| Query | Location | Monthly volume | CPC |
| --- | --- | --- | --- |
| answer engine optimization | US | 1,900 | $17.35 |
| ai overview tracker | US | 590 | $22.65 |
| ai search visibility | US | 390 | $38.33 |
| ai visibility tracker | US | 320 | $22.15 |
| llm visibility | US | 260 | $25.98 |

Use this actor when those queries map to a real reporting job: weekly prompt checks, source-domain gap analysis, client SEO/GEO reporting, or competitor visibility monitoring.

Fast Answers

What is a Google AI Overview tracker?

A Google AI Overview tracker checks specific Google search prompts and records whether an AI Overview appears, what answer text was visible, which external source domains were extracted from the rendered SERP, and whether your brand, target domain, or competitors appeared.

Can this track AI visibility for SEO and GEO reports?

Yes. This actor is built for SEO, GEO, AEO, and AI visibility reporting where you need repeatable prompt checks, brand mention tracking, domain citation checks, competitor mention detection, and CSV/API-ready datasets.

Does it track ChatGPT, Gemini, Perplexity, or Claude?

No. This actor is intentionally limited to Google AI Overview. That narrower scope keeps the output honest and avoids mixing Google SERP behavior with unrelated AI answer engines.

Starter monitoring run

Start with the 5-prompt monitoring pack below. Each prompt opens a rendered Google SERP, so this is still small enough to validate quickly while producing a useful report shape.

```json
{
  "prompts": [
    "best answer engine optimization tools",
    "best AI visibility tracker",
    "how to optimize for Google AI Overviews",
    "best GEO tools for SaaS SEO",
    "AI search visibility software"
  ],
  "brandName": "Semrush",
  "domain": "semrush.com",
  "competitors": ["Ahrefs", "Surfer SEO", "Conductor"],
  "country": "US",
  "language": "en",
  "maxPrompts": 5,
  "timeoutSeconds": 30,
  "maxConcurrency": 1
}
```

What it returns

For each prompt, the actor returns one dataset item with:

  • aiOverviewAvailable - whether Google displayed an AI Overview
  • answerText - extracted AI Overview text when available
  • sourceLinks and sourceDomains - external URLs found on the rendered SERP
  • targetBrandMentioned - whether your brand appears in the AI Overview text
  • targetDomainCited - whether your domain appears in source links
  • competitorMentions - which competitor names appeared
  • visibilityStatus and visibilityScore - dashboard-ready classification for owned, competitor, source-gap, no-AIO, blocked, timeout, or error outcomes
  • sourceCount and competitorCount - quick numeric fields for BI tools and weekly reporting
  • engineStatus - ok_aio, ok_no_aio, blocked, timeout, or error

If Google does not show an AI Overview for a prompt, the run still succeeds and returns aiOverviewAvailable=false.
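A minimal sketch of consuming these fields once the dataset has been downloaded as a list of dicts. The function name is illustrative; the field names match the output reference documented here, and no-AIO rows deliberately stay in the coverage denominator:

```python
from collections import Counter

def summarize_run(items):
    """Summarize one run: engineStatus distribution and AIO coverage.

    Rows with aiOverviewAvailable=False are valid results, not failures,
    so they remain in the denominator for coverage.
    """
    statuses = Counter(row.get("engineStatus", "error") for row in items)
    valid = [r for r in items if r.get("engineStatus") in ("ok_aio", "ok_no_aio")]
    with_aio = sum(1 for r in valid if r.get("aiOverviewAvailable"))
    return {
        "statuses": dict(statuses),
        "aioCoverage": with_aio / len(valid) if valid else 0.0,
    }

rows = [
    {"engineStatus": "ok_aio", "aiOverviewAvailable": True},
    {"engineStatus": "ok_no_aio", "aiOverviewAvailable": False},
    {"engineStatus": "timeout"},
]
print(summarize_run(rows))  # coverage is 0.5: one AIO row out of two valid rows
```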

Larger input example

```json
{
  "prompts": [
    "best answer engine optimization tools",
    "best AI visibility tracker",
    "how to optimize for Google AI Overviews",
    "best GEO tools for SaaS SEO",
    "AI search visibility software",
    "LLM visibility platform",
    "best AEO software for agencies",
    "AI search optimization tools",
    "Google AI Overview monitoring",
    "AI visibility reporting software"
  ],
  "brandName": "Semrush",
  "domain": "semrush.com",
  "competitors": ["Ahrefs", "Surfer SEO", "Conductor"],
  "country": "US",
  "language": "en",
  "maxPrompts": 10,
  "timeoutSeconds": 30,
  "maxConcurrency": 1
}
```

For a quick validation, reduce maxPrompts to 1. Otherwise, use the default 5-prompt pack to verify that the actor can render Google for your country/language and that the output fields fit your report. For production monitoring, run 10-50 prompts per run and schedule the task weekly.

Useful paid prompt sets usually come from:

  • Google Search Console queries with commercial intent
  • paid-search keywords where CPC is high
  • DataForSEO or Google Ads keyword rows such as answer engine optimization, ai search visibility, and llm visibility
  • "best X", "X alternatives", and "X vs Y" category prompts
  • sales objections and buyer questions
  • competitor pages and comparison pages

For agencies, create one Apify Task per client and send the dataset to Google Sheets, BigQuery, Slack, or a reporting webhook.

Output example

```json
{
  "prompt": "best answer engine optimization tools",
  "engine": "google_ai_overview",
  "country": "US",
  "language": "en",
  "aiOverviewAvailable": true,
  "answerText": "AI Overview ...",
  "sourceDomains": ["semrush.com", "searchengineland.com", "ahrefs.com"],
  "targetBrand": "Semrush",
  "targetDomain": "semrush.com",
  "targetBrandMentioned": true,
  "targetDomainCited": true,
  "competitorMentions": ["Ahrefs", "Surfer SEO"],
  "visibilityStatus": "owned_visibility",
  "visibilityScore": 100,
  "sourceCount": 3,
  "competitorCount": 2,
  "engineStatus": "ok_aio",
  "runtimeSeconds": 3.2
}
```

Output field reference

Each dataset item is one prompt check. The schema is intentionally flat enough for CSV, Google Sheets, webhooks, and BI dashboards, while still keeping source links and competitors as arrays for automation.

| Field | Type | Description |
| --- | --- | --- |
| prompt | string | The search prompt/query that was checked. |
| engine | string | Always google_ai_overview for this actor. |
| queryUrl | string | Rendered Google Search URL with country/language parameters. |
| answerUrl | string | Final browser URL after redirects or consent flows. |
| country | string | Google country code used for the check, for example US. |
| language | string | Google interface language, for example en. |
| aiOverviewAvailable | boolean | Whether an AI Overview block was detected. |
| answerText | string | Extracted AI Overview text when present; empty when no AIO was shown. |
| sourceLinks | array | External links found on the rendered SERP after Google/link wrappers are filtered. |
| sourceDomains | array | Deduplicated domains from sourceLinks. |
| targetBrand | string | Brand name supplied in input. |
| targetDomain | string | Normalized target domain supplied in input. |
| targetBrandMentioned | boolean | Whether the brand name appears in the extracted AI Overview text. |
| targetDomainCited | boolean | Whether the target domain appears among extracted source domains. |
| competitorMentions | array | Competitor names found in the AI Overview text or fallback SERP text window. |
| visibilityStatus | string | Dashboard status such as owned_visibility, competitor_visible, source_gap, no_ai_overview, blocked, timeout, or error. |
| visibilityScore | integer | 0-100 score for quick sorting and weekly trend dashboards. |
| sourceCount | integer | Number of unique extracted source domains. |
| competitorCount | integer | Number of detected competitor mentions. |
| engineStatus | string | ok_aio, ok_no_aio, blocked, timeout, or error. |
| error | string | Short error message when rendering or parsing fails. |
| blockMarkers | array | Detected block/CAPTCHA phrases when Google blocks the session. |
| checkedAt | string | UTC timestamp for the check. |
| runtimeSeconds | number | Runtime for that prompt attempt. |

Use cases

  • AI visibility tracking for target prompts
  • GEO / AEO monitoring for target prompts
  • AI Overview source tracking
  • Brand visibility checks
  • Domain citation checks
  • Competitor visibility checks
  • SEO source-gap research
  • Weekly prompt monitoring for marketing teams

Who should use it

This actor is useful when the question is not "what rank do I have in classic organic search?" but "does Google's AI answer show up, which brands are mentioned, and which external domains appear around that AI answer?"

Typical users:

  • SEO teams tracking AI Overview exposure for commercial prompts
  • content teams checking whether their domain appears near answer-style SERPs
  • agencies monitoring client and competitor visibility across prompt lists
  • SaaS teams watching category prompts such as "best CRM for small business"
  • affiliate teams checking which publisher domains are surfaced for buyer-intent queries
  • internal growth teams building weekly AEO/GEO dashboards

Suggested monitoring workflow

  1. Start with 5-20 high-intent prompts from Search Console, paid-search terms, sales calls, or competitor pages.
  2. Run the actor with maxConcurrency=1 and timeoutSeconds=30.
  3. Export the dataset to Google Sheets, BigQuery, or a webhook.
  4. Track these weekly fields: aiOverviewAvailable, targetBrandMentioned, targetDomainCited, sourceDomains, and competitorMentions.
  5. Split prompts into buckets:
    • AIO exists and your brand/domain appears: maintain and monitor.
    • AIO exists but competitors appear: improve topical coverage and citation-worthy pages.
    • AIO exists but only publishers/tools appear: identify source-gap opportunities.
    • No AIO: monitor less often or test related prompt variants.
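The bucket split above can be sketched as a small classifier over the documented output fields. The function name and return labels are illustrative, not part of the actor's schema:

```python
def bucket(row):
    """Assign one dataset row to a monitoring bucket, per the split above."""
    if not row.get("aiOverviewAvailable"):
        return "no_aio"        # monitor less often or test prompt variants
    if row.get("targetBrandMentioned") or row.get("targetDomainCited"):
        return "owned"         # maintain and monitor
    if row.get("competitorMentions"):
        return "competitor"    # improve topical coverage and citation-worthy pages
    return "source_gap"        # only publishers/tools appear near the answer

print(bucket({"aiOverviewAvailable": True, "competitorMentions": ["Ahrefs"]}))  # competitor
```

Running it weekly over the full dataset gives you per-bucket counts you can trend in a sheet or dashboard.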

Prompt list examples

For SaaS:

```json
{
  "prompts": [
    "best CRM software for startups",
    "best email marketing software for ecommerce",
    "best customer support software for SaaS"
  ],
  "brandName": "HubSpot",
  "domain": "hubspot.com",
  "competitors": ["Salesforce", "Zoho", "Pipedrive"],
  "country": "US",
  "language": "en",
  "maxPrompts": 3,
  "timeoutSeconds": 30,
  "maxConcurrency": 1
}
```

For local services:

```json
{
  "prompts": [
    "best emergency plumber in Austin",
    "best HVAC repair company in Austin",
    "how much does water heater repair cost in Austin"
  ],
  "brandName": "Example Plumbing",
  "domain": "exampleplumbing.com",
  "competitors": ["Roto-Rooter", "Mr. Rooter"],
  "country": "US",
  "language": "en",
  "maxPrompts": 3
}
```

For affiliate/content sites:

```json
{
  "prompts": [
    "best standing desk for home office",
    "best ergonomic chair for back pain",
    "best monitor light bar"
  ],
  "brandName": "Example Reviews",
  "domain": "examplereviews.com",
  "competitors": ["Wirecutter", "Tom's Guide", "PCMag"],
  "country": "US",
  "language": "en"
}
```

Comparison with classic SERP rank tracking

| Need | Classic rank tracker | Google AI Overview Tracker |
| --- | --- | --- |
| Track organic position | Yes | No |
| Detect whether AI Overview appears | No | Yes |
| Extract answer text | No | Yes, when AIO is present |
| Track brand mention in AI answer | No | Yes |
| Track domains surfaced near AI answer | Limited | Yes |
| Monitor competitor mentions in generated answer | No | Yes |
| Best cadence | Daily/weekly rankings | Weekly prompt visibility checks |

Use both when you need a complete search view: rank tracker for organic positions, this actor for AI Overview visibility.

API usage

Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("tugelbay/google-ai-overview-tracker").call(run_input={
    "prompts": [
        "best answer engine optimization tools",
        "best AI visibility tracker",
        "how to optimize for Google AI Overviews",
        "best GEO tools for SaaS SEO",
        "AI search visibility software",
    ],
    "brandName": "Semrush",
    "domain": "semrush.com",
    "competitors": ["Ahrefs", "Surfer SEO", "Conductor"],
    "country": "US",
    "language": "en",
    "maxPrompts": 5,
    "timeoutSeconds": 30,
    "maxConcurrency": 1,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["prompt"], item["visibilityStatus"], item["visibilityScore"])
```

JavaScript

```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_APIFY_TOKEN" });

const run = await client.actor("tugelbay/google-ai-overview-tracker").call({
    prompts: [
        "best answer engine optimization tools",
        "best AI visibility tracker",
        "how to optimize for Google AI Overviews",
        "best GEO tools for SaaS SEO",
        "AI search visibility software",
    ],
    brandName: "Semrush",
    domain: "semrush.com",
    competitors: ["Ahrefs", "Surfer SEO", "Conductor"],
    country: "US",
    language: "en",
    maxPrompts: 5,
    timeoutSeconds: 30,
    maxConcurrency: 1,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items[0]);
```

Webhook / scheduled monitoring

Create an Apify Task with your prompt list, then schedule it weekly. Use a webhook on run success to send the dataset to:

  • Google Sheets
  • BigQuery
  • Slack
  • Make/Zapier
  • your internal dashboard
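A receiving endpoint only needs the run's dataset ID from the webhook body. The sketch below assumes Apify's standard run webhook payload, where the run object sits under `resource`; verify the shape with a test webhook from your own account before relying on it:

```python
def dataset_id_from_webhook(payload):
    """Pull the run's default dataset ID out of a run-success webhook body.

    Assumes the standard Apify webhook payload with the run object under
    "resource"; returns None if the shape differs.
    """
    resource = payload.get("resource") or {}
    return resource.get("defaultDatasetId")

sample = {
    "eventType": "ACTOR.RUN.SUCCEEDED",
    "resource": {"defaultDatasetId": "abc123"},
}
print(dataset_id_from_webhook(sample))  # abc123
```

With the dataset ID in hand, fetch the items via the API client shown above and forward them to Sheets, BigQuery, or Slack.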

Validation

Validation snapshot from 2026-04-25. This actor was built only after a 20-prompt stability probe:

  • 20/20 clean headless Google SERP checks
  • 19/20 prompts showed an AI Overview
  • 1/20 prompts cleanly returned no AI Overview
  • Median local probe runtime: 3.0 seconds. Apify runtime varies with proxy/session state.

Related validation file:

docs/research/google-ai-overview-tracker-validation-results.md

Limitations

  • Google decides whether AI Overview appears. Some queries, countries, or sessions may return no AI Overview.
  • Google SERP layouts change. The actor returns engineStatus and error fields so monitoring jobs can detect layout or block changes.
  • Browser rendering is required; this is not a raw HTTP-only scraper.
  • Keep concurrency low. The default is 1, maximum is 3.
  • The default input checks 5 prompts so the first run produces a useful monitoring report. Reduce maxPrompts to 1 only for a tiny validation run.

Pricing

Pay per event:

  • prompt-check - one event per prompt checked

Base price: $0.01 / prompt-check.

Cost examples

| Run size | Input | Prompt-check events | Approx. prompt cost |
| --- | --- | --- | --- |
| Tiny validation | 1 prompt | 1 | ~$0.01 |
| Default starter monitor | 5 prompts | 5 | ~$0.05 |
| Small category monitor | 10 prompts | 10 | ~$0.10 |
| Weekly SaaS prompt set | 50 prompts | 50 | ~$0.50 |
| Four weekly 50-prompt checks | 200 prompts/month | 200 | ~$2.00/month |

Actual platform cost also depends on Apify compute/runtime and your account plan. Keep prompt batches small until you know how often AIO appears for your niche.

Troubleshooting

| Issue | Likely cause | What to do |
| --- | --- | --- |
| aiOverviewAvailable=false | Google did not show an AI Overview for that query/session/country | This is a valid result. Try related prompts or another country/language. |
| engineStatus=blocked | Google showed CAPTCHA, unusual-traffic, or block text | Keep maxConcurrency=1, use residential proxy, retry later, or reduce batch size. |
| engineStatus=timeout | SERP or AIO did not finish rendering within timeout | Increase timeoutSeconds to 45-60 for harder prompts. |
| Empty answerText with source links | Classic SERP links were available but no AIO text was detected | Treat as no-AIO or inspect the rendered SERP manually. |
| Competitor not detected | Name spelling differs from input | Add short brand variants to competitors, for example Salesforce and Salesforce CRM. |
| Target domain not cited | Domain does not appear in extracted external source links | Use sourceDomains to identify which domains are currently surfaced. |
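These remedies can be automated with a simple retry policy. The parameter names match the input schema documented above, but the thresholds and the function itself are illustrative:

```python
def retry_plan(row):
    """Map a failed dataset row to adjusted input settings for a retry.

    Thresholds are illustrative; parameter names (timeoutSeconds,
    maxConcurrency, maxPrompts) match the actor's input schema.
    """
    status = row.get("engineStatus")
    if status == "timeout":
        # SERP/AIO did not render in time: raise the timeout toward 60s.
        return {"timeoutSeconds": 60, "maxConcurrency": 1}
    if status == "blocked":
        # Keep concurrency at 1 and retry later with a smaller batch.
        return {"maxConcurrency": 1, "maxPrompts": 1}
    return None  # ok_aio and ok_no_aio rows need no retry

print(retry_plan({"engineStatus": "timeout"}))  # {'timeoutSeconds': 60, 'maxConcurrency': 1}
```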

FAQ

Does this track ChatGPT, Gemini, Perplexity, Claude, Copilot, Grok, or Meta AI?

No. This actor intentionally supports Google AI Overview only. Other engines require separate validation because login, browser mode, rendering behavior, and source extraction differ.

Are the extracted source links exactly the AI Overview citations?

No. The actor returns external source links found on the rendered SERP after Google wrapper links are cleaned and Google-owned domains are filtered. This is useful for source-domain monitoring, but it should not be treated as a guaranteed one-to-one list of AIO citation cards.

Why does the same prompt sometimes return different results?

Google personalizes and experiments with SERP layouts. Country, language, time, browser session, proxy, and Google's own AIO rollout can change whether an AI Overview appears.

Should I run this daily?

For most teams, weekly is enough. AI Overview visibility can move, but daily checks may add noise and cost before you have a stable prompt set.

Can I use this for client reporting?

Yes, if you explain the metric correctly: it tracks AI Overview availability, answer text, brand/domain visibility, and extracted external source domains for specific prompts at a specific time.

What is a good first prompt set?

Start with 10-20 prompts that already matter to the business: high-impression Search Console queries, paid search terms, "best X" category searches, sales objections, and competitor-comparison prompts.

Data quality notes

  • No-AIO is a valid output, not a failed run.
  • engineStatus should be monitored; filter out blocked, timeout, and error before calculating visibility rates.
  • Use exact brand names and common short variants in competitors.
  • Keep prompt wording stable over time if you want trend charts.
  • Store raw datasets for repeatable historical reporting.

Reporting template

For a weekly report, group the dataset by prompt and track these columns:

| Metric | How to calculate |
| --- | --- |
| Visibility score | Average visibilityScore across valid rows, or trend it by prompt week over week. |
| AIO coverage | Count rows where aiOverviewAvailable=true divided by total valid rows. |
| Brand visibility | Count rows where targetBrandMentioned=true divided by rows with AIO. |
| Domain visibility | Count rows where targetDomainCited=true divided by rows with source domains. |
| Competitor pressure | Count prompts where competitorMentions is not empty. |
| Source diversity | Count unique sourceDomains across the prompt set. |
| Block/error rate | Count blocked, timeout, and error statuses divided by total rows. |
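The metric definitions above translate directly into code. A sketch over downloaded dataset rows, excluding blocked/timeout/error rows from the visibility math as recommended (the function name and output keys are illustrative):

```python
def weekly_metrics(rows):
    """Compute the weekly report metrics from dataset rows.

    Blocked/timeout/error rows are excluded from visibility rates and
    surfaced separately as blockErrorRate.
    """
    valid = [r for r in rows if r.get("engineStatus") in ("ok_aio", "ok_no_aio")]
    with_aio = [r for r in valid if r.get("aiOverviewAvailable")]
    with_sources = [r for r in valid if r.get("sourceDomains")]
    domains = {d for r in valid for d in r.get("sourceDomains", [])}
    return {
        "aioCoverage": len(with_aio) / len(valid) if valid else 0.0,
        "brandVisibility": (sum(bool(r.get("targetBrandMentioned")) for r in with_aio)
                            / len(with_aio) if with_aio else 0.0),
        "domainVisibility": (sum(bool(r.get("targetDomainCited")) for r in with_sources)
                             / len(with_sources) if with_sources else 0.0),
        "competitorPressure": sum(bool(r.get("competitorMentions")) for r in valid),
        "sourceDiversity": len(domains),
        "blockErrorRate": (len(rows) - len(valid)) / len(rows) if rows else 0.0,
    }

rows = [
    {"engineStatus": "ok_aio", "aiOverviewAvailable": True,
     "targetBrandMentioned": True, "targetDomainCited": True,
     "sourceDomains": ["semrush.com", "ahrefs.com"],
     "competitorMentions": ["Ahrefs"]},
    {"engineStatus": "ok_no_aio", "aiOverviewAvailable": False},
    {"engineStatus": "blocked"},
]
print(weekly_metrics(rows))
```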

Keep the report honest by separating visibility changes from collection quality. A week with many blocked or timeout rows should be marked as a data-quality issue, not as a real visibility drop.

Good prompt hygiene

  • Use searcher language, not internal product jargon.
  • Avoid changing prompt wording every week if you need trend lines.
  • Include buying-intent prompts, comparison prompts, and problem-aware prompts.
  • Keep country/language stable for each tracked prompt set.
  • Do not mix unrelated markets in one dashboard; separate SaaS, local, ecommerce, and affiliate prompts.
  • Add competitor aliases when brand names have common abbreviations.
  • Review no-AIO prompts monthly and replace prompts that never trigger useful AI Overview data.

Operational guardrails

Large prompt lists are supported, but small batches are easier to debug. A practical operating pattern is:

  1. run the default 5-prompt starter monitor;
  2. inspect visibilityStatus, visibilityScore, and engineStatus distribution;
  3. expand to 25-50 prompts;
  4. schedule weekly;
  5. alert only on valid rows, not on blocked/error rows.

If you need hundreds of prompts, split them into multiple Tasks by category, country, or client. That keeps datasets easier to inspect and reduces the chance that one transient block affects the whole report.

Changelog

  • 0.1.23 (2026-05-11) - switched the Store/input example to a 5-prompt paid monitoring pack and added dashboard-ready visibility status, score, source count, and competitor count fields.
  • 0.1.10 (2026-04-26) - expanded README, clarified source-link semantics, and kept one-prompt first-run path.
  • 0.1 (2026-04-26) - initial Google AI Overview-only MVP.