AEO Citation Monitor — Brand Tracking in AI Search
Track how your brand appears in AI search responses across ChatGPT, Claude, Gemini, Perplexity, xAI Grok, and Google AI Overviews. The Actor sends your prompts to each engine, parses every response into a structured record (brand mentions, cited URLs, competitor positions, list-rank, optional sentiment), and emits the dataset for your dashboard / SQL / spreadsheet.
🎯 5-minute quickstart
If you've never run this Actor before, the fastest path from "open the page" to "first usable record" is:
1. Click Start with this input
Paste this into the JSON tab of the input form and click Start. The only things you need to edit are the brand and competitor names.
{"prompts": ["What is the best <category> for <audience>?"],"brand": {"name": "Your Brand Name","aliases": ["Common abbreviation", "Legal entity name"],"ownedDomains": ["yourbrand.com"]},"competitors": [{ "name": "Competitor A", "ownedDomains": ["competitora.com"] },{ "name": "Competitor B", "ownedDomains": ["competitorb.com"] }],"providers": ["perplexity", "anthropic"],"acknowledgePublicBrandsOnly": true}
This is a ~$0.03 starter run (one Perplexity record at $0.010 plus one Anthropic record at $0.020; see Pricing below) that completes in about 30 seconds. It runs 1 prompt across 2 fast engines, enough to verify the wiring works and to understand the output shape. Once you're comfortable, add "openai", "google-gemini", "xai-grok", and "google-aio" to providers for the full 6-engine sweep, as shown below.
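For the full sweep, the providers array becomes (provider ids exactly as quoted above):

```json
{
  "providers": ["perplexity", "openai", "anthropic", "google-gemini", "xai-grok", "google-aio"]
}
```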
2. Watch records arrive in Storage → Dataset
While the run executes, click Storage → Dataset on the run page to see records as they're emitted. With 1 prompt × 2 providers, you'll see exactly 2 records.
3. The 5 fields you'll likely care about
| Field | What it tells you |
|---|---|
| `brandMentions[].mentionCount` | How many times the AI mentioned your brand in this response |
| `brandMentions[].rankPosition` | The position your brand appeared at in the AI's enumerated list (1 = first; undefined = the response wasn't list-shaped) |
| `competitorMentions[]` | The same shape, for every competitor you configured. Compare your `mentionCount` against theirs. |
| `citations[].domain` | Every URL the AI cited as a source for this response, with the source domain extracted |
| `citations[].isOwned` | `true` when the AI cited YOUR domain — you're the source it pulled from |
If brandMentions is empty across all your records, see Troubleshooting → Why isn't my brand mentioned? at the bottom of this page.
🔍 Annotated example record
Here's a real record from a run tracking Clash Coach AI (an AI-powered Clash Royale coaching app) across Perplexity. Annotations explain each field as if you were reading it for the first time.
```json
{
  // STABLE IDENTIFIER — sha256 hash of (prompt + provider + run-start time).
  // The same prompt + provider in a re-run produces a different recordId
  // (different runStartedAt) but the responseText may be identical.
  "recordId": "448ef4a63f1a...",

  // RUN GROUPING — every record from one Actor invocation shares this UUID.
  // Use it for reporting "the May 7 weekly run."
  "runId": "ef810d23-4fce-4216-8509-789801b95c16",

  // PROMPT — verbatim what was sent to the provider.
  "promptText": "What is the best AI coach app for Clash Royale players?",

  // CATEGORY — buyer-supplied or template-supplied tag for dashboard pivots.
  "promptCategory": "comparison",

  // WHICH AI ENGINE answered. Always one of the 7 ProviderId enum values.
  "provider": "perplexity",

  // PROVIDER-SPECIFIC MODEL — useful when you want to know whether you got
  // gpt-5.5 vs gpt-5.4-mini, claude-sonnet vs claude-opus, etc.
  "model": "sonar",

  // TRACKING vs UTILITY — "tracking" records are the primary data buyers
  // monitor; "utility" records are sentiment/discovery helper outputs.
  "modelTier": "tracking",

  // GROUNDED? true = AI used live web search. false = answered from training only.
  // Critical for trust — grounded answers reflect real-time reality.
  "groundingUsed": true,

  // WALL-CLOCK LATENCY for this single call. Useful for debugging slow runs
  // and identifying flaky providers.
  "responseLatencyMs": 11342,

  // UPSTREAM COST — what the AI provider charged us in USD. Excludes Apify's
  // per-event price; this is the wholesale cost. Buyers can audit pricing changes.
  "costUsd": 0.000786,

  // TRANSPORT — which path served the call. "direct" = the provider's own
  // API. "vercel" or "openrouter" = a gateway fallback was used. "serp-direct"
  // / "serp-fallback" = AIO via DataForSEO or SerpAPI respectively.
  "transport": "direct",

  // LOCALE — only present when input.locale is set. method = "native" means
  // the provider's own user_location param applied; "system-prompt-instruction"
  // means we prepended a "respond as if helping a user in DE" prefix.
  "locale": { "country": "US", "language": "en", "method": "native" },

  // FULL RESPONSE TEXT — verbatim. Use this if you want to re-parse for any
  // reason or fact-check what the AI said.
  "responseText": "Several AI-powered tools can help Clash Royale players...",

  // YOUR BRAND'S MENTIONS — Clash Coach AI was mentioned once. The AI listed
  // it 4th in an enumerated coaching-app list. Up to 3 surrounding-text
  // snippets are captured per mention to keep records bounded.
  "brandMentions": [
    {
      "brand": "Clash Coach AI",
      "aliases": ["Clash Coach AI"],
      "mentionCount": 1,
      "rankPosition": 4, // ← 4th in the AI's coaching-apps list
      "contexts": [
        {
          "text": "...Clash Coach AI offers AI-driven battle analysis...",
          "charStart": 312,
          "charEnd": 326
        }
        // ...up to 3 contexts
      ]
    }
  ],

  // COMPETITOR MENTIONS — same shape as brandMentions, one entry per
  // configured competitor that appeared in this response. Royale Buddy
  // ranked above Clash Coach AI; that's a real competitive signal.
  "competitorMentions": [
    { "brand": "Royale Buddy", "aliases": ["Royale Buddy"], "mentionCount": 2, "rankPosition": 1 },
    { "brand": "RoyaleAPI", "aliases": ["RoyaleAPI"], "mentionCount": 1, "rankPosition": 3 }
  ],

  // CITATIONS — every URL the AI cited. clashcoachai.com is a grounded source
  // (the AI pulled from your own pages, isOwned: true); royaleapi.com matches
  // a configured competitor's ownedDomains, so isCompetitor: true.
  "citations": [
    {
      "url": "https://clashcoachai.com/features",
      "domain": "clashcoachai.com",
      "title": "Clash Coach AI — Features",
      "citationType": "grounded", // ← "grounded" = AI declared as source;
                                  //   "inline" = AI mentioned URL in text
      "isOwned": true,            // ← Clash Coach AI's own page — they
                                  //   successfully placed in the AI's
                                  //   grounding sources
      "isCompetitor": false,
      "rankPosition": 2
    },
    {
      "url": "https://royaleapi.com/blog/best-coaching-apps",
      "domain": "royaleapi.com",
      "title": "Best Clash Royale Coaching Apps",
      "citationType": "grounded",
      "isOwned": false,
      "isCompetitor": true, // ← matches a configured competitor's ownedDomains
      "rankPosition": 1
    }
    // ...typically 5-15 citations per response, varies by provider
  ]
}
```
The "winning" record looks like: non-zero brandMentions[0].mentionCount + rankPosition: 1, 2, or 3 + at least one isOwned: true citation. That signals "the AI knows you exist, ranks you well, and pulls from your own content."
What you get per record
Each (prompt × provider) call produces one record with:
- `responseText` — the full provider response, verbatim
- `brandMentions[]` — your brand's match count, surrounding contexts, and list-rank position
- `competitorMentions[]` — same shape, for each configured competitor
- `citations[]` — every URL the response cited, marked as `inline` (text-mentioned) or `grounded` (declared as source), with `isOwned` / `isCompetitor` flags
- `costUsd`, `responseLatencyMs`, `transport`, `groundingUsed` — observability you can audit
- Optional `locale` — the country/language applied for this run, with `method: 'native'` or `'system-prompt-instruction'` so dashboards can distinguish the two
The full Zod schema is published as @apify-portfolio/aeo-schema on npm — drop it into Looker, Hex, or your own ETL with type-safe shape guarantees.
Providers
| Provider | Default model | Transport |
|---|---|---|
| Perplexity | sonar | direct API → Vercel AI Gateway → OpenRouter |
| OpenAI ChatGPT | gpt-5.5 | direct API → Vercel AI Gateway → OpenRouter |
| Anthropic Claude | claude-sonnet-4-6 | direct API → Vercel AI Gateway → OpenRouter |
| Google Gemini | gemini-3.1-pro-preview | direct API → Vercel AI Gateway → OpenRouter |
| xAI Grok | grok-4.20-non-reasoning | direct API → Vercel AI Gateway → OpenRouter |
| Google AI Overviews | n/a (Google) | DataForSEO → SerpAPI |
Every LLM provider supports a 3-tier transport chain. If your direct key 429s or 5xxs, the Actor falls back to Vercel AI Gateway, then OpenRouter. Records are stamped with transport: 'direct' | 'vercel' | 'openrouter' | 'serp-direct' | 'serp-fallback' so you see what handled each call.
Pricing (Pay-per-Event)
Per-resolution charges scale with the upstream cost basis of each provider:
| Event | Price | When |
|---|---|---|
| `aeo-resolve-perplexity` | $0.010 | One Perplexity record |
| `aeo-resolve-aio` | $0.015 | One Google AI Overview record |
| `aeo-resolve-light` | $0.020 | One Anthropic or xAI Grok record |
| `aeo-resolve-gemini` | $0.025 | One Gemini record (grounded) |
| `aeo-resolve-openai-base` | $0.075 | One OpenAI record (base, always charged) |
| `aeo-resolve-openai-grounding-light` | +$0.05 | OpenAI grounded + upstream < $0.05 |
| `aeo-resolve-openai-grounding-medium` | +$0.20 | OpenAI grounded + $0.05 ≤ upstream < $0.20 |
| `aeo-resolve-openai-grounding-heavy` | +$0.50 | OpenAI grounded + upstream ≥ $0.20 |
| `aeo-sentiment-tagged` | +$0.005 | When `enableSentimentTagging: true` |
| `aeo-prompt-discovery` | $0.050 | Once per run when `discoverPromptsFromUrl` is set |
| `aeo-raw-response-passthrough` | +$0.001 | When `emitRawProviderResponse: true` |
OpenAI grounding — what the buckets mean
OpenAI's web_search tool charges for the underlying tokens including search-result content, which makes per-call cost highly variable. v1.1.1 splits OpenAI into a flat base + a bracketed grounding event so the price scales with actual cost:
| Query type | Typical upstream cost | Bracket | Buyer total |
|---|---|---|---|
| Training-only (web_search off) | ~$0.04 | none | $0.075 |
| Narrow factual / shallow grounding | < $0.05 | light | $0.125 |
| Brand comparison / moderate grounding | $0.05–$0.20 | medium | $0.275 |
| Vague open-ended / deep grounding | ≥ $0.20 | heavy | $0.575 |
Grounding defaults to `useWebSearch: true` because that matches what real ChatGPT users actually see. If you want training-only signal (cheaper, deterministic), set `providerConfig.openai.useWebSearch: false` and pay only the $0.075 base. The optional `providerConfig.openai.maxCostUsdPerRecord` (default $0.50) logs outliers above the heavy bracket to RUN_SUMMARY for transparency.
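For example, to force training-only OpenAI responses and keep the outlier cap explicit, merge a fragment like this into your input (both field paths as documented above; 0.50 is the documented default):

```json
{
  "providerConfig": {
    "openai": {
      "useWebSearch": false,
      "maxCostUsdPerRecord": 0.50
    }
  }
}
```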
Cost example
A 10-prompt × 6-provider sweep with default settings (OpenAI grounded at medium bracket, the typical case):
| Provider | Per record | × 10 prompts |
|---|---|---|
| Perplexity | $0.010 | $0.10 |
| Google AIO | $0.015 | $0.15 |
| Anthropic | $0.020 | $0.20 |
| xAI Grok | $0.020 | $0.20 |
| Gemini | $0.025 | $0.25 |
| OpenAI (base + medium grounding) | $0.275 | $2.75 |
| **Sweep total** | | **$3.65** |
Roughly $0.36 per prompt across 6 engines, vs $99–$5K/mo SaaS minimums for the same data shape.
v1.1 features
Vertical templates
Skip writing prompts by hand. Set `template` to any of the 7 ids below (use `custom` to supply only your own prompts), optionally passing `templateVariables` to fill `{category}` / `{audience}` / `{year}`:
| Template | Built for |
|---|---|
| `saas-b2b` | B2B SaaS — comparison, discovery, features, pricing, integration |
| `ecommerce-d2c` | Direct-to-consumer e-commerce |
| `local-services` | Local service businesses (HVAC, dental, legal, etc.) |
| `agency` | Marketing/PR agencies |
| `media-publisher` | News/magazine publishers |
| `fintech` | Banking, lending, payments, investing |
| `custom` | None — supply your own prompts |
Each template ships prompts pre-grouped into intent categories (comparison, discovery, feature-evaluation, pricing, etc.). Categories propagate to records as promptCategory for direct dashboard pivots. You can mix templates with your own prompts and queryGroups.
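A minimal template-driven input might look like this (the template id comes from the table above; treating `templateVariables` as a flat key/value map is an assumption based on the placeholder names):

```json
{
  "template": "saas-b2b",
  "templateVariables": {
    "category": "CRM",
    "audience": "mid-market sales teams",
    "year": "2026"
  },
  "brand": { "name": "Your Brand Name", "ownedDomains": ["yourbrand.com"] },
  "providers": ["perplexity", "anthropic"],
  "acknowledgePublicBrandsOnly": true
}
```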
Locale targeting
Set `locale: { country: 'DE', language: 'de' }` and the Actor routes per provider:

- Native for Perplexity (`web_search_options.user_location`), OpenAI (`web_search.user_location` when grounding is on), and Google AI Overviews (DataForSEO country/language)
- System-prompt instruction for Anthropic, Gemini, and xAI Grok (no native primitive — we prepend a fragment asking the model to respond as if helping a user in that locale)
Each record's locale.method field tells you which approach was used.
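Concretely, a German-locale run and the two possible record-level outcomes (a sketch assembled from the fields documented above):

```json
// Input fragment
{ "locale": { "country": "DE", "language": "de" } }

// On a Perplexity record (native routing):
"locale": { "country": "DE", "language": "de", "method": "native" }

// On an Anthropic record (no native primitive, so a system-prompt prefix was used):
"locale": { "country": "DE", "language": "de", "method": "system-prompt-instruction" }
```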
Query groups
For workflows that pre-organize prompts by group, supply queryGroups: [{ groupName: 'Awareness', queries: [...] }, ...]. Internally transposed to promptCategories so all the existing pivot-by-category behavior just works.
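Expanded, a queryGroups input fragment might look like this (group names and queries are illustrative):

```json
{
  "queryGroups": [
    {
      "groupName": "Awareness",
      "queries": ["What is the best AI coach app for Clash Royale players?"]
    },
    {
      "groupName": "Comparison",
      "queries": ["Clash Coach AI vs Royale Buddy: which coaching app is better?"]
    }
  ]
}
```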
Input
Required:
- `acknowledgePublicBrandsOnly: true` — ToS attestation. The Actor refuses to run without it.
- `brand: { name, aliases?, ownedDomains? }` — the brand you're tracking
- `providers: ['openai', 'anthropic', ...]` — at least one
- One of: `prompts`, `template`, `queryGroups`, `discoverPromptsFromUrl`
See @apify-portfolio/aeo-schema for the full Zod schema with field descriptions.
Example input
{"prompts": ["What is the best AI coach app for Clash Royale players?","How can I improve my ladder ranking in Clash Royale?"],"brand": {"name": "Clash Coach AI","aliases": ["ClashCoachAI", "Clash Coach", "ClashCoach.ai"],"ownedDomains": ["clashcoachai.com"]},"competitors": [{ "name": "Royale Buddy", "ownedDomains": ["royalebuddy.com"] },{ "name": "MetaDecks", "ownedDomains": ["metadecks.gg"] },{ "name": "RoyaleAPI", "ownedDomains": ["royaleapi.com"] }],"providers": ["perplexity", "anthropic", "xai-grok", "google-aio"],"locale": { "country": "US", "language": "en" },"acknowledgePublicBrandsOnly": true}
ToS attestation
Anthropic's usage policy and OpenAI's terms prohibit using their APIs for surveillance, tracking, or profiling of individuals. This Actor is for tracking public brands in AI responses — that's why acknowledgePublicBrandsOnly: true is required. The Actor heuristically rejects prompts containing honorific patterns (Mr., Mrs., Dr., etc.) unless bypassToSGuard: true is also set (use only with documented authorization for cases like journalism or authorized public-figure research).
Limitations
- Word-boundary matching only for brand mentions in v1 — no fuzzy matching. List exact spelling variants (legal name, abbreviations, ticker, product synonyms) in `brand.aliases` and `competitors[].aliases`.
- No webhook output — the Apify dataset is the v1 sink. Apify's own integrations (Zapier, Make, Slack via webhook) handle delivery to downstream systems.
- No multi-brand profiles per run — each run is one brand. Use Apify's scheduling plus multiple Actor runs (one per brand) for portfolio monitoring.
- `copilot` is reserved but not implemented — Microsoft Copilot is on the roadmap for v1.2.
Operator FAQ
🔧 Why isn't my brand mentioned?
If brandMentions is empty across most or all of your records, it means the AI engines genuinely don't know your brand for the prompts you're asking. This is real diagnostic data — not a bug. Three things to check, in order:
- Are you matching the right name variants? AI engines may say "ClashCoach" when your `brand.name` is "Clash Coach AI". Word-boundary matching is exact (case-insensitive). List every spelling variant in `brand.aliases` — abbreviations, ticker, product name, common misspellings (see the alias sketch after this list). This catches ~30% of "missing" mentions.
- Is the prompt too narrow or too vague? "How do I improve my ladder ranking in Clash Royale?" produces a different response than "Best Clash Royale coaching apps." The first pulls strategy advice; the second pulls product mentions. Run both prompt styles to see which surfaces your brand.
- Is the AI's grounding pulling from the wrong sources? Look at `citations[].domain`. If the AI is citing g2.com, capterra.com, and a competitor's blog, but never your own site, that's the problem. AI engines surface brands based on what their grounding sources say. Your AEO work is producing content for those sources to cite, not feeding the AI directly. Improve coverage on the cited domains and your brand starts showing up.
If all three check out and you still see zero mentions, your brand may genuinely have low AI-search presence — that's the signal AEO content marketing is meant to fix. Run the same prompts again in 4 weeks after publishing more content; see if the count moves.
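An alias-rich brand block for the running example (the extra variants are illustrative; list whatever spellings your audience actually uses):

```json
{
  "brand": {
    "name": "Clash Coach AI",
    "aliases": ["ClashCoachAI", "Clash Coach", "ClashCoach.ai", "clashcoach"],
    "ownedDomains": ["clashcoachai.com"]
  }
}
```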
📅 How do I run this weekly?
Apify natively supports cron-style scheduling. Two-step setup:
1. Console → Schedules → New Schedule. Set cron: `0 9 * * 1` (every Monday, 9am UTC).
2. Pick this Actor and the input you want recurring (your full production input, not the demo). Apify runs it on schedule and stores each weekly dataset in your account.
For agencies tracking multiple clients: create one Schedule per brand. Each Schedule is independent so a slow run for client A doesn't block client B. Apify scales the underlying compute automatically.
To see week-over-week deltas (only emit changed records), set "deltaMode": true in the input. The Actor stores per-prompt-provider state in the KeyValueStore and after the first run only emits records where the response changed — much smaller datasets, easier to spot real movement.
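A scheduled production input is just the example input above plus one flag (trimmed here for brevity):

```json
{
  "prompts": ["What is the best AI coach app for Clash Royale players?"],
  "brand": { "name": "Clash Coach AI", "ownedDomains": ["clashcoachai.com"] },
  "providers": ["perplexity", "anthropic"],
  "deltaMode": true,
  "acknowledgePublicBrandsOnly": true
}
```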
📊 How do I pipe results into Google Sheets / Looker / BigQuery?
Google Sheets — easiest. Apify Console → run → Storage → Dataset → Export → Google Sheets. Or use the dataset's signed share URL with format=csv in IMPORTDATA: =IMPORTDATA("https://api.apify.com/v2/datasets/<id>/items?format=csv&clean=true").
Looker / Looker Studio — connect to the same Apify dataset URL as a CSV data source, or schedule a daily ETL into BigQuery via Apify's BigQuery integration (Console → Integrations → BigQuery).
BigQuery direct — Apify ships a native integration: Console → Integrations → BigQuery → connect → pick the dataset to mirror. Records flow into a flat table; the JSON columns (brandMentions, citations) become BigQuery STRUCT/ARRAY columns you can query with UNNEST.
Custom ETL — the dataset is just JSON over HTTP. Pull with curl, jq, or any HTTP client. The schema is published at @apify-portfolio/aeo-schema on npm — install it for type-safe parsing in TypeScript pipelines.
For weekly Slack notifications, use Apify's built-in Slack integration (Console → Integrations → Slack → on success). The Actor sets a status message at run end like "Run complete: 16 records emitted, $0.26 spent. Clash Coach AI cited 4 times. Top competitor: Royale Buddy (8 mentions)." — that's what shows up in Slack.
Other questions
Can I use my own provider keys? Yes — supply OPENAI_API_KEY etc. as Apify Console secrets and the Actor uses your keys. If you only supply VERCEL_API_KEY or OPENROUTER_API_KEY, it routes through that gateway. If you supply none, the Actor uses its built-in fallback keys.
Why isn't ChatGPT.com web UI a provider? OpenAI's terms prohibit automated access to ChatGPT.com. We use the OpenAI API — the sanctioned path. API responses differ slightly from the web UI but the data is far cleaner and auditable.
Can I monitor a public figure? Only with documented authorization (journalism, authorized research) and the bypassToSGuard: true flag. Default policy blocks honorific-style prompts.