# AEO Citation Monitor — Brand Tracking in AI Search (`glaciological_hexahedron/aeo-citation-monitor`) Actor

Monitor how your brand appears in AI search responses. Submits prompts to ChatGPT, Claude, Gemini, Perplexity, xAI Grok, and Google AI Overviews; emits structured records of brand mentions, cited URLs, competitor positions, and list-rank position. Pay-per-resolution pricing.

- **URL**: https://apify.com/glaciological_hexahedron/aeo-citation-monitor.md
- **Developed by:** [Alex Lowe](https://apify.com/glaciological_hexahedron) (community)
- **Categories:** SEO tools, AI, Lead generation
- **Stats:** 2 total users, 1 monthly user, 80.0% of runs succeeded
- **User rating**: No ratings yet

## Pricing

from $10.00 / 1,000 perplexity resolutions

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
"Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## AEO Citation Monitor

Track how your brand appears in AI search responses across **ChatGPT, Claude, Gemini, Perplexity, xAI Grok, and Google AI Overviews**. The Actor sends your prompts to each engine, parses every response into a structured record (brand mentions, cited URLs, competitor positions, list-rank, optional sentiment), and emits the dataset for your dashboard / SQL / spreadsheet.

### 🎯 5-minute quickstart

If you've never run this Actor before, the fastest path from "open the page" to "first usable record" is:

#### 1. Click Start with this input

Paste this into the JSON tab of the input form and click **Start**. The only thing you need to edit are the brand and competitor names.

```json
{
  "prompts": [
    "What is the best <category> for <audience>?"
  ],
  "brand": {
    "name": "Your Brand Name",
    "aliases": ["Common abbreviation", "Legal entity name"],
    "ownedDomains": ["yourbrand.com"]
  },
  "competitors": [
    { "name": "Competitor A", "ownedDomains": ["competitora.com"] },
    { "name": "Competitor B", "ownedDomains": ["competitorb.com"] }
  ],
  "providers": ["perplexity", "anthropic"],
  "acknowledgePublicBrandsOnly": true
}
````

This is a **~$0.03 starter run** ($0.010 Perplexity + $0.020 Anthropic, per the pricing table below) that completes in **~30 seconds**. It runs 1 prompt across 2 fast engines, enough to verify the wiring works and to show you the output shape. Once you're comfortable, add `"openai"`, `"google-gemini"`, `"xai-grok"`, and `"google-aio"` to `providers` for the full 6-engine sweep.

#### 2. Watch records arrive in Storage → Dataset

While the run executes, click **Storage → Dataset** in the run page to see records as they're emitted. With 1 prompt × 2 providers, you'll see exactly 2 records.

#### 3. The 5 fields you'll likely care about

| Field | What it tells you |
|---|---|
| `brandMentions[].mentionCount` | How many times the AI mentioned your brand in this response |
| `brandMentions[].rankPosition` | What position your brand appeared at in the AI's enumerated list (1 = first, undefined = response wasn't list-shaped) |
| `competitorMentions[]` | The same shape, for every competitor you configured. Compare your `mentionCount` against theirs. |
| `citations[].domain` | Every URL the AI cited as a source for this response, with the source domain extracted |
| `citations[].isOwned` | `true` when the AI cited YOUR domain — you're the source it pulled from |

If `brandMentions` is empty across all your records, see [**Troubleshooting → Why isn't my brand mentioned?**](#-why-isnt-my-brand-mentioned) at the bottom of this page.
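Once records are in the dataset, the five fields above are straightforward to pull out in code. A minimal sketch (the `summarizeRecord` helper name is my own; the field names are the ones documented above):

```javascript
// Summarize the key fields of one dataset record for a quick dashboard row.
function summarizeRecord(record) {
  const brand = record.brandMentions?.[0];
  return {
    provider: record.provider,
    brandMentionCount: brand?.mentionCount ?? 0,
    brandRank: brand?.rankPosition, // undefined = response wasn't list-shaped
    competitorCounts: (record.competitorMentions ?? []).map(
      (c) => ({ brand: c.brand, mentionCount: c.mentionCount }),
    ),
    citedDomains: (record.citations ?? []).map((c) => c.domain),
    ownedCitations: (record.citations ?? []).filter((c) => c.isOwned).length,
  };
}
```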

***

### 🔍 Annotated example record

Here's a real record from a run tracking [Clash Coach AI](https://clashcoachai.com) (an AI-powered Clash Royale coaching app) across Perplexity. Annotations explain each field as if you were reading it for the first time.

```jsonc
{
  // STABLE IDENTIFIER — sha256 hash of (prompt + provider + run-start time).
  // The same prompt + provider in a re-run produces a different recordId
  // (different runStartedAt) but the responseText may be identical.
  "recordId": "448ef4a63f1a...",

  // RUN GROUPING — every record from one Actor invocation shares this UUID.
  // Use it for reporting "the May 7 weekly run."
  "runId": "ef810d23-4fce-4216-8509-789801b95c16",

  // PROMPT — verbatim what was sent to the provider.
  "promptText": "What is the best generative engine optimization tool for B2B SaaS in 2026?",

  // CATEGORY — buyer-supplied or template-supplied tag for dashboard pivots.
  "promptCategory": "comparison",

  // WHICH AI ENGINE answered. Always one of the 7 ProviderId enum values.
  "provider": "perplexity",

  // PROVIDER-SPECIFIC MODEL — useful when you want to know whether you got
  // gpt-5.5 vs gpt-5.4-mini, claude-sonnet vs claude-opus, etc.
  "model": "sonar",

  // TRACKING vs UTILITY — "tracking" records are the primary data buyers
  // monitor; "utility" records are sentiment/discovery helper outputs.
  "modelTier": "tracking",

  // GROUNDED? true = AI used live web search. false = answered from training only.
  // Critical for trust — grounded answers reflect real-time reality.
  "groundingUsed": true,

  // WALL-CLOCK LATENCY for this single call. Useful for debugging slow runs
  // and identifying flaky providers.
  "responseLatencyMs": 11342,

  // UPSTREAM COST — what the AI provider charged us in USD. Excludes Apify's
  // PPR price; this is the wholesale cost. Buyers can audit pricing changes.
  "costUsd": 0.000786,

  // TRANSPORT — which path served the call. "direct" = the provider's own
  // API. "vercel" or "openrouter" = a gateway fallback was used. "serp-direct"
  // / "serp-fallback" = AIO via DataForSEO or SerpAPI respectively.
  "transport": "direct",

  // LOCALE — only present when input.locale is set. method='native' means
  // the provider's own user_location param applied; 'system-prompt-instruction'
  // means we prepended a "respond as if helping a user in DE" prefix.
  "locale": { "country": "US", "language": "en", "method": "native" },

  // FULL RESPONSE TEXT — verbatim. Use this if you want to re-parse for any
  // reason or fact-check what the AI said.
  "responseText": "Several AI-powered tools can help Clash Royale players...",

  // YOUR BRAND'S MENTIONS — Clash Coach AI was mentioned once. The AI listed
  // it 4th in an enumerated coaching-app list. Surrounding-text snippets are
  // captured per mention (capped at 3 each, to keep records bounded).
  "brandMentions": [{
    "brand": "Clash Coach AI",
    "aliases": ["Clash Coach AI"],
    "mentionCount": 1,
    "rankPosition": 4,         // ← 4th in the AI's coaching-apps list
    "contexts": [
      {
        "text": "...Clash Coach AI offers AI-driven battle analysis...",
        "charStart": 312,
        "charEnd": 326
      }
      // ...up to 3 contexts
    ]
  }],

  // COMPETITOR MENTIONS — same shape as brandMentions, one entry per
  // configured competitor that appeared in this response. Royale Buddy
  // ranked above Clash Coach AI; that's a real competitive signal.
  "competitorMentions": [
    {
      "brand": "Royale Buddy",
      "aliases": ["Royale Buddy"],
      "mentionCount": 2,
      "rankPosition": 1
    },
    {
      "brand": "RoyaleAPI",
      "aliases": ["RoyaleAPI"],
      "mentionCount": 1,
      "rankPosition": 3
    }
  ],

  // CITATIONS — every URL the AI cited. clashcoachai.com (grounded source —
  // the AI pulled from your own pages, isOwned:true); royaleapi.com is
  // marked isCompetitor:true (matches a configured competitor's ownedDomains).
  "citations": [
    {
      "url": "https://clashcoachai.com/features",
      "domain": "clashcoachai.com",
      "title": "Clash Coach AI — Features",
      "citationType": "grounded",   // ← "grounded" = AI declared as source
                                    //    "inline" = AI mentioned URL in text
      "isOwned": true,              // ← Clash Coach AI's own page — they
                                    //    successfully placed in the AI's
                                    //    grounding sources
      "isCompetitor": false,
      "rankPosition": 2
    },
    {
      "url": "https://royaleapi.com/blog/best-coaching-apps",
      "domain": "royaleapi.com",
      "title": "Best Clash Royale Coaching Apps",
      "citationType": "grounded",
      "isOwned": false,
      "isCompetitor": true,         // ← matches a configured competitor's
                                    //    ownedDomains
      "rankPosition": 1
    }
    // ... typically 5-15 citations per response, varies by provider
  ]
}
```

The "winning" record looks like: **non-zero `brandMentions[0].mentionCount`** + **`rankPosition: 1, 2, or 3`** + **at least one `isOwned: true` citation**. That signals "the AI knows you exist, ranks you well, and pulls from your own content."

### What you get per record

Each (prompt × provider) call produces one record with:

- `responseText` — the full provider response, verbatim
- `brandMentions[]` — your brand's match count, surrounding contexts, and list-rank position
- `competitorMentions[]` — same shape, for each configured competitor
- `citations[]` — every URL the response cited, marked as `inline` (text-mentioned) or `grounded` (declared as source), with `isOwned` / `isCompetitor` flags
- `costUsd`, `responseLatencyMs`, `transport`, `groundingUsed` — observability you can audit
- Optional `locale` — the country/language applied for this run, with `method: 'native'` or `'system-prompt-instruction'` so dashboards can distinguish the two

The full Zod schema is published as `@apify-portfolio/aeo-schema` on npm — drop it into Looker, Hex, or your own ETL with type-safe shape guarantees.

### Providers

| Provider | Default model | Transport |
|---|---|---|
| Perplexity | `sonar` | direct API → Vercel AI Gateway → OpenRouter |
| OpenAI ChatGPT | `gpt-5.5` | direct API → Vercel AI Gateway → OpenRouter |
| Anthropic Claude | `claude-sonnet-4-6` | direct API → Vercel AI Gateway → OpenRouter |
| Google Gemini | `gemini-3.1-pro-preview` | direct API → Vercel AI Gateway → OpenRouter |
| xAI Grok | `grok-4.20-non-reasoning` | direct API → Vercel AI Gateway → OpenRouter |
| Google AI Overviews | n/a (Google) | DataForSEO → SerpAPI |

Every LLM provider supports a 3-tier transport chain. If your direct key 429s or 5xxs, the Actor falls back to Vercel AI Gateway, then OpenRouter. Records are stamped with `transport: 'direct' | 'vercel' | 'openrouter' | 'serp-direct' | 'serp-fallback'` so you see what handled each call.
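To audit how often fallbacks kick in, you can tally records by the `transport` stamp. A minimal sketch (the helper name is my own):

```javascript
// Count records per transport value, e.g. { direct: 10, vercel: 2 }.
function transportBreakdown(records) {
  return records.reduce((acc, r) => {
    acc[r.transport] = (acc[r.transport] ?? 0) + 1;
    return acc;
  }, {});
}
```

A run dominated by `'vercel'` or `'openrouter'` suggests your direct keys are rate-limited or failing.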

### Pricing (Pay-per-Event)

Per-resolution charges scale with the upstream cost basis of each provider:

| Event | Price | When |
|---|---:|---|
| `aeo-resolve-perplexity` | **$0.010** | One Perplexity record |
| `aeo-resolve-aio` | **$0.015** | One Google AI Overview record |
| `aeo-resolve-light` | **$0.020** | One Anthropic or xAI Grok record |
| `aeo-resolve-gemini` | **$0.025** | One Gemini record (grounded) |
| `aeo-resolve-openai-base` | **$0.075** | One OpenAI record (base, always charged) |
| `aeo-resolve-openai-grounding-light` | +$0.05 | OpenAI grounded + upstream < $0.05 |
| `aeo-resolve-openai-grounding-medium` | +$0.20 | OpenAI grounded + $0.05 ≤ upstream < $0.20 |
| `aeo-resolve-openai-grounding-heavy` | +$0.50 | OpenAI grounded + upstream ≥ $0.20 |
| `aeo-sentiment-tagged` | +$0.005 | When `enableSentimentTagging: true` |
| `aeo-prompt-discovery` | $0.050 | Once per run when `discoverPromptsFromUrl` is set |
| `aeo-raw-response-passthrough` | +$0.001 | When `emitRawProviderResponse: true` |

#### OpenAI grounding — what the buckets mean

OpenAI's `web_search` tool charges for the underlying tokens including search-result content, which makes per-call cost highly variable. v1.1.1 splits OpenAI into a flat base + a bracketed grounding event so the price scales with actual cost:

| Query type | Typical upstream cost | Bracket | Buyer total |
|---|---:|---|---:|
| Training-only (`web_search` off) | ~$0.04 | none | **$0.075** |
| Narrow factual / shallow grounding | < $0.05 | light | **$0.125** |
| Brand comparison / moderate grounding | $0.05–$0.20 | medium | **$0.275** |
| Vague open-ended / deep grounding | ≥ $0.20 | heavy | **$0.575** |

Defaults to `useWebSearch: true` because that matches what real ChatGPT users actually see. If you want training-only signal (cheaper, deterministic), set `providerConfig.openai.useWebSearch: false` and pay only the $0.075 base. The optional `providerConfig.openai.maxCostUsdPerRecord` (default $0.50) logs outliers above the heavy bracket to `RUN_SUMMARY` for transparency.
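The bracket arithmetic above can be sketched as a small function — prices mirror the tables, and the function name is my own illustration, not the Actor's internal code:

```javascript
// Buyer total for one OpenAI record: flat base plus a grounding surcharge
// bucketed by the upstream (wholesale) cost of the call.
function openaiBuyerTotal(groundingUsed, upstreamCostUsd) {
  const base = 0.075;
  if (!groundingUsed) return base;         // training-only: base charge only
  if (upstreamCostUsd < 0.05) return base + 0.05;  // light bracket
  if (upstreamCostUsd < 0.20) return base + 0.20;  // medium bracket
  return base + 0.50;                               // heavy bracket
}
```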

#### Cost example

A 10-prompt × 6-provider sweep with default settings (OpenAI grounded at medium bracket, the typical case):

| Provider | Per record | × 10 prompts |
|---|---:|---:|
| Perplexity | $0.010 | $0.10 |
| Google AIO | $0.015 | $0.15 |
| Anthropic | $0.020 | $0.20 |
| xAI Grok | $0.020 | $0.20 |
| Gemini | $0.025 | $0.25 |
| OpenAI (base + medium grounding) | $0.275 | $2.75 |
| **Sweep total** | | **$3.65** |

Roughly $0.36 per prompt across 6 engines, vs $99–$5K/mo SaaS minimums for the same data shape.
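For budgeting larger runs, the same arithmetic generalizes. A rough estimator using the per-record prices above (OpenAI assumed at the medium grounding bracket, the typical case; the constant and function names are my own):

```javascript
// Per-record prices from the pricing tables above (USD).
const PER_RECORD_USD = {
  perplexity: 0.010,
  'google-aio': 0.015,
  anthropic: 0.020,
  'xai-grok': 0.020,
  'google-gemini': 0.025,
  openai: 0.275, // base + medium grounding
};

// Upper-level estimate: prompts × sum of per-provider record prices.
function estimateSweepUsd(promptCount, providers) {
  return providers.reduce(
    (sum, p) => sum + (PER_RECORD_USD[p] ?? 0) * promptCount, 0,
  );
}
```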

### v1.1 features

#### Vertical templates

Skip writing prompts by hand. Set `template` to any of the 7 built-ins, optionally pass `templateVariables` to fill `{category}` / `{audience}` / `{year}`:

| Template | Built for |
|---|---|
| `saas-b2b` | B2B SaaS — comparison, discovery, features, pricing, integration |
| `ecommerce-d2c` | Direct-to-consumer e-commerce |
| `local-services` | Local service businesses (HVAC, dental, legal, etc.) |
| `agency` | Marketing/PR agencies |
| `media-publisher` | News/magazine publishers |
| `fintech` | Banking, lending, payments, investing |
| `custom` | None — supply your own `prompts` |

Each template ships **prompts pre-grouped into intent categories** (comparison, discovery, feature-evaluation, pricing, etc.). Categories propagate to records as `promptCategory` for direct dashboard pivots. You can mix templates with your own `prompts` and `queryGroups`.

#### Locale targeting

Set `locale: { country: 'DE', language: 'de' }` and the Actor routes per provider:

- **Native** for Perplexity (`web_search_options.user_location`), OpenAI (`web_search.user_location` when grounding is on), and Google AI Overviews (DataForSEO country/language)
- **System-prompt instruction** for Anthropic, Gemini, and xAI Grok (no native primitive — we prepend a fragment asking the model to respond as if helping a user in that locale)

Each record's `locale.method` field tells you which approach was used.

#### Query groups

For workflows that pre-organize prompts by group, supply `queryGroups: [{ groupName: 'Awareness', queries: [...] }, ...]`. Internally transposed to `promptCategories` so all the existing pivot-by-category behavior just works.
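The transposition described above is roughly this shape — a sketch of my own, assuming `promptCategories` maps group names to prompt lists (the Actor's actual internal code may differ):

```javascript
// Flatten queryGroups into a prompts list plus a category map keyed by groupName.
function transposeQueryGroups(queryGroups) {
  const prompts = [];
  const promptCategories = {};
  for (const { groupName, queries } of queryGroups) {
    promptCategories[groupName] = [...queries];
    prompts.push(...queries);
  }
  return { prompts, promptCategories };
}
```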

### Input

Required:

- `acknowledgePublicBrandsOnly: true` — ToS attestation. The Actor refuses to run without it.
- `brand: { name, aliases?, ownedDomains? }` — the brand you're tracking
- `providers: ['openai', 'anthropic', ...]` — at least one
- One of: `prompts`, `template`, `queryGroups`, `discoverPromptsFromUrl`

See [`@apify-portfolio/aeo-schema`](https://www.npmjs.com/package/@apify-portfolio/aeo-schema) for the full Zod schema with field descriptions.

#### Example input

```json
{
  "prompts": [
    "What is the best AI coach app for Clash Royale players?",
    "How can I improve my ladder ranking in Clash Royale?"
  ],
  "brand": {
    "name": "Clash Coach AI",
    "aliases": ["ClashCoachAI", "Clash Coach", "ClashCoach.ai"],
    "ownedDomains": ["clashcoachai.com"]
  },
  "competitors": [
    { "name": "Royale Buddy", "ownedDomains": ["royalebuddy.com"] },
    { "name": "MetaDecks", "ownedDomains": ["metadecks.gg"] },
    { "name": "RoyaleAPI", "ownedDomains": ["royaleapi.com"] }
  ],
  "providers": ["perplexity", "anthropic", "xai-grok", "google-aio"],
  "locale": { "country": "US", "language": "en" },
  "acknowledgePublicBrandsOnly": true
}
```

### ToS attestation

Anthropic's usage policy and OpenAI's terms prohibit using their APIs for surveillance, tracking, or profiling of **individuals**. This Actor is for tracking **public brands** in AI responses — that's why `acknowledgePublicBrandsOnly: true` is required. The Actor heuristically rejects prompts containing honorific patterns (`Mr.`, `Mrs.`, `Dr.`, etc.) unless `bypassToSGuard: true` is also set (use only with documented authorization for cases like journalism or authorized public-figure research).

### Limitations

- **Word-boundary matching only** for brand mentions in v1 — no fuzzy matching. List exact spelling variants (legal name, abbreviations, ticker, product synonyms) in `brand.aliases` and `competitors[].aliases`.
- **No webhook output** — Apify dataset is the v1 sink. Apify's own integrations (Zapier, Make, Slack via webhook) handle delivery to downstream systems.
- **No multi-brand profiles per run** — each run is one brand. Use Apify's scheduling + multiple Actor runs (one per brand) for portfolio monitoring.
- **`copilot` reserved but not implemented** — Microsoft Copilot is on the roadmap for v1.2.

### Operator FAQ

#### 🔧 Why isn't my brand mentioned?

If `brandMentions` is empty across most or all of your records, it means the AI engines genuinely don't know your brand for the prompts you're asking. This is real diagnostic data — not a bug. Three things to check, in order:

1. **Are you matching the right name variants?** AI engines may say "ClashCoach" when your `brand.name` is "Clash Coach AI". Word-boundary matching is exact (case-insensitive). List every spelling variant in `brand.aliases` — abbreviations, ticker, product name, common misspellings. **This catches ~30% of "missing" mentions.**
2. **Is the prompt too narrow or too vague?** "How do I improve my ladder ranking in Clash Royale?" produces a different response than "Best Clash Royale coaching apps." The first pulls strategy advice; the second pulls product mentions. Run both prompt styles to see which surfaces your brand.
3. **Is the AI's grounding pulling from the wrong sources?** Look at `citations[].domain`. If the AI is citing g2.com, capterra.com, and a competitor's blog, but never your own site, that's the problem. AI engines surface brands based on what their grounding sources say. Your AEO work is producing content for *those* sources to cite, not the AI directly. Improve coverage on the cited domains and your brand starts showing up.

If all three check out and you still see zero mentions, your brand may genuinely have low AI-search presence — that's the signal AEO content marketing is meant to fix. Run the same prompts again in 4 weeks after publishing more content; see if the count moves.
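For step 3, a domain tally across all records makes the citation picture obvious at a glance. A sketch (helper name is my own; `citations[].domain` is as documented):

```javascript
// Count how often each domain is cited across records, most-cited first.
function citationDomainCounts(records) {
  const counts = {};
  for (const r of records) {
    for (const c of r.citations ?? []) {
      counts[c.domain] = (counts[c.domain] ?? 0) + 1;
    }
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}
```

If g2.com and competitor blogs dominate this list while your own domains are absent, that's where your AEO content effort should go.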

#### 📅 How do I run this weekly?

Apify natively supports cron-style scheduling. Two-step setup:

1. **Console → Schedules** → New Schedule. Set cron: `0 9 * * 1` (every Monday 9am UTC).
2. Pick this Actor and the input you want recurring (your full production input, not the demo). Apify runs it on schedule and stores each weekly dataset in your account.

For agencies tracking multiple clients: create one Schedule per brand. Each Schedule is independent so a slow run for client A doesn't block client B. Apify scales the underlying compute automatically.

To see week-over-week deltas (only emit changed records), set `"deltaMode": true` in the input. The Actor stores per-prompt-provider state in the KeyValueStore and after the first run only emits records where the response *changed* — much smaller datasets, easier to spot real movement.

#### 📊 How do I pipe results into Google Sheets / Looker / BigQuery?

**Google Sheets** — easiest. Apify Console → run → **Storage → Dataset → Export → Google Sheets**. Or use the dataset's signed share URL with `format=csv` in IMPORTDATA: `=IMPORTDATA("https://api.apify.com/v2/datasets/<id>/items?format=csv&clean=true")`.

**Looker / Looker Studio** — connect to the same Apify dataset URL as a CSV data source, or schedule a daily ETL into BigQuery via Apify's BigQuery integration (Console → Integrations → BigQuery).

**BigQuery direct** — Apify ships a native integration: Console → Integrations → BigQuery → connect → pick the dataset to mirror. Records flow into a flat table; the JSON columns (`brandMentions`, `citations`) become BigQuery STRUCT/ARRAY columns you can query with `UNNEST`.

**Custom ETL** — the dataset is just JSON over HTTP. Pull with `curl`, jq, or any HTTP client. The schema is published at [`@apify-portfolio/aeo-schema`](https://www.npmjs.com/package/@apify-portfolio/aeo-schema) on npm — install it for type-safe parsing in TypeScript pipelines.
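Once pulled, flattening each record to one row suits spreadsheets and SQL loads. A sketch keeping the headline columns (the helper name and column choice are my own):

```javascript
// Flatten one nested record into a single flat row for CSV/SQL loading.
function toFlatRow(record) {
  const brand = record.brandMentions?.[0];
  return {
    runId: record.runId,
    provider: record.provider,
    prompt: record.promptText,
    mentionCount: brand?.mentionCount ?? 0,
    rankPosition: brand?.rankPosition ?? null,
    ownedCitations: (record.citations ?? []).filter((c) => c.isOwned).length,
    costUsd: record.costUsd,
  };
}
```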

For weekly Slack notifications, use Apify's built-in Slack integration (Console → Integrations → Slack → on success). The Actor sets a status message at run end like *"Run complete: 16 records emitted, $0.26 spent. Clash Coach AI cited 4 times. Top competitor: Royale Buddy (8 mentions)."* — that's what shows up in Slack.

#### Other questions

**Can I use my own provider keys?** Yes — supply `OPENAI_API_KEY` etc. as Apify Console secrets and the Actor uses your keys. If you only supply `VERCEL_API_KEY` or `OPENROUTER_API_KEY`, it routes through that gateway. If you supply none, the Actor uses its built-in fallback keys.

**Why isn't ChatGPT.com web UI a provider?** OpenAI's terms prohibit automated access to ChatGPT.com. We use the OpenAI API — the sanctioned path. API responses differ slightly from the web UI but the data is far cleaner and auditable.

**Can I monitor a public figure?** Only with documented authorization (journalism, authorized research) and the `bypassToSGuard: true` flag. Default policy blocks honorific-style prompts.

# Actor input Schema

## `brand` (type: `object`):

The brand you want to track. List every spelling variant — abbreviations, legal name, ticker — under aliases so all mentions are caught.

## `competitors` (type: `array`):

Optional. AI engines often mention multiple brands in one response — listing competitors here surfaces which ones get cited alongside yours.

## `template` (type: `string`):

Pick a built-in prompt set for your industry. Each template generates ~25 prompts grouped by intent (comparison, discovery, pricing, etc.). Leave empty if you'll write your own prompts below.

## `templateVariables` (type: `object`):

Fill in variables used by the template (e.g., {category}, {audience}, {year}). Most buyers only need to set 'category'. Look at the template's prompts to see what variables it uses.

## `prompts` (type: `array`):

Add your own prompts to run alongside (or instead of) the template's prompts. One prompt per line. Cap: 500 per run.

## `providers` (type: `array`):

Which AI search engines to check. The default 4 fast engines complete in <60s per prompt; add 'openai' and 'google-gemini' for full 6-engine coverage at the cost of longer wall-clock (grounded ChatGPT and Gemini calls take 60-180s each).

## `locale` (type: `object`):

Optional. Bias responses toward a country (ISO 3166-1 alpha-2: US, GB, DE, FR, JP) and language (ISO 639-1: en, de, fr, ja). Used natively for Perplexity/OpenAI/AI Overviews; passed as a system prompt for Anthropic/Gemini/Grok.

## `enableSentimentTagging` (type: `boolean`):

Adds 3-class sentiment classification per mention context. Useful for measuring whether the AI describes your brand positively or negatively. Adds $0.005 per record.

## `discoverPromptsFromUrl` (type: `string`):

Alternative to writing prompts: give us your brand's homepage URL, and the Actor reads the content and auto-generates 10-15 likely prompts your customers would ask. Charged $0.05 once per run.

## `acknowledgePublicBrandsOnly` (type: `boolean`):

REQUIRED. Anthropic and OpenAI's terms prohibit using their APIs for surveillance, tracking, or profiling of individuals. This Actor is for tracking public brands, products, services, and topics in AI responses.

## `providerConfig` (type: `object`):

Override defaults per provider. Most useful: providerConfig.openai.useWebSearch (default true; flip to false for cheaper training-only ChatGPT signal). See README for full schema.

## `maxBudgetUsd` (type: `integer`):

Refuse to start the run if the upper-bound cost estimate exceeds this. Useful guard for big template-driven runs.

## `maxResolutions` (type: `integer`):

Stops the run after N successful records. Use with templates if you want only the first N prompts to run.

## `maxConcurrentPrompts` (type: `integer`):

How many prompts to run in parallel. Higher = faster but more upstream API pressure. Default 3 is a good balance.

## `perResolutionTimeoutMs` (type: `integer`):

Cap on any single (prompt × provider) call. Default 120000 (2 min) — bounds wall-clock when one provider hangs.

## `queryGroups` (type: `array`):

Group prompts by name with `[{ groupName, queries[] }]`. Familiar shape for buyers migrating from other AEO tools.

## `promptCategories` (type: `object`):

Categorize your custom prompts for dashboard pivots. Each prompt may appear in multiple categories.

## `includePositioningSummary` (type: `boolean`):

Adds a one-line summary per resolution about how the brand was positioned. Adds a small utility-tier LLM call per record.

## `emitRawProviderResponse` (type: `boolean`):

Add the verbatim provider response payload as rawProviderResponse for audit/replay. Adds $0.001 per record.

## `deltaMode` (type: `boolean`):

On second and subsequent runs of the same brand, emit only records whose content changed since last run. Useful for weekly tracking — saves you parsing identical responses.

## `deltaStateKey` (type: `string`):

Override for the KeyValueStore key holding cross-run delta state. Default: aeo-citation-monitor-{slug(brand.name)}-state.

## `organizationId` (type: `string`):

Pass-through tag for multi-tenant dashboards. Every record carries this value.

## `runId` (type: `string`):

Optional explicit UUID. Default: a fresh v4 UUID per run.

## `bypassToSGuard` (type: `boolean`):

Bypass the heuristic prompt-validation guard that flags honorific patterns (Mr./Mrs./Dr./...). Use only with documented authorization (e.g., authorized journalism research).

## Actor input object example

```json
{
  "brand": {
    "name": "Clash Coach AI",
    "aliases": [
      "ClashCoachAI",
      "Clash Coach",
      "ClashCoach.ai"
    ],
    "ownedDomains": [
      "clashcoachai.com"
    ]
  },
  "competitors": [
    {
      "name": "Royale Buddy",
      "aliases": [
        "RoyaleBuddy"
      ],
      "ownedDomains": [
        "royalebuddy.com"
      ]
    },
    {
      "name": "MetaDecks",
      "aliases": [
        "Meta Decks"
      ],
      "ownedDomains": [
        "metadecks.gg"
      ]
    },
    {
      "name": "RoyaleAPI",
      "aliases": [
        "Royale API"
      ],
      "ownedDomains": [
        "royaleapi.com"
      ]
    }
  ],
  "template": "ecommerce-d2c",
  "templateVariables": {
    "category": "Clash Royale coaching app",
    "audience": "competitive Clash Royale players"
  },
  "prompts": [],
  "providers": [
    "perplexity",
    "anthropic",
    "xai-grok",
    "google-aio"
  ],
  "locale": {
    "country": "US",
    "language": "en"
  },
  "enableSentimentTagging": false,
  "acknowledgePublicBrandsOnly": true,
  "maxConcurrentPrompts": 3,
  "perResolutionTimeoutMs": 120000,
  "includePositioningSummary": false,
  "emitRawProviderResponse": false,
  "deltaMode": false,
  "bypassToSGuard": false
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "brand": {
        "name": "Clash Coach AI",
        "aliases": [
            "ClashCoachAI",
            "Clash Coach",
            "ClashCoach.ai"
        ],
        "ownedDomains": [
            "clashcoachai.com"
        ]
    },
    "competitors": [
        {
            "name": "Royale Buddy",
            "aliases": [
                "RoyaleBuddy"
            ],
            "ownedDomains": [
                "royalebuddy.com"
            ]
        },
        {
            "name": "MetaDecks",
            "aliases": [
                "Meta Decks"
            ],
            "ownedDomains": [
                "metadecks.gg"
            ]
        },
        {
            "name": "RoyaleAPI",
            "aliases": [
                "Royale API"
            ],
            "ownedDomains": [
                "royaleapi.com"
            ]
        }
    ],
    "templateVariables": {
        "category": "Clash Royale coaching app",
        "audience": "competitive Clash Royale players"
    },
    "locale": {
        "country": "US",
        "language": "en"
    }
};

// Run the Actor and wait for it to finish
const run = await client.actor("glaciological_hexahedron/aeo-citation-monitor").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "brand": {
        "name": "Clash Coach AI",
        "aliases": [
            "ClashCoachAI",
            "Clash Coach",
            "ClashCoach.ai",
        ],
        "ownedDomains": ["clashcoachai.com"],
    },
    "competitors": [
        {
            "name": "Royale Buddy",
            "aliases": ["RoyaleBuddy"],
            "ownedDomains": ["royalebuddy.com"],
        },
        {
            "name": "MetaDecks",
            "aliases": ["Meta Decks"],
            "ownedDomains": ["metadecks.gg"],
        },
        {
            "name": "RoyaleAPI",
            "aliases": ["Royale API"],
            "ownedDomains": ["royaleapi.com"],
        },
    ],
    "templateVariables": {
        "category": "Clash Royale coaching app",
        "audience": "competitive Clash Royale players",
    },
    "locale": {
        "country": "US",
        "language": "en",
    },
}

# Run the Actor and wait for it to finish
run = client.actor("glaciological_hexahedron/aeo-citation-monitor").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "brand": {
    "name": "Clash Coach AI",
    "aliases": [
      "ClashCoachAI",
      "Clash Coach",
      "ClashCoach.ai"
    ],
    "ownedDomains": [
      "clashcoachai.com"
    ]
  },
  "competitors": [
    {
      "name": "Royale Buddy",
      "aliases": [
        "RoyaleBuddy"
      ],
      "ownedDomains": [
        "royalebuddy.com"
      ]
    },
    {
      "name": "MetaDecks",
      "aliases": [
        "Meta Decks"
      ],
      "ownedDomains": [
        "metadecks.gg"
      ]
    },
    {
      "name": "RoyaleAPI",
      "aliases": [
        "Royale API"
      ],
      "ownedDomains": [
        "royaleapi.com"
      ]
    }
  ],
  "templateVariables": {
    "category": "Clash Royale coaching app",
    "audience": "competitive Clash Royale players"
  },
  "locale": {
    "country": "US",
    "language": "en"
  }
}' |
apify call glaciological_hexahedron/aeo-citation-monitor --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=glaciological_hexahedron/aeo-citation-monitor",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "AEO Citation Monitor — Brand Tracking in AI Search",
        "description": "Monitor how your brand appears in AI search responses. Submits prompts to ChatGPT, Claude, Gemini, Perplexity, xAI Grok, and Google AI Overviews; emits structured records of brand mentions, cited URLs, competitor positions, and list-rank position. Pay-per-resolution pricing.",
        "version": "0.1",
        "x-build-id": "HiQmuGVdvkS0mT9Qh"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/glaciological_hexahedron~aeo-citation-monitor/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-glaciological_hexahedron-aeo-citation-monitor",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/glaciological_hexahedron~aeo-citation-monitor/runs": {
            "post": {
                "operationId": "runs-sync-glaciological_hexahedron-aeo-citation-monitor",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/glaciological_hexahedron~aeo-citation-monitor/run-sync": {
            "post": {
                "operationId": "run-sync-glaciological_hexahedron-aeo-citation-monitor",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "brand",
                    "providers",
                    "acknowledgePublicBrandsOnly"
                ],
                "properties": {
                    "brand": {
                        "title": "Your brand",
                        "type": "object",
                        "description": "The brand you want to track. List every spelling variant — abbreviations, legal name, ticker — under aliases so all mentions are caught."
                    },
                    "competitors": {
                        "title": "Competitors to track",
                        "type": "array",
                        "description": "Optional. AI engines often mention multiple brands in one response — listing competitors here surfaces which ones get cited alongside yours.",
                        "default": []
                    },
                    "template": {
                        "title": "Vertical template",
                        "enum": [
                            "saas-b2b",
                            "ecommerce-d2c",
                            "local-services",
                            "agency",
                            "media-publisher",
                            "fintech"
                        ],
                        "type": "string",
                        "description": "Pick a built-in prompt set for your industry. Each template generates ~25 prompts grouped by intent (comparison, discovery, pricing, etc.). Leave empty if you'll write your own prompts below.",
                        "default": "ecommerce-d2c"
                    },
                    "templateVariables": {
                        "title": "Customize the template",
                        "type": "object",
                        "description": "Fill in variables used by the template (e.g., {category}, {audience}, {year}). Most buyers only need to set 'category'. Look at the template's prompts to see what variables it uses."
                    },
                    "prompts": {
                        "title": "Custom prompts (optional)",
                        "maxItems": 500,
                        "type": "array",
                        "description": "Add your own prompts to run alongside (or instead of) the template's prompts. One prompt per line. Cap: 500 per run.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "providers": {
                        "title": "AI engines to query",
                        "minItems": 1,
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Which AI search engines to check. The default 4 fast engines complete in <60s per prompt; add 'openai' and 'google-gemini' for full 6-engine coverage at the cost of longer wall-clock (grounded ChatGPT and Gemini calls take 60-180s each).",
                        "default": [
                            "perplexity",
                            "anthropic",
                            "xai-grok",
                            "google-aio"
                        ],
                        "items": {
                            "type": "string"
                        }
                    },
                    "locale": {
                        "title": "Country / language",
                        "type": "object",
                        "description": "Optional. Bias responses toward a country (ISO 3166-1 alpha-2: US, GB, DE, FR, JP) and language (ISO 639-1: en, de, fr, ja). Used natively for Perplexity/OpenAI/AI Overviews; passed as a system prompt for Anthropic/Gemini/Grok."
                    },
                    "enableSentimentTagging": {
                        "title": "Tag mentions with sentiment (positive / neutral / negative)",
                        "type": "boolean",
                        "description": "Adds 3-class sentiment classification per mention context. Useful for measuring whether the AI describes your brand positively or negatively. Adds $0.005 per record.",
                        "default": false
                    },
                    "discoverPromptsFromUrl": {
                        "title": "Auto-discover prompts from a URL",
                        "type": "string",
                        "description": "Alternative to writing prompts: give us your brand's homepage URL, and the Actor reads the content and auto-generates 10-15 likely prompts your customers would ask. Charged $0.05 once per run."
                    },
                    "acknowledgePublicBrandsOnly": {
                        "title": "I confirm I'm tracking a public brand, not a private individual",
                        "type": "boolean",
                        "description": "REQUIRED. Anthropic and OpenAI's terms prohibit using their APIs for surveillance, tracking, or profiling of individuals. This Actor is for tracking public brands, products, services, and topics in AI responses.",
                        "default": true
                    },
                    "providerConfig": {
                        "title": "Per-provider settings (advanced)",
                        "type": "object",
                        "description": "Override defaults per provider. Most useful: providerConfig.openai.useWebSearch (default true; flip to false for cheaper training-only ChatGPT signal). See README for full schema."
                    },
                    "maxBudgetUsd": {
                        "title": "Pre-flight budget cap (USD)",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Refuse to start the run if the upper-bound cost estimate exceeds this. Useful guard for big template-driven runs."
                    },
                    "maxResolutions": {
                        "title": "Hard cap on records (= prompts × providers)",
                        "minimum": 1,
                        "type": "integer",
                        "description": "Stops the run after N successful records. Use with templates if you want only the first N prompts to run."
                    },
                    "maxConcurrentPrompts": {
                        "title": "Parallel prompts",
                        "minimum": 1,
                        "maximum": 10,
                        "type": "integer",
                        "description": "How many prompts to run in parallel. Higher = faster but more upstream API pressure. Default 3 is a good balance.",
                        "default": 3
                    },
                    "perResolutionTimeoutMs": {
                        "title": "Per-call timeout (milliseconds)",
                        "minimum": 10000,
                        "maximum": 900000,
                        "type": "integer",
                        "description": "Cap on any single (prompt × provider) call. Default 120000 (2 min) — bounds wall-clock when one provider hangs.",
                        "default": 120000
                    },
                    "queryGroups": {
                        "title": "Query groups (alternative to template)",
                        "type": "array",
                        "description": "Group prompts by name with [{groupName, queries[]}]. Familiar shape for buyers migrating from other AEO tools."
                    },
                    "promptCategories": {
                        "title": "Prompt categories",
                        "type": "object",
                        "description": "Categorize your custom prompts for dashboard pivots. Each prompt may appear in multiple categories."
                    },
                    "includePositioningSummary": {
                        "title": "Include positioning summary",
                        "type": "boolean",
                        "description": "Adds a one-line summary per resolution about how the brand was positioned. Adds a small utility-tier LLM call per record.",
                        "default": false
                    },
                    "emitRawProviderResponse": {
                        "title": "Include raw provider response",
                        "type": "boolean",
                        "description": "Add the verbatim provider response payload as rawProviderResponse for audit/replay. Adds $0.001 per record.",
                        "default": false
                    },
                    "deltaMode": {
                        "title": "Delta mode — only emit changed records",
                        "type": "boolean",
                        "description": "On second and subsequent runs of the same brand, emit only records whose content changed since last run. Useful for weekly tracking — saves you parsing identical responses.",
                        "default": false
                    },
                    "deltaStateKey": {
                        "title": "Delta state key (override)",
                        "type": "string",
                        "description": "Override for the KeyValueStore key holding cross-run delta state. Default: aeo-citation-monitor-{slug(brand.name)}-state."
                    },
                    "organizationId": {
                        "title": "Organization tag",
                        "type": "string",
                        "description": "Pass-through tag for multi-tenant dashboards. Every record carries this value."
                    },
                    "runId": {
                        "title": "Run ID override",
                        "type": "string",
                        "description": "Optional explicit UUID. Default: a fresh v4 UUID per run."
                    },
                    "bypassToSGuard": {
                        "title": "Bypass ToS heuristic guard",
                        "type": "boolean",
                        "description": "Bypass the heuristic prompt-validation guard that flags honorific patterns (Mr./Mrs./Dr./...). Use only with documented authorization (e.g., authorized journalism research).",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
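
Per the `inputSchema` above, only `brand`, `providers`, and `acknowledgePublicBrandsOnly` are required. A minimal sketch of calling the `run-sync-get-dataset-items` endpoint with just those fields, using only the Python standard library (the supported route is the `apify-client` package shown earlier; the actual request is left commented out so the snippet stays side-effect-free):

```python
import json
import os
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Minimal valid input: only the three required fields from the inputSchema.
minimal_input = {
    "brand": {
        "name": "Clash Coach AI",
        "aliases": ["ClashCoachAI"],
        "ownedDomains": ["clashcoachai.com"],
    },
    "providers": ["perplexity", "anthropic", "xai-grok", "google-aio"],
    "acknowledgePublicBrandsOnly": True,
}

# Endpoint per the spec: servers[0].url + the run-sync-get-dataset-items path,
# with the token passed as a query parameter.
token = os.environ.get("APIFY_TOKEN", "<YOUR_API_TOKEN>")
url = (
    "https://api.apify.com/v2"
    "/acts/glaciological_hexahedron~aeo-citation-monitor/run-sync-get-dataset-items"
    "?" + urlencode({"token": token})
)

request = Request(
    url,
    data=json.dumps(minimal_input).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually run (synchronous; slow providers can take minutes):
# with urlopen(request) as response:
#     items = json.load(response)

print(url.split("?")[0])
```

Omitting `template` here means the default `ecommerce-d2c` template applies; set `template` or `prompts` explicitly, as in the full examples above, to control which prompts run.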
