# AI Brand Visibility - ChatGPT, Perplexity, Copilot, Google AI (`doesaiknow/ai-brand-visibility---chatgpt-perplexity-copilot-google-ai`) Actor

Analyst-grade AI brand visibility across ChatGPT, Perplexity, Copilot, Google AI Overview & AI Mode. $0.30/query - 15 real AI interactions, ground-truth validated. AI Visibility Score, Share of Voice, competitor matrix. Built with an SEO analyst. MCP server. AI SEO/GEO/AEO audit, no subscription.

- **URL**: https://apify.com/doesaiknow/ai-brand-visibility---chatgpt-perplexity-copilot-google-ai.md
- **Developed by:** [Dawid S](https://apify.com/doesaiknow) (community)
- **Categories:** MCP servers, SEO tools, AI
- **Stats:** 3 total users, 2 monthly users, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $0.18 / query

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.
Because this Actor supports Apify Store discounts, the price decreases the higher your subscription plan.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in the key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
"Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## AI Brand Visibility Monitor — ChatGPT, Perplexity, Copilot, Google AI Overview & AI Mode

**Analyst-grade AI brand visibility tracking across 5 major AI search engines, with ground-truth validated data — no subscription, no API keys, no month‑long commitments.** Pay **$0.30 per query**. Each query automatically expands into 15–25 real AI interactions across all engines, so you cover buyer *intent*, not just three guesses.

> **Co-designed with a senior SEO / GEO analyst.** Every metric, formula, validation layer and output field reflects what a professional brand-visibility report needs to look like — the kind that holds up in a board deck, an agency client review, or a Looker dashboard for a CMO. This is not a one-prompt scraper rebadged as "AI monitoring".

Built for **Generative Engine Optimization (GEO)**, **AI Engine Optimization (AEO)**, SEO agencies, brand managers, content strategists, RevOps teams — and **AI agents consuming the data via the Model Context Protocol (MCP)** server we're building (see roadmap below).

---

### 🎯 Why this is different

AI search is replacing classic Google. When a buyer asks ChatGPT, Perplexity, or Google AI Overview *"what's the best CRM for a marketing agency?"* — does your brand appear? In what position? With what framing (leader, alternative, just-mentioned)? How do competitors perform on the *same* queries on the *same* engines?

Most AI‑visibility tools give you one prompt on one or two engines and call it a day. That's a lucky-guess audit. **We cover buyer intent.** One query you type becomes 3–5 semantically related variants (sourced from Google's People‑Also‑Ask and related searches), each run across 5 engines — so one query buys you 15–25 validated data points instead of one.

On top of that, every metric in the output is **ground‑truth validated** against the raw AI response text. Brand extraction LLMs routinely hallucinate up to **5.6× more mentions** than actually appear. We catch that inflation at three layers before it touches your numbers.

---

### 💰 Pricing — $0.30 per query, no subscription

**The base price is a flat $0.30 per query on every plan, with automatic volume discounts by Apify subscription tier:**

| Apify tier | Price per query | Discount |
|---|---|---|
| **Free** | $0.30 | — |
| **Bronze (Starter)** | $0.27 | −10% |
| **Silver (Scale)** | $0.23 | −23% |
| **Gold (Business)** | $0.18 | −40% |

Minimum order: 3 queries ($0.90 on Free tier, $0.54 on Gold).

#### What you actually get for $0.30

One query is not one question with one answer. It's the full fan-out pipeline:

```text
1 query you typed
└── 3-5 semantically related variants (People-Also-Ask + related searches)
└── × 5 AI engines (ChatGPT, Perplexity, Copilot, Google AI Overview, AI Mode)
└── = 15-25 real AI responses
└── Each validated against the raw markdown (3-layer ground truth)
└── Scored, classified, cross-compared with competitors
```

**15–25 real AI datapoints for $0.30. At the Gold tier that's $0.007–$0.012 per datapoint.**

#### Sample costs

| Scenario | Queries | Free tier | Gold tier |
|---|---|---|---|
| Quick brand check | 3 | **$0.90** | $0.54 |
| Monthly brand scan | 10 | **$3.00** | $1.80 |
| Deep competitive audit | 30 | **$9.00** | $5.40 |
| Agency client report (5 clients) | 100 | **$30.00** | $18.00 |
| Heavy power user | 500 | **$150.00** | $90.00 |
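
The sample costs above follow directly from the tier table. A minimal Python sketch of the arithmetic (illustrative only, not a call to any Apify billing API):

```python
# Illustrative cost math derived from the tier table above;
# this does not call any real billing endpoint.
TIER_PRICE = {"free": 0.30, "bronze": 0.27, "silver": 0.23, "gold": 0.18}

def estimate_cost(n_queries: int, tier: str = "free") -> dict:
    """Total scan cost plus the per-datapoint range implied by the
    15-25 validated datapoints each query fans out into."""
    if n_queries < 3:
        raise ValueError("minimum order is 3 queries")
    price = TIER_PRICE[tier]
    return {
        "total_usd": round(n_queries * price, 2),
        "per_datapoint_usd": (round(price / 25, 4), round(price / 15, 4)),
    }

estimate_cost(10, "gold")  # the "Monthly brand scan" row, Gold tier
```

At 10 queries on Gold this yields $1.80 total and a $0.0072-$0.012 per-datapoint range, matching the figures quoted above.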

---

### 🆚 How we compare to the alternatives

#### vs. SaaS AI‑visibility subscriptions

| | Typical enterprise AI‑visibility SaaS | **This actor** |
|---|---|---|
| Entry price | **€85–€425 / month** (annual contracts common) | **$0.30 per query, pay as you go** |
| Setup | Account, onboarding call, contract | **Zero setup — paste brand, run** |
| AI engines covered | 3–5 | **5** |
| Fan‑out per query | None or 1–2 variants | **3–5 automatic variants** |
| Ground‑truth validation | Marketing copy says yes, reality varies | **3‑layer markdown gating, documented** |
| Data format | Dashboards, rarely API | **Structured JSON in your Apify dataset** |
| Kill switch | Cancel before renewal | **No recurring bill to cancel** |

**Bottom line:** a 10‑query scan here costs $1.80–$3.00 depending on tier. An enterprise subscription covering the same scope costs €85/month, minimum.

#### vs. DIY using raw OpenAI / Anthropic / Google APIs

| | Build it yourself | **This actor** |
|---|---|---|
| Cost per validated datapoint | $0.30–$0.50 (LLM tokens + proxies + orchestration) | **$0.007–$0.020** |
| Engineering time | 40+ hours (scraping, rotation, schema, validation) | **0 hours** |
| API keys to manage | 5+ | **None** |
| Proxy rotation, captcha handling | On you | **Handled** |
| Hallucination validation | On you | **Built in** |

#### vs. other Apify brand‑visibility actors

We cost more per "item" on paper than some bare scrapers. Here's what the per‑item price hides:

| What you actually get per scan | Cheapest Apify alternatives ($0.008–$0.10 per item) | **This actor ($0.30/query)** |
|---|---|---|
| AI engines covered | 1–3 (often Perplexity only, or ChatGPT only) | **5** (ChatGPT, Perplexity, Copilot, Google AI Overview, AI Mode) |
| Real AI datapoints per unit you pay for | 1 | **15–25** |
| Fan‑out across related queries | No — one prompt, one answer | **3–5 variants per query, sourced from Google PAA + related** |
| Ground‑truth validation | No — LLM says brand X was mentioned, you trust it | **Yes — literal markdown substring match, 3 layers** |
| Hallucination inflation control | No — up to 5.6× inflated mention rates | **Yes — validated numbers only** |
| Real browser sessions vs. API | Often API‑only (different answers than users see) | **Real browser sessions** |
| Scoring (AIS) with consistency gating | No | **Yes — composite 0–100 AI Visibility Score** |
| Framing enum (leader / compared / …) | No | **Yes — 7‑class structured** |
| Gap analysis | No | **Yes — 5‑class structured with severity** |
| Entity resolution (dedup of "HubSpot", "HubSpot CRM") | No | **Yes — 3‑phase: normalize → prefix → LLM grouping** |
| Per‑platform × per‑brand matrix | No | **Yes — full matrix with platform‑local SoV** |
| Owned vs. earned citation split | No | **Yes** |
| Sentiment breakdown (positive / neutral / negative) | No or only positive | **Full three‑bucket** |
| Top‑cited domains ranking | No | **Yes — with engine coverage + citation roles** |
| Output format | Flat CSV or plain text | **Structured JSON with typed enums** |
| Cost per validated datapoint | $0.008–$0.10 | **$0.007–$0.020 (Gold to Free tier)** |

Cheaper per‑item actors give you one scrape. We give you a **full brand‑visibility audit** for the same dollar amount, with data you can actually trust in a board deck.

---

### 🔬 The technical moat — what makes our data reliable

#### 1. Ground‑Truth Validation (3 layers)

When a downstream LLM is asked "which brands are mentioned in this AI response?" it routinely **invents brands that were never actually in the response**. In our internal benchmarks the worst offender claimed a brand was mentioned **5.6× more often** than it actually appeared in the raw text.

**Our fix:**

- **Layer 1 — markdown evidence.** Every mention rate is computed by literal lowercase substring match against the raw response markdown. Not LLM extraction rows. Real text, or nothing.
- **Layer 2 — validated metrics.** Sentiment, position, prominence are computed only for response rows whose markdown actually contains the brand name. Hallucinated rows are discarded before scoring.
- **Layer 3 — consistency from evidence rates.** Our consistency score uses markdown evidence rates, not inflated LLM counts.

Every response carries `ground_truth_validated: true` so you know the gate ran.
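
A minimal sketch of the Layer 1/2 gate, assuming a hypothetical `markdown` field on each extraction row (the real pipeline's field names may differ):

```python
def gate_rows(brand: str, rows: list[dict]) -> list[dict]:
    """Layers 1-2 in miniature: keep only rows whose raw response
    markdown literally contains the brand (lowercase substring match).
    Hallucinated extraction rows are discarded before any scoring."""
    needle = brand.lower()
    return [r for r in rows if needle in r["markdown"].lower()]

rows = [
    {"markdown": "Top picks: **Ahrefs**, then Semrush."},  # real mention
    {"markdown": "Semrush leads this category."},          # hallucinated row
]
kept = gate_rows("Ahrefs", rows)  # only the first row survives the gate
```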

#### 2. Query fan‑out that matches real buyer intent

Real buyers don't search "best CRM" once and stop. They ask 3–5 variations — "CRM for small business", "HubSpot vs. Salesforce", "affordable CRM with AI". If your tool checks just the seed query and your brand happens to miss it, your report says you're invisible, and you're wrong.

**Our fix:** every query you submit goes through Google's People‑Also‑Ask and related‑searches panel. An LLM picks the top 3–5 most semantically related variants. All variants run across all 5 engines.

One query you pay for = 15–25 real AI interactions.

#### 3. Five engines covering every major AI ecosystem

- **Google AI Overview** — the AI answer box above Google organic results; the most widely seen user‑facing AI surface in 2026.
- **Google AI Mode** — Google's standalone conversational mode, separate from AI Overview.
- **Microsoft Copilot** — Bing‑grounded, default for 400M+ Windows users.
- **OpenAI ChatGPT** — the consumer AI, search‑enabled by default in this actor.
- **Perplexity** — the "AI search engine" of choice for technical buyers.

Want Gemini or Grok too? Pass `platforms: [...]` explicitly and override the default mix.

#### 4. Entity resolution (no double‑counting)

"HubSpot", "HubSpot CRM", "HubSpot Inc." would triple‑count your mention rate in a naive pipeline. We run a 3‑phase resolution pipeline — L1 normalize → L1.5 prefix merge → L2 LLM grouping — so these all collapse into one canonical brand before scoring.
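
The first two phases can be sketched roughly as follows; `normalize` and `resolve` are illustrative names, and the L2 LLM grouping pass is omitted:

```python
def normalize(name: str) -> str:
    # L1: lowercase and strip common corporate/product suffixes.
    n = name.lower().strip().rstrip(".")
    for suffix in (" inc", " ltd", " llc", " crm"):
        if n.endswith(suffix):
            n = n[: -len(suffix)].strip()
    return n

def resolve(names: list[str]) -> dict[str, str]:
    """Map each surface form to one canonical brand.
    Covers L1 (normalize) and L1.5 (prefix merge); the production
    pipeline adds an L2 LLM grouping pass for harder cases."""
    canon: dict[str, str] = {}
    for name in sorted(names, key=lambda n: len(normalize(n))):
        key = normalize(name)
        # Merge into an existing canonical name this one extends.
        match = next((c for c in canon.values()
                      if key.startswith(normalize(c))), None)
        canon[name] = match or name
    return canon

mapping = resolve(["HubSpot", "HubSpot CRM", "HubSpot Inc."])
```

All three variants collapse to one canonical `HubSpot`, so the mention is counted once.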

#### 5. Platform‑local Share of Voice

Per‑platform × per‑brand matrix is computed with **tracked‑only denominators** — your target plus the competitors you defined, excluding third‑party mentions the AI drops in. SoV of 28% on ChatGPT means 28% of *tracked* brand mentions on ChatGPT, not 28% of every brand the AI ever mentioned (which would be meaningless).
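
In code, the tracked-only convention looks roughly like this (hypothetical mention counts, illustrative function name):

```python
def share_of_voice(mentions: dict[str, int], tracked: set[str]) -> dict[str, float]:
    """Platform-local SoV with a tracked-only denominator: brands
    outside the tracked set never enter the percentage base."""
    total = sum(count for brand, count in mentions.items() if brand in tracked)
    return {brand: round(100 * mentions.get(brand, 0) / total, 1)
            for brand in tracked}

# Hypothetical counts on one engine; "Monday.com" is untracked noise.
chatgpt = {"HubSpot": 14, "Salesforce": 21, "Pipedrive": 15, "Monday.com": 30}
sov = share_of_voice(chatgpt, {"HubSpot", "Salesforce", "Pipedrive"})
```

Here HubSpot lands at 28.0% of tracked mentions even though an untracked brand outscored everyone, which is exactly the distortion the tracked-only denominator removes.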

---

### 🧑‍💼 Built with a senior SEO / GEO analyst

This actor isn't a "we threw a prompt at ChatGPT and called it brand monitoring" project. The metric set, formulas, validation gates, sort orders, tier semantics and output schema were **co-designed with a senior SEO / GEO analyst** who runs brand-visibility audits for paying agency clients. Concretely:

- **AIS formula** (with consistency gating) and **5-class gap analysis** were ported from a battle-tested production reporting stack used in the field, then adapted to ground-truth validation.
- **Per-platform × per-brand matrix** matches the structure agencies already use in client decks — you can drop the JSON straight into Looker / Sheets / a slide and it makes sense without re-shaping.
- **Tracked-only Share of Voice** is the version that survived analyst scrutiny — global SoV (denominator = every brand the AI ever hallucinated) was rejected as misleading.
- **Aggregate strengths / weaknesses** and **top-cited domains with role classification** are pulled into the output because that's what content strategists ask for first when they get a brand audit.

Net result: **the JSON we ship is what an analyst would build in Looker after a week of querying raw data**, except you get it in 3–15 minutes for $0.30 per query, with hallucination control already applied.

---

### 📊 What you get in the output

| Section | What's inside |
|---|---|
| **Summary** | Your brand's AIS, mention rate, share of voice, consistency score, sentiment breakdown, owned / earned citation split, top‑10 aggregate strengths and weaknesses, dominant framing. |
| **Competitors** | Same aggregate metrics, one row per tracked competitor. |
| **Per‑platform** | Each engine's coverage %, mention rate, AIS, average position, top‑3 rate, sentiment breakdown. Sorted by AIS desc so `[0]` is your strongest engine. |
| **Per‑platform × per‑brand matrix** | One row per (engine × tracked brand). Answers "where am I winning, where am I losing" in one table. |
| **Per‑query** | Every (query × engine) cell with framing, sentiment, position, gap class, opportunity type, markdown excerpt, strengths, weaknesses, and full citation list with domain types and roles. |
| **Top cited domains** | Ranked list of domains AI engines trust on your topic. Flags your own domain as `is_brand_owned: true`. |
| **Upgrade CTA** (demo and free tiers) | Counts of platforms, queries, domains, matrix rows hidden at this tier. |

Full OpenAPI contract is in the source repo.

---

### 🚀 Input — what you pass in

```json
{
  "brand": "HubSpot",
  "category": "CRM software",
  "competitors": ["Salesforce", "Pipedrive", "Zoho CRM"],
  "language": "us",
  "queries": [
    "best CRM for marketing agencies",
    "HubSpot vs Salesforce for small business",
    "AI features in HubSpot CRM"
  ]
}
```

- `brand` — **required.** The brand you're auditing. Unicode letters supported (e.g. `Żabka`, `Müller`, `L'Oréal`).
- `queries` — **required for paid scans** (min 3, max 15). Write them in the language your customers use; each becomes 3–5 fan‑out variants × 5 engines = 15–25 real AI calls. Total cost = `N_queries × $0.30`. Omit only when `tier="demo"` (cached preview, no charge).
- `category` — *optional.* Plain‑English product/market category (e.g. `"CRM software"`, `"opony do samochodu"`). Backend treats it as `"general"` if omitted; paid scans use your `queries` verbatim and don't strictly need it.
- `language` — *optional.* ISO‑3166 country code (lowercase, 2 chars: `us`, `pl`, `de`, `fr`, `es`, `it`, `gb`, `br`, `jp`, ...). Drives DataForSEO geo + locale routing for the AI Overview / AI Mode engines (53 countries supported). Unknown codes silently fall back to US/English. Default: `us`.
- `competitors` — *optional.* Up to 5 brands you want ranked alongside you (matrix + scoring covers them). Each gets the same enriched output: AIS, mention rate, share of voice, top‑3 rate, sentiment trio, dominant framing, **plus aggregate strengths/weaknesses and citation\_mix per competitor** (since v0.5).
- `platforms` — *optional* override of the default 5‑engine mix (`chatgpt`, `perplexity`, `gemini`, `copilot`, `ai_overview`). Pick from the 7 supported (above + `grok`, `ai_mode`).

***

### 📦 Sample output (real scan, trimmed)

```json
{
  "brand": "Ahrefs",
  "category": "SEO tools",
  "summary": {
    "ais": 74.6,
    "mention_rate": 89.6,
    "share_of_voice": 14.4,
    "consistency_score": 46.7,
    "avg_position": 2.0,
    "top3_rate_pct": 86.8,
    "sentiment_positive_pct": 39.7,
    "sentiment_neutral_pct": 57.2,
    "sentiment_negative_pct": 2.7,
    "dominant_framing": "leader",
    "owned_citation_pct": 9.3,
    "earned_citation_pct": 90.7,
    "total_citations_seen": 2735,
    "aggregate_strengths": ["backlink analysis", "keyword research", "Content Explorer", "Site Explorer"],
    "aggregate_weaknesses": ["no free trial", "expensive", "steep learning curve", "restrictive credit system"],
    "avg_prominence": 87.3
  },
  "competitors": [
    {
      "brand": "Semrush",
      "ais": 67.2, "mention_rate": 82.0, "share_of_voice": 13.6, "top3_rate_pct": 78.4,
      "sentiment_positive_pct": 46.9, "owned_citation_pct": 4.2, "earned_citation_pct": 95.8,
      "aggregate_strengths": ["all-in-one platform", "broader keyword database", "site audit"],
      "aggregate_weaknesses": ["UI clutter", "weaker backlink data than Ahrefs"],
      "citation_mix": {"blog": 38.5, "media": 12.1, "review_site": 8.2, "brand_owned": 4.2, "other": 37.0}
    }
  ],
  "per_platform_per_brand": [
    { "platform": "copilot",    "brand": "Ahrefs",  "ais": 84.7, "mention_rate": 98.0, "share_of_voice": 28.0, "top3_rate_pct": 87.0 },
    { "platform": "ai_mode",    "brand": "Ahrefs",  "ais": 80.9, "mention_rate": 96.0, "share_of_voice": 29.7, "top3_rate_pct": 82.7 },
    { "platform": "chatgpt",    "brand": "Ahrefs",  "ais": 77.7, "mention_rate": 94.0, "share_of_voice": 28.7, "top3_rate_pct": 85.4 },
    { "platform": "perplexity", "brand": "Ahrefs",  "ais": 73.6, "mention_rate": 90.0, "share_of_voice": 26.9, "top3_rate_pct": 93.0 },
    { "platform": "ai_mode",    "brand": "Semrush", "ais": 64.9, "mention_rate": 82.0, "share_of_voice": 25.3, "top3_rate_pct": 77.5 },
    { "platform": "copilot",    "brand": "Semrush", "ais": 62.4, "mention_rate": 78.0, "share_of_voice": 27.4, "top3_rate_pct": 82.1 }
  ]
}
```

Three insights this actor already hands you for free from the data above:

1. **Ahrefs dominates Copilot** (AIS 84.7, 98% mention rate) — that's your strongest engine.
2. **Perplexity puts you top‑3 in 93% of responses** — highest top‑3 rate anywhere. Strong editorial signal on that engine.
3. **11‑point AIS spread across engines (73.6 → 84.7)** — your brand is not uniformly visible. Content ops have clear per‑engine targets.

Pay $0.30, get actionable board‑deck material like this.
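
Because the output is plain JSON, insights like these fall out of a few lines of Python against the `per_platform_per_brand` rows shown above (trimmed here to `ais` only):

```python
# Trimmed copy of the sample per_platform_per_brand rows for the audited brand.
matrix = [
    {"platform": "copilot",    "brand": "Ahrefs", "ais": 84.7},
    {"platform": "ai_mode",    "brand": "Ahrefs", "ais": 80.9},
    {"platform": "chatgpt",    "brand": "Ahrefs", "ais": 77.7},
    {"platform": "perplexity", "brand": "Ahrefs", "ais": 73.6},
]

# Sort our brand's rows by AIS, strongest engine first.
ours = sorted((row for row in matrix if row["brand"] == "Ahrefs"),
              key=lambda row: row["ais"], reverse=True)
strongest, weakest = ours[0], ours[-1]
spread = round(strongest["ais"] - weakest["ais"], 1)  # per-engine AIS spread
```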

***

### 🧭 Who this is for

- **SEO & GEO agencies** running monthly AI‑visibility audits for clients. Use this to export directly into Looker or a client dashboard.
- **B2B SaaS brand managers** who need to prove "we won the AI mention race" in a quarterly board deck. Full competitor comparison included.
- **Content strategists** looking to see *exactly* which query intents leave them absent from AI answers — the `per_query` section with gap class and opportunity type is the work order for your editorial calendar.
- **RevOps and Sales Ops** personalizing cold outreach at scale: 1 query per prospect via Clay or Apify API.
- **Freelance SEO consultants** who don't want a $99+/month subscription just to run one audit.

***

### ⏱️ How long does a scan take?

| Queries | Approx. runtime | Approx. data points |
|---|---|---|
| 3 | 3–5 min | 45–75 |
| 10 | 8–15 min | 150–250 |
| 30 | 20–40 min | 450–750 |
| 100 | 60–120 min | 1,500–2,500 |

Scans are parallel across engines; the bottleneck is slower AI engines (ChatGPT, AI Mode) on heavy load. Your Apify dataset populates the moment aggregation finishes, and PPE events only fire on successful completion — **partial / failed scans aren't charged**.

***

### 🧰 FAQ

**What exactly counts as 1 query?**
One topic or question you care about. Behind the scenes we expand it to 3–5 semantic variants (from Google PAA + related) and run each across 5 engines. You pay once; you get 15–25 real AI data points.

**Do I need to bring my own OpenAI / Anthropic / Google API keys?**
No. Everything is included in the $0.30 per query.

**What happens if a scan fails mid‑way?**
Every task is idempotent (tracked by a unique correlation key). If one AI engine throws, we retry automatically. The PPE event fires only on successful completion, so a dead‑ended scan isn't charged.

**Can I scan a brand in Polish, German, or another language?**
Yes. Ground‑truth validation uses Unicode NFC + casefold normalization, so "Żabka", "München", "İstanbul" all match reliably against any normalization form the AI uses.
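
That normalization can be reproduced with Python's standard `unicodedata` module (a sketch of the idea, not the actor's exact code):

```python
import unicodedata

def norm(text: str) -> str:
    # NFC recomposes decomposed accents; casefold() handles case
    # pairs that plain lower() misses (e.g. German ss/ß).
    return unicodedata.normalize("NFC", text).casefold()

# A precomposed needle matches the same word even if the AI engine
# emitted it in decomposed (NFD) form.
decomposed = unicodedata.normalize("NFD", "Sieć Żabka rośnie")
found = norm("Żabka") in norm(decomposed)
```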

**Can I trust the numbers?**
Yes, and the system is designed around that question. Every metric except Sentiment is backed by a literal markdown substring match. Sentiment is LLM‑classified but gated by markdown evidence. Every response carries `ground_truth_validated: true` confirming the validation ran.

**Where does the data come from?**
AI engines are scraped through real browser automation and public SERP APIs. Raw markdown is preserved for every response because hallucination validation needs it.

**Is my input private?**
Brand and category are public information. Results live in **your** Apify dataset under **your** account. We don't share or sell scan content.

**Can I use this for a client's brand?**
Yes. The output is a complete, self‑contained report you can deliver as-is to clients.

**How does this compare to running 100 scans on a cheaper actor?**
A cheaper actor at $0.008 per item gives you 100 items. This actor at $0.30 per query gives you 1,500–2,500 validated AI data points across 5 engines with scoring, ground-truth validation, entity resolution, fan-out, and a full competitor matrix. On a per-validated-datapoint basis we are **cheaper**, not more expensive.

**Refund policy?**
Apify's standard PPE model: you're only billed on successful task completion. Failed or partial scans are not charged.

**Will there be an MCP server?**
Yes — it's the next major channel after the Apify actor. The same ground-truth validated brand-visibility analytics will be exposed via a Model Context Protocol server so Claude Desktop, Cursor, ChatGPT custom GPTs, n8n / Zapier MCP nodes and any other MCP-aware AI agent can call the dataset as a native tool. Pricing model carries over: agents pay per query, no monthly commitment. Roadmap below.

**Who designed the metric set?**
The actor is co-developed with a senior SEO / GEO analyst who runs paid brand-visibility audits for agency clients. The AIS formula, the 5-class gap analysis, the per-platform × per-brand matrix layout, the tracked-only Share of Voice convention, and the citation-domain rankings are all built around what an analyst actually puts in a client deck — not what looks good in a tool screenshot.

***

### 🗺️ Roadmap

Features we're actively building. No ETAs — we ship when it's good, and changes land behind the `per_*` fields you already consume (backward compatible).

#### Coming soon

- **MCP server (Model Context Protocol)** — a hosted MCP endpoint so any MCP-aware client (Claude Desktop, Cursor, ChatGPT custom GPTs, n8n / Zapier MCP nodes, in-house AI agents) can ask *"how does HubSpot perform across all major AI engines vs Salesforce on Q3 buyer-intent queries?"* and receive ground-truth validated, structured analytics in a single tool call. Same pricing model — billed per query consumed by the agent.
- **Executive summary (LLM narrative)** — 2–4 sentence plain-English interpretation of your visibility profile. *"Ahrefs dominates Copilot at AIS 84.7 but loses 11 points on Perplexity — invest in Perplexity-native citation sources to close the gap."* Co-written with our SEO analyst. Available on every paid tier.

#### Planned

- **Discovered brands** — brands AI engines surface organically around your topic, outside the `competitors[]` list. Catches blind spots.
- **Cited pages per owned domain** — per-engine citation counts for every URL on your own domain. Tells you which pages to double down on and which to kill.
- **Content opportunity map** — full dump of People-Also-Ask + related candidates from the fan-out pass, including those the selector didn't pick. Idea-generation surface for content strategists.
- **Clay / Zapier / Make.com native integrations** — 1-click action nodes.
- **Looker / Sheets / BI exports** — ready-to-paste dataset templates for analyst-grade reporting.

#### Shipped recently

- **Per‑platform × per‑brand matrix** with tracked‑only Share of Voice.
- **Owned vs. earned citation split.**
- **Top‑3 rate and position scoring** on flowing-markdown engines (Copilot, AI Mode, AI Overview).
- **Aggregate strengths and weaknesses** (top-10 per brand).
- **Full sentiment breakdown** (positive / neutral / negative).
- **Preview mode on demo tier** with upgrade hints.

***

### 🏷️ Keywords

AI brand monitoring · ChatGPT brand visibility · Perplexity brand tracking · Copilot brand monitoring · Google AI Overview monitoring · Google AI Mode tracking · AEO (AI Engine Optimization) · GEO (Generative Engine Optimization) · LLM brand tracker · AI SEO · brand mentions in ChatGPT · share of voice AI · AI visibility score · ground truth validation · hallucination-free AI analysis · competitor AI monitoring · AI citation analysis · AI search optimization · SERP AI monitoring · platform-local share of voice · per-engine brand matrix · MCP server brand visibility · Model Context Protocol brand monitoring · MCP brand intelligence · Claude MCP brand visibility · Cursor MCP AI search · ChatGPT MCP integration · agentic brand intelligence · analyst-grade AI brand data · professional AI brand analytics · SEO analyst built tool · GEO analyst tooling · enterprise-grade AI brand monitoring · agency-grade brand visibility · board-deck brand visibility report

***

### 🤝 Built by

[doesaiknow.com](https://doesaiknow.com) — the only brand-AI monitoring stack built around **3-layer Ground-Truth Validation** to eliminate LLM hallucination from your data, **co-designed with a senior SEO / GEO analyst**, and built to ship the same dataset to **Apify, MCP-aware AI agents, BI tools and APIs** under a single pay-per-query economic model.

# Actor input Schema

## `brand` (type: `string`):

The brand you want to analyze. Unicode letters supported (e.g. 'Żabka', 'Müller', 'L'Oréal').

## `queries` (type: `array`):

REQUIRED for paid scans (tier=quick\_tester / pro\_auditor / agency\_scale): min 3, max 15 user-submitted queries (200 chars each), written in the language your customers use. Each query is fanned out into ~4 semantic variants by the backend, executed across all selected AI engines. Total cost = N\_queries × $0.30. OPTIONAL for tier=demo (cached preview, no real scan, no charge — backend ignores queries on the demo path).

## `language` (type: `string`):

ISO-3166 country code (lowercase, 2 chars). Drives DataForSEO geo + locale routing. 'us'=English/United States (default), 'pl'=Polish/Poland, 'de'=German/Germany, 'fr'=French/France, 'es'=Spanish/Spain, 'it'=Italian/Italy, 'gb'=English/United Kingdom, 'br'=Portuguese/Brazil, 'jp'=Japanese/Japan, etc. (53 countries supported, see app/core/geo\_mapper.py). Unknown codes silently fall back to US/English. NOTE FOR MAINTAINERS: this enum is hand-mirrored from app/core/geo\_mapper.py::\_GEO\_MAP keys — when adding a new country there, also append it here (and to the doesaiknow-visibility-apify mirror). No automated drift check exists yet; consider a CI guard if this list grows beyond 60 entries.
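
A sketch of the documented fallback behavior, with a hypothetical four-entry fragment of the map (the function name and tuple layout are illustrative, not the actor's internals):

```python
# Hypothetical fragment of the geo map; the real 53-country list
# lives in app/core/geo_mapper.py.
GEO_MAP = {
    "us": ("United States", "en"),
    "pl": ("Poland", "pl"),
    "de": ("Germany", "de"),
    "jp": ("Japan", "ja"),
}

def resolve_geo(code: str) -> tuple[str, str]:
    """Unknown or malformed codes silently fall back to US/English,
    mirroring the documented behavior."""
    return GEO_MAP.get(code.strip().lower(), GEO_MAP["us"])

resolve_geo("pl")  # Poland / Polish
resolve_geo("xx")  # unknown code: falls back to US / English
```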

## `competitors` (type: `array`):

Up to 5 tracked competitors. Leave empty to let the system pick from AI engine responses.

## `category` (type: `string`):

Optional product / market category (e.g. 'CRM software', 'opony do samochodu', 'projekty domów'). Backend treats it as 'general' if omitted — paid scans use your queries verbatim and don't strictly need it. Unicode letters supported.

## `platforms` (type: `array`):

Which AI engines to query. Default 5 covers the highest-traffic engines; full set (7) adds Grok and Google AI Mode.

## `tier` (type: `string`):

demo = cached sample ($0, no real scan). quick\_tester = 3-5 queries (PPE $0.30/query). pro\_auditor = 5-15 queries (PPE $0.30/query, fan-out enabled). agency\_scale = bulk (PPE $0.30/query).

## Actor input object example

```json
{
  "brand": "HubSpot",
  "queries": [
    "best CRM 2026",
    "HubSpot review",
    "alternatives to HubSpot"
  ],
  "language": "us",
  "competitors": [],
  "platforms": [
    "chatgpt",
    "perplexity",
    "gemini",
    "copilot",
    "ai_overview"
  ],
  "tier": "pro_auditor"
}
```

# Actor output Schema

## `scanResult` (type: `string`):

Default dataset link. Holds the full ScanResult JSON: summary metrics (AIS, mention rate, share of voice, sentiment trio, owned/earned citation split, aggregate strengths/weaknesses), competitor BrandMetrics rows, per-platform breakdown, per-platform x per-brand matrix, per-query detail with citations and framing, top cited domains ranking, and the upgrade CTA for demo runs. See dataset\_schema.json for per-field documentation.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "brand": "HubSpot",
    "queries": [
        "best CRM 2026",
        "HubSpot review",
        "alternatives to HubSpot"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("doesaiknow/ai-brand-visibility---chatgpt-perplexity-copilot-google-ai").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "brand": "HubSpot",
    "queries": [
        "best CRM 2026",
        "HubSpot review",
        "alternatives to HubSpot",
    ],
}

# Run the Actor and wait for it to finish
run = client.actor("doesaiknow/ai-brand-visibility---chatgpt-perplexity-copilot-google-ai").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "brand": "HubSpot",
  "queries": [
    "best CRM 2026",
    "HubSpot review",
    "alternatives to HubSpot"
  ]
}' |
apify call doesaiknow/ai-brand-visibility---chatgpt-perplexity-copilot-google-ai --silent --output-dataset

```
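The tier descriptions in the input schema price paid scans at $0.30 per submitted query, while the `demo` tier is free. A quick client-side cost sanity check before launching a run might look like this (the helper name and rounding are our own; confirm current rates on the Actor's pricing page):

```python
# Rate taken from the Actor's tier descriptions; the ~4x semantic fan-out
# is informational only -- billing is per submitted query, not per variant.
PRICE_PER_QUERY_USD = 0.30

def estimate_scan_cost(queries: list[str], tier: str = "pro_auditor") -> float:
    """Estimated USD cost for a scan; demo runs use a cached sample and cost nothing."""
    if tier == "demo":
        return 0.0
    return round(len(queries) * PRICE_PER_QUERY_USD, 2)

print(estimate_scan_cost(["best CRM 2026", "HubSpot review", "alternatives to HubSpot"]))
# → 0.9
```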

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=doesaiknow/ai-brand-visibility---chatgpt-perplexity-copilot-google-ai",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "AI Brand Visibility - ChatGPT, Perplexity, Copilot, Google AI",
        "description": "Analyst-grade AI brand visibility across ChatGPT, Perplexity, Copilot, Google AI Overview & AI Mode. $0.30/query - 15 real AI interactions, ground-truth validated. AI Visibility Score, Share of Voice, competitor matrix. Built with a SEO analyst. MCP server. AI SEO/GEO/AEO audit, no subscription.",
        "version": "0.0",
        "x-build-id": "ZRGlI28PXu1jM8XfL"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/doesaiknow~ai-brand-visibility---chatgpt-perplexity-copilot-google-ai/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-doesaiknow-ai-brand-visibility---chatgpt-perplexity-copilot-google-ai",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/doesaiknow~ai-brand-visibility---chatgpt-perplexity-copilot-google-ai/runs": {
            "post": {
                "operationId": "runs-sync-doesaiknow-ai-brand-visibility---chatgpt-perplexity-copilot-google-ai",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/doesaiknow~ai-brand-visibility---chatgpt-perplexity-copilot-google-ai/run-sync": {
            "post": {
                "operationId": "run-sync-doesaiknow-ai-brand-visibility---chatgpt-perplexity-copilot-google-ai",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "brand"
                ],
                "properties": {
                    "brand": {
                        "title": "Brand",
                        "maxLength": 100,
                        "type": "string",
                        "description": "The brand you want to analyze. Unicode letters supported (e.g. 'Żabka', 'Müller', 'L'Oréal')."
                    },
                    "queries": {
                        "title": "Queries (what people ask AI engines)",
                        "minItems": 0,
                        "maxItems": 15,
                        "type": "array",
                        "description": "REQUIRED for paid scans (tier=quick_tester / pro_auditor / agency_scale): min 3, max 15 user-submitted queries (200 chars each), written in the language your customers use. Each query is fanned out into ~4 semantic variants by the backend, executed across all selected AI engines. Total cost = N_queries × $0.30. OPTIONAL for tier=demo (cached preview, no real scan, no charge — backend ignores queries on the demo path).",
                        "items": {
                            "type": "string"
                        }
                    },
                    "language": {
                        "title": "Language / country",
                        "enum": [
                            "us",
                            "gb",
                            "ca",
                            "au",
                            "de",
                            "fr",
                            "es",
                            "it",
                            "pl",
                            "nl",
                            "se",
                            "no",
                            "dk",
                            "fi",
                            "br",
                            "pt",
                            "jp",
                            "in",
                            "mx",
                            "ie",
                            "at",
                            "ch",
                            "be",
                            "cz",
                            "ro",
                            "ae",
                            "ar",
                            "bg",
                            "cl",
                            "co",
                            "ee",
                            "eg",
                            "gr",
                            "hr",
                            "hu",
                            "id",
                            "il",
                            "kr",
                            "lt",
                            "lv",
                            "my",
                            "ng",
                            "nz",
                            "pe",
                            "ph",
                            "pk",
                            "sa",
                            "sg",
                            "sk",
                            "th",
                            "tr",
                            "ua",
                            "vn",
                            "za"
                        ],
                        "type": "string",
                        "description": "ISO-3166 country code (lowercase, 2 chars). Drives DataForSEO geo + locale routing. 'us'=English/United States (default), 'pl'=Polish/Poland, 'de'=German/Germany, 'fr'=French/France, 'es'=Spanish/Spain, 'it'=Italian/Italy, 'gb'=English/United Kingdom, 'br'=Portuguese/Brazil, 'jp'=Japanese/Japan, etc. (53 countries supported, see app/core/geo_mapper.py). Unknown codes silently fall back to US/English. NOTE FOR MAINTAINERS: this enum is hand-mirrored from app/core/geo_mapper.py::_GEO_MAP keys — when adding a new country there, also append it here (and to the doesaiknow-visibility-apify mirror). No automated drift check exists yet; consider a CI guard if this list grows beyond 60 entries.",
                        "default": "us"
                    },
                    "competitors": {
                        "title": "Competitors (optional)",
                        "maxItems": 5,
                        "type": "array",
                        "description": "Up to 5 tracked competitors. Leave empty to let the system pick from AI engine responses.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "category": {
                        "title": "Category (optional)",
                        "maxLength": 100,
                        "type": "string",
                        "description": "Optional product / market category (e.g. 'CRM software', 'opony do samochodu', 'projekty domów'). Backend treats it as 'general' if omitted — paid scans use your queries verbatim and don't strictly need it. Unicode letters supported."
                    },
                    "platforms": {
                        "title": "AI engines",
                        "type": "array",
                        "description": "Which AI engines to query. Default 5 covers the highest-traffic engines; full set (7) adds Grok and Google AI Mode.",
                        "items": {
                            "type": "string",
                            "enum": [
                                "chatgpt",
                                "perplexity",
                                "gemini",
                                "copilot",
                                "grok",
                                "ai_overview",
                                "ai_mode"
                            ]
                        },
                        "default": [
                            "chatgpt",
                            "perplexity",
                            "gemini",
                            "copilot",
                            "ai_overview"
                        ]
                    },
                    "tier": {
                        "title": "Tier",
                        "enum": [
                            "demo",
                            "quick_tester",
                            "pro_auditor",
                            "agency_scale"
                        ],
                        "type": "string",
                        "description": "demo = cached sample ($0, no real scan). quick_tester = 3-5 queries (PPE $0.30/query). pro_auditor = 5-15 queries (PPE $0.30/query, fan-out enabled). agency_scale = bulk (PPE $0.30/query).",
                        "default": "pro_auditor"
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
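The `inputSchema` above can be checked client-side before paying for a run. Below is a minimal validation sketch mirroring the schema's stated constraints (required `brand` of at most 100 characters, 3–15 queries for paid tiers with 200 characters each, at most 5 competitors, and the four tier values); the function name and error messages are our own, and the Actor's backend remains the authority on what it accepts.

```python
# Constraint values are copied from the inputSchema; everything else is illustrative.
ALLOWED_TIERS = {"demo", "quick_tester", "pro_auditor", "agency_scale"}

def validate_input(payload: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the input looks valid."""
    errors = []
    brand = payload.get("brand")
    if not isinstance(brand, str) or not brand:
        errors.append("'brand' is required and must be a non-empty string")
    elif len(brand) > 100:
        errors.append("'brand' must be at most 100 characters")
    tier = payload.get("tier", "pro_auditor")
    if tier not in ALLOWED_TIERS:
        errors.append(f"unknown tier: {tier!r}")
    queries = payload.get("queries", [])
    if tier != "demo" and not 3 <= len(queries) <= 15:
        errors.append("paid tiers require 3-15 queries")
    if any(len(q) > 200 for q in queries):
        errors.append("each query must be at most 200 characters")
    if len(payload.get("competitors", [])) > 5:
        errors.append("at most 5 competitors may be tracked")
    return errors

print(validate_input({
    "brand": "HubSpot",
    "queries": ["best CRM 2026", "HubSpot review", "alternatives to HubSpot"],
}))
# → []
```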
