AI Brand Monitor — Perplexity, ChatGPT, Claude & Gemini

Pricing

from $80.00 / 1,000 brand query checks

Go to Apify Store
Rising star

Track brand visibility across Perplexity, ChatGPT, Claude & Gemini AI search. MCP-ready. $0.08/result.

Rating

0.0 (0)

Developer

Khadin Akbar

Maintained by Community

Actor stats

  • Bookmarked: 0
  • Total users: 56
  • Monthly active users: 34
  • Last modified: 7 days ago

AI Search Brand Visibility Monitor — GEO Tracker for Perplexity, ChatGPT, Claude & Gemini

Track how your brand appears in AI-generated search results across the four major AI search platforms. Built for SEO teams, GEO practitioners, and brand managers who need to monitor and improve their visibility in the AI-powered search era.

Use as MCP tool inside any AI agent — no setup, no install:

https://mcp.apify.com?tools=khadinakbar/ai-search-brand-monitor

…or self-host via the ai-brand-monitor-mcp npm package for stdio MCP clients (Claude Desktop, Cursor, Cline, Continue).

Compatible with: Apify MCP Server (Claude, ChatGPT agents), ai-brand-monitor-mcp npm package, LangChain, Make.com, Zapier, n8n, and direct REST API.


What does AI Search Brand Monitor do?

AI Search Brand Monitor submits brand-relevant queries to Perplexity, ChatGPT (via gpt-4o-search-preview), Claude, and Google Gemini, then analyzes each response to measure how prominently your brand appears. It tracks whether your brand is mentioned, how early it appears in the response, whether your domain is cited as a source, and which competitors are recommended in the same breath.

Run it weekly to build a trend baseline. Run it against competitors to measure share-of-voice across AI search. Run it before and after a content push to see if your GEO strategy is working.

Who is this for: SEO teams adopting GEO (Generative Engine Optimization), brand managers tracking AI search presence, content strategists building AI-citation-worthy content, and agencies running AI visibility audits for clients.


What data does it extract?

Each record in the output represents one brand × query × platform check:

Field | Type | Example
platform | string | "perplexity"
query | string | "What are the best SEO tools in 2026?"
query_category | string | "best_tools"
brand_mentioned | boolean | true
brand_mention_count | integer | 3
brand_share_of_voice | number (0–1) | 0.667 (brand has 67% of competitor-relevant mentions)
mention_position_score | integer (1–10) | 2 (1 = first mention, most prominent)
mention_context | string | "...Ahrefs remains the top choice for..."
sentiment | string | "positive" / "neutral" / "negative"
is_cited_as_source | boolean | true
cited_url | string or null | "https://ahrefs.com/blog/seo-tools/"
cited_urls | string[] | ["https://ahrefs.com/...", "https://semrush.com/..."]
total_sources_cited | integer | 8
competitor_mentions | string[] | ["Semrush", "Moz"]
competitor_mention_count | integer | 4 (sum across all competitors)
ai_response_summary | string | First 800 chars (or 200 chars in concise mode)
model_used | string | "perplexity/sonar"
scraped_at | ISO datetime | "2026-03-28T14:30:00.000Z"
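To make the schema concrete, here is a sketch of one output record. The field names follow the table above; the values are made-up but internally consistent, and the last lines show how brand_share_of_voice relates to the two mention counts:

```python
# Illustrative output record (field names from the schema table; values are
# examples chosen to be internally consistent, not real run output).
record = {
    "platform": "perplexity",
    "query": "What are the best SEO tools in 2026?",
    "query_category": "best_tools",
    "brand_mentioned": True,
    "brand_mention_count": 4,
    "competitor_mentions": ["Semrush", "Moz"],
    "competitor_mention_count": 2,
    "brand_share_of_voice": 0.667,
    "mention_position_score": 2,
    "is_cited_as_source": True,
    "cited_urls": ["https://ahrefs.com/blog/seo-tools/"],
    "model_used": "perplexity/sonar",
    "scraped_at": "2026-03-28T14:30:00.000Z",
}

# brand_share_of_voice = brand mentions / (brand mentions + competitor mentions)
sov = record["brand_mention_count"] / (
    record["brand_mention_count"] + record["competitor_mention_count"]
)
print(round(sov, 3))  # 0.667
```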

How to use AI Search Brand Monitor

Zero setup — all 4 AI platforms included

No API keys to bring. The actor ships with Perplexity, ChatGPT, Claude, and Gemini access included — you pay one flat rate of $0.08 per brand × query × platform check and we handle the upstream AI API costs.

Step 1: Configure the actor

  1. Open the actor in Apify Console
  2. Enter your brandName (e.g., "Ahrefs")
  3. Enter your brandDomain (e.g., "ahrefs.com") for citation tracking
  4. Add competitors you want to track co-mentions for (e.g., ["Semrush", "Moz"])
  5. Select which queryTemplates to use, or write customQueries
  6. Pick which platforms to query (defaults to all 4)
  7. Choose responseFormat: detailed (default — full 800-char AI response context, best for human review) or concise (200-char summaries, ~3× smaller token footprint, best for AI agents)
  8. Click Start
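
Taken together, steps 2–7 correspond to an input object along these lines (a sketch: the values are illustrative, and the platform identifiers are assumed to match the output's platform field):

```python
# Hypothetical actor input assembled from the configuration steps above.
actor_input = {
    "brandName": "Ahrefs",
    "brandDomain": "ahrefs.com",                       # step 3: citation tracking
    "competitors": ["Semrush", "Moz"],                 # step 4: co-mentions
    "queryTemplates": ["best_tools", "alternatives"],  # step 5 (or customQueries)
    "platforms": ["perplexity", "chatgpt", "claude", "gemini"],  # step 6
    "responseFormat": "detailed",                      # step 7: or "concise"
}

# Billable $0.08 checks for this run, assuming each template yields one query:
checks = len(actor_input["queryTemplates"]) * len(actor_input["platforms"])
print(checks)  # 8
```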

Step 2: Analyze results

Results appear in real time as checks complete. Each row shows one AI platform response with full mention analysis. Export to CSV or JSON, or pipe directly into your analytics stack via webhook.

Using with AI agents (Claude, ChatGPT)

Connect via the Apify MCP Server and ask:

"Check how Ahrefs appears in Perplexity and ChatGPT when people ask about best SEO tools"

"Monitor brand visibility for HubSpot across all AI search platforms versus Salesforce and Pipedrive"

The AI agent will automatically select and run this actor, returning structured visibility data.


How to track AI Search Brand Visibility over time

The actor stores a LAST_RUN_SUMMARY in its key-value store after every run, capturing overall mention rate, per-platform scores, and citation rates. Use Apify's scheduler to run this actor weekly, then export the summary data to track trends:

  • Is your mention rate improving after publishing new thought leadership content?
  • Are you appearing in Perplexity citations more often since you added structured data?
  • Is a competitor's mention score rising in ChatGPT while yours stays flat?

Weekly GEO monitoring makes these trends visible. Monthly audits miss the signal.
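As a sketch of that trend analysis: given a series of weekly summaries, the week-over-week change in mention rate falls out of a few lines. (The mention_rate key here is a hypothetical name for the summary's mention-rate value, not the actor's documented schema.)

```python
# Sketch: week-over-week change in overall mention rate, computed from stored
# run summaries ordered oldest -> newest. "mention_rate" is an assumed field
# name, not part of the documented LAST_RUN_SUMMARY schema.
def mention_rate_trend(summaries):
    rates = [s["mention_rate"] for s in summaries]
    return [round(b - a, 3) for a, b in zip(rates, rates[1:])]

weekly = [
    {"week": "2026-05-04", "mention_rate": 0.42},
    {"week": "2026-05-11", "mention_rate": 0.50},
    {"week": "2026-05-18", "mention_rate": 0.58},
]
print(mention_rate_trend(weekly))  # [0.08, 0.08]
```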


Understanding the mention_position_score

The mention_position_score (1–10) measures how prominently your brand features in an AI response:

  • 1–2 — Your brand is mentioned in the opening lines. The AI leads with your brand. Excellent GEO position.
  • 3–5 — Your brand appears in the middle of the response alongside other options.
  • 6–9 — Your brand is mentioned late, possibly as an afterthought or secondary option.
  • 10 — Your brand was not mentioned at all. This is a GEO gap to address.

A brand with a consistent position score of 2 across Perplexity and Gemini has dominant AI search presence in its category. A score of 8–9 means competitors are being recommended first — a signal to build more citation-worthy content.
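The bands above can be expressed as a small helper for post-processing results (the band labels are illustrative names, not fields the actor emits):

```python
# Sketch: bucket mention_position_score (1-10) into the bands described above.
# Band labels are my own, not actor output fields.
def position_band(score: int) -> str:
    if not 1 <= score <= 10:
        raise ValueError("mention_position_score must be 1-10")
    if score <= 2:
        return "leading"    # AI leads with your brand
    if score <= 5:
        return "mid-pack"   # listed alongside other options
    if score <= 9:
        return "trailing"   # late mention / afterthought
    return "absent"         # not mentioned: a GEO gap

print([position_band(s) for s in (2, 4, 8, 10)])
# ['leading', 'mid-pack', 'trailing', 'absent']
```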


Query templates explained

best_tools — Queries like "What are the best tools like [brand]?" — measures whether your brand is included in category roundups.

alternatives — Queries like "What are alternatives to [brand]?" — measures whether your brand is recommended when users search for options.

recommendations — Queries like "Should I use [brand]?" — measures whether AI platforms endorse your brand directly.

reviews — Queries like "What do users say about [brand]?" — measures sentiment-loaded queries where AI summarizes reputation.

comparisons — Queries like "How does [brand] compare to competitors?" — measures how AI positions your brand head-to-head.

You can also supply customQueries for industry-specific questions your target customers actually ask.
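The five categories above can be thought of as query factories. As a sketch, here is how they might expand for a given brand (the exact phrasings are illustrative, not the actor's internal prompts):

```python
# Sketch: expanding the five template categories for brand "Ahrefs".
# Phrasings are illustrative placeholders, not the actor's real prompts.
TEMPLATES = {
    "best_tools": "What are the best tools like {brand}?",
    "alternatives": "What are alternatives to {brand}?",
    "recommendations": "Should I use {brand}?",
    "reviews": "What do users say about {brand}?",
    "comparisons": "How does {brand} compare to competitors?",
}

queries = [t.format(brand="Ahrefs") for t in TEMPLATES.values()]
print(queries[1])  # "What are alternatives to Ahrefs?"
```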


How much does it cost?

This actor uses PAY_PER_EVENT pricing:

Usage | Price
Per brand × query × platform check | $0.08
Default agent-friendly run (1 brand, 3 queries, 4 platforms) | ~$0.96 (fits inside x402 default prepay cap)
Standard human run (1 brand, 5 queries, 4 platforms) | ~$1.60
Weekly monitoring (4 runs/month, default settings) | ~$3.84/month
Agency audit (5 brands, 10 queries, 4 platforms) | ~$16.00

Everything included. No separate AI API keys, no separate AI bills. The $0.08 covers Perplexity Sonar, ChatGPT (gpt-4o-search-preview), Claude Sonnet 4.5 with native web_search, and Gemini 2.5 Flash with Google Search grounding — all routed through OpenRouter and Google's API at the actor's expense.
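The rows in the pricing table all reduce to one multiplication, which a quick sketch makes explicit:

```python
# Sketch: estimated run cost under PAY_PER_EVENT pricing,
# $0.08 per brand x query x platform check.
def estimated_cost(brands: int, queries: int, platforms: int,
                   rate: float = 0.08) -> float:
    return round(brands * queries * platforms * rate, 2)

print(estimated_cost(1, 3, 4))   # 0.96  (default agent-friendly run)
print(estimated_cost(1, 5, 4))   # 1.6   (standard human run)
print(estimated_cost(5, 10, 4))  # 16.0  (agency audit)
```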


Use cases for GEO monitoring

SEO teams adopting GEO — Track how content investments translate into AI search citations. Before and after measurement for every major content push.

Brand managers — Monitor share-of-voice across AI platforms weekly. Flag when a competitor's mention rate spikes and diagnose why.

Content strategists — Identify which query categories your brand is missing from and create content specifically designed to fill those gaps.

PR agencies — Show clients their brand's AI search presence as a new KPI alongside traditional SEO rankings.

Product teams — Track whether new features or product launches change how AI platforms describe your product.

Competitive intelligence — Run the actor for a competitor's brand to see their AI visibility scores and which sources AI platforms trust when recommending them.


No scraping, no cookies, no login required

This actor works exclusively through official AI platform APIs (routed via OpenRouter for unified billing). There is no browser automation, no web scraping, no cookie handling, and no login sessions — just direct API calls to Perplexity, OpenAI, Anthropic, and Google. All AI platform access is included in the flat per-check price; no separate AI API keys required. This means it runs fast (seconds per check), never gets blocked, and never requires proxy rotation.


API & Integration

REST API

curl -X POST "https://api.apify.com/v2/acts/khadinakbar~ai-search-brand-monitor/runs" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "brandName": "Ahrefs",
    "brandDomain": "ahrefs.com",
    "competitors": ["Semrush", "Moz"],
    "queryTemplates": ["best_tools", "alternatives"]
  }'

JavaScript (Node.js)

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });
const run = await client.actor('khadinakbar/ai-search-brand-monitor').call({
  brandName: 'Ahrefs',
  brandDomain: 'ahrefs.com',
  competitors: ['Semrush', 'Moz'],
  queryTemplates: ['best_tools', 'alternatives', 'recommendations'],
});
const { items } = await client.dataset(run.defaultDatasetId).listItems();
// items[0] = { platform: 'perplexity', brand_mentioned: true, mention_position_score: 2, ... }

Python

from apify_client import ApifyClient
import os

client = ApifyClient(os.environ['APIFY_TOKEN'])
run = client.actor('khadinakbar/ai-search-brand-monitor').call(run_input={
    'brandName': 'Ahrefs',
    'brandDomain': 'ahrefs.com',
    'competitors': ['Semrush', 'Moz'],
    'queryTemplates': ['best_tools', 'alternatives'],
})
items = list(client.dataset(run['defaultDatasetId']).iterate_items())

Integrations: Apify MCP Server, LangChain, Make.com, Zapier, n8n, Google Sheets, Slack, Airtable, any webhook-compatible tool.


FAQ

Q: Do I need to bring my own AI API keys? A: No. All 4 AI platforms (Perplexity, ChatGPT, Claude, Gemini) are included in the flat per-check price — no separate signups, no separate billing. Upstream AI API calls are routed via OpenRouter and covered by the $0.08 per brand-query-checked event.

Q: Does Claude have web search? A: Yes. All 4 platforms (Perplexity, ChatGPT, Claude, Gemini) run with live web search enabled so you get a consistent real-time GEO signal across every model — not stale training-data snapshots.

Q: How often should I run this actor? A: Weekly monitoring is the sweet spot for most brands. Run it after publishing major content pieces, launching features, or executing PR campaigns to measure the impact on AI search visibility.

Q: Can I monitor multiple brands? A: Currently one brand per run. To monitor multiple brands or clients, create separate scheduled runs for each. Apify's scheduler supports unlimited concurrent runs.

Q: Is there a free tier? A: Apify provides $5 in free credits on signup, enough for ~60 brand-query checks across all 4 AI platforms.

Q: Can I use this with Claude or ChatGPT as an AI agent? A: Yes — connect via the Apify MCP Server and ask your AI assistant to run brand visibility checks directly in your conversation.


This actor uses official AI platform APIs only (no scraping, no terms-of-service circumvention). All upstream calls are routed through provider-authorized API endpoints (Perplexity, OpenAI, Anthropic, Google) via OpenRouter and Google's Generative Language API. Brand mention analysis is performed locally on the AI-returned text — the actor does not store, redistribute, or claim ownership of AI-generated responses beyond the per-run dataset. Use of this actor is governed by the Apify Terms of Service and the underlying AI providers' respective acceptable-use policies; you are responsible for ensuring your monitoring queries comply with each provider's TOS. The actor is provided as-is, with no warranty regarding the accuracy or completeness of AI responses.


Changelog

v1.6 (May 2026) — production hardening

  • Canary check — health-pings OpenRouter and Gemini at run start; aborts before charging if upstream is down
  • Circuit breaker — aborts the run after 5 consecutive platform failures to protect your credits
  • responseFormat input (detailed / concise) — concise mode trims ai_response_summary to 200 chars and mention_context to ~150 chars, ~3× smaller per-record token footprint for AI agent consumers
  • brand_share_of_voice field per record + overall_share_of_voice in run summary — brand mentions / (brand mentions + Σ competitor mentions). The single decision-relevant share-of-voice metric per check
  • competitor_mention_count per record — sum of competitor mention occurrences (was previously only a name list)
  • Gemini grounding URL resolver: vertexaisearch.cloud.google.com/grounding-api-redirect/... opaque tokens are now resolved (HEAD with 2.5s timeout) into the real cited URLs
  • Actor.setStatusMessage() at every phase transition — surfaced in Apify Console run cards
  • actor_version field in LAST_RUN_SUMMARY for downstream version tracking
  • 🔧 Default maxQueriesPerPlatform: 5 → 3 (12 checks = ~$0.96, fits agent-payment ceilings; manual users can still set 5–20)
  • 🔧 Tool description rewritten with the 5-part formula (when-to-use, when-NOT-to-use, return shape, pricing) for better MCP/agent discoverability

v1.5 (April 2026)

  • Gemini switched to direct Google API with native google_search grounding (was OpenRouter :online Exa fallback)
  • OPENROUTER_API_KEY and GEMINI_API_KEY both wired as Apify secrets

v1.4 (April 2026)

  • Claude switched from :online (Exa via OpenRouter) to native Anthropic web_search_20250305 server-side tool
  • Added regex extractor for inline [text](url) markdown citations to recover ChatGPT citations that OpenRouter normalizes away
  • Multi-source citation merge with dedupe across citations[], annotations[], and inline links

v1.3 (April 2026) — bridge release

  • Added :online suffix to Claude model to enable web search before native tool support

v1.2 (April 2026) — zero-setup

  • Removed perplexityApiKey, openaiApiKey, anthropicApiKey, geminiApiKey user inputs
  • All AI platform access now routed through a server-side OpenRouter key — users pay one flat per-check price, no separate AI signups
  • Updated README + FAQ to reflect zero-setup positioning

v1.1 (March 2026)

  • Added run_id and model_used output fields
  • Updated Claude model to claude-sonnet-4-6
  • Added exponential backoff retry on all AI API calls
  • Added enum validation on platforms and queryTemplates
  • Added demoMode health-check input
  • Set "permissions": "limited" for improved Trustworthiness quality score
  • Title updated with 🔍 emoji for Store discoverability

v1.0 (March 2026)

  • Initial release — brand visibility monitoring across Perplexity, ChatGPT, Claude, Gemini
  • PAY_PER_EVENT pricing at $0.08 per brand × query × platform check
  • 5 query template categories + custom query support
  • Competitor co-mention tracking and citation URL extraction