LLM Visibility Tracker — Brand Rankings in ChatGPT, Claude, Perplexity & Gemini
Track how your brand actually appears inside answers from the four major LLMs. Built for Answer Engine Optimization (AEO), LLM SEO, and competitive AI visibility tracking — with zero AI API setup required.
Compatible with: Apify MCP Server (Claude, ChatGPT, Cursor, Cline agents), LangChain, LangGraph, OpenAI Agents SDK, Make.com, Zapier, n8n, and any tool that can hit a REST endpoint.
What it does
LLM Visibility Tracker sends realistic prompts to ChatGPT, Claude, Perplexity, and Gemini — all with live web search enabled — and analyzes every answer for:
- Is the brand mentioned? Per-LLM mention rate.
- Where does it rank? Detected position inside numbered/bulleted lists ("1st place", "3rd place", etc.).
- How early in the answer? Position score from the very top to the bottom of the response.
- Share of Voice (%) — your brand's mentions vs configured competitors, in the same answer.
- Is it cited as a source? Domain-level citation detection across native LLM citations.
- Sentiment — is the LLM endorsing, neutral, or critical about the brand?
- Competitor co-mentions — which rivals show up alongside (or instead of) your brand.
Plus a single Visibility Index (0–100) that rolls the core signals into one trendable score.
Why a separate "LLM Visibility Tracker"?
LLM answer engines have very different ranking dynamics from classic SEO. ChatGPT and Claude pull heavily from Reddit, Hacker News, and review sites. Perplexity ranks by citation authority. Gemini grounds on Google. A brand that ranks #1 on Google can be invisible inside Claude's recommendations.
This actor measures the new ranking surface — the answer itself — across all four LLMs in one run, reporting one consistent metric set: mention rate, in-list rank, answer position, share-of-voice, citations, and sentiment.
What you get per check
Each row in the output dataset is one brand × prompt × LLM check:
| Field | Type | Example |
|---|---|---|
| `llm` | string | `"chatgpt"` |
| `prompt` | string | `"What's the best note-taking app for engineers in 2026?"` |
| `prompt_intent` | string | `"recommendation"` |
| `is_mentioned` | boolean | `true` |
| `mention_count` | integer | `4` |
| `rank_in_response` | integer | `2` (2nd place in a numbered list) |
| `position_score` | integer (1–10) | `2` (mentioned in the first 20% of the answer) |
| `share_of_voice_pct` | number | `66.7` |
| `is_cited` | boolean | `true` |
| `brand_citation_url` | string \| null | `"https://notion.so/help/..."` |
| `all_citations` | string[] | `["https://notion.so/...", "https://coda.io/..."]` |
| `competitors_mentioned` | string[] | `["Coda", "ClickUp"]` |
| `sentiment` | string | `"positive"` |
| `excerpt` | string | `"...Notion shines for engineering teams thanks to..."` |
| `full_answer` | string | First 1200 chars of the answer |
| `model` | string | `"openai/gpt-4o-search-preview"` |
| `checked_at` | ISO datetime | `"2026-05-01T14:30:00.000Z"` |
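Assembled from the example values above, a single row looks like this (illustrative only; `full_answer` omitted for brevity):

```json
{
  "llm": "chatgpt",
  "prompt": "What's the best note-taking app for engineers in 2026?",
  "prompt_intent": "recommendation",
  "is_mentioned": true,
  "mention_count": 4,
  "rank_in_response": 2,
  "position_score": 2,
  "share_of_voice_pct": 66.7,
  "is_cited": true,
  "brand_citation_url": "https://notion.so/help/...",
  "all_citations": ["https://notion.so/...", "https://coda.io/..."],
  "competitors_mentioned": ["Coda", "ClickUp"],
  "sentiment": "positive",
  "excerpt": "...Notion shines for engineering teams thanks to...",
  "model": "openai/gpt-4o-search-preview",
  "checked_at": "2026-05-01T14:30:00.000Z"
}
```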
A run summary with `visibility_index`, per-LLM scores, share-of-voice, and recommendations is also saved to the key-value store (`LAST_RUN_SUMMARY`) and POSTed to your webhook if configured.
Zero setup — all 4 LLMs included
You do not bring AI API keys. ChatGPT, Claude, Perplexity, and Gemini access is bundled — every charge covers the upstream model cost.
Pricing is `PAY_PER_EVENT` at $0.096 per brand × prompt × LLM check.
| Use case | Approx cost |
|---|---|
| Quick mode (3 prompts × 4 LLMs = 12 checks) | ~$1.15 |
| Standard mode (5 prompts × 4 LLMs = 20 checks) | ~$1.92 |
| Deep audit (10 prompts × 4 LLMs = 40 checks) | ~$3.84 |
| Weekly monitoring on standard (4 runs/month) | ~$7.70/month |
A small actor-start charge applies (≈ $0.00006 per GB-RAM, ~$0.0001 per run).
Quickest possible run
The only required input is brand. Everything else has defaults.
{ "brand": "Notion" }
This runs standard mode: 5 prompts × 4 LLMs = 20 checks, ~$1.92. Visibility Index, share-of-voice, and recommendations land in your dataset and the key-value store.
A more complete run:
{"brand": "Notion","domain": "notion.so","competitors": ["Coda", "ClickUp", "Asana"],"category": "productivity software","mode": "standard","promptIntents": ["recommendation", "alternatives", "how_to", "use_case"]}
Built for AI agents (MCP, LangChain, OpenAI Agents)
Every input field has multiple natural-language aliases so an LLM agent can call this actor in whichever shape feels native to it. Examples that all work:
{ "brand": "Notion" }{ "brandName": "Notion" }{ "company": "Notion", "rivals": ["Coda"], "engines": ["chatgpt", "claude"] }{ "product": "Notion", "questions": ["Should I use Notion or Coda for my engineering team?"] }
Apify MCP Server
Connect via the Apify MCP Server and ask Claude or ChatGPT:
"Track LLM visibility for Notion vs Coda, ClickUp, and Asana across all four AI search platforms — focus on recommendation and use-case prompts."
"Run a deep visibility audit on Stripe in fintech and show which LLM is the weakest."
The agent picks `apify--llm-visibility-tracker`, fills the inputs, and returns structured rankings.
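If you're connecting a client manually rather than through Apify Console, a Claude Desktop entry along these lines should work. The `mcp.apify.com/sse` URL and the `mcp-remote` bridge are assumptions based on common MCP setups, so confirm the current connection details in Apify's MCP documentation:

```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.apify.com/sse",
        "--header",
        "Authorization: Bearer YOUR_APIFY_TOKEN"
      ]
    }
  }
}
```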
Audit modes
| Mode | Prompts per LLM | Total checks (4 LLMs) | Use case |
|---|---|---|---|
| `quick` | 3 | 12 | Scheduled monitoring, fast spot checks |
| `standard` (default) | 5 | 20 | Weekly tracking, share-of-voice |
| `deep` | 10 | 40 | One-time audits, competitive analysis, board reports |
Override with `maxPrompts` for fine-grained control (e.g. `"maxPrompts": 7`).
Prompt intents
The actor auto-builds prompts in plain user phrasing — the kind real customers actually ask LLMs:
- `recommendation` — "What's the best [category] to use right now?"
- `alternatives` — "What are alternatives to [brand]?"
- `how_to` — "How do I choose the right [category] for my team?"
- `comparison` — "[brand] vs its top [category] — which wins?"
- `use_case` — "Which [category] works best for small teams?"
- `review` — "Is [brand] actually any good?"
- `pricing` — "Is [brand] worth the money?"
Pass `customPrompts` to add your own (real customer questions are ideal):

```json
{
  "brand": "Notion",
  "customPrompts": [
    "Should an engineering team move from Confluence to Notion?",
    "What's the cleanest way to manage product specs across Notion and Linear?"
  ]
}
```
Track LLM visibility over time
Schedule the actor weekly via the Apify Scheduler and watch:
- Visibility Index trend — is your brand gaining or losing ground in answer engines?
- Per-LLM gaps — are you missing on Perplexity but strong on ChatGPT?
- Share-of-voice swings — when does a competitor's mention rate spike?
- Citation rate — how often is your domain making it into LLM source lists?
Every run writes a fresh `LAST_RUN_SUMMARY` to the key-value store, perfect for dashboards.
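A minimal sketch of pulling that summary after a run and appending the index to a local CSV for trend charts. The `visibility_index` key comes from the docs above; the rest of the summary's shape is an assumption, so adapt the keys to what your runs actually return:

```python
import csv
from datetime import datetime, timezone

from apify_client import ApifyClient

client = ApifyClient('YOUR_APIFY_TOKEN')

# Run a check (a scheduled run's ID works the same way).
run = client.actor('khadinakbar/llm-visibility-tracker').call(run_input={'brand': 'Notion'})

# LAST_RUN_SUMMARY lives in the run's default key-value store.
record = client.key_value_store(run['defaultKeyValueStoreId']).get_record('LAST_RUN_SUMMARY')
summary = record['value'] if record else {}

# Append one row per run; chart the CSV to see the trend.
with open('visibility_trend.csv', 'a', newline='') as f:
    csv.writer(f).writerow([
        datetime.now(timezone.utc).isoformat(),
        summary.get('visibility_index'),  # assumed key, per the docs above
    ])
```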
Visibility Index (0–100)
The single trendable score weighted toward what matters in answer engines:
- 50% — overall mention rate across all checks
- 30% — average rank inside ranked lists (or position score if no lists)
- 20% — citation rate
Practical reading:
- 70–100 — Strong AI presence. The LLM treats you as a primary recommendation.
- 40–69 — Mid-tier. Mentioned, but rivals are usually mentioned first or more often.
- 0–39 — Visibility gap. Most answers don't include your brand at all.
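For intuition, here is a rough Python sketch of how those weights combine. The actor's exact normalization of rank and position is internal, so treat the `rank_score` scaling as an assumption rather than the shipped formula:

```python
def visibility_index(mention_rate: float, rank_score: float, citation_rate: float) -> float:
    """Approximate the 0-100 index from the documented weights.

    All inputs are 0..1. mention_rate and citation_rate are plain rates;
    rank_score stands in for "average rank inside ranked lists" (1.0 = always
    first). How the actor actually normalizes rank is not documented here.
    """
    return round(100 * (0.5 * mention_rate + 0.3 * rank_score + 0.2 * citation_rate), 1)

# Mentioned in 70% of checks, averaging ~2nd place, cited 40% of the time:
print(visibility_index(0.70, 0.75, 0.40))  # 65.5 -> the 40-69 "mid-tier" band
```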
Integrations
- Apify MCP Server — direct invocation from Claude, ChatGPT, Cursor, Cline.
- LangChain / LangGraph — wrap the run as a tool, get structured visibility data back.
- Make.com / Zapier / n8n — webhook payload includes `visibility_index`, share-of-voice, per-LLM scores, and recommendations (a minimal receiver sketch follows below).
- Slack / Google Sheets / Airtable / Notion — pipe the dataset into reporting wherever you live.
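A minimal receiver for that webhook, assuming a Flask endpoint and the payload field names listed above (the full payload shape isn't documented here, so guard your key access):

```python
from flask import Flask, request

app = Flask(__name__)

@app.post('/llm-visibility-webhook')
def handle_summary():
    payload = request.get_json(force=True, silent=True) or {}
    # visibility_index is documented above; everything else is defensive.
    index = payload.get('visibility_index')
    if index is not None and index < 40:
        print(f'Visibility gap alert: index {index}')  # swap in Slack/email/etc.
    return '', 204

if __name__ == '__main__':
    app.run(port=8080)
```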
REST API
```bash
curl -X POST "https://api.apify.com/v2/acts/khadinakbar~llm-visibility-tracker/runs" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "brand": "Notion",
    "domain": "notion.so",
    "competitors": ["Coda", "ClickUp", "Asana"],
    "category": "productivity software",
    "mode": "standard"
  }'
```
JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('khadinakbar/llm-visibility-tracker').call({
  brand: 'Notion',
  domain: 'notion.so',
  competitors: ['Coda', 'ClickUp', 'Asana'],
  category: 'productivity software',
  mode: 'standard',
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
// items[0] = { llm: 'chatgpt', is_mentioned: true, rank_in_response: 1, share_of_voice_pct: 75.0, ... }
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_APIFY_TOKEN')

run = client.actor('khadinakbar/llm-visibility-tracker').call(run_input={
    'brand': 'Notion',
    'domain': 'notion.so',
    'competitors': ['Coda', 'ClickUp', 'Asana'],
    'mode': 'standard',
})

items = list(client.dataset(run['defaultDatasetId']).iterate_items())
```
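Continuing the Python example, a quick roll-up of the rows into per-LLM mention rates using the documented field names:

```python
from collections import defaultdict

checks = defaultdict(int)  # total checks per LLM
hits = defaultdict(int)    # checks where the brand was mentioned

for item in items:
    checks[item['llm']] += 1
    hits[item['llm']] += bool(item['is_mentioned'])

for llm in sorted(checks):
    print(f"{llm}: mentioned in {hits[llm]}/{checks[llm]} checks")
```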
FAQ
Q: Do I need OpenAI / Anthropic / Perplexity / Google API keys?
A: No. All four LLMs are bundled. You pay one flat per-check price ($0.096) and we cover the upstream model cost.
Q: Does Claude actually search the web here?
A: Yes. Claude runs with Anthropic's native `web_search_20250305` server-side tool. ChatGPT uses `gpt-4o-search-preview`. Perplexity uses Sonar. Gemini uses native Google Search grounding via the official Google API.
Q: How is this different from a regular SEO rank tracker?
A: SEO rank trackers measure search engine rankings on Google/Bing. This actor measures the answer itself — what the LLM says when a user asks a category question. That's the new ranking surface for AEO and LLM SEO.

Q: How is this different from your AI Brand Monitor actor?
A: This actor is laser-focused on LLM rankings and AEO with new metrics: in-list `rank_in_response`, `share_of_voice_pct`, and a re-weighted `visibility_index`. It also has a much simpler input — just `brand` is required — plus a `mode` preset and a richer prompt library tuned to natural user phrasing.

Q: Can I monitor multiple brands?
A: One brand per run. Schedule a separate run per brand (Apify schedules support unlimited concurrent runs).

Q: Free tier?
A: Apify gives every new account $5 in free credits — enough for ~50 LLM Visibility checks across all 4 platforms.
Legal & ethics
This actor calls official LLM APIs (via OpenRouter for ChatGPT/Claude/Perplexity, direct Google API for Gemini). No scraping, no cookie hijacking, no proxy rotation. All upstream usage stays within each provider's terms of service. Your dataset is private to your Apify account.
Changelog
v1.0 (May 2026)
- Initial release — LLM visibility tracking across ChatGPT, Claude, Perplexity, Gemini.
- New metrics: `rank_in_response`, `share_of_voice_pct`, `visibility_index` (0–100).
- Aggressive input aliasing for LLM/MCP/agent calls.
- Mode presets: `quick` / `standard` / `deep`.
- AEO-style prompt library with 7 intent categories.
- Apify MCP-ready: tool name `apify--llm-visibility-tracker`.