AI Rank Tracker - GEO/AEO: ChatGPT, Claude, Gemini, Perplexity

Pricing: Pay per event

AI rank tracker & GEO/AEO audit tool for 5 AI platforms (Google AI Overview, ChatGPT, Claude, Gemini, Perplexity). Tracks brand visibility, share of voice, citation rank vs competitors. Multi-country, AI Search Volume metric, top cited domains. Half the price of competitors. No API keys.

Rating: 5.0 (2)

Developer: Santhej Kallada (Maintained by Community)


AI Rank Tracker - GEO/AEO Audit for ChatGPT, Claude, Gemini, Perplexity & Google AI Overview

The cheapest AI rank tracker on Apify. Track brand visibility, share of voice, citation rank, and competitor mentions across 5 AI search platforms in a single run. Built for Generative Engine Optimization (GEO), Answer Engine Optimization (AEO), and AI search SEO.

What this AI rank tracker does

Give it a brand and a set of queries, and this actor runs each query through 5 major AI search platforms and reports:

  • Whether your brand is mentioned in the AI's answer
  • Your share of voice vs competitors
  • The rank position of your brand in the answer (first vs seventh = huge difference)
  • Whether your domain is cited as a source URL (the holy grail for AI search SEO)
  • The sentiment the AI uses when describing your brand
  • Which other domains are cited — find guest-post / outreach opportunities
  • The AI Search Volume for each query (exclusive metric — no other Apify actor has this)
  • Cross-platform top cited domains and top cited pages (aggregated report)

Use it as your weekly AI visibility tracker, ChatGPT SEO audit tool, Google AI Overview rank checker, or as part of a broader Generative Engine Optimization workflow.
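
To script a run, the sketch below assembles the actor input with the official `apify-client` Python package in mind. The token and actor ID are placeholders you would copy from your Apify console and this actor's page; the field names match the Input section below.

```python
def build_run_input(brand, domain, queries, competitors):
    """Assemble the actor input; the actor enforces a minimum of 3 queries."""
    if len(queries) < 3:
        raise ValueError("minimum 3 queries per run")
    return {
        "brandName": brand,
        "brandDomain": domain,
        "queries": queries,
        "competitors": competitors,
    }

run_input = build_run_input(
    "HubSpot", "hubspot.com",
    ["best CRM software", "best CRM for startups", "HubSpot vs Salesforce"],
    ["Salesforce", "Pipedrive"],
)

# With credentials in hand (token from your Apify console, actor ID from this page):
#   from apify_client import ApifyClient   # pip install apify-client
#   client = ApifyClient("<YOUR_APIFY_TOKEN>")
#   run = client.actor("<ACTOR_ID>").call(run_input=run_input)
#   for item in client.dataset(run["defaultDatasetId"]).iterate_items():
#       print(item["platform"], item["query"], item["brand_mentioned"])
```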

Why this AI visibility tracker beats every paid alternative on Apify

| Feature | This actor | khadinakbar's AI Brand Monitor (the leader) | adityalingwal AI Brand Visibility | doesaiknow analyst tier |
| --- | --- | --- | --- | --- |
| Platforms covered | 5 — Google AI Overview, ChatGPT, Claude, Gemini, Perplexity | 4 (no AIO) | 3 | 5 (incl. Copilot, no Claude/Gemini) |
| Price per check | $0.05 | $0.08-$0.09 | $0.15 effective ($0.30 / ~2 checks) | $0.30/query |
| Real LLM queries | ✅ (no simulation, no scraping) | ✅ direct LLM API | ❌ browser scraping | ❌ browser scraping |
| Google AI Overview tracking | ✅ Yes | ❌ No | ❌ No | — |
| Multi-country AI Overview | ✅ Yes (50+ locations) | ❌ US only | ❌ US only | — |
| AI Search Volume metric | ✅ Exclusive | — | — | — |
| Top cited domains aggregation | ✅ Built-in | ✅ Pro | — | — |
| No API keys required | ✅ | — | — | — |
| Speed (5 platforms × 3 queries) | ~15-25 seconds | ~22 seconds | ~115 seconds | ~30 seconds |

If you're paying for Profound, Peec AI, Otterly, or any other AI visibility SaaS at $29-$499/month, this actor delivers the same data per-run, pay-as-you-go, no contracts.

Use cases

  • AI search SEO audits — weekly brand visibility checks for SEO/GEO/AEO clients
  • ChatGPT SEO tracking — measure how ChatGPT recommends your brand vs competitors
  • Google AI Overview rank tracking — see when AIO cites your domain (or doesn't)
  • Perplexity rank tracker — monitor Perplexity citations and share of voice
  • Claude & Gemini brand monitoring — track AI-generated recommendations across LLM platforms
  • Generative Engine Optimization (GEO) — measure GEO content performance pre/post launch
  • Answer Engine Optimization (AEO) — identify queries where competitors win citations
  • Multi-country AI search visibility — set locationCode to track in India, UK, Germany, etc.
  • AI citation tracker — see exactly which 3rd-party domains AI cites for your category
  • Top cited domain research — find guest-post and partnership targets (Aggregated Report add-on)
  • AI search volume research — see which queries are popular in AI search (DataForSEO exclusive metric per query)
  • Profound / Peec AI / Otterly alternative — pay-as-you-go alternative to expensive AI visibility SaaS

Pricing

| Event | Price | When charged |
| --- | --- | --- |
| Actor Start | $0.001 | Once per run |
| Platform Check | $0.05 | Per (query × platform) — you control it via input |
| Aggregated Report | $0.25 | Optional, only when `includeAggregatedMetrics: true` |

Example runs:

  • 3 queries × 5 platforms = $0.75 + $0.001 start = $0.751
  • 10 queries × 5 platforms = $2.50 + $0.001 = $2.501
  • 5 queries × 2 platforms (AIO + ChatGPT only) = $0.50 + $0.001 = $0.501

Aggregated Report add-on: + $0.25 on any run.

Minimum 3 queries per run (for batching efficiency).
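
The per-event pricing above is easy to budget for programmatically. A minimal sketch, using only the prices listed in the table:

```python
def estimate_run_cost(num_queries, num_platforms, aggregated_report=False):
    """Estimate a run's cost in USD from the per-event pricing table."""
    if num_queries < 3:
        raise ValueError("minimum 3 queries per run")
    cost = 0.001                                  # Actor Start, once per run
    cost += 0.05 * num_queries * num_platforms    # one Platform Check per (query x platform)
    if aggregated_report:
        cost += 0.25                              # optional Aggregated Report add-on
    return round(cost, 3)

print(estimate_run_cost(3, 5))   # 0.751, matching the first example run
```

Pair this with the `maxBudgetUsd` input to double-check a run before launching it.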

Input

Required

  • brandName — Your brand (e.g., "HubSpot")
  • queries — 3-50 queries to test. Each is one complete prompt (e.g., "best CRM software" or "What CRM should a B2B startup use?").
  • brandDomain — "hubspot.com" for citation detection
  • competitors — Up to 10 competitor brand names
  • platforms — Subset of the 5 platforms (default: all 5)

Optional

  • brandAliases — Alternate spellings (case-insensitive matched)
  • competitorDomains — For competitor citation detection
  • locationCode — Geographic location (default: 2840 = US). Common: 2356 India, 2826 UK, 2124 Canada, 2036 Australia, 2276 Germany.
  • languageCode — Default: "en"
  • includeAggregatedMetrics — Set true for the bonus aggregated report (extra $0.25)
  • responseFormat — "detailed" (800 chars) or "concise" (200 chars)
  • maxBudgetUsd — Safety cap. Actor refuses to start if estimated cost exceeds this.
  • webhookUrl — POSTed run summary on completion
  • demoMode — Set true for a free health check
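
Putting the required and optional fields together, a full run input might look like the following. Values are illustrative, and the exact platform identifier strings come from the actor's input schema, so the ones below are assumptions:

```json
{
  "brandName": "HubSpot",
  "brandAliases": ["Hub Spot"],
  "brandDomain": "hubspot.com",
  "queries": ["best CRM software", "best CRM for startups", "HubSpot vs Salesforce"],
  "competitors": ["Salesforce", "Pipedrive"],
  "competitorDomains": ["salesforce.com", "pipedrive.com"],
  "platforms": ["google_ai_overview", "chatgpt", "claude", "gemini", "perplexity"],
  "locationCode": 2840,
  "languageCode": "en",
  "includeAggregatedMetrics": true,
  "responseFormat": "detailed",
  "maxBudgetUsd": 2.0
}
```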

Output

The actor writes records to the default dataset.

platform_check records (one per query × platform)

```json
{
  "record_type": "platform_check",
  "scraped_at": "2026-05-14T08:54:29.917Z",
  "platform": "gemini",
  "model_used": "gemini-2.5-flash",
  "data_source": "dataforseo_responses",
  "data_status": "ok",
  "query": "best CRM software",
  "location_code": 2840,
  "language_code": "en",
  "brand_name": "HubSpot",
  "brand_domain": "hubspot.com",
  "brand_mentioned": true,
  "brand_mention_count": 4,
  "brand_share_of_voice": 0.5,
  "mention_position_score": 1,
  "mention_context": "...excerpt around first mention...",
  "sentiment": "positive",
  "is_cited_as_source": false,
  "cited_url": null,
  "cited_urls": ["https://..."],
  "total_sources_cited": 7,
  "competitor_mentions": ["Salesforce", "Pipedrive"],
  "competitor_mention_count": 5,
  "competitor_breakdown": {
    "Salesforce": { "mentioned": true, "count": 3, "position_score": 1 },
    "Pipedrive": { "mentioned": true, "count": 2, "position_score": 3 }
  },
  "ai_search_volume": 4743,
  "monthly_searches": null,
  "ai_response_summary": "...800 chars of the AI's response..."
}
```

Field reference:

| Field | Description |
| --- | --- |
| `data_status` | `ok` = data returned; `no_index_match` = no indexed data for this query (rare; very specific brand-related queries); `fallback_responses` = real-time LLM Responses fallback; `error` = API call failed |
| `mention_position_score` | 1-10. 1 = brand appears in first 10% of the answer (best). Null if not mentioned. |
| `brand_share_of_voice` | `brand_count / (brand_count + sum(competitor_counts))`. Range 0-1. |
| `ai_search_volume` | Exclusive metric: how often this query is asked in AI search (only populated for AIO + ChatGPT data) |
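
The share-of-voice field can be reproduced locally, e.g., when re-aggregating records yourself. A sketch of the formula above (counts illustrative):

```python
def share_of_voice(brand_count, competitor_counts):
    """brand_count / (brand_count + sum(competitor_counts)); 0.0 when nobody is mentioned."""
    total = brand_count + sum(competitor_counts)
    return brand_count / total if total else 0.0

print(share_of_voice(4, [3, 1]))  # 0.5
print(share_of_voice(0, []))      # 0.0
```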

aggregated_report record (only if includeAggregatedMetrics: true)

One row per run summarizing all platform_checks plus cross-platform aggregations: top cited domains, top cited pages, competitor summary, platform breakdown, and overall visibility score (0-100).

```json
{
  "record_type": "aggregated_report",
  "summary": {
    "brand_name": "HubSpot",
    "queries_tracked": 3,
    "platforms_queried": 5,
    "total_platform_checks": 15,
    "overall_visibility_score": 78,
    "brand_mention_rate": 0.87,
    "brand_avg_position_score": 2.1,
    "brand_avg_share_of_voice": 0.42,
    "brand_citation_rate": 0.20
  },
  "top_cited_domains": [
    { "domain": "g2.com", "citation_count": 15, "platforms_appearing_on": ["chatgpt", "gemini", "perplexity"] }
  ],
  "top_cited_pages": [ ... ],
  "competitor_summary": [ ... ],
  "platform_breakdown": { ... },
  "ai_search_volume_total": 42400
}
```

overall_visibility_score formula (0-100):

```
score = round(100 × (
    0.40 × mention_rate +
    0.30 × position_score_normalized +
    0.20 × share_of_voice +
    0.10 × citation_rate
))
```
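
The weighted blend can be sketched in a few lines. How the 1-10 `brand_avg_position_score` is normalized to 0-1 is not documented, so the linear mapping `(11 - p) / 10` below is an assumption; with the sample summary values it yields 72 rather than the reported 78, so treat it as an approximation of the actor's internal normalization:

```python
def visibility_score(mention_rate, avg_position_score, share_of_voice, citation_rate):
    """0-100 weighted blend per the formula above.

    Normalizing avg_position_score (1 = best, 10 = worst) via (11 - p) / 10
    is an assumed mapping, not the actor's documented one.
    """
    position_norm = (11 - avg_position_score) / 10 if avg_position_score else 0.0
    return round(100 * (0.40 * mention_rate
                        + 0.30 * position_norm
                        + 0.20 * share_of_voice
                        + 0.10 * citation_rate))

print(visibility_score(0.87, 2.1, 0.42, 0.20))  # 72 under the assumed mapping
```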

How it works under the hood

  • Google AI Overview + ChatGPT → batched queries against an enterprise AI search index
  • Claude + Gemini + Perplexity → real-time LLM queries (Claude Haiku 4.5, Gemini 2.5 Flash, Perplexity Sonar)
  • Mention detection → word-boundary regex matching against brand name + aliases, position scoring 1-10
  • Citation detection → subdomain-aware URL matching (blog.hubspot.com → matches hubspot.com)
  • Sentiment → batched LLM classification (single Gemini Flash call per run, regardless of record count)
  • Top domains/pages → local aggregation across all cited URLs in the run
  • Performance → 5 platforms × 3 queries usually completes in 15-25 seconds
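
The subdomain-aware citation matching described above can be sketched with stdlib URL parsing; this is an illustration of the rule, not the actor's exact code:

```python
from urllib.parse import urlparse

def is_brand_citation(url, brand_domain):
    """Subdomain-aware match: blog.hubspot.com counts as hubspot.com,
    but nothubspot.com does not."""
    host = (urlparse(url).hostname or "").lower()
    domain = brand_domain.lower()
    return host == domain or host.endswith("." + domain)

print(is_brand_citation("https://blog.hubspot.com/marketing/crm", "hubspot.com"))  # True
print(is_brand_citation("https://nothubspot.com/", "hubspot.com"))                 # False
```

The `endswith("." + domain)` check is what prevents lookalike domains from counting as citations.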

When to use this

  • Weekly AI visibility audits for SEO/GEO/AEO agency clients
  • Pre/post content launches — measure if your content moved AI rankings
  • Competitor monitoring — track when competitors are cited but you aren't
  • Multi-country tracking — use different locationCode values to see AI Overview answers in different markets
  • Top-of-funnel SEO research — see which 3rd-party domains get cited so you can pitch guest posts there
  • AI search volume discovery — find which queries have the highest AI search demand

FAQ

Q: How is this different from a regular SEO rank tracker? A: Traditional rank trackers measure where your site appears in Google's blue links. This measures whether AI engines (which 60%+ of consumers now use for product research) actually mention or recommend your brand — a totally different signal that traditional SEO tools miss.

Q: Why minimum 3 queries? A: AI search index queries are batched in groups of up to 10 per API call. Below 3 queries the per-query economics get tight, and we'd have to charge more. Three is the floor for clean margins on our end and value on yours.

Q: How accurate are the brand mentions? A: Detected via word-boundary regex on the full AI response text. Include all your brand aliases (e.g., "Hub Spot", "HS") to catch every variant. Position scoring is based on character position of the first mention.
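
The detection described in this answer can be sketched as follows; the exact scoring code inside the actor may differ, but this captures the word-boundary matching and the first-10%-of-text rule:

```python
import re

def mention_position_score(response_text, brand_name, aliases=()):
    """Word-boundary, case-insensitive match over the AI response.

    Returns 1-10 where 1 means the first mention falls in the first 10%
    of the text, or None if the brand (and all aliases) are absent.
    """
    names = [brand_name, *aliases]
    pattern = r"\b(?:" + "|".join(re.escape(n) for n in names) + r")\b"
    match = re.search(pattern, response_text, flags=re.IGNORECASE)
    if not match:
        return None
    return min(10, int(match.start() / len(response_text) * 10) + 1)

print(mention_position_score("HubSpot leads the pack among CRMs today", "HubSpot"))  # 1
```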

Q: Why do some Google AI Overview records show data_status: no_index_match? A: For very specific or niche queries (e.g., "HubSpot alternatives for B2B startups with 20 employees"), there may be no indexed AI Overview data yet. Generic search-style queries ("best CRM software") almost always return data. You're still charged a Platform Check because we made a real API attempt. Switch to generic queries for AIO coverage, or use the other 4 platforms which always return fresh data.

Q: Does it support Microsoft Copilot or Grok? A: Not in v1. Both have very small market share. Will add if user demand picks up.

Q: Can I run this on a schedule? A: Yes — use Apify's built-in Scheduler to run weekly or daily for trend tracking. Combine with the webhookUrl input to push results to Make.com, Zapier, n8n, or Slack.

Q: How does this compare to Profound, Peec AI, Otterly? A: Those are full SaaS platforms at $29-$499/month with dashboards. This actor delivers the same underlying data per-run, pay-as-you-go, no contracts. Use it standalone or pipe the JSON output into your own dashboard.

Q: Will the auto-test run charge me? A: Apify auto-tests your actor daily with the default input. Those runs charge against your underlying API budget, not the user-facing pricing events. To minimize, edit the default input to use 1 platform × 3 queries (~$0.001/day).

Tags

ai rank tracker · ai visibility tracker · ai overview tracker · chatgpt seo tool · chatgpt rank tracker · perplexity rank tracker · claude brand monitoring · gemini ai search · generative engine optimization · geo audit tool · aeo tool · answer engine optimization · llm visibility · llm rank tracker · ai search visibility · ai search optimization · ai citation tracker · brand mention ai · ai brand monitor · share of voice ai · profound ai visibility alternative · peec ai alternative · otterly alternative

Author

Built by santhej. Found an issue? Open one on the actor page.