AI Search Visibility Audit
Pricing
from $30.00 / 1,000 results
Audits websites for AI/LLM search readiness — scores how likely pages are to be cited by ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews across 6 categories: crawlability, entity clarity, content structure, schema markup, authority signals, and AI-specific optimisation.
Developer: Peter
Actor stats
Bookmarks: 2
Total users: 6
Monthly active users: 5
Last modified: 9 days ago
Audits any website for Generative Engine Optimisation (GEO) — how likely your pages are to be cited, referenced, or recommended by ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews.
This is not a traditional SEO audit. It checks whether AI systems can crawl, understand, extract from, and trust your content enough to surface it in their answers.
Why it matters
AI-powered search is replacing link-based results. Google AI Overviews, ChatGPT web search, and Perplexity now answer questions directly — and they choose which sources to cite. If your site isn't structured for AI extraction, you're invisible to a growing share of search traffic.
This actor gives you a concrete, actionable score so you know exactly what to fix.
What you get
- GEO Score (0–100) per page and site-wide, graded A+ to F
- 6 category scores with weighted rollup: AI Crawlability (20%), Entity & Brand Clarity (20%), Content Structure (25%), Schema & Structured Data (15%), Authority & Trust (10%), AI-Specific Optimisation (10%)
- Prioritised issues — every issue includes severity, estimated impact, and a plain-English fix hint
- Quick wins — the top 5 highest-impact fixes you can make today
- Competitor comparison — optional side-by-side scoring against up to 3 competitor URLs
- Deterministic output — no LLM calls for data extraction; same input always produces the same result
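The weighted rollup above can be sketched in a few lines of Python. The category weights come straight from the list; the letter-grade boundaries are an assumption for illustration, since the listing only says grades run from A+ to F.

```python
# Category weights as documented in the listing (they sum to 1.0).
WEIGHTS = {
    "crawlability": 0.20,
    "entity": 0.20,
    "content": 0.25,
    "schema": 0.15,
    "authority": 0.10,
    "aiOptimisation": 0.10,
}

def geo_score(category_scores: dict) -> float:
    """Weighted rollup of the six category scores (each 0-100)."""
    return sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS)

def grade(score: float) -> str:
    """Hypothetical letter-grade bands; the actor's actual cutoffs may differ."""
    bands = [(97, "A+"), (90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
    return next(g for cutoff, g in bands if score >= cutoff)
```

For example, category scores of 80/70/90/60/50/40 (in the order listed above) roll up to a GEO score of 70.5, grade C under these assumed bands.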
Who it's for
- SEO teams adding GEO to their toolkit
- Content strategists optimising for AI citation
- Developers validating schema markup and crawlability
- Agencies running AI-readiness audits for clients
- Founders who want to know if AI can find them
Input
| Parameter | Type | Default | Description |
|---|---|---|---|
| `startUrls` | array | required | URLs to audit |
| `crawlPages` | boolean | `false` | Follow internal links to audit multiple pages |
| `maxPages` | integer | `50` | Maximum pages to crawl (1–500) |
| `maxConcurrency` | integer | `5` | Concurrent page audits (1–20) |
| `brandName` | string | `null` | Your brand name for consistency checks |
| `primaryKeywords` | array | `null` | Target keywords for content relevance |
| `competitors` | array | `null` | Up to 3 competitor URLs for comparison |
| `auditCrawlability` | boolean | `true` | Toggle crawlability audit |
| `auditEntity` | boolean | `true` | Toggle entity audit |
| `auditContent` | boolean | `true` | Toggle content audit |
| `auditSchema` | boolean | `true` | Toggle schema audit |
| `auditAuthority` | boolean | `true` | Toggle authority audit |
| `auditAIOptimisation` | boolean | `true` | Toggle AI optimisation audit |
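A minimal run input built from the parameters above might look like the following sketch. The `startUrls` shape (a list of `{"url": ...}` objects) follows the common Apify convention but is an assumption here, and the small validator simply mirrors the documented limits client-side.

```python
# Example run input; only startUrls is required, everything else
# falls back to the defaults in the table above.
run_input = {
    "startUrls": [{"url": "https://example.com"}],  # assumed Apify {"url": ...} shape
    "crawlPages": True,
    "maxPages": 100,
    "brandName": "Example Co",
    "competitors": ["https://competitor-a.com"],
}

def validate(inp: dict) -> dict:
    """Client-side sanity checks mirroring the documented limits."""
    out = dict(inp)
    if not out.get("startUrls"):
        raise ValueError("startUrls is required")
    # Clamp to the documented ranges: maxPages 1-500, maxConcurrency 1-20.
    out["maxPages"] = min(max(out.get("maxPages", 50), 1), 500)
    out["maxConcurrency"] = min(max(out.get("maxConcurrency", 5), 1), 20)
    if len(out.get("competitors") or []) > 3:
        raise ValueError("at most 3 competitor URLs are supported")
    return out
```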
Output
Two record types are pushed to the default dataset:
Page records (type: "page")
One per audited URL with overall score, 6 category scores, full audit details, and all issues grouped by severity.
Site summary (type: "site-summary")
One per run with average scores, score distribution, top 10 issues ranked by severity and frequency, top 5 quick wins, and optional competitor comparison.
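Since both record types land in the same dataset, consumers can split them on the `type` field. The records below are abbreviated stand-ins; field names other than `type` are illustrative, not taken from the actual output schema.

```python
# Abbreviated sample records; real records carry full category scores
# and issue lists as described above.
records = [
    {"type": "page", "url": "https://example.com/", "score": 72},
    {"type": "page", "url": "https://example.com/about", "score": 64},
    {"type": "site-summary", "averageScore": 68},
]

# Split the mixed dataset into page records and the single site summary.
pages = [r for r in records if r["type"] == "page"]
summary = next(r for r in records if r["type"] == "site-summary")
```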
Issue format
Every issue includes:
- `id` — unique identifier (e.g., `NO_ORG_SCHEMA`, `THIN_CONTENT`)
- `message` — human-readable description
- `severity` — `critical`, `warning`, or `info`
- `priority` — 1 (highest) to 5 (lowest)
- `estimatedImpact` — `high`, `medium`, or `low`
- `fixHint` — actionable recommendation
- `pagesAffected` — affected URLs, or `["site-wide"]`
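These fields make triage straightforward: rank by severity, then by the 1–5 priority. A minimal sketch, using sample issues (`FLAT_HEADINGS` is a hypothetical id, not one documented above):

```python
# Map severities to sort ranks: critical first, info last.
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

issues = [
    {"id": "THIN_CONTENT", "severity": "warning", "priority": 2},
    {"id": "NO_ORG_SCHEMA", "severity": "critical", "priority": 1},
    {"id": "FLAT_HEADINGS", "severity": "info", "priority": 4},  # hypothetical id
]

# Sort by severity rank first, then by numeric priority.
triaged = sorted(issues, key=lambda i: (SEVERITY_RANK[i["severity"]], i["priority"]))
```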
Pricing
Pay per page audited. No subscription.
- $0.03 per page audited
- $0.05 actor start fee per run
- Platform usage included
A 50-page audit costs ~$1.55. Compare that to subscription GEO/AEO tools at $99–$500/month.
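The arithmetic behind that figure, computed in cents to avoid floating-point drift:

```python
START_FEE_CENTS = 5  # $0.05 actor start fee per run
PER_PAGE_CENTS = 3   # $0.03 per page audited

def run_cost_usd(pages: int) -> float:
    """Total cost of one run in USD: start fee plus per-page fee."""
    return (START_FEE_CENTS + PER_PAGE_CENTS * pages) / 100
```

A 50-page run is 5 + 50 × 3 = 155 cents, i.e. $1.55, matching the estimate above.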
Limitations
- Readability scoring uses Flesch–Kincaid, which is designed for English; scores for non-English text may be unreliable
- Uses CheerioCrawler (no JS execution) — JS-dependent pages are flagged but not fully rendered
- robots.txt and sitemap are cached per domain, fetched once per run
- No LLM-based content analysis — all checks are rule-based and deterministic
Validated against
- cogcogcog.com — Shopify e-commerce (Cognitive Performance Drinks)
- goodrays.com — Shopify e-commerce (CBD drinks)
- example.com — minimal static HTML (edge case baseline)
Three consecutive runs produce identical scores across all test sites.


