AI Readiness Auditor

Pricing: from $10.00 / 1,000 results

Check how AI-ready any website is. Get an AI Readiness Score (0-100) with actionable recommendations.

No API key needed. Just enter a URL.

What it checks

| Check | Weight | What it measures |
|---|---|---|
| llms.txt | 30% | /llms.txt and /llms-full.txt — the new standard for delivering content to AI assistants |
| robots.txt AI directives | 25% | Permissions for GPTBot, ClaudeBot, Google-Extended, Amazonbot, PerplexityBot, and 9 more AI crawlers |
| Schema.org structured data | 25% | JSON-LD, Microdata, and RDFa markup that helps AI understand your content semantically |
| Meta tags | 20% | Title, description, Open Graph, Twitter Card, canonical URL, language declaration |
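The overall score is a weighted average of the four per-check scores, using the weights in the table above. As a minimal illustration of that combination (a sketch, not the actor's actual code):

```python
# Weights from the "What it checks" table above.
WEIGHTS = {
    "llmsTxt": 0.30,
    "robotsTxt": 0.25,
    "structuredData": 0.25,
    "metaTags": 0.20,
}

def ai_readiness_score(check_scores):
    """Combine per-check scores (each 0-100) into a single 0-100 score."""
    return round(sum(WEIGHTS[name] * score for name, score in check_scores.items()))

print(ai_readiness_score(
    {"llmsTxt": 80, "robotsTxt": 80, "structuredData": 60, "metaTags": 70}
))  # 73
```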

AI Readiness Score grading

| Grade | Score | Meaning |
|---|---|---|
| A+ | 90-100 | Fully optimized for AI — your content is maximally discoverable |
| A | 80-89 | Excellent — minor improvements possible |
| B | 70-79 | Good — some AI optimization gaps |
| C | 60-69 | Average — significant room for improvement |
| D | 50-59 | Below average — missing key AI signals |
| F | 0-49 | Not AI-ready — major changes needed |
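The score-to-grade mapping is a straight threshold lookup over the bands in the table; a sketch of such a lookup (illustrative only):

```python
# Grade bands from the table above, highest threshold first.
GRADE_BANDS = [(90, "A+"), (80, "A"), (70, "B"), (60, "C"), (50, "D")]

def grade(score):
    """Map a 0-100 score to a letter grade."""
    for threshold, letter in GRADE_BANDS:
        if score >= threshold:
            return letter
    return "F"  # 0-49

print(grade(72))  # B
```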

Why AI readiness matters

In 2025-2026, how AI systems discover and use your content is becoming as important as traditional SEO:

  • AI search engines (Perplexity, Google AI Overviews, ChatGPT Search) now drive significant traffic
  • LLM agents (Claude, GPT, Gemini) are used by millions to find and evaluate products, services, and information
  • llms.txt is a new standard (like robots.txt was for search engines) that tells AI how to consume your content
  • Websites blocking AI crawlers lose visibility in AI-powered search results
  • Schema.org markup helps AI understand what your page is about, not just what it says

Input

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | string[] | Yes | | Website URLs to audit (e.g., https://example.com) |
| checkLlmsTxt | boolean | No | true | Check /llms.txt and /llms-full.txt |
| checkRobotsTxt | boolean | No | true | Check robots.txt AI crawler directives |
| checkStructuredData | boolean | No | true | Check Schema.org structured data |
| checkMetaTags | boolean | No | true | Check meta tags quality |
| timeout | integer | No | 30 | HTTP request timeout (seconds) |
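A run input combining these fields might look like the following (URLs and values here are purely illustrative):

```json
{
  "urls": ["https://example.com", "https://stripe.com"],
  "checkLlmsTxt": true,
  "checkRobotsTxt": true,
  "checkStructuredData": true,
  "checkMetaTags": true,
  "timeout": 30
}
```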

Output

Each URL produces a result with:

| Field | Type | Description |
|---|---|---|
| url | string | The audited website URL |
| aiReadinessScore | integer | Overall score 0-100 |
| grade | string | Letter grade (A+ through F) |
| llmsTxt | object | llms.txt check results (score, exists, findings) |
| robotsTxt | object | robots.txt AI directives (score, crawler permissions, findings) |
| structuredData | object | Schema.org analysis (score, JSON-LD count, schema types, findings) |
| metaTags | object | Meta tags analysis (score, title, description, OG/Twitter, findings) |
| recommendations | string[] | Prioritized actionable recommendations |
| auditedAt | string | ISO 8601 timestamp |
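To give a concrete sense of what the structuredData check involves, here is a stdlib-only sketch that pulls Schema.org @type values out of JSON-LD script tags in a page's initial HTML. This is an assumption about the general approach, not the actor's actual implementation:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect @type values from <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_ldjson = False
        self.types = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ldjson = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ldjson = False

    def handle_data(self, data):
        if not self._in_ldjson:
            return
        try:
            doc = json.loads(data)
        except ValueError:
            return  # skip malformed JSON-LD
        for item in doc if isinstance(doc, list) else [doc]:
            if isinstance(item, dict) and item.get("@type"):
                self.types.append(item["@type"])

# Hypothetical page snippet declaring an Organization.
html = ('<html><head><script type="application/ld+json">'
        '{"@context": "https://schema.org", "@type": "Organization", "name": "Example"}'
        '</script></head></html>')
parser = JsonLdExtractor()
parser.feed(html)
print(parser.types)  # ['Organization']
```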

Example output

```json
{
  "url": "https://stripe.com",
  "aiReadinessScore": 72,
  "grade": "B",
  "llmsTxt": {
    "score": 80,
    "exists": true,
    "fullExists": false,
    "contentLength": 63754,
    "findings": ["/llms.txt found (63,754 bytes)", "llms.txt has 355 lines (good detail)"]
  },
  "robotsTxt": {
    "score": 80,
    "exists": true,
    "aiCrawlers": {
      "GPTBot": "allowed (not mentioned — default)",
      "ClaudeBot": "allowed (not mentioned — default)"
    },
    "findings": ["robots.txt found", "No AI crawlers are blocked"]
  },
  "recommendations": [
    "MEDIUM: Add /llms-full.txt with comprehensive site content for deep AI indexing."
  ],
  "auditedAt": "2026-03-06T12:00:00+00:00"
}
```

AI crawlers checked

The actor checks permissions for these AI crawlers in robots.txt:

| Crawler | Operator |
|---|---|
| GPTBot | OpenAI |
| ChatGPT-User | OpenAI |
| ClaudeBot | Anthropic |
| Claude-Web | Anthropic |
| Google-Extended | Google (Gemini) |
| Amazonbot | Amazon |
| PerplexityBot | Perplexity AI |
| YouBot | You.com |
| CCBot | Common Crawl |
| Bytespider | ByteDance |
| Cohere-ai | Cohere |
| FacebookBot | Meta |
| anthropic-ai | Anthropic |
| Applebot-Extended | Apple |
Use cases

  • SEO professionals: Audit client websites for AI search readiness
  • Web developers: Check your site before deploying AI optimization changes
  • Digital marketers: Compare competitor AI readiness scores
  • AI tool builders: Find websites with good structured data for training/indexing
  • Consultants: Generate AI readiness reports for clients

FAQ

Q: Do I need an API key? A: No. This actor only reads publicly available files (llms.txt, robots.txt, HTML) from websites you specify.

Q: Is this legal? A: Yes. The actor only reads files that websites explicitly serve to all visitors (robots.txt, HTML pages). It respects the same access that any web browser has.

Q: How many URLs can I audit at once? A: Up to 100 URLs per run. Each URL is audited independently.

Q: What is llms.txt? A: A new standard (2025-2026) similar to robots.txt but for AI. It tells LLMs and AI agents how to consume your content. See llmstxt.org for details.

Q: Why does my site score low? A: Most websites haven't optimized for AI yet. Check the recommendations in the output for specific improvements.

Q: Does this check mobile or JavaScript-rendered pages? A: This actor uses HTTP requests (not a browser), so it checks the initial HTML response. JavaScript-rendered content may not be captured. For most sites, the key signals (llms.txt, robots.txt, meta tags, JSON-LD) are in the initial HTML.

Q: Can I run this on a schedule? A: Yes. Use Apify's scheduling feature to run periodic AI readiness audits and track improvements over time.

Q: How is the score calculated? A: The overall score is a weighted average: llms.txt (30%), robots.txt AI directives (25%), Schema.org structured data (25%), and meta tags (20%). Each check scores 0-100 independently.