SEO Audit Tool

Comprehensive SEO analysis for your entire website. Crawl all pages, identify technical issues, and get actionable insights to improve your search engine rankings.

πŸš€ What does this actor do?

This SEO audit tool crawls your entire website and performs detailed analysis on each page, identifying errors and issues that could negatively impact your SEO performance. Get comprehensive reports on technical SEO, content quality, and on-page optimization.

✨ Key Features

  • πŸ” Single Page or Full Site Crawl: Audit a single page quickly or analyze your entire website
  • πŸ“Š Technical SEO Checks: Validate meta tags, headers, and HTML structure
  • πŸ–ΌοΈ Image Optimization: Identify missing alt tags and broken images
  • πŸ”— Link Analysis: Detect broken links and no-follow issues
  • πŸ“ Content Quality: Assess content length and keyword optimization
  • πŸ“± Mobile Optimization: Check viewport and responsive design elements
  • 🏷️ Structured Data: Validate JSON-LD and microdata implementation
  • πŸ“ˆ Google Analytics: Verify tracking code integration
  • πŸ’° Pay-Per-Event Pricing: Only pay for pages successfully audited

πŸ“₯ Input Configuration

Required Fields

  • Start URL: The webpage URL where the audit begins (e.g., https://apify.com)

Optional Fields

  • Crawl all pages recursively (default: false):
    • false - Audits only the single URL provided (fast, recommended for quick checks)
    • true - Crawls all pages on the website (comprehensive site audit)
  • Max pages (default: 10): Maximum number of pages to crawl (only applies when recursive crawling is enabled)
  • Max depth (default: unlimited): Maximum crawl depth (only applies when recursive crawling is enabled)
  • Proxy configuration: Optional proxy settings (Apify Proxy requires a valid API token)
  • User Agent: Custom user agent string for testing mobile/desktop views
  • Viewport width/height: Custom viewport dimensions for responsive testing
  • Page Navigation timeout: Timeout in milliseconds for page loading (default: 60000ms)
  • Max Request Retries: Number of retry attempts for failed requests
  • SEO params: Override default SEO validation parameters

Example Input

{
  "startUrl": "https://apify.com",
  "crawlRecursive": false,
  "maxRequestsPerCrawl": 10,
  "pageTimeout": 60000
}
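
For a full-site audit, enable recursive crawling and raise the page cap. This variant uses only the fields already shown above:

{
  "startUrl": "https://apify.com",
  "crawlRecursive": true,
  "maxRequestsPerCrawl": 50,
  "pageTimeout": 60000
}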

πŸ“€ Output & Dataset Schema

All results are stored in the Apify dataset. Each page audit produces a comprehensive JSON object with the following structure:

Core Page Information

  • url - URL of the audited page
  • title - Page title from <title> tag
  • isLoaded - Whether the page loaded successfully

Technical SEO

  • isDoctype - HTML doctype declaration present
  • isCharacterEncode - Meta charset tag present
  • isViewport - Viewport meta tag present (mobile optimization)
  • robotsFileExists - robots.txt file exists at domain root
  • faviconExists - favicon.ico exists at domain root
  • pageIsBlocked - Page has noindex/nofollow robots meta tag
  • isUsingFlash - Page uses Flash content (deprecated)

Meta Tags & Content

  • isMetaDescription - Meta description tag exists
  • metaDescription - Content of meta description
  • isMetaDescriptionEnoughLong - Meta description length is optimal (< 140 chars)
  • isTitle - Title tag exists
  • isTitleEnoughLong - Title length is optimal (10-70 chars)

Headings

  • isH1 - H1 heading exists
  • h1 - Content of H1 heading
  • isH1OnlyOne - Exactly one H1 tag present (SEO best practice)
  • isH2 - H2 headings exist

Links

  • linksCount - Total number of links on page
  • isTooEnoughLinks - Link count within acceptable range (< 3000)
  • internalNoFollowLinks - Array of internal links with nofollow attribute
  • internalNoFollowLinksCount - Count of internal nofollow links
  • brokenLinks - Array of internal broken links
  • brokenLinksCount - Count of internal broken links
  • externalBrokenLinks - Array of external broken links
  • externalBrokenLinksCount - Count of external broken links

Images

  • notOptimizedImages - Array of images missing alt attributes
  • notOptimizedImagesCount - Count of images without alt tags
  • brokenImages - Array of broken image URLs
  • brokenImagesCount - Count of broken images

Content Quality

  • wordsCount - Total word count on the page
  • isContentEnoughLong - Word count passes the configured length check (350-word threshold by default; adjustable via maxWordsCount in SEO params)

Analytics & Tracking

  • isGoogleAnalyticsObject - Google Analytics object (ga) is present
  • isGoogleAnalyticsFunc - Google Analytics tracking function is present

Mobile & Performance

  • isAmp - Page uses AMP (Accelerated Mobile Pages)
  • isNotIframe - Page is free of iframes

Structured Data

  • jsonLd - JSON-LD structured data
    • isJsonLd - Whether JSON-LD is present
    • jsonLdData - Parsed JSON-LD data
  • microdata - Microdata structured data
    • isMicrodata - Whether microdata is present
    • microdata - Array of parsed microdata items

Error Handling

  • errorMessage - Error message if page failed to load (only present when isLoaded is false)

For the complete field-level JSON schema specification, see ./OUTPUT_SCHEMA.json.

The Actor output configuration is defined in ./.actor/output_schema.json following the Apify Actor output schema specification.

πŸ“‹ Sample Output

Successful Page Audit

{
  "url": "https://apify.com/",
  "title": "Web Scraping, Data Extraction and Automation - Apify",
  "isLoaded": true,
  "isGoogleAnalyticsObject": true,
  "isGoogleAnalyticsFunc": false,
  "isCharacterEncode": true,
  "isMetaDescription": true,
  "metaDescription": "Apify extracts data from websites, crawls lists of URLs and automates workflows on the web. Turn any website into an API in a few minutes!",
  "isMetaDescriptionEnoughLong": true,
  "isDoctype": true,
  "isTitle": true,
  "isTitleEnoughLong": true,
  "isH1": true,
  "h1": "The web scraping and automation platform",
  "isH1OnlyOne": true,
  "isH2": true,
  "linksCount": 91,
  "isTooEnoughLinks": true,
  "internalNoFollowLinks": [],
  "internalNoFollowLinksCount": 0,
  "notOptimizedImages": [],
  "notOptimizedImagesCount": 0,
  "wordsCount": 1373,
  "isContentEnoughLong": false,
  "isViewport": true,
  "isAmp": false,
  "isNotIframe": true,
  "isUsingFlash": false,
  "pageIsBlocked": false,
  "robotsFileExists": true,
  "faviconExists": true,
  "brokenLinks": [],
  "brokenLinksCount": 0,
  "externalBrokenLinks": [],
  "externalBrokenLinksCount": 0,
  "brokenImages": [],
  "brokenImagesCount": 0,
  "jsonLd": {
    "isJsonLd": true,
    "jsonLdData": {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Apify",
      "url": "https://apify.com"
    }
  },
  "microdata": {
    "isMicrodata": false,
    "microdata": []
  }
}

Failed Page Load

{
  "url": "https://example.com/broken-page",
  "isLoaded": false,
  "errorMessage": "Navigation timeout of 60000 ms exceeded"
}

πŸ” SEO Issues Detected

The tool automatically identifies common SEO issues:

  • ❌ Missing or poorly optimized meta tags - Title too short/long, missing meta description
  • ❌ Heading structure problems - Missing H1, multiple H1 tags, no H2 tags
  • ❌ Image optimization issues - Missing alt attributes on images
  • ❌ Broken links and images - Non-functional internal/external links
  • ❌ Content quality concerns - Insufficient word count
  • ❌ Mobile optimization gaps - Missing viewport meta tag
  • ❌ Technical SEO problems - Missing robots.txt, favicon, or doctype
  • ❌ Internal nofollow links - Potential link equity issues
  • ❌ Deprecated technologies - Flash content usage
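
To act on these flags programmatically, here is a minimal Node.js sketch (a hypothetical helper, using only fields from the output schema above) that turns one dataset item into a readable issue list:

// Map an audit item's boolean flags and counters to human-readable SEO issues
function listIssues(item) {
  const issues = [];
  if (!item.isMetaDescription) issues.push('Missing meta description');
  if (!item.isTitleEnoughLong) issues.push('Title outside the 10-70 character range');
  if (!item.isH1) issues.push('Missing H1 heading');
  if (!item.isH1OnlyOne) issues.push('More than one H1 tag');
  if (!item.isViewport) issues.push('Missing viewport meta tag (mobile optimization)');
  if (item.brokenLinksCount > 0) issues.push(`${item.brokenLinksCount} broken internal links`);
  if (item.notOptimizedImagesCount > 0) issues.push(`${item.notOptimizedImagesCount} images without alt text`);
  return issues;
}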

πŸ’° Pricing

This actor uses pay-per-event pricing:

  • $0.001 per page successfully audited
  • Failed pages are not charged
  • Single page audits are very cost-effective
  • Full site crawls are charged only for successfully processed pages
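
For example, a full-site crawl capped at 500 pages costs at most 500 Γ— $0.001 = $0.50, and a quick single-page audit costs just $0.001.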

πŸ› οΈ Customizing SEO Parameters

You can override default SEO validation thresholds using the seoParams input:

{
  "startUrl": "https://example.com",
  "seoParams": {
    "maxTitleLength": 70,
    "minTitleLength": 10,
    "maxMetaDescriptionLength": 160,
    "maxLinksCount": 3000,
    "maxWordsCount": 500,
    "outputLinks": true,
    "workingStatusCodes": [200, 301, 302, 304]
  }
}

πŸ“Š Use Cases

  • Pre-launch SEO audit - Validate new websites before going live
  • Ongoing monitoring - Regular SEO health checks
  • Migration validation - Ensure SEO integrity after site migrations
  • Competitor analysis - Audit competitor websites for insights
  • Content optimization - Identify pages needing content improvements
  • Technical SEO fixes - Find and fix broken links, missing tags, etc.

πŸš€ Getting Started

  1. Enter your website URL in the Start URL field
  2. Choose single page or recursive crawling mode
  3. Configure optional parameters (proxy, viewport, etc.)
  4. Run the actor
  5. Export results as JSON, CSV, or Excel
  6. Fix identified SEO issues to improve rankings!
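
Step 5 can also be scripted: the dataset items endpoint accepts a format query parameter (for example json, csv, or xlsx), so exports can be automated:

# Export the audit results as CSV (replace DATASET_ID with the run's defaultDatasetId)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN"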

πŸ”Œ API Integration & Automation

You can automate SEO audits using the Apify API. This is useful for:

  • Scheduled audits (daily/weekly monitoring)
  • CI/CD pipeline integration
  • Bulk auditing multiple websites
  • Custom reporting dashboards

Using the Apify API Client

Node.js Example

const { ApifyClient } = require('apify-client');

// Initialize the client with your API token
const client = new ApifyClient({
    token: 'YOUR_APIFY_TOKEN',
});

(async () => {
    // Run the SEO audit Actor and wait for it to finish
    const run = await client.actor('YOUR_USERNAME/seo-audit-tool').call({
        startUrl: 'https://example.com',
        crawlRecursive: false,
        pageTimeout: 60000,
    });

    // Fetch results from the run's default dataset
    const { items } = await client.dataset(run.defaultDatasetId).listItems();

    // Process the SEO audit results
    items.forEach((item) => {
        console.log(`URL: ${item.url}`);
        console.log(`Title: ${item.title}`);
        console.log(`SEO issues: ${item.brokenLinksCount} broken links, ${item.notOptimizedImagesCount} images without alt tags`);
    });
})();
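
Note that call() waits for the run to finish before resolving, so the dataset read above always sees the completed audit.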

Python Example

from apify_client import ApifyClient

# Initialize the client with your API token
client = ApifyClient('YOUR_APIFY_TOKEN')

# Run the SEO audit Actor and wait for it to finish
run = client.actor('YOUR_USERNAME/seo-audit-tool').call(
    run_input={
        'startUrl': 'https://example.com',
        'crawlRecursive': False,
        'pageTimeout': 60000,
    }
)

# Fetch results from the run's default dataset
items = client.dataset(run['defaultDatasetId']).list_items().items

# Process the SEO audit results
for item in items:
    print(f"URL: {item['url']}")
    print(f"Title: {item['title']}")
    print(f"SEO issues: {item['brokenLinksCount']} broken links, {item['notOptimizedImagesCount']} images without alt tags")

Using REST API

# Start the Actor run
curl -X POST https://api.apify.com/v2/acts/YOUR_USERNAME~seo-audit-tool/runs \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "startUrl": "https://example.com",
    "crawlRecursive": false,
    "pageTimeout": 60000
  }'

# Get the run status (replace RUN_ID with the ID from the previous response)
curl https://api.apify.com/v2/acts/YOUR_USERNAME~seo-audit-tool/runs/RUN_ID \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN"

# Download the dataset (replace DATASET_ID with defaultDatasetId from the run status)
curl https://api.apify.com/v2/datasets/DATASET_ID/items \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN"

Scheduled Audits

Set up automated audits using Apify Schedules:

  1. Go to Schedules in your Apify Console
  2. Create a new schedule (e.g., daily at 9 AM)
  3. Select the SEO Audit Tool actor
  4. Configure your input parameters
  5. Enable notifications for failures
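
In cron terms, "daily at 9 AM" is the expression 0 9 * * *.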

Webhook Integration

Receive audit results automatically via webhooks:

const run = await client.actor('YOUR_USERNAME/seo-audit-tool').call({
    startUrl: 'https://example.com',
    crawlRecursive: false,
}, {
    webhooks: [{
        eventTypes: ['ACTOR.RUN.SUCCEEDED'],
        requestUrl: 'https://your-server.com/webhook',
    }],
});
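
On the receiving side, a minimal sketch of the webhook endpoint (assuming an Express server behind the requestUrl above; by default the webhook payload carries the finished run object under resource):

const express = require('express');
const { ApifyClient } = require('apify-client');

const app = express();
app.use(express.json());

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

app.post('/webhook', async (req, res) => {
    // Default Apify webhook payload: { eventType, resource, ... },
    // where `resource` is the run object including defaultDatasetId
    const { resource } = req.body;
    const { items } = await client.dataset(resource.defaultDatasetId).listItems();
    console.log(`Audit finished: ${items.length} pages checked`);
    res.sendStatus(200);
});

app.listen(3000);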

Bulk Auditing Multiple Sites

const websites = [
    'https://example1.com',
    'https://example2.com',
    'https://example3.com',
];

// Run audits in parallel
const runs = await Promise.all(
    websites.map((url) =>
        client.actor('YOUR_USERNAME/seo-audit-tool').call({
            startUrl: url,
            crawlRecursive: false,
        })
    )
);

// Collect all results
const allResults = await Promise.all(
    runs.map((run) => client.dataset(run.defaultDatasetId).listItems())
);
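
To work with one combined list, the per-site results can be flattened (continuing the sketch above):

// Each listItems() call wraps its records in `items`; flatten into one array of page audits
const allPages = allResults.flatMap(({ items }) => items);
console.log(`Audited ${allPages.length} pages across ${websites.length} sites`);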

API Documentation

For complete API documentation, see the Apify API reference: https://docs.apify.com/api/v2