Broken Link Monitor For Website Owners

Pricing: Pay per usage
Developer: Shahab Uddin

I build web automation and website auditing tools for website owners, agencies, and marketers.
# Broken Link Monitor for Website Owners
Find broken links, redirects, and website errors instantly. Scan any website and detect SEO issues that hurt rankings, user experience, and conversions.
## What is this tool?
Broken Link Monitor for Website Owners is a website crawler and SEO audit tool that scans your website and detects:

- Broken links (4xx / 5xx errors)
- Redirects (301 / 302 / 307 / 308)
- Page errors and crawl issues
- Internal and external links
- Website link health overview
Perfect for website owners, SEO experts, agencies, developers, and bloggers.
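The detection categories above map directly onto HTTP status codes. Here is a minimal sketch of that mapping; the `classify_link` helper is hypothetical and not taken from the actor's source:

```python
# Hypothetical sketch of how a link checker could label each checked URL;
# not the actor's actual implementation.
def classify_link(status_code):
    """Map an HTTP status (or None for a failed request) to a result label."""
    if status_code is None:
        return "error"  # request failed outright (timeout, DNS, SSL)
    if status_code in (301, 302, 307, 308):
        return "redirect"
    if 400 <= status_code < 600:  # 4xx client / 5xx server errors
        return "broken"
    return "ok"

print(classify_link(404))  # -> broken
```

Anything outside those ranges (2xx, and 3xx codes other than the listed redirects) is treated as healthy here, mirroring the 4xx/5xx rule stated above.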
## Why use this tool?
Broken links and redirects can:
- Hurt your Google rankings (SEO)
- Reduce user trust and experience
- Cause lost traffic and revenue
- Affect site crawlability and indexing
This tool helps you fix those issues fast.
## Key Features

- Multi-page website crawler
- Internal and external link checker
- Technical SEO audit ready
- Smart SSL handling (auto retry)
- Fast scanning with safe defaults
- Export results (JSON / CSV)
- Intelligent crawl stopping logic
- Works even on sites with SSL issues

## Best For

- Website owners fixing broken links
- SEO professionals doing technical audits
- Agencies managing client websites
- Developers maintaining site health
- Bloggers and ecommerce store owners

## Input
Provide the following:
```json
{
  "startUrl": "https://example.com/",
  "maxPages": 25,
  "checkExternalLinks": false,
  "ignoreSslErrors": false
}
```

### Input Options

| Field | Description |
| --- | --- |
| `startUrl` | Website URL to scan |
| `maxPages` | Max pages to crawl (recommended: 10–50) |
| `checkExternalLinks` | Whether to check external links |
| `ignoreSslErrors` | Ignore SSL errors if the site has issues |

## Output
Each result includes:
- `sourcePage`: page where the link was found
- `linkUrl`: detected link
- `finalUrl`: final URL after redirects
- `statusCode`: HTTP status
- `isInternal`: internal or external
- `result`: broken / redirect / error

### Summary Example

```json
{
  "startUrl": "https://example.com/",
  "crawledPages": 12,
  "foundLinks": 145,
  "brokenLinks": 5,
  "redirects": 9,
  "externalLinks": 60,
  "internalLinks": 85
}
```

## How it works

1. Starts from your homepage
2. Crawls internal links (same domain)
3. Extracts all valid URLs
4. Checks the status of each link
5. Reports broken links, redirects, and errors
6. Stops based on `maxPages` or crawl limits

## Smart Crawling System

- Only crawls internal pages
- Avoids duplicate URLs
- Handles relative and absolute links
- Ignores invalid links (mailto, tel, javascript)
- Continues even if some pages fail
- Automatically retries SSL errors when needed

## Important Notes

- If a site has JavaScript-only navigation, some links may not be detected
- Increase `maxPages` for deeper scans
- Use `ignoreSslErrors = true` only if SSL issues occur
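The relative/absolute link handling and the mailto/tel/javascript filtering described in the Smart Crawling System can be sketched with Python's standard library. `normalize_link` is a hypothetical helper name, not the actor's code:

```python
from urllib.parse import urljoin, urlparse

def normalize_link(base_url, href):
    """Resolve a raw href against the page URL; return (absolute_url, is_internal),
    or None for schemes a crawler should skip (mailto:, tel:, javascript:)."""
    absolute = urljoin(base_url, href)  # handles relative and absolute links
    parsed = urlparse(absolute)
    if parsed.scheme not in ("http", "https"):
        return None
    is_internal = parsed.netloc == urlparse(base_url).netloc
    return absolute, is_internal

print(normalize_link("https://example.com/blog/", "/about"))
# -> ('https://example.com/about', True)
print(normalize_link("https://example.com/", "mailto:hi@example.com"))
# -> None
```

Duplicate avoidance ("avoids duplicate URLs") would then amount to keeping a set of already-normalized URLs and skipping any link seen before.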
## Why this tool is different

Unlike basic link checkers, this tool:
- Performs a real multi-page crawl
- Detects technical SEO issues
- Handles SSL failures automatically
- Is designed for real-world websites
- Is built for speed and accuracy

## Example Use Cases

- SEO audit before a website launch
- Fixing broken links after a migration
- Monitoring site health regularly
- Improving Google indexing
- Finding hidden crawl errors

## SEO Keywords
Broken link checker, website audit tool, SEO audit tool, website crawler, link checker tool, technical SEO tool, find broken links, website health checker, crawl website, SEO scanner
## Apify Integration
This actor is designed to run on the Apify platform. It uses Apify's input schema and dataset features for configuration and result storage.
### How to use on Apify

1. Deploy the actor: push this repository to your Apify account, or create a new actor and upload the code.
2. Configure input: the actor uses the following input schema (see `input_schema.json`):
   - `startUrl` (string, required): the first page to crawl (e.g., https://example.com/)
   - `maxPages` (integer, default: 25): maximum number of internal pages to crawl
   - `checkExternalLinks` (boolean, default: false): whether to check external links
   - `ignoreSslErrors` (boolean, default: false): ignore SSL certificate errors
3. Run the actor: it will crawl the website, check links, and store results in the Apify dataset.
4. View results: results are available in the run's dataset as JSON or CSV. Each record contains `sourcePage`, `linkUrl`, `finalUrl`, `statusCode`, `isInternal`, and `result`.
### Example input (Apify UI or API)

```json
{
  "startUrl": "https://example.com/",
  "maxPages": 25,
  "checkExternalLinks": false,
  "ignoreSslErrors": false
}
```
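Given the schema defaults listed above, a partial input like this one expands into a full configuration. A sketch of that merge, assuming the documented defaults; the `resolve_input` helper is illustrative, not the actor's code:

```python
# Sketch: merge a partial actor input with the schema defaults documented above.
# Field names and defaults mirror the input schema; the helper is hypothetical.
DEFAULTS = {
    "maxPages": 25,
    "checkExternalLinks": False,
    "ignoreSslErrors": False,
}

def resolve_input(raw):
    """Reject inputs missing the required startUrl, then fill in defaults."""
    if "startUrl" not in raw:
        raise ValueError("startUrl is required")
    return {**DEFAULTS, **raw}

config = resolve_input({"startUrl": "https://example.com/"})
print(config["maxPages"])  # -> 25
```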
### Example output (dataset record)

```json
{
  "sourcePage": "https://example.com/",
  "linkUrl": "https://www.example.com/",
  "finalUrl": "https://www.example.com/",
  "statusCode": 200,
  "isInternal": false,
  "result": "ok"
}
```
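Per-link records of this shape are also what the summary counts shown earlier can be computed from. A sketch of that aggregation; the `summarize` helper is hypothetical, not part of the actor:

```python
# Sketch: aggregate dataset records (shape shown above) into summary counts.
from collections import Counter

def summarize(records):
    """Reduce per-link records to the summary fields shown earlier."""
    results = Counter(r["result"] for r in records)
    internal = sum(1 for r in records if r["isInternal"])
    return {
        "foundLinks": len(records),
        "brokenLinks": results["broken"],
        "redirects": results["redirect"],
        "internalLinks": internal,
        "externalLinks": len(records) - internal,
    }

records = [
    {"result": "ok", "isInternal": True},
    {"result": "broken", "isInternal": True},
    {"result": "redirect", "isInternal": False},
]
print(summarize(records)["brokenLinks"])  # -> 1
```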