Broken Link Checker
Pricing: Pay per event
Developer: Stas Persiianenko
Last modified: 15 minutes ago
Broken Link Checker — Find Dead Links on Any Website
What does Broken Link Checker do?
Broken Link Checker crawls your website, discovers all internal and external links, and verifies each one. It finds 404 errors, server errors, timeouts, and other broken links — then tells you exactly which page links to each broken URL and what the anchor text says.
Who is it for?
- 🔍 SEO specialists — finding and fixing broken links that hurt search rankings
- 💻 Web developers — validating all links work before deploying site updates
- 📝 Content managers — auditing large sites for outdated or dead links
- 🏢 Digital agencies — running link health reports for client websites
- ♿ Accessibility auditors — ensuring all navigation paths lead to valid destinations
Why use Broken Link Checker?
- Full-site crawling — automatically discovers and follows internal pages up to your depth limit
- External link checking — verifies links to other websites, not just your own domain
- Source tracking — every broken link shows which page contains it and the exact anchor text
- Smart retry with backoff — retries timed-out requests with exponential backoff to reduce false positives
- Timeout/broken separation — clearly distinguishes confirmed broken links (4xx/5xx) from timeouts and connection issues
- No proxy by default — runs direct for speed and low cost; optional proxy with fallback-to-direct on timeout
- HEAD-first checking — uses lightweight HEAD requests with GET fallback to minimize load on target sites
- Structured output — results include status code, error type, severity, confirmation status, and diagnostics
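The HEAD-first checking and retry-with-backoff behaviors listed above can be sketched as follows. This is an illustrative model, not the actor's source code: `check_link` and the `fetch` callback are hypothetical names, and the exact backoff schedule is an assumption.

```python
import time

def check_link(url, fetch, timeout_secs=15, retry_count=2):
    """Check one URL. `fetch(method, url, timeout)` returns an HTTP status
    code or raises TimeoutError; it stands in for a real HTTP client."""
    delay = 1.0
    for attempt in range(retry_count + 1):
        try:
            # Lightweight HEAD request first, to minimize load on the target.
            status = fetch("HEAD", url, timeout_secs)
            if status == 405:  # server rejects HEAD: fall back to GET
                status = fetch("GET", url, timeout_secs)
            broken = 400 <= status < 600
            return {"statusCode": status,
                    "severity": "error" if broken else "ok",
                    "isBrokenConfirmed": broken}
        except TimeoutError:
            if attempt < retry_count:
                time.sleep(delay)
                delay *= 2  # exponential backoff between retries
    # Timeouts are reported as warnings, not confirmed broken links.
    return {"statusCode": None, "severity": "warning", "isBrokenConfirmed": False}
```

Note how a 4xx/5xx status is the only path to `isBrokenConfirmed: true`; a request that never completes ends up as a warning, matching the timeout/broken separation described above.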
How much does it cost to check for broken links?
Uses pay-per-event pricing:
| Event | Price | When charged |
|---|---|---|
| start | per run | Once when the actor starts |
| page-crawled | per page | Each internal page crawled and analyzed |
Example: Crawling a 50-page website costs 1 start + 50 page-crawled events. Platform compute and proxy costs are billed separately by Apify.
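For a rough cost estimate, the event math works out like this. The unit prices below are placeholders (the actual rates are listed on the actor's pricing tab), so treat this as a sketch of the pricing model, not a quote.

```python
# Hypothetical unit prices -- substitute the actor's published rates.
START_PRICE = 0.01          # per "start" event (assumed)
PAGE_CRAWLED_PRICE = 0.002  # per "page-crawled" event (assumed)

def estimate_events(pages_crawled: int) -> dict:
    # One start event per run, plus one page-crawled event per internal page.
    return {"start": 1, "page-crawled": pages_crawled}

def estimate_cost(pages_crawled: int) -> float:
    events = estimate_events(pages_crawled)
    return (events["start"] * START_PRICE
            + events["page-crawled"] * PAGE_CRAWLED_PRICE)

# A 50-page crawl generates 1 start + 50 page-crawled events.
```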
What data can you extract?
| Field | Example |
|---|---|
| url | "https://example.com/old-page" |
| statusCode | 404 |
| statusText | "Not Found" |
| isBrokenConfirmed | true |
| errorType | "http", "timeout", "dns", "tls", "blocked" |
| sourceUrl | "https://example.com/blog/post-1" |
| anchorText | "Click here for details" |
| linkType | "internal" or "external" |
| severity | "error" (confirmed broken) or "warning" (timeout/unreachable) |
| retryCountUsed | 2 |
| usedProxy | false |
| finalMethod | "HEAD" or "GET" |
| checkedAt | "2026-02-28T12:00:00.000Z" |
How to check a website for broken links
- Open Broken Link Checker in Apify Console.
- Enter the Website URL to crawl (e.g., https://your-site.com).
- Set Max pages to limit how many pages are crawled (default: 100).
- Enable or disable Check external links.
- Click Start and review broken links in the dataset.
Example input
```json
{
  "startUrl": "https://example.com",
  "maxPages": 50,
  "checkExternalLinks": true
}
```
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| startUrl | string | Yes | — | Website URL to crawl |
| maxPages | integer | No | 100 | Max internal pages to crawl (1–1000) |
| checkExternalLinks | boolean | No | true | Also check links to other domains |
| maxConcurrency | integer | No | 5 | Parallel requests (1–20) |
| requestTimeoutSecs | integer | No | 15 | Timeout per request in seconds |
| retryCount | integer | No | 2 | Retries for timed-out requests (0–5), uses exponential backoff |
| useProxy | boolean | No | false | Enable proxy for requests |
| proxyGroup | string | No | datacenter | Proxy group: datacenter, residential, or auto |
| timeoutFallbackToDirect | boolean | No | true | On proxy timeout, retry without proxy |
| proxyConfiguration | object | No | — | Advanced: custom Apify proxy settings |
Output example
```json
{
  "url": "https://example.com/deleted-page",
  "statusCode": 404,
  "statusText": "Not Found",
  "sourceUrl": "https://example.com/blog/post-3",
  "anchorText": "Read our case study",
  "linkType": "internal",
  "severity": "error",
  "isBrokenConfirmed": true,
  "errorType": "http",
  "retryCountUsed": 0,
  "usedProxy": false,
  "finalMethod": "HEAD",
  "checkedAt": "2026-02-28T12:34:56.789Z"
}
```
Tips for best results
- Start with a small `maxPages` (10–20) to get a quick overview before running a full crawl.
- Enable external link checking to catch broken outbound links — these hurt SEO too.
- Leave proxy off for most sites. Only enable it if you get blocked or need geo-specific checking.
- Increase `requestTimeoutSecs` if you see many timeout warnings (slow CDNs, overseas servers).
- Use `retryCount: 2` (the default) to reduce false positives from transient network issues.
- Filter by `isBrokenConfirmed: true` to see only confirmed broken links (4xx/5xx responses).
- Lower `maxConcurrency` if the target site starts rate-limiting your requests.
- Use the `sourceUrl` field to quickly find and fix the pages containing broken links.
- Schedule regular runs with Apify Scheduler to catch new broken links as your site changes.
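Two of these tips, filtering by `isBrokenConfirmed` and tracing pages via `sourceUrl`, combine naturally when post-processing the dataset. A small sketch over illustrative sample items (the field names match the output schema; the data is made up):

```python
from collections import defaultdict

def broken_links_by_source(items):
    """Keep only confirmed broken links and group them by the page that
    contains them, so each source page can be fixed in one pass."""
    by_source = defaultdict(list)
    for item in items:
        if item.get("isBrokenConfirmed"):
            by_source[item["sourceUrl"]].append(
                {"url": item["url"], "anchorText": item["anchorText"]})
    return dict(by_source)

# Illustrative dataset items: one confirmed 404, one timeout warning.
sample = [
    {"url": "https://example.com/old-page", "isBrokenConfirmed": True,
     "sourceUrl": "https://example.com/blog/post-1",
     "anchorText": "Click here for details"},
    {"url": "https://slow.example.org/", "isBrokenConfirmed": False,
     "sourceUrl": "https://example.com/blog/post-1",
     "anchorText": "Partner site"},
]
```

Running `broken_links_by_source(sample)` keeps only the confirmed 404 and files it under the blog post that links to it; the timeout warning is excluded.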
Integrations
Connect with Make, Zapier, n8n, or any HTTP tool. Use Apify webhooks to get notified when broken links are found. Feed results into your SEO dashboard or ticketing system.
Using the Apify API
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('automation-lab/broken-link-checker').call({
    startUrl: 'https://your-site.com',
    maxPages: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log('Broken links found:', items.length);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_TOKEN')

run = client.actor('automation-lab/broken-link-checker').call(run_input={
    'startUrl': 'https://your-site.com',
    'maxPages': 50,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Broken links found: {len(items)}')
```
cURL
```shell
curl "https://api.apify.com/v2/acts/automation-lab~broken-link-checker/runs" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"startUrl": "https://your-site.com", "maxPages": 50}'
```
Use with AI agents via MCP
Broken Link Checker is available as a tool for AI assistants via the Model Context Protocol (MCP).
Setup for Claude Code
```shell
claude mcp add --transport http apify "https://mcp.apify.com"
```
Setup for Claude Desktop, Cursor, or VS Code
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}
```
Example prompts
- "Find all broken links on https://example.com"
- "Check our website for dead links"
Learn more in the Apify MCP documentation.
Legality
This tool analyzes publicly accessible web content. Automated analysis of public web resources is standard practice in SEO and web development. Always respect robots.txt directives and rate limits when analyzing third-party websites. For personal data processing, ensure compliance with applicable privacy regulations.
FAQ
Does it check external links? Yes, by default. Disable with checkExternalLinks: false to only check same-domain links.
Will it overload my website? No. It uses maxConcurrency: 5 by default and HEAD-first requests. For sensitive sites, lower concurrency to 1–2.
What counts as a broken link? Any HTTP 4xx or 5xx response after retries. Timeouts and connection failures are reported as warnings, not confirmed broken.
How do I filter confirmed broken links? Filter the dataset by isBrokenConfirmed: true to exclude timeouts and unreachable links.
Does it follow redirects? Yes. Redirected links are considered OK. Only the final status matters.
Do I need a proxy? Usually no. Proxy is off by default. Enable it only if the target site blocks direct requests.
Can I schedule regular checks? Yes. Use Apify Scheduler to run weekly or daily checks and get notified of new broken links.
Does it need a browser? No. It uses fast HTTP requests only — no Chromium/Playwright overhead.
I'm seeing many timeouts but the links work in my browser. What should I do? Some servers are slow to respond or rate-limit automated requests. Try increasing requestTimeoutSecs to 30 or more, lowering maxConcurrency to 2–3, or enabling proxy with useProxy: true. Timeout results appear as warnings, not confirmed broken links.
The crawler is not finding all pages on my site. How can I increase coverage? Increase the maxPages parameter to allow the crawler to discover more internal pages. The crawler follows links from each page it visits, so deeply nested pages may require a higher limit. Also ensure your site's internal linking allows the crawler to reach all sections.
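The coverage behavior described in this answer follows from a crawl capped by `maxPages`: pages are only discovered through links on already-visited pages. The sketch below assumes a breadth-first traversal (the actor's exact visit order isn't documented) and shows why deeply nested pages need a higher limit:

```python
from collections import deque

def crawl_order(start, internal_links, max_pages=100):
    """`internal_links` maps each page to the internal links found on it.
    Returns pages in the order a breadth-first crawl would visit them."""
    seen, queue, visited = {start}, deque([start]), []
    while queue and len(visited) < max_pages:
        page = queue.popleft()
        visited.append(page)
        for link in internal_links.get(page, []):
            if link not in seen:  # never enqueue a page twice
                seen.add(link)
                queue.append(link)
    return visited

# With maxPages=3, the nested page /a/c is never reached.
site = {"/": ["/a", "/b"], "/a": ["/a/c"]}
```

With `max_pages=3` the crawl stops at the first three pages and the nested `/a/c` is skipped; raising the limit lets the crawler reach it.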
Other SEO tools
- Canonical URL Checker — Validate canonical URL tags on web pages for SEO.
- SEO Title Checker — Check page titles for SEO best practices.
- Heading Structure Checker — Analyze heading hierarchy (H1-H6) on web pages.
- HTML Validator — Validate HTML markup for errors and warnings.
- Website Health Report — Get a comprehensive health report for any website.
- Redirect Chain Analyzer — Trace redirect chains and detect redirect loops.
