🔗 Broken Link Checker
Crawl websites to find broken links, 404 errors, and dead URLs. Checks internal and external links with configurable depth. Essential for SEO audits, website maintenance, and content teams.
Developer: Taro Yamada
Last modified: 5 hours ago
Quickstart
Start with the Quickstart template (single starting URL, depth 2). For full-site audits, use Deep Crawl (depth 5, up to 500 pages).
Key Features
- 🕸️ Configurable crawl depth — Follows internal links up to 5 levels deep, up to 500 pages per run
- 🌐 Internal + external checks — Validate both your own links and outbound references
- 📍 Anchor text reporting — Identify which link text points to the broken URL
- 🏷️ Error classification — TIMEOUT, DNS_FAILED, CONNECTION_REFUSED, SSL_ERROR
- ⚡ Concurrent fetching — 1-10 parallel requests to speed up crawls
- 📊 Per-page breakdown — Each result shows all broken links grouped by source page
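The error classification above can be illustrated with a small mapping from Python exceptions to the actor's error codes. This is a hypothetical sketch of the idea, not the actor's published internals; `classify_error` is an illustrative helper name.

```python
import socket
import ssl
from urllib.error import URLError

def classify_error(exc: Exception) -> str:
    """Map a fetch exception to one of the actor's error codes (illustrative)."""
    if isinstance(exc, socket.timeout):
        return "TIMEOUT"
    if isinstance(exc, ssl.SSLError):
        return "SSL_ERROR"
    if isinstance(exc, ConnectionRefusedError):
        return "CONNECTION_REFUSED"
    if isinstance(exc, socket.gaierror):
        return "DNS_FAILED"
    if isinstance(exc, URLError):
        # urllib wraps the underlying cause; unwrap and classify that instead
        reason = getattr(exc, "reason", None)
        if isinstance(reason, Exception):
            return classify_error(reason)
    return "UNKNOWN"

print(classify_error(socket.timeout()))          # TIMEOUT
print(classify_error(ConnectionRefusedError()))  # CONNECTION_REFUSED
```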
Use Cases
| Who | Why |
|---|---|
| SEO agencies | Regular broken-link audits for client websites to protect ranking |
| Content editors | Find dead outbound links in blog posts and documentation |
| E-commerce sites | Monitor product pages for broken navigation and outbound partner links |
| Site migrations | Validate internal linking after URL restructuring |
| Technical SEO | Identify redirect chains and crawl traps that waste crawl budget |
Input
| Field | Type | Default | Description |
|---|---|---|---|
| startUrls | string[] | (required) | URLs to start crawling (max 10) |
| maxDepth | integer | 2 | Crawl depth (1-5) |
| maxPages | integer | 50 | Max pages to crawl (1-500) |
| concurrency | integer | 5 | Parallel requests (1-10) |
| checkExternal | boolean | true | Check external links |
| timeoutMs | integer | 10000 | Request timeout in ms |
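A caller can apply the documented defaults and ranges before starting a run. The sketch below is a hypothetical client-side helper (`build_input` is not part of the actor); it only encodes the limits listed in the table above.

```python
# Documented defaults and ranges from the input table (assumption: the
# actor rejects out-of-range values, so we validate client-side too).
DEFAULTS = {"maxDepth": 2, "maxPages": 50, "concurrency": 5,
            "checkExternal": True, "timeoutMs": 10000}
LIMITS = {"maxDepth": (1, 5), "maxPages": (1, 500), "concurrency": (1, 10)}

def build_input(start_urls, **overrides):
    if not start_urls or len(start_urls) > 10:
        raise ValueError("startUrls must contain 1-10 URLs")
    run_input = {**DEFAULTS, **overrides, "startUrls": list(start_urls)}
    for field, (lo, hi) in LIMITS.items():
        if not lo <= run_input[field] <= hi:
            raise ValueError(f"{field} must be between {lo} and {hi}")
    return run_input

print(build_input(["https://example.com"], maxDepth=3)["maxDepth"])  # 3
```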
Input Example
```json
{
  "startUrls": ["https://example.com"],
  "maxDepth": 2,
  "maxPages": 50,
  "concurrency": 5,
  "checkExternal": true
}
```
Output Example
```json
{
  "url": "https://example.com/blog",
  "brokenLinks": [
    {
      "href": "https://example.com/deleted-page",
      "statusCode": 404,
      "anchorText": "Old announcement",
      "isExternal": false,
      "error": null
    }
  ]
}
```
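Each dataset item follows the per-page shape shown above: a source `url` plus a `brokenLinks` list. A small summary pass over downloaded items might look like this (`summarize` is a hypothetical post-processing helper, not part of the actor's output):

```python
def summarize(items):
    """Count affected pages and split broken links into internal/external."""
    summary = {"pages_with_breakage": 0, "internal": 0, "external": 0}
    for item in items:
        if item["brokenLinks"]:
            summary["pages_with_breakage"] += 1
        for link in item["brokenLinks"]:
            summary["external" if link["isExternal"] else "internal"] += 1
    return summary

items = [{"url": "https://example.com/blog",
          "brokenLinks": [{"href": "https://example.com/deleted-page",
                           "statusCode": 404, "anchorText": "Old announcement",
                           "isExternal": False, "error": None}]}]
print(summarize(items))  # {'pages_with_breakage': 1, 'internal': 1, 'external': 0}
```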
FAQ
How does crawl depth work?
Depth 1 = only starting URLs. Depth 2 = starting URLs + links found on them. Depth 5 is the maximum and covers most typical sites.
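The depth counting above can be demonstrated on a toy link graph (this breadth-first sketch illustrates the semantics only; it is not the actor's crawler):

```python
from collections import deque

# Toy site: "/" links to "/a" and "/b"; "/a" links to "/c".
LINKS = {"/": ["/a", "/b"], "/a": ["/c"], "/b": [], "/c": []}

def crawl(start, max_depth):
    """Visit pages up to max_depth levels; depth 1 = the start URL only."""
    seen, queue = {start}, deque([(start, 1)])
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue  # do not follow links beyond the configured depth
        for nxt in LINKS.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return sorted(seen)

print(crawl("/", 1))  # ['/']
print(crawl("/", 2))  # ['/', '/a', '/b']
```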
Does it respect robots.txt?
Yes. Pages blocked by robots.txt are skipped during crawl.
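Python's standard library shows the kind of check involved. The sketch parses robots.txt rules locally (no network); whether the actor uses this exact module is an assumption:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly instead of fetching it.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/blog"))          # True
```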
Can I exclude certain URL patterns?
Not in the current version. URL pattern exclusion may be added in a future release.
How long does a 500-page crawl take?
With concurrency=5 and 10s timeout: roughly 5-15 minutes depending on site speed.
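That range comes from simple batch arithmetic. The estimate below is an assumption (pages fetched in batches of `concurrency`, each batch bounded by the average response time), not a guarantee:

```python
def estimate_minutes(pages, concurrency, avg_response_s):
    """Rough wall-clock estimate: ceil(pages / concurrency) batches."""
    batches = -(-pages // concurrency)  # ceiling division
    return batches * avg_response_s / 60

# 500 pages at concurrency=5, assuming 3-9 s average per request:
print(round(estimate_minutes(500, 5, 3)))  # 5
print(round(estimate_minutes(500, 5, 9)))  # 15
```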
Related Actors
URL/Link Tools cluster — explore related Apify tools:
- 🔗 URL Health Checker — Bulk-check HTTP status codes, redirects, SSL validity, and response times for thousands of URLs.
- 🔗 URL Unshortener — Expand bit.ly and other shortened links to reveal their final destination URLs.
- 🏷️ Meta Tag Analyzer — Analyze meta tags, Open Graph, Twitter Cards, JSON-LD, and hreflang for any URL.
- 📚 Wayback Machine Checker — Check if URLs are archived on the Wayback Machine and find closest snapshots by date.
Cost
Pay Per Event:
- actor-start: $0.01 (flat fee per run)
- dataset-item: $0.005 per output item
Example: 1,000 items = $0.01 + (1,000 × $0.005) = $5.01
No subscription required — you only pay for what you use.
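The pricing arithmetic above reduces to a one-line formula (`run_cost` is an illustrative helper for estimating a run's total):

```python
ACTOR_START_USD = 0.01   # flat fee per run
PER_ITEM_USD = 0.005     # per output item

def run_cost(items: int) -> float:
    """Total cost in USD for one run producing `items` dataset items."""
    return round(ACTOR_START_USD + items * PER_ITEM_USD, 2)

print(run_cost(1000))  # 5.01
print(run_cost(50))    # 0.26
```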