🔗 Broken Link Checker

Pricing: Pay per event
Crawl websites to find broken links, 404 errors, and dead URLs. Checks internal and external links with configurable depth. Essential for SEO audits, website maintenance, and content teams.


Developer

太郎 山田

Maintained by Community

Store Quickstart

Start with the Quickstart template (single starting URL, depth 2). For full-site audits, use Deep Crawl (depth 5, up to 500 pages).
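For a full-site Deep Crawl audit, the input would look roughly like this (field names taken from the Input table below; a sketch, not the exact template contents):

```json
{
  "startUrls": ["https://example.com"],
  "maxDepth": 5,
  "maxPages": 500,
  "concurrency": 10,
  "checkExternal": true
}
```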

Key Features

  • 🕸️ Configurable crawl depth — Follows internal links up to 5 levels deep, up to 500 pages per run
  • 🌐 Internal + external checks — Validate both your own links and outbound references
  • 📍 Anchor text reporting — Identify which link text points to the broken URL
  • 🏷️ Error classification — TIMEOUT, DNS_FAILED, CONNECTION_REFUSED, SSL_ERROR
  • ⚡ Concurrent fetching — 1-10 parallel requests to speed up crawls
  • 📊 Per-page breakdown — Each result shows all broken links grouped by source page
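The error labels above could map onto common network failures roughly as follows (a hypothetical sketch; the actor's actual classification logic is not public):

```python
import socket
import ssl

def classify_error(exc: Exception) -> str:
    """Map a fetch exception to one of the documented error labels (illustrative only)."""
    if isinstance(exc, socket.timeout):        # request exceeded timeoutMs
        return "TIMEOUT"
    if isinstance(exc, socket.gaierror):       # hostname could not be resolved
        return "DNS_FAILED"
    if isinstance(exc, ConnectionRefusedError):
        return "CONNECTION_REFUSED"
    if isinstance(exc, ssl.SSLError):          # certificate or handshake problem
        return "SSL_ERROR"
    return "UNKNOWN"
```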

Use Cases

| Who | Why |
| --- | --- |
| SEO agencies | Regular broken-link audits for client websites to protect rankings |
| Content editors | Find dead outbound links in blog posts and documentation |
| E-commerce sites | Monitor product pages for broken navigation and outbound partner links |
| Site migrations | Validate internal linking after URL restructuring |
| Technical SEO | Identify redirect chains and crawl traps that waste crawl budget |

Input

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| startUrls | string[] | (required) | URLs to start crawling (max 10) |
| maxDepth | integer | 2 | Crawl depth (1-5) |
| maxPages | integer | 50 | Max pages to crawl (1-500) |
| concurrency | integer | 5 | Parallel requests (1-10) |
| checkExternal | boolean | true | Check external links |
| timeoutMs | integer | 10000 | Request timeout in ms |
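The documented ranges can be enforced with a small validation sketch (the field names and limits come from the table above; the validation logic itself is an assumption, not the actor's own schema handling):

```python
# Defaults and limits as documented in the input table.
DEFAULTS = {"maxDepth": 2, "maxPages": 50, "concurrency": 5,
            "checkExternal": True, "timeoutMs": 10000}
LIMITS = {"maxDepth": (1, 5), "maxPages": (1, 500), "concurrency": (1, 10)}

def normalize_input(raw: dict) -> dict:
    """Fill in defaults and reject out-of-range values (illustrative only)."""
    if not raw.get("startUrls"):
        raise ValueError("startUrls is required")
    if len(raw["startUrls"]) > 10:
        raise ValueError("at most 10 start URLs are allowed")
    cfg = {**DEFAULTS, **raw}
    for field, (lo, hi) in LIMITS.items():
        if not lo <= cfg[field] <= hi:
            raise ValueError(f"{field} must be between {lo} and {hi}")
    return cfg
```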

Input Example

```json
{
  "startUrls": ["https://example.com"],
  "maxDepth": 2,
  "maxPages": 50,
  "concurrency": 5,
  "checkExternal": true
}
```

Output Example

```json
{
  "url": "https://example.com/blog",
  "brokenLinks": [
    {
      "href": "https://example.com/deleted-page",
      "statusCode": 404,
      "anchorText": "Old announcement",
      "isExternal": false,
      "error": null
    }
  ]
}
```

FAQ

How does crawl depth work?

Depth 1 = only starting URLs. Depth 2 = starting URLs + links found on them. Depth 5 is the maximum and covers most typical sites.
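These depth semantics can be illustrated with a breadth-first sketch over a toy link graph (the `LINKS` map is invented for illustration):

```python
from collections import deque

# Toy site: each page lists the pages it links to (illustrative only).
LINKS = {
    "/": ["/blog", "/about"],
    "/blog": ["/blog/post-1"],
    "/about": [],
    "/blog/post-1": [],
}

def pages_visited(start: str, max_depth: int) -> set[str]:
    seen, queue = {start}, deque([(start, 1)])  # depth 1 = the starting URL itself
    while queue:
        url, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't follow links beyond the configured depth
        for nxt in LINKS.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return seen
```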

Does it respect robots.txt?

Yes. Pages blocked by robots.txt are skipped during crawl.
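This matches standard robots.txt semantics, which you can check yourself with Python's stdlib `urllib.robotparser` (the rules below are made up):

```python
from urllib.robotparser import RobotFileParser

# Parse a made-up robots.txt directly, without fetching it over the network.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/private/page"))  # blocked
print(rp.can_fetch("*", "https://example.com/blog"))          # allowed
```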

Can I exclude certain URL patterns?

Not in the current version. URL pattern exclusion may be added to the input schema in a future release.

How long does a 500-page crawl take?

With concurrency=5 and 10s timeout: roughly 5-15 minutes depending on site speed.
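That estimate follows from simple arithmetic (the average per-page fetch times are assumptions, not measurements):

```python
def estimated_minutes(pages: int, concurrency: int, avg_fetch_s: float) -> float:
    # With `concurrency` parallel requests there are roughly pages / concurrency
    # sequential batches, each taking about avg_fetch_s seconds.
    return pages / concurrency * avg_fetch_s / 60

# 500 pages at concurrency 5: a fast site (~3 s/page) vs a slow one that
# often approaches the 10 s timeout (~9 s/page on average).
print(estimated_minutes(500, 5, 3))  # 5.0 minutes
print(estimated_minutes(500, 5, 9))  # 15.0 minutes
```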

Cost

Pay Per Event:

  • actor-start: $0.01 (flat fee per run)
  • dataset-item: $0.005 per output item

Example: 1,000 items = $0.01 + (1,000 × $0.005) = $5.01
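The pricing formula is straightforward to compute for any run size (rates taken from the event list above):

```python
ACTOR_START_USD = 0.01    # flat fee per run
DATASET_ITEM_USD = 0.005  # per output item

def run_cost(items: int) -> float:
    """Total cost in USD for one run producing `items` dataset items."""
    return round(ACTOR_START_USD + items * DATASET_ITEM_USD, 2)

print(run_cost(1000))  # 5.01
```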

No subscription required — you only pay for what you use.