Link Extractor


Pricing: Pay per event
Developer: Stas Persiianenko (Maintained by Community)

Extract all links from web pages with anchor text, rel attributes, nofollow detection, and internal/external classification.

This actor extracts all hyperlinks from web pages. For each link, it captures the anchor text, href, rel attributes (nofollow, ugc, sponsored), target attribute, and classifies links as internal or external. It also detects the link's location in the page (nav, header, footer, main content, sidebar). Process hundreds of pages in a single run to build a complete link profile for any website.

Who is it for?

  • 🔍 SEO specialists — extracting all outbound and internal links from web pages for auditing
  • 💻 Web developers — mapping site link structures for migration planning
  • 📊 Market researchers — discovering linked resources and partner networks from competitor sites
  • 🛡️ Security analysts — identifying external link destinations for phishing or malware detection
  • 📝 Content strategists — analyzing link patterns and resource references across web pages

Use cases

  • SEO specialists -- audit internal and external linking patterns to optimize site architecture and link equity flow
  • Link builders -- identify nofollow and sponsored link usage on target websites for outreach planning
  • Content strategists -- understand link distribution across page sections to improve content structure
  • Migration teams -- extract all links before and after URL changes to verify nothing is broken
  • Competitive analysts -- discover who competitors link to and how they structure their outbound links

Features

  • Batch processing -- extract links from hundreds of pages in a single run
  • Detailed link attributes -- captures rel, target, anchor text, and link type for each link
  • Internal/external classification -- automatically sorts links by domain relationship
  • Page location detection -- identifies whether links sit in nav, header, footer, sidebar, or main content
  • Structured JSON output -- machine-readable results ready for analysis or import into SEO tools
  • Pay-per-event pricing -- cost-effective at scale, starting at fractions of a cent per URL
  • Fast and lightweight -- HTTP-only requests with no browser overhead, so runs complete quickly
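The internal/external classification in the feature list boils down to comparing each link's host with the source page's host. A minimal sketch of that idea (an illustration only, not the actor's actual implementation; subdomain handling may differ):

```python
from urllib.parse import urlparse

def classify_link(source_url: str, href: str) -> str:
    """Classify a link as 'internal' or 'external' by comparing hostnames."""
    src_host = urlparse(source_url).hostname or ""
    dst_host = urlparse(href).hostname or ""
    if not dst_host:
        # relative URLs (e.g. "/about") resolve to the source host
        return "internal"

    def norm(host: str) -> str:
        # strip a leading "www." so www.example.com matches example.com
        return host[4:] if host.startswith("www.") else host

    return "internal" if norm(src_host) == norm(dst_host) else "external"

print(classify_link("https://example.com", "/about"))  # internal
print(classify_link("https://example.com", "https://www.iana.org/domains/example"))  # external
```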

Input parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| urls | string[] | Yes | -- | List of web page URLs to extract all links from |

Example input

{
  "urls": [
    "https://www.google.com",
    "https://en.wikipedia.org/wiki/Web_scraping",
    "https://example.com"
  ]
}

Output example

{
  "url": "https://example.com",
  "title": "Example Domain",
  "totalLinks": 1,
  "internalLinks": 0,
  "externalLinks": 1,
  "nofollowLinks": 0,
  "uniqueInternalDomains": 0,
  "uniqueExternalDomains": 1,
  "links": [
    {
      "sourceUrl": "https://example.com",
      "href": "https://www.iana.org/domains/example",
      "anchorText": "More information...",
      "isInternal": false,
      "isExternal": true,
      "isNofollow": false,
      "isUgc": false,
      "isSponsored": false,
      "rel": null,
      "target": null,
      "linkType": "page",
      "location": "body"
    }
  ],
  "error": null,
  "extractedAt": "2026-03-01T12:00:00.000Z"
}
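Each dataset item follows this shape, so post-processing is straightforward. As a hypothetical example (field names taken from the output above), tallying which external domains a set of pages links to:

```python
from collections import Counter
from urllib.parse import urlparse

def external_domains(items):
    """Count how often each external host appears across all result items."""
    counts = Counter()
    for item in items:
        for link in item.get("links", []):
            if link.get("isExternal"):
                host = urlparse(link["href"]).hostname
                if host:
                    counts[host] += 1
    return counts

# one item shaped like the output example
items = [{
    "url": "https://example.com",
    "links": [{"href": "https://www.iana.org/domains/example", "isExternal": True}],
}]
print(external_domains(items))  # Counter({'www.iana.org': 1})
```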

How to use

  1. Go to Link Extractor on Apify Store.
  2. Enter the URLs you want to extract links from in the urls field.
  3. Click Start and wait for the run to finish.
  4. Download your results as JSON, CSV, or Excel from the Dataset tab.

Pricing

| Event | Price | Description |
| --- | --- | --- |
| Start | $0.035 | One-time per run |
| URL extracted | $0.001 | Per page processed |

Example costs:

  • 10 URLs: $0.035 + 10 x $0.001 = $0.045
  • 100 URLs: $0.035 + 100 x $0.001 = $0.135
  • 1,000 URLs: $0.035 + 1,000 x $0.001 = $1.035
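The arithmetic above generalizes to a one-line estimator (prices copied from the event table):

```python
START_FEE = 0.035  # one-time charge per run
PER_URL = 0.001    # charge per page processed

def run_cost(url_count: int) -> float:
    """Estimated cost in USD for a run over the given number of URLs."""
    return round(START_FEE + url_count * PER_URL, 3)

print(run_cost(10))    # 0.045
print(run_cost(1000))  # 1.035
```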

Using the Apify API

Node.js

import { ApifyClient } from 'apify-client';
const client = new ApifyClient({ token: 'YOUR_TOKEN' });
const run = await client.actor('automation-lab/link-extractor').call({
  urls: ['https://example.com'],
});
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);

Python

from apify_client import ApifyClient
client = ApifyClient('YOUR_TOKEN')
run = client.actor('automation-lab/link-extractor').call(run_input={
    'urls': ['https://example.com'],
})
items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Found {items[0]["totalLinks"]} links')

cURL

curl "https://api.apify.com/v2/acts/automation-lab~link-extractor/runs" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"urls": ["https://example.com"]}'

Use with AI agents via MCP

Link Extractor is available as a tool for AI assistants via the Model Context Protocol (MCP).

Setup for Claude Code

claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/link-extractor"

Setup for Claude Desktop, Cursor, or VS Code

{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com?tools=automation-lab/link-extractor"
    }
  }
}

Example prompts

  • "Extract all links from this webpage"
  • "Get all outbound links from our homepage"

Learn more in the Apify MCP documentation.

Integrations

Link Extractor integrates with your existing workflow tools through the Apify platform. Connect it to Make (formerly Integromat), Zapier, or n8n to automate link extraction and feed results into SEO tools or spreadsheets. Export link data to Google Sheets for collaborative analysis, send alerts to Slack when new external links appear, or use webhooks to trigger downstream processing whenever a run completes.

Common integration patterns include:

  • SEO dashboard -- schedule weekly runs and push results to Google Sheets to track link profile changes over time
  • Link monitoring -- use webhooks to compare new results against previous runs and alert on unexpected external link additions
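The link-monitoring pattern can be sketched in a few lines: keep the previous run's dataset items, then diff the external hrefs against the new run (a simplified illustration using the output fields shown earlier):

```python
def new_external_links(previous_items, current_items):
    """Return external hrefs present in the current run but not in the previous one."""
    def external_hrefs(items):
        return {
            link["href"]
            for item in items
            for link in item.get("links", [])
            if link.get("isExternal")
        }
    return external_hrefs(current_items) - external_hrefs(previous_items)

prev = [{"links": [{"href": "https://www.iana.org/domains/example", "isExternal": True}]}]
curr = [{"links": [
    {"href": "https://www.iana.org/domains/example", "isExternal": True},
    {"href": "https://newpartner.example.net", "isExternal": True},
]}]
print(new_external_links(prev, curr))  # {'https://newpartner.example.net'}
```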

Tips and best practices

  • Combine with a sitemap -- feed your sitemap URLs into the actor to get a complete link profile for your entire site.
  • Filter by rel attribute -- use the isNofollow, isUgc, and isSponsored fields to segment links by their SEO significance.
  • Check link location -- links in nav and footer carry different SEO weight than links in the main content body.
  • Run regularly to track link changes over time, especially after content updates or site redesigns.
  • Export to CSV -- download the dataset as CSV from the Apify Console for easy import into Excel or Google Sheets.
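For the rel-attribute tip, segmentation is a simple bucketing over the boolean fields. A sketch (the priority order sponsored > ugc > nofollow is an assumption for illustration; a link can carry several rel values at once):

```python
def segment_by_rel(links):
    """Bucket link hrefs by their SEO-relevant rel flags."""
    buckets = {"follow": [], "nofollow": [], "ugc": [], "sponsored": []}
    for link in links:
        # assumed priority for links with multiple rel values
        if link.get("isSponsored"):
            buckets["sponsored"].append(link["href"])
        elif link.get("isUgc"):
            buckets["ugc"].append(link["href"])
        elif link.get("isNofollow"):
            buckets["nofollow"].append(link["href"])
        else:
            buckets["follow"].append(link["href"])
    return buckets

links = [
    {"href": "https://a.example/", "isNofollow": True},
    {"href": "https://b.example/"},
]
print(segment_by_rel(links)["nofollow"])  # ['https://a.example/']
```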

Legality

This tool analyzes publicly accessible web content. Automated analysis of public web resources is standard practice in SEO and web development. Always respect robots.txt directives and rate limits when analyzing third-party websites. For personal data processing, ensure compliance with applicable privacy regulations.

FAQ

What types of links does it extract? It extracts all <a> tag hyperlinks including page links, anchor links, mailto links, and tel links. It captures the href, anchor text, rel attributes, and target attribute for each.

Does it follow links to other pages? No. The actor extracts links from the pages you provide, but does not crawl or follow those links to additional pages. Each URL in the input is processed independently.

Can it detect broken links? No. Link Extractor captures link URLs and attributes but does not check whether the destination URLs return valid responses. Pair it with a dedicated link checker for that purpose.

What does the location field mean? The location field indicates where on the page the link was found -- for example, nav, header, footer, sidebar, or body. This helps you understand the context and SEO weight of each link.

Some links are missing from the results. Why? The actor extracts links from the initial HTML response without running JavaScript. Links injected dynamically by JavaScript frameworks (React, Vue, Angular) after page load will not be captured. Most server-rendered pages and traditional CMS platforms include all links in the HTML source. For JavaScript-heavy single-page applications, consider using a browser-based scraper.

How many URLs can I process in one run? There is no hard limit. The actor processes URLs concurrently, so runs with hundreds or even thousands of URLs complete efficiently.
