Sitemap & URL Discovery
Map every page on any website in seconds.
Point this actor at any domain and it will discover every URL the site publishes — by parsing robots.txt, sitemap.xml, nested sitemap indexes, and image sitemaps. You get back a clean dataset of URLs with their last modified date, change frequency, and priority, ready to feed into SEO audits, content inventories, or the next stage of your crawling pipeline.
No scraping, no HTML parsing, no browser automation — just the standard files every well-behaved site publishes, fetched and parsed at machine speed.
Features
- robots.txt aware — follows every `Sitemap:` directive found in robots.txt
- Default location probing — falls back to `/sitemap.xml`, `/sitemap_index.xml`, `/sitemap.xml.gz`
- Sitemap index recursion — follows `<sitemapindex>` files to any depth you allow
- Gzip support — transparently decompresses `.gz` sitemaps
- Rich metadata — extracts `<lastmod>`, `<changefreq>`, `<priority>`
- Image sitemap support — optional extraction of `<image:loc>` entries
- Deduplication — URLs seen in multiple sitemaps are emitted once
- Safety caps — per-site URL limit prevents runaway jobs on massive sites
- Per-site summary — one summary record per website with counts and source sitemaps
- Fast & lightweight — runs at minimal cost with no browser overhead
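The discovery order described above — robots.txt `Sitemap:` directives first, conventional default locations as a fallback — can be sketched in a few lines of Python. This is a simplified illustration of the same logic, not the actor's actual source:

```python
from urllib.parse import urljoin

# Conventional locations probed when robots.txt lists no sitemaps.
DEFAULT_SITEMAP_PATHS = ["/sitemap.xml", "/sitemap_index.xml", "/sitemap.xml.gz"]

def sitemap_candidates(site: str, robots_txt: str) -> list[str]:
    """Return sitemap URLs from robots.txt 'Sitemap:' directives,
    falling back to the default locations when none are declared."""
    found = []
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "sitemap" and value.strip():
            found.append(value.strip())
    if found:
        return found
    return [urljoin(site, path) for path in DEFAULT_SITEMAP_PATHS]
```

Each candidate URL would then be fetched and parsed (recursing into any `<sitemapindex>` entries) to produce the final URL dataset.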
Use with AI Agents (MCP)
Connect this actor to any MCP-compatible AI client — Claude Desktop, Claude.ai, Cursor, VS Code, LangChain, LlamaIndex, or custom agents.
Apify MCP server URL:
https://mcp.apify.com?tools=santamaria-automations/sitemap-url-discovery
Example prompt once connected:
"Use `sitemap-url-discovery` to discover all URLs on a website. Return the results as a table."
Clients that support dynamic tool discovery (Claude.ai, VS Code) will receive the full input schema automatically via `add-actor`.
Example Output
Running against https://www.apify.com and https://wordpress.org returns records like:
```json
{
  "website": "https://www.apify.com",
  "url": "https://apify.com/store",
  "lastmod": "2026-03-15",
  "changefreq": "daily",
  "priority": "0.8",
  "source_sitemap": "https://apify.com/sitemap.xml",
  "is_image": false,
  "image_url": null,
  "scraped_at": "2026-04-07T10:15:32Z"
}
```
Plus one summary record per website:
```json
{
  "website": "https://wordpress.org",
  "type": "summary",
  "robots_txt_found": true,
  "sitemaps_found": 4,
  "sitemap_urls": [
    "https://wordpress.org/sitemap.xml",
    "https://wordpress.org/news/sitemap.xml",
    "https://wordpress.org/plugins/sitemap.xml",
    "https://wordpress.org/themes/sitemap.xml"
  ],
  "total_urls": 8243,
  "scraped_at": "2026-04-07T10:15:45Z"
}
```
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `websites` | array | *required* | Website URLs to analyze. Bare domains or full URLs both work. |
| `followSitemapIndex` | boolean | `true` | Recursively follow `<sitemapindex>` files. |
| `respectRobotsTxt` | boolean | `true` | Read robots.txt and follow `Sitemap:` directives. |
| `maxUrlsPerSite` | integer | `10000` | Safety cap on URLs returned per website. |
| `maxDepth` | integer | `3` | Max recursion depth for nested sitemap indexes. |
| `includeLastmod` | boolean | `true` | Include `<lastmod>` dates when present. |
| `includeImages` | boolean | `false` | Include `<image:loc>` entries from image sitemaps. |
| `timeoutSeconds` | integer | `30` | Per-sitemap HTTP timeout in seconds. |
| `proxyConfiguration` | object | disabled | Optional Apify proxy. Usually not needed. |
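A complete run input using these parameters might look like this (values illustrative):

```json
{
  "websites": ["https://www.apify.com", "https://wordpress.org"],
  "followSitemapIndex": true,
  "respectRobotsTxt": true,
  "maxUrlsPerSite": 10000,
  "maxDepth": 3,
  "includeLastmod": true,
  "includeImages": false,
  "timeoutSeconds": 30
}
```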
Use Cases
SEO Audits
Cross-reference the URLs a site exposes in its sitemap against what's actually in Google's index. Find orphaned pages (in sitemap, not indexed) and rogue pages (indexed, not in sitemap). Spot missing `<lastmod>` tags that weaken crawl budget.
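The orphan/rogue comparison itself is just two set differences. A sketch, assuming you already have both URL lists in hand:

```python
def audit(sitemap_urls, indexed_urls):
    """Compare sitemap URLs against indexed URLs.
    Orphaned: in the sitemap but not indexed.
    Rogue: indexed but missing from the sitemap."""
    sitemap, indexed = set(sitemap_urls), set(indexed_urls)
    return sorted(sitemap - indexed), sorted(indexed - sitemap)
```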
Content Inventory
Answer "how many pages does this site have?" in seconds. Break down by section by grouping URLs on path prefix. Track growth over time by comparing snapshots.
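Grouping discovered URLs by their first path segment takes only the standard library. An illustrative sketch, not part of the actor:

```python
from collections import Counter
from urllib.parse import urlparse

def section_counts(urls):
    """Count URLs per top-level path segment (e.g. blog, products)."""
    counts = Counter()
    for url in urls:
        segments = [s for s in urlparse(url).path.split("/") if s]
        counts[segments[0] if segments else "(root)"] += 1
    return counts
```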
Crawling Preparation
Instead of crawling from the homepage and hoping to find everything, feed a clean URL list into your content extractor. Faster, more predictable, and much easier on the target site.
Competitor Analysis
Run weekly against competitor domains and diff the results to detect new product pages, blog posts, or landing pages the moment they ship.
Site Migration Planning
Get a complete URL inventory of the legacy site before migration. Use it to build 301 redirect maps, verify coverage after launch, and guarantee no page is left behind.
Content Mining
Filter discovered URLs by path (/blog/, /products/, /jobs/) to focus downstream extraction on exactly the content type you care about.
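A minimal filter over the dataset records (field names as in the Example Output section above):

```python
from urllib.parse import urlparse

def filter_by_path(records, prefixes):
    """Keep dataset records whose URL path starts with one of the prefixes."""
    return [r for r in records
            if urlparse(r["url"]).path.startswith(tuple(prefixes))]
```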
Pricing
- $0.001 per run start
- $0.002 per website analyzed
Pricing is per website, not per URL. Discovering 100,000 URLs on one site costs the same as discovering 10 URLs — so you can run this against large domains without worrying about a surprise bill.
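Under this scheme the total cost is a simple linear function of the website count, for example:

```python
def run_cost_usd(websites: int) -> float:
    """$0.001 per run start plus $0.002 per website analyzed.
    Computed in thousandths of a dollar to avoid float drift."""
    return (1 + 2 * websites) / 1000
```

So a single run over 50 websites costs $0.101, regardless of how many URLs those sites expose.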
Issues & Feedback
Found a bug or have a feature request? Please open an issue on the actor page.
Related Actors
- Website Contact Extractor — Extract emails, phones, and team members from any company website.
- Website Tech Stack Detector — Identify the CMS, frameworks, analytics, and hosting stack behind any site.
- Domain WHOIS & DNS Lookup — Resolve DNS records and WHOIS registration data for any domain.