Website Contact Finder

Extract emails, phone numbers, and social media links from any website. Bulk scan multiple sites at once for sales prospecting and lead enrichment.

Pricing: Pay per event
Developer: Stas Persiianenko (Maintained by Community)
Website Contact Finder — Extract Emails, Phones & Social Links

What does Website Contact Finder do?

Website Contact Finder crawls any website and extracts email addresses, phone numbers, and social media profile links. It prioritizes contact and about pages, filters out false positives, and returns clean, structured data ready for CRM import or outreach campaigns.

Why use Website Contact Finder?

  • Smart crawling — automatically prioritizes contact, about, and team pages for faster results
  • Email extraction — finds emails from mailto: links and page text, filters false positives (image files, example domains, noreply addresses)
  • Phone detection — extracts numbers from tel: links and text, supports international formats
  • Social media discovery — finds LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok profiles
  • No browser needed — fast HTTP-only requests, no Chromium overhead
  • Structured output — one clean record per website with all contact data
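The "smart crawling" behavior described above can be approximated with a simple priority score: links whose path mentions a contact, about, or team page get crawled first. A minimal sketch of that idea (the keywords and weights here are illustrative assumptions, not the actor's actual implementation):

```python
from urllib.parse import urlparse

# Keywords that typically mark contact-rich pages; weights are illustrative.
PRIORITY_KEYWORDS = {"contact": 3, "about": 2, "team": 2, "impressum": 2}

def crawl_priority(url: str) -> int:
    """Higher score = crawl sooner. Ordinary pages score 0."""
    path = urlparse(url).path.lower()
    return max((w for kw, w in PRIORITY_KEYWORDS.items() if kw in path), default=0)

# Sort a frontier of discovered links so contact pages come first.
frontier = [
    "https://example.com/blog/post-1",
    "https://example.com/contact",
    "https://example.com/about-us",
]
frontier.sort(key=crawl_priority, reverse=True)
print(frontier[0])  # the /contact page is crawled first
```

Front-loading those pages is why a small `maxPages` budget is usually enough: the pages most likely to carry contact data are visited before the budget runs out.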

How much does it cost to find website contacts?

Uses pay-per-event pricing:

| Event | Price | When charged |
| --- | --- | --- |
| start | $0.035 | Once when the actor starts |
| page-crawled | $0.001 | Each internal page crawled |

Example costs:

  • 1 website, 20 pages: $0.035 + 20 x $0.001 = $0.055
  • 5 websites, 20 pages each: 5 x $0.035 + 100 x $0.001 = $0.275
  • 20 websites, 20 pages each: 20 x $0.035 + 400 x $0.001 = $1.10
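The examples above all follow one formula: total = $0.035 × websites + $0.001 × total pages crawled. A quick sketch for estimating a batch before running it:

```python
START_FEE = 0.035  # charged once per actor run
PAGE_FEE = 0.001   # charged per internal page crawled

def estimate_cost(websites: int, pages_per_site: int) -> float:
    """Estimated pay-per-event cost in USD, rounded to tenths of a cent."""
    return round(websites * START_FEE + websites * pages_per_site * PAGE_FEE, 3)

print(estimate_cost(1, 20))   # 0.055
print(estimate_cost(5, 20))   # 0.275
print(estimate_cost(20, 20))  # 1.1
```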

What data can you extract?

| Field | Example |
| --- | --- |
| websiteUrl | "https://example.com" |
| emails | ["info@example.com", "sales@example.com"] |
| phones | ["+1 (555) 123-4567"] |
| socialLinks.linkedin | "https://linkedin.com/company/example" |
| socialLinks.twitter | "https://x.com/example" |
| socialLinks.facebook | "https://facebook.com/example" |
| socialLinks.instagram | "https://instagram.com/example" |
| socialLinks.youtube | "https://youtube.com/@example" |
| socialLinks.github | "https://github.com/example" |
| socialLinks.tiktok | "https://tiktok.com/@example" |
| contactPageUrl | "https://example.com/contact" |
| pagesCrawled | 15 |

How to find contact information

  1. Open Website Contact Finder on Apify Console.
  2. Enter the Website URL to scan.
  3. Set Max pages to control how deep the crawl goes (default: 20).
  4. Click Start and download the contact data from the dataset.

Example input

{
  "startUrl": "https://example.com",
  "maxPages": 20
}

Input parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| startUrl | string | Yes | — | Website URL to scan |
| maxPages | integer | No | 20 | Max internal pages to crawl (1–500) |
| maxConcurrency | integer | No | 5 | Parallel requests (1–20) |
| requestTimeoutSecs | integer | No | 15 | Timeout per request in seconds |
| useProxy | boolean | No | false | Enable proxy for requests |
| proxyConfiguration | object | No | — | Apify proxy settings |
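The proxyConfiguration object follows the standard Apify proxy input format. A hedged example enabling residential proxies (proxy group availability depends on your Apify plan):

```json
{
  "startUrl": "https://example.com",
  "maxPages": 20,
  "useProxy": true,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```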

Output example

{
  "websiteUrl": "https://example.com",
  "emails": ["info@example.com", "sales@example.com"],
  "phones": ["+1 (555) 123-4567"],
  "socialLinks": {
    "linkedin": "https://linkedin.com/company/example",
    "twitter": "https://x.com/example",
    "facebook": null,
    "instagram": "https://instagram.com/example",
    "youtube": null,
    "github": "https://github.com/example",
    "tiktok": null
  },
  "contactPageUrl": "https://example.com/contact",
  "pagesCrawled": 15,
  "crawledAt": "2026-02-28T12:00:00.000Z"
}

Tips for best results

  • Start with 10–20 pages — most contact info is on the homepage, contact page, and footer.
  • Leave proxy off for most sites. Enable only if you get blocked.
  • Combine with Google Maps Lead Finder — find businesses on Google Maps, then enrich each website with contact details.
  • Filter empty results — some sites use contact forms instead of publishing emails.
  • Check the contactPageUrl — if no email is found, the contact page may have a form.
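The "filter empty results" tip is easy to apply after download: keep only records that actually carry an email or phone, and collect the contactPageUrl of the rest for manual follow-up. A minimal post-processing sketch over dataset items (field names taken from the output example above; the sample records are made up):

```python
def has_contacts(item: dict) -> bool:
    """True if the record carries at least one email or phone number."""
    return bool(item.get("emails") or item.get("phones"))

items = [
    {"websiteUrl": "https://a.example", "emails": ["info@a.example"], "phones": []},
    {"websiteUrl": "https://b.example", "emails": [], "phones": [],
     "contactPageUrl": "https://b.example/contact"},
]

leads = [i for i in items if has_contacts(i)]
follow_up = [i.get("contactPageUrl") for i in items if not has_contacts(i)]
print(len(leads))  # 1
print(follow_up)   # ['https://b.example/contact'], likely a form-only site
```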

Integrations

Connect Website Contact Finder with your sales and outreach tools:

  • Google Sheets — Export contact data directly to a shared prospecting spreadsheet
  • Zapier — Trigger CRM updates or outreach sequences when new contacts are found
  • Make — Build pipelines: scrape contacts, verify emails with Email Enrichment, then push to HubSpot or Salesforce
  • n8n — Self-hosted workflow automation for contact extraction
  • Webhooks — Send results to your own API endpoint for custom processing

Using the Apify API

Node.js

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('automation-lab/website-contact-finder').call({
  startUrl: 'https://example.com',
  maxPages: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log('Contacts:', items[0]);

Python

from apify_client import ApifyClient

client = ApifyClient('YOUR_TOKEN')

run = client.actor('automation-lab/website-contact-finder').call(run_input={
    'startUrl': 'https://example.com',
    'maxPages': 20,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Contacts: {items[0]}')

Use with cURL

curl -X POST "https://api.apify.com/v2/acts/automation-lab~website-contact-finder/runs?token=YOUR_API_TOKEN&waitForFinish=120" \
  -H "Content-Type: application/json" \
  -d '{
    "startUrl": "https://example.com",
    "maxPages": 20
  }'

Use with AI agents via MCP

Website Contact Finder is available as a tool for AI assistants that support the Model Context Protocol (MCP).

Setup for Claude Code

claude mcp add --transport http apify "https://mcp.apify.com"

Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}

Example prompts

Once connected, try asking your AI assistant:

  • "Find all contact information on this company website"
  • "Extract phone numbers and emails from these business websites"

Learn more in the Apify MCP documentation.

Legality

This tool analyzes publicly accessible web content. Automated collection of publicly available contact information is common practice in sales prospecting and lead enrichment. Always respect robots.txt directives and rate limits when scanning third-party websites. Extracted emails and phone numbers may qualify as personal data in many jurisdictions, so ensure compliance with applicable privacy regulations before using them for outreach.

FAQ

What emails does it find? Emails from mailto: links and visible page text. It filters out image files, example domains, and noreply addresses.
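The filtering described above can be sketched as a regex plus a deny-list. The exact patterns the actor uses are not published, so this is an illustrative approximation:

```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp")
DENY_DOMAINS = ("example.com", "example.org")

def extract_emails(text: str) -> list[str]:
    """Find email-like strings and drop common false positives."""
    out = []
    for m in EMAIL_RE.findall(text):
        low = m.lower()
        if low.endswith(IMAGE_EXTS):  # asset filenames like logo@2x.png
            continue
        if low.split("@", 1)[1] in DENY_DOMAINS:  # placeholder domains
            continue
        if low.startswith(("noreply@", "no-reply@")):
            continue
        out.append(m)
    return out

print(extract_emails("Write to sales@acme.io, not noreply@acme.io or hero@2x.png"))
# ['sales@acme.io']
```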

Does it find phone numbers? Yes, from tel: links and text patterns. Supports international formats.

Which social platforms does it detect? LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok.

Do I need a proxy? Usually no. Proxy is off by default. Enable if the target site blocks direct requests.

Does it need a browser? No. Fast HTTP requests only — no Chromium/Playwright overhead.

How many pages should I crawl? 10–20 pages covers most websites. Increase for larger sites with distributed contact info.

The actor found no emails on a site I know has contact info. Why? Some websites display email addresses as images or use JavaScript-based obfuscation to prevent scraping. Since this actor uses HTTP-only requests (no browser), it cannot extract emails rendered by JavaScript. Also, some sites only use contact forms without publishing email addresses. Check the contactPageUrl field; it may point to a form-based contact page.

Can I scrape multiple websites in one run? Currently the actor processes one website per run. For batch processing, use the Apify API to start multiple runs in parallel, or chain it with Make or Zapier to iterate over a list of URLs.
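The batch approach suggested above can be sketched with the Python client: build one run input per URL, then start the runs without waiting for each to finish so they execute in parallel. The helper names here are illustrative, not part of the actor:

```python
def build_run_inputs(urls, max_pages=20):
    """One run input per website, matching the actor's input schema."""
    return [{"startUrl": u, "maxPages": max_pages} for u in urls]

def start_batch(token, urls, max_pages=20):
    """Kick off one non-blocking run per URL; returns the started run objects."""
    from apify_client import ApifyClient  # pip install apify-client
    actor = ApifyClient(token).actor("automation-lab/website-contact-finder")
    # .start() returns as soon as a run is created, so runs execute in parallel;
    # .call() would block until each run finishes.
    return [actor.start(run_input=i) for i in build_run_inputs(urls, max_pages)]

# start_batch("YOUR_TOKEN", ["https://a.example", "https://b.example"])
```

Collect each run's dataset afterwards (e.g. via client.dataset(run["defaultDatasetId"]).list_items()) once the runs succeed.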

Should I enable the proxy? Leave it off by default. Only enable the proxy if the target website blocks your requests (you'll see timeout errors or empty results). Residential proxies work best for stubborn sites.
