Website Contact Finder

Extract emails, phone numbers & social media profiles from any website. Bulk scan hundreds of sites in one run — smart crawling prioritizes contact pages. Filters false positives. CRM-ready output. No browser required.

Pricing: Pay per event
Developer: Stas Persiianenko (Maintained by Community)
Actor stats: 70 total users · 45 monthly active users · last modified 15 days ago

Website Contact Finder — Extract Emails, Phones & Social Links in Bulk

What does Website Contact Finder do?

Website Contact Finder crawls a list of websites and extracts email addresses, phone numbers, and social media profile links from every page — returning one clean contact record per website, ready for CRM import or outreach campaigns.

Provide any list of company URLs and the actor automatically prioritizes contact, about, and team pages for the fastest results. It filters out false positives (image-embedded emails, example domains, noreply addresses), handles multiple formats of phone numbers, and detects profiles on LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok.

No browser required — pure HTTP crawling keeps costs low and runs fast.

Try it free: go to Website Contact Finder on Apify Store and click Try for free.

Who is Website Contact Finder for?

Sales teams and SDRs doing outbound prospecting

  • 📋 Enrich a lead list from LinkedIn or Apollo with verified email addresses and phone numbers
  • 📞 Find direct contact info for decision-makers at target companies before cold outreach
  • 🔄 Combine with Google Maps Lead Finder to go from local business search to contact list in one pipeline

Marketing agencies building lead databases

  • 🗂️ Scan hundreds of client-submitted company websites overnight and deliver a contact spreadsheet
  • 🎯 Find social media handles for prospect companies to target with paid social campaigns
  • 📊 Identify which prospects are active on LinkedIn vs Instagram before choosing your channel

Recruiters and HR professionals

  • 🔍 Find careers email addresses and HR contacts at target employers
  • 🌐 Discover company social pages to research culture and team before outreach
  • 📬 Build a contact database for passive candidate outreach campaigns

Entrepreneurs and freelancers

  • 💼 Research potential clients quickly without manually visiting each website
  • 🤝 Find partnership or collaboration contacts at companies in your niche
  • ✉️ Build an outreach list for your services by scanning industry directories

Developers building lead generation pipelines

  • 🤖 Integrate contact extraction into automated prospecting workflows via the Apify API
  • ⚡ Process thousands of URLs programmatically without manual browser work
  • 🧩 Feed results into HubSpot, Salesforce, Pipedrive, or Airtable via Make or Zapier

Why use Website Contact Finder?

  • 📦 Bulk processing — scan dozens or hundreds of websites in a single run with one flat start fee
  • 🎯 Smart prioritization — automatically crawls contact, about, and team pages first for faster results
  • 📧 Filtered emails — removes image-embedded addresses, example domains (@example.com), and noreply addresses
  • 📱 International phone formats — extracts numbers from tel: links and text, supports US, EU, and global formats
  • 🌐 7 social networks — finds LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok profiles
  • ⚡ No browser — fast pure HTTP requests, no Chromium overhead
  • 📤 CRM-ready output — one clean JSON record per website with all contact data structured for direct import
  • 🔧 Configurable depth — control how many pages to crawl per site (default: 20, max: 200)

What data can you extract?

| Field | Example |
|---|---|
| websiteUrl | "https://example.com" |
| emails | ["info@example.com", "sales@example.com"] |
| phones | ["+1 (555) 123-4567", "+44 20 7946 0958"] |
| socialLinks.linkedin | "https://linkedin.com/company/example" |
| socialLinks.twitter | "https://x.com/example" |
| socialLinks.facebook | "https://facebook.com/example" |
| socialLinks.instagram | "https://instagram.com/example" |
| socialLinks.youtube | "https://youtube.com/@example" |
| socialLinks.github | "https://github.com/example" |
| socialLinks.tiktok | "https://tiktok.com/@example" |
| contactPageUrl | "https://example.com/contact" |
| pagesCrawled | 15 |
| crawledAt | "2026-03-25T12:00:00.000Z" |

How much does it cost to find website contacts?

This Actor uses pay-per-event pricing — you pay only for what you scan. No monthly subscription. All platform costs are included.

| Event | Free | Bronze ($29/mo) | Silver ($199/mo) | Gold ($999/mo) |
|---|---|---|---|---|
| Start fee (once per run) | $0.005 | $0.005 | $0.005 | $0.005 |
| Per website scanned | $0.005 | $0.005 | $0.004 | $0.003 |

Real-world cost examples:

| Websites scanned | Cost (Free tier) |
|---|---|
| 1 website | ~$0.010 |
| 10 websites | ~$0.055 |
| 100 websites | ~$0.505 |
| 500 websites | ~$2.505 |

The per-site cost is flat regardless of crawl depth — scanning 20 pages per site costs the same as scanning 5 pages. You control depth via maxPagesPerSite without changing your bill.

On the free $5 credit that every new Apify account gets, you can scan approximately 990 websites.
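The pricing formula above is simple enough to sketch as a quick cost estimator. The rates come from the table in this listing; the tier names used as dictionary keys here are a convention of this example:

```python
# Estimate run cost from the pay-per-event rates listed above.
START_FEE = 0.005  # flat fee, charged once per run
PER_SITE = {"free": 0.005, "bronze": 0.005, "silver": 0.004, "gold": 0.003}

def run_cost(websites: int, tier: str = "free") -> float:
    """Cost of one run: start fee + (websites scanned x per-site rate)."""
    return round(START_FEE + websites * PER_SITE[tier], 3)

print(run_cost(100))          # Free tier, 100 sites: 0.505
print(run_cost(500, "gold"))  # Gold tier, 500 sites: 1.505
```

Because crawl depth doesn't affect the per-site rate, the only variables that matter are the number of websites and your tier.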

How to find contact information from websites

  1. Go to Website Contact Finder on Apify Store
  2. Click Try for free (no credit card needed for the free $5 credit)
  3. Enter your list of website URLs — one per line
  4. Set Max pages per site (default 20 is good for most sites; increase for large corporate sites)
  5. Click Start and wait for results
  6. Export to JSON, CSV, or Excel for CRM import or outreach

Example input:

{
  "urls": [
    "https://example.com",
    "https://anothercompany.com",
    "https://thirdsite.io"
  ],
  "maxPagesPerSite": 20
}

For a large prospecting batch:

{
  "urls": [
    "https://company1.com",
    "https://company2.com",
    "https://company3.com"
  ],
  "maxPagesPerSite": 30,
  "maxConcurrency": 10
}

Input parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | array | Yes | — | List of website URLs to scan for contact info |
| maxPagesPerSite | integer | No | 20 | Max internal pages to crawl per website (1–200) |
| maxConcurrency | integer | No | 5 | Parallel requests per site (1–20) |
| requestTimeoutSecs | integer | No | 15 | Timeout per HTTP request in seconds (5–60) |
| useProxy | boolean | No | false | Enable Apify proxy if the target site blocks direct requests |
| proxyConfiguration | object | No | — | Apify proxy settings (used only when useProxy is true) |
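For sites that block direct requests, a proxy-enabled input might look like this. The proxyConfiguration shape follows Apify's standard proxy input; the RESIDENTIAL group is an example and its availability depends on your plan:

```json
{
  "urls": ["https://example.com"],
  "maxPagesPerSite": 20,
  "useProxy": true,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```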

Output examples

{
  "websiteUrl": "https://example.com",
  "emails": ["info@example.com", "sales@example.com"],
  "phones": ["+1 (555) 123-4567"],
  "socialLinks": {
    "linkedin": "https://linkedin.com/company/example",
    "twitter": "https://x.com/example",
    "facebook": null,
    "instagram": "https://instagram.com/example",
    "youtube": null,
    "github": "https://github.com/example",
    "tiktok": null
  },
  "contactPageUrl": "https://example.com/contact",
  "pagesCrawled": 15,
  "crawledAt": "2026-03-25T12:00:00.000Z"
}

Sites with no public contact info return emails: [] and phones: [] — never silent failures.
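One record per website maps naturally onto one CSV row for CRM import. A minimal flattening sketch — joining list fields with "; " and expanding social links into one column per network is a convention of this example, not part of the actor's output:

```python
import csv
import io

def contacts_to_csv(items: list[dict]) -> str:
    """Flatten contact records into CSV text: list fields joined
    with '; ', social links expanded into one column per network."""
    networks = ["linkedin", "twitter", "facebook", "instagram",
                "youtube", "github", "tiktok"]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["websiteUrl", "emails", "phones", *networks, "contactPageUrl"])
    for item in items:
        social = item.get("socialLinks") or {}
        writer.writerow([
            item["websiteUrl"],
            "; ".join(item.get("emails", [])),
            "; ".join(item.get("phones", [])),
            *[social.get(n) or "" for n in networks],
            item.get("contactPageUrl") or "",
        ])
    return buf.getvalue()
```

Null social links and empty email/phone arrays become empty cells, so form-only sites still produce a well-formed row.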

Tips for best results

  • 📋 Scan the full list at once — provide all URLs in one run to minimize start fees and get results in one dataset
  • 📄 Start with 15–20 pages per site — contact info is usually on the homepage, footer, contact page, or about page
  • 🔌 Leave proxy off by default — most sites work without it; enable only if you see timeouts or empty results
  • 🔗 Check contactPageUrl — if no email is found, the contact page may use a form instead of a mailto link
  • 🤝 Combine with Google Maps Lead Finder — find local businesses → get their websites → scan for emails in one workflow
  • 🧹 Filter empty results — some sites deliberately hide emails; plan for ~20–30% of sites returning no email (common for enterprise sites)
  • 🌐 Try the full domain root — always use https://company.com rather than a deep subpage; the crawler discovers contact pages automatically
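The "filter empty results" and "check contactPageUrl" tips can be combined into a simple triage step. This sketch assumes you have already fetched the dataset items (as in the API examples below); the function name and return shape are this example's own:

```python
def triage(items: list[dict]) -> dict:
    """Split contact records: those with at least one email, those with
    none, and the contact-page URLs of form-only sites worth a manual look."""
    with_email = [i for i in items if i.get("emails")]
    no_email = [i for i in items if not i.get("emails")]
    form_only = [i["contactPageUrl"] for i in no_email if i.get("contactPageUrl")]
    return {"with_email": with_email, "no_email": no_email, "form_only": form_only}
```

With ~20–30% of sites typically returning no email, routing the form-only URLs to a manual or form-filling follow-up keeps the rest of the pipeline clean.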

Integrations

Connect Website Contact Finder with your sales and outreach stack:

  • 📊 Google Sheets — export the contact dataset directly to a shared prospecting spreadsheet; team members see results immediately
  • ⚡ Zapier — when a run completes, automatically add new contacts to HubSpot, Pipedrive, or a Google Sheet
  • 🔄 Make (Integromat) — build pipelines: scrape contacts → verify emails with Email Finder → push qualified leads to Salesforce
  • 🔧 n8n — self-hosted workflow automation: scan contacts nightly, deduplicate, and push new contacts to your CRM
  • 🔔 Webhooks — receive results at your own API endpoint for custom processing as soon as the run finishes
  • 📅 Scheduled runs — scan a fixed list of prospects weekly to catch newly published contact pages

Using the Apify API

Run Website Contact Finder programmatically from your code using the Apify API.

Node.js:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/website-contact-finder').call({
  urls: [
    'https://example.com',
    'https://anothercompany.com',
    'https://thirdsite.io',
  ],
  maxPagesPerSite: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
  console.log(`${item.websiteUrl}: ${item.emails.join(', ') || 'no email found'}`);
});

Python:

from apify_client import ApifyClient

client = ApifyClient('YOUR_API_TOKEN')

run = client.actor('automation-lab/website-contact-finder').call(run_input={
    'urls': [
        'https://example.com',
        'https://anothercompany.com',
        'https://thirdsite.io',
    ],
    'maxPagesPerSite': 20,
})

for item in client.dataset(run['defaultDatasetId']).iterate_items():
    emails = ', '.join(item['emails']) if item['emails'] else 'no email found'
    print(f"{item['websiteUrl']}: {emails}")

cURL:

curl -X POST "https://api.apify.com/v2/acts/automation-lab~website-contact-finder/runs?token=YOUR_API_TOKEN&waitForFinish=120" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": [
      "https://example.com",
      "https://anothercompany.com",
      "https://thirdsite.io"
    ],
    "maxPagesPerSite": 20
  }'

Use with AI agents via MCP

Website Contact Finder is available as a tool for AI assistants that support the Model Context Protocol (MCP).

Add the Apify MCP server to your AI client — this gives you access to all Apify actors, including this one:

Setup for Claude Code

$ claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/website-contact-finder"

Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com?tools=automation-lab/website-contact-finder"
    }
  }
}

Your AI assistant will use OAuth to authenticate with your Apify account on first use.

Example prompts

Once connected, try asking your AI assistant:

  • "Use automation-lab/website-contact-finder to find the email addresses and social links for these 5 company websites: [list]"
  • "Scan the contact pages of these SaaS companies and give me their LinkedIn pages and support emails"
  • "Extract all phone numbers and emails from these 20 local business websites and format them as a CSV"

Learn more in the Apify MCP documentation.

Website Contact Finder only accesses publicly available web pages — contact information intentionally published by website owners for visitors to find. It does not bypass login walls, CAPTCHAs, or access controls.

Automated collection of public contact data is standard practice in sales prospecting and business research. That said, you should review applicable laws in your jurisdiction and the target website's terms of service. For data containing personal information (individual email addresses), ensure compliance with GDPR, CCPA, and similar privacy regulations — particularly around how you store, use, and retain the data. Read more about the legality of web scraping.

FAQ

What types of emails does it find? Emails from mailto: links and plain text on pages. The actor filters out false positives: image-embedded addresses, addresses at example.com or test.com, noreply@ addresses, and addresses that look like file extensions.

Does it find phone numbers? Yes. It extracts numbers from tel: links and text patterns on pages. It supports US format (e.g., (555) 123-4567), international format (e.g., +44 20 7946 0958), and common variations.

Which social platforms does it detect? LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok. It looks for both full profile URLs and embeds in share buttons or footer links.

How many pages should I crawl per site? 10–20 pages covers most small and medium business websites. Increase to 50+ for large enterprise sites or e-commerce stores with distributed contact info across regional pages.

Do I need a proxy? Usually no. The actor uses HTTP requests without a proxy by default, and most public websites don't block them. Enable proxy only if you see timeout errors or empty results — residential proxies work best for stubborn sites.

Does it need a browser? No. Fast HTTP requests only — no Chromium or Playwright overhead. This keeps cost and run time low.

The actor found no emails on a site I know has contact info. Why? The most common reason is that the email is displayed as an image (to prevent bots) or rendered by JavaScript. Since this actor uses HTTP-only requests (no browser), it cannot extract emails that are only visible after JavaScript execution. Some sites also exclusively use contact forms without ever publishing a mailto: link. Check the contactPageUrl field — it likely points to a form-based contact page.

Can I scrape multiple websites in one run? Yes — bulk processing is built in. Add all URLs to the urls array and the actor scans each one and returns a separate contact record per site.

Should I enable the proxy? Leave it off by default. Only enable if the target website blocks your requests (you'll see timeout errors or 403 responses in the logs). Residential proxies work best for sites with strict bot detection.

How does pricing work for bulk runs? You pay $0.005 per website scanned plus a $0.005 flat start fee per run, so 100 websites in one run cost $0.505. Crawl depth is free — increasing maxPagesPerSite does not increase your bill.

Other lead generation tools