Developer: Stas Persiianenko
Website Contact Finder — Extract Emails, Phones & Social Links in Bulk
What does Website Contact Finder do?
Website Contact Finder crawls a list of websites and extracts email addresses, phone numbers, and social media profile links from each one. Provide a list of URLs and get one clean contact record per website — ready for CRM import or outreach campaigns. It prioritizes contact and about pages, filters out false positives, and handles multiple sites in a single run.
Why use Website Contact Finder?
- Bulk processing — scan dozens or hundreds of websites in a single run, not one at a time
- Smart crawling — automatically prioritizes contact, about, and team pages for faster results
- Email extraction — finds emails from mailto: links and page text, filters false positives (image files, example domains, noreply addresses)
- Phone detection — extracts numbers from tel: links and text, supports international formats
- Social media discovery — finds LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok profiles
- No browser needed — fast HTTP-only requests, no Chromium overhead
- Structured output — one clean record per website with all contact data
How much does it cost to find website contacts?
Uses pay-per-event pricing:
| Event | Price | When charged |
|---|---|---|
| start | $0.005 | Once when the actor starts |
| website-scanned | $0.005 | Each website scanned |
Example costs:
- 1 website: $0.005 + 1 x $0.005 = $0.010
- 10 websites: $0.005 + 10 x $0.005 = $0.055
- 100 websites: $0.005 + 100 x $0.005 = $0.505
- 500 websites: $0.005 + 500 x $0.005 = $2.505
Much cheaper than running a separate scrape for each website — the per-site cost is flat regardless of how many pages are crawled per site.
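The pricing above reduces to a simple formula. A quick sketch (event prices taken from the table above):

```python
# Estimated cost of a run: one flat "start" event plus one
# "website-scanned" event per website scanned.
START_EVENT_USD = 0.005
WEBSITE_SCANNED_USD = 0.005

def estimate_run_cost(num_websites: int) -> float:
    """Return the estimated cost in USD for scanning `num_websites` sites."""
    return round(START_EVENT_USD + num_websites * WEBSITE_SCANNED_USD, 3)

print(estimate_run_cost(100))  # 0.505
```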
What data can you extract?
| Field | Example |
|---|---|
| websiteUrl | "https://example.com" |
| emails | ["info@example.com", "sales@example.com"] |
| phones | ["+1 (555) 123-4567"] |
| socialLinks.linkedin | "https://linkedin.com/company/example" |
| socialLinks.twitter | "https://x.com/example" |
| socialLinks.facebook | "https://facebook.com/example" |
| socialLinks.instagram | "https://instagram.com/example" |
| socialLinks.youtube | "https://youtube.com/@example" |
| socialLinks.github | "https://github.com/example" |
| socialLinks.tiktok | "https://tiktok.com/@example" |
| contactPageUrl | "https://example.com/contact" |
| pagesCrawled | 15 |
| crawledAt | "2026-03-25T12:00:00.000Z" |
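Since the README mentions CRM import, here is a minimal sketch of flattening one of these records into a single row (field names come from the table above; the record literal and helper name are illustrative):

```python
# Flatten one contact record: hoist nested socialLinks.* fields and
# join list fields so the record fits a single CSV/CRM row.
def flatten_record(record: dict) -> dict:
    flat = {
        "websiteUrl": record["websiteUrl"],
        "emails": "; ".join(record.get("emails", [])),
        "phones": "; ".join(record.get("phones", [])),
        "contactPageUrl": record.get("contactPageUrl"),
    }
    for platform, url in record.get("socialLinks", {}).items():
        flat[f"social_{platform}"] = url or ""
    return flat

record = {
    "websiteUrl": "https://example.com",
    "emails": ["info@example.com", "sales@example.com"],
    "phones": ["+1 (555) 123-4567"],
    "socialLinks": {"linkedin": "https://linkedin.com/company/example", "twitter": None},
    "contactPageUrl": "https://example.com/contact",
}
print(flatten_record(record)["emails"])  # info@example.com; sales@example.com
```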
How to find contact information
1. Open Website Contact Finder on Apify Console.
2. Enter the list of Website URLs to scan — one per line.
3. Set Max pages per site to control how deep the crawl goes (default: 20).
4. Click Start and download the contact data from the dataset.
Example input
```json
{
  "urls": [
    "https://example.com",
    "https://anothercompany.com",
    "https://thirdsite.io"
  ],
  "maxPagesPerSite": 20
}
```
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | array | Yes | — | List of website URLs to scan |
| maxPagesPerSite | integer | No | 20 | Max internal pages to crawl per website (1–200) |
| maxConcurrency | integer | No | 5 | Parallel requests per site (1–20) |
| requestTimeoutSecs | integer | No | 15 | Timeout per request in seconds |
| useProxy | boolean | No | false | Enable proxy for requests |
| proxyConfiguration | object | No | — | Apify proxy settings |
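A fuller input showing all parameters together might look like the sketch below. The parameter names come from the table above; the proxyConfiguration shape follows Apify's usual proxy object format and is an assumption here:

```json
{
  "urls": ["https://example.com", "https://anothercompany.com"],
  "maxPagesPerSite": 20,
  "maxConcurrency": 5,
  "requestTimeoutSecs": 15,
  "useProxy": true,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```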
Output example
```json
{
  "websiteUrl": "https://example.com",
  "emails": ["info@example.com", "sales@example.com"],
  "phones": ["+1 (555) 123-4567"],
  "socialLinks": {
    "linkedin": "https://linkedin.com/company/example",
    "twitter": "https://x.com/example",
    "facebook": null,
    "instagram": "https://instagram.com/example",
    "youtube": null,
    "github": "https://github.com/example",
    "tiktok": null
  },
  "contactPageUrl": "https://example.com/contact",
  "pagesCrawled": 15,
  "crawledAt": "2026-03-25T12:00:00.000Z"
}
```
Tips for best results
- Scan multiple sites at once — provide a full list of URLs upfront to get all results in one run.
- Start with 10–20 pages per site — most contact info is on the homepage, contact page, and footer.
- Leave proxy off for most sites. Enable only if you get blocked.
- Combine with Google Maps Lead Finder — find businesses on Google Maps, then enrich each website with contact details.
- Filter empty results — some sites use contact forms instead of publishing emails.
- Check the contactPageUrl — if no email is found, the contact page may have a form.
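The "filter empty results" tip above can be sketched as a post-processing step (field names come from the output example; the items list and helper name are illustrative):

```python
# Split scan results into leads with emails and form-only sites,
# so each group can be routed to a different outreach step.
def split_results(items: list[dict]) -> tuple[list[dict], list[dict]]:
    with_emails = [item for item in items if item.get("emails")]
    form_only = [
        item for item in items
        if not item.get("emails") and item.get("contactPageUrl")
    ]
    return with_emails, form_only

items = [
    {"websiteUrl": "https://a.com", "emails": ["info@a.com"], "contactPageUrl": None},
    {"websiteUrl": "https://b.com", "emails": [], "contactPageUrl": "https://b.com/contact"},
]
leads, follow_up = split_results(items)
print(len(leads), len(follow_up))  # 1 1
```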
Integrations
Connect Website Contact Finder with your sales and outreach tools:
- Google Sheets — Export contact data directly to a shared prospecting spreadsheet
- Zapier — Trigger CRM updates or outreach sequences when new contacts are found
- Make — Build pipelines: scrape contacts, verify emails with Email Enrichment, then push to HubSpot or Salesforce
- n8n — Self-hosted workflow automation for contact extraction
- Webhooks — Send results to your own API endpoint for custom processing
Programmatic access via API
Use the Apify API to run Website Contact Finder from your code.
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/website-contact-finder').call({
    urls: [
        'https://example.com',
        'https://anothercompany.com',
        'https://thirdsite.io',
    ],
    maxPagesPerSite: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => console.log(`${item.websiteUrl}: ${item.emails.join(', ')}`));
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_TOKEN')

run = client.actor('automation-lab/website-contact-finder').call(run_input={
    'urls': [
        'https://example.com',
        'https://anothercompany.com',
        'https://thirdsite.io',
    ],
    'maxPagesPerSite': 20,
})

for item in client.dataset(run['defaultDatasetId']).iterate_items():
    print(f"{item['websiteUrl']}: {', '.join(item['emails'])}")
```
cURL
```bash
curl -X POST "https://api.apify.com/v2/acts/automation-lab~website-contact-finder/runs?token=YOUR_API_TOKEN&waitForFinish=120" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://example.com", "https://anothercompany.com", "https://thirdsite.io"],
    "maxPagesPerSite": 20
  }'
```
Use with AI agents via MCP
Website Contact Finder is available as a tool for AI assistants that support the Model Context Protocol (MCP).
Setup for Claude Code
```bash
claude mcp add --transport http apify "https://mcp.apify.com"
```
Setup for Claude Desktop, Cursor, or VS Code
Add this to your MCP config file:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}
```
Example prompts
Once connected, try asking your AI assistant:
- "Find all contact information on this company website"
- "Extract phone numbers and emails from these business websites"
Learn more in the Apify MCP documentation.
Legality
This tool analyzes publicly accessible web content. Automated analysis of public web resources is standard practice in SEO and web development. Always respect robots.txt directives and rate limits when analyzing third-party websites. For personal data processing, ensure compliance with applicable privacy regulations.
FAQ
What emails does it find? Emails from mailto: links and visible page text. It filters out image files, example domains, and noreply addresses.
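A simplified illustration of that filtering logic (not the actor's actual implementation — the regex and exclusion lists here are assumptions):

```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IMAGE_EXTENSIONS = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp")
EXCLUDED_DOMAINS = {"example.com", "example.org"}

def extract_emails(text: str) -> list[str]:
    """Pull email-like strings from text, dropping common false positives."""
    results = []
    for match in EMAIL_RE.findall(text):
        lowered = match.lower()
        domain = lowered.split("@", 1)[1]
        if lowered.endswith(IMAGE_EXTENSIONS):  # filename like logo@2x.png
            continue
        if domain in EXCLUDED_DOMAINS:  # placeholder domains
            continue
        if lowered.startswith(("noreply@", "no-reply@")):
            continue
        if lowered not in results:
            results.append(lowered)
    return results

print(extract_emails("Contact sales@acme.io or noreply@acme.io; see logo@2x.png"))
# ['sales@acme.io']
```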
Does it find phone numbers? Yes, from tel: links and text patterns. Supports international formats.
Which social platforms does it detect? LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok.
Do I need a proxy? Usually no. Proxy is off by default. Enable if the target site blocks direct requests.
Does it need a browser? No. Fast HTTP requests only — no Chromium/Playwright overhead.
How many pages should I crawl per site? 10–20 pages covers most websites. Increase for larger sites with distributed contact info.
The actor found no emails on a site I know has contact info. Why?
Some websites display email addresses as images or use JavaScript-based obfuscation to prevent scraping. Since this actor uses HTTP-only requests (no browser), it cannot extract emails rendered by JavaScript. Also, some sites only use contact forms without publishing email addresses. Check the contactPageUrl field — it may point to a form-based contact page.
Can I scrape multiple websites in one run?
Yes — bulk processing is built in. Provide a list of URLs in the urls input field and the actor will scan each website and return one contact record per site. No need to start separate runs.
Should I enable the proxy? Leave it off by default. Only enable the proxy if the target website blocks your requests (you'll see timeout errors or empty results). Residential proxies work best for stubborn sites.
How does pricing work for bulk runs?
You pay $0.005 per website scanned, plus a flat $0.005 start fee per run. Scanning 100 websites in one run costs $0.505. There is no per-page charge — the crawl depth is controlled by maxPagesPerSite without affecting your bill.
Other lead generation tools on Apify
- Email Enrichment — enrich email addresses with name, company, and social profiles
- Google Maps Lead Finder — find businesses on Google Maps with contact details
- Social Media Profile Finder — find social media profiles for any person or company
- Email Finder — find email addresses for any domain or company