Website Contact Finder
Pricing: Pay per event
Developer: Stas Persiianenko
Last modified: 2 days ago
Website Contact Finder — Extract Emails, Phones & Social Links
What does Website Contact Finder do?
Website Contact Finder crawls any website and extracts email addresses, phone numbers, and social media profile links. It prioritizes contact and about pages, filters out false positives, and returns clean, structured data ready for CRM import or outreach campaigns.
Why use Website Contact Finder?
- Smart crawling — automatically prioritizes contact, about, and team pages for faster results
- Email extraction — finds emails from mailto: links and page text, filters false positives (image files, example domains, noreply addresses)
- Phone detection — extracts numbers from tel: links and text, supports international formats
- Social media discovery — finds LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok profiles
- No browser needed — fast HTTP-only requests, no Chromium overhead
- Structured output — one clean record per website with all contact data
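The false-positive filtering described above can be sketched in a few lines of Python. This is a simplified illustration, not the actor's actual implementation; the regex and the blocklists (`IMAGE_EXTENSIONS`, `BLOCKED_DOMAINS`, `BLOCKED_PREFIXES`) are assumptions based on the filters the description names.

```python
import re

# Assumed blocklists matching the filters the actor describes:
# image filenames, placeholder domains, and automated senders.
IMAGE_EXTENSIONS = ('.png', '.jpg', '.jpeg', '.gif', '.svg', '.webp')
BLOCKED_DOMAINS = ('example.com', 'example.org')
BLOCKED_PREFIXES = ('noreply', 'no-reply', 'donotreply')

EMAIL_RE = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')

def extract_emails(text: str) -> list:
    """Find email-like strings in page text and drop likely false positives."""
    results = []
    for match in EMAIL_RE.findall(text):
        email = match.lower()
        local, _, domain = email.partition('@')
        if email.endswith(IMAGE_EXTENSIONS):    # e.g. logo@2x.png
            continue
        if domain in BLOCKED_DOMAINS:           # placeholder domains
            continue
        if local.startswith(BLOCKED_PREFIXES):  # automated senders
            continue
        if email not in results:
            results.append(email)
    return results

print(extract_emails('Contact info@acme.io or noreply@acme.io; see logo@2x.png'))
# ['info@acme.io']
```

Filtering on the local part and domain separately is what lets a string like `logo@2x.png` (an image filename that happens to match an email regex) be rejected while real addresses pass through.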
How much does it cost?
Uses pay-per-event pricing:
| Event | Price | When charged |
|---|---|---|
| `start` | $0.035 | Once when the actor starts |
| `page-crawled` | $0.001 | Each internal page crawled |
Example costs:
- 1 website, 20 pages: $0.035 + 20 x $0.001 = $0.055
- 5 websites, 20 pages each: 5 x $0.035 + 100 x $0.001 = $0.275
- 20 websites, 20 pages each: 20 x $0.035 + 400 x $0.001 = $1.10
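The arithmetic behind those examples can be wrapped in a small helper (a hypothetical convenience function, using the event prices from the table; it assumes one actor start per website):

```python
START_PRICE = 0.035  # charged once per website (actor start event)
PAGE_PRICE = 0.001   # charged per internal page crawled

def estimate_cost(websites: int, pages_per_site: int) -> float:
    """Estimated run cost in USD, rounded to the nearest tenth of a cent."""
    return round(websites * START_PRICE + websites * pages_per_site * PAGE_PRICE, 3)

print(estimate_cost(1, 20))   # 0.055
print(estimate_cost(5, 20))   # 0.275
print(estimate_cost(20, 20))  # 1.1
```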
What data can you extract?
| Field | Example |
|---|---|
| `websiteUrl` | `"https://example.com"` |
| `emails` | `["info@example.com", "sales@example.com"]` |
| `phones` | `["+1 (555) 123-4567"]` |
| `socialLinks.linkedin` | `"https://linkedin.com/company/example"` |
| `socialLinks.twitter` | `"https://x.com/example"` |
| `socialLinks.facebook` | `"https://facebook.com/example"` |
| `socialLinks.instagram` | `"https://instagram.com/example"` |
| `socialLinks.youtube` | `"https://youtube.com/@example"` |
| `socialLinks.github` | `"https://github.com/example"` |
| `socialLinks.tiktok` | `"https://tiktok.com/@example"` |
| `contactPageUrl` | `"https://example.com/contact"` |
| `pagesCrawled` | `15` |
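Social-profile discovery like the `socialLinks` fields above typically works by matching link hostnames against a known platform list. A simplified sketch (not the actor's code; the platform-to-hostname mapping is an assumption derived from the table):

```python
from urllib.parse import urlparse

# Platforms the actor reports, keyed by output field name (from the table above).
PLATFORM_HOSTS = {
    'linkedin': ('linkedin.com',),
    'twitter': ('twitter.com', 'x.com'),
    'facebook': ('facebook.com',),
    'instagram': ('instagram.com',),
    'youtube': ('youtube.com',),
    'github': ('github.com',),
    'tiktok': ('tiktok.com',),
}

def classify_social_links(hrefs) -> dict:
    """Map a list of hrefs to the first matching profile URL per platform."""
    links = {name: None for name in PLATFORM_HOSTS}
    for href in hrefs:
        host = urlparse(href).netloc.lower().removeprefix('www.')
        for name, hosts in PLATFORM_HOSTS.items():
            if host in hosts and links[name] is None:
                links[name] = href
    return links

result = classify_social_links([
    'https://x.com/example',
    'https://www.github.com/example',
    'https://example.com/blog',  # not a social link, ignored
])
print(result['twitter'], result['github'], result['facebook'])
# https://x.com/example https://www.github.com/example None
```

Matching on the parsed hostname rather than a substring avoids false hits such as a blog post URL that merely mentions "facebook.com" in its path.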
How to find contact information
- Open Website Contact Finder on Apify Console.
- Enter the Website URL to scan.
- Set Max pages to control how deep the crawl goes (default: 20).
- Click Start and download the contact data from the dataset.
Example input
{"startUrl": "https://example.com","maxPages": 20}
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `startUrl` | string | Yes | — | Website URL to scan |
| `maxPages` | integer | No | 20 | Max internal pages to crawl (1–500) |
| `maxConcurrency` | integer | No | 5 | Parallel requests (1–20) |
| `requestTimeoutSecs` | integer | No | 15 | Timeout per request in seconds |
| `useProxy` | boolean | No | false | Enable proxy for requests |
| `proxyConfiguration` | object | No | — | Apify proxy settings |
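When calling the actor programmatically, it can help to validate inputs against the documented ranges before starting a run. The helper below is hypothetical (the ranges come from the table above, the function itself is not part of the actor):

```python
def build_input(start_url: str, max_pages: int = 20, max_concurrency: int = 5,
                request_timeout_secs: int = 15, use_proxy: bool = False) -> dict:
    """Assemble a run-input dict, enforcing the documented parameter ranges."""
    if not start_url.startswith(('http://', 'https://')):
        raise ValueError('startUrl must be an absolute http(s) URL')
    if not 1 <= max_pages <= 500:
        raise ValueError('maxPages must be between 1 and 500')
    if not 1 <= max_concurrency <= 20:
        raise ValueError('maxConcurrency must be between 1 and 20')
    return {
        'startUrl': start_url,
        'maxPages': max_pages,
        'maxConcurrency': max_concurrency,
        'requestTimeoutSecs': request_timeout_secs,
        'useProxy': use_proxy,
    }

print(build_input('https://example.com', max_pages=50))
```

Failing fast on an out-of-range `maxPages` is cheaper than discovering the problem after the start event has already been billed.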
Output example
{"websiteUrl": "https://example.com","emails": ["info@example.com", "sales@example.com"],"phones": ["+1 (555) 123-4567"],"socialLinks": {"linkedin": "https://linkedin.com/company/example","twitter": "https://x.com/example","facebook": null,"instagram": "https://instagram.com/example","youtube": null,"github": "https://github.com/example","tiktok": null},"contactPageUrl": "https://example.com/contact","pagesCrawled": 15,"crawledAt": "2026-02-28T12:00:00.000Z"}
Tips for best results
- Start with 10–20 pages — most contact info is on the homepage, contact page, and footer.
- Leave proxy off for most sites. Enable only if you get blocked.
- Combine with Google Maps Lead Finder — find businesses on Google Maps, then enrich each website with contact details.
- Filter empty results — some sites use contact forms instead of publishing emails.
- Check the `contactPageUrl` — if no email is found, the contact page may have a form.
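The last two tips can be combined when post-processing the dataset: keep records that contain emails, and for the rest fall back to the contact page URL for manual follow-up. An illustrative sketch (field names taken from the output example; the helper itself is hypothetical):

```python
def triage_results(records) -> tuple:
    """Split results into records with emails and contact-page URLs to visit manually."""
    with_emails, form_only = [], []
    for record in records:
        if record.get('emails'):
            with_emails.append(record)
        elif record.get('contactPageUrl'):
            # No published email; the contact page likely uses a form instead.
            form_only.append(record['contactPageUrl'])
    return with_emails, form_only

found, follow_up = triage_results([
    {'websiteUrl': 'https://a.com', 'emails': ['info@a.com']},
    {'websiteUrl': 'https://b.com', 'emails': [], 'contactPageUrl': 'https://b.com/contact'},
])
print(len(found), follow_up)
# 1 ['https://b.com/contact']
```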
Integrations
Connect with Make, Zapier, n8n, or any HTTP tool. Feed extracted contacts into your CRM, email outreach tool, or lead database.
Using the Apify API
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('automation-lab/website-contact-finder').call({
    startUrl: 'https://example.com',
    maxPages: 20,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log('Contacts:', items[0]);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_TOKEN')

run = client.actor('automation-lab/website-contact-finder').call(run_input={
    'startUrl': 'https://example.com',
    'maxPages': 20,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(f'Contacts: {items[0]}')
```
FAQ
**What emails does it find?** Emails from `mailto:` links and visible page text. It filters out image files, example domains, and noreply addresses.

**Does it find phone numbers?** Yes, from `tel:` links and text patterns. Supports international formats.

**Which social platforms does it detect?** LinkedIn, Twitter/X, Facebook, Instagram, YouTube, GitHub, and TikTok.

**Do I need a proxy?** Usually no. Proxy is off by default. Enable it if the target site blocks direct requests.

**Does it need a browser?** No. Fast HTTP requests only — no Chromium/Playwright overhead.

**How many pages should I crawl?** 10–20 pages covers most websites. Increase for larger sites with distributed contact info.