# Website Contact Scraper
Extract emails, phone numbers, team members, and social media links from any business website. Give it a list of URLs and get back structured contact data — fast, reliable, and affordable.
Why use this over manual research? A human can research maybe 10 companies per hour. This Actor scrapes 1,000+ websites in under 15 minutes at a fraction of the cost. No browser needed — it uses fast HTTP requests that keep your costs near zero.
## How to scrape contact info from websites

1. Go to the Website Contact Scraper Actor page on Apify
2. Paste your website URLs into the URLs field (one per line, or as a JSON array)
3. Click Start and wait a few seconds
4. Download your results as JSON, CSV, Excel, or export directly to Google Sheets
That's it. Each website produces one clean record with all discovered emails, phones, names, and social links.
## What data can you extract?
For each website you provide, this Actor:
- Visits the homepage and discovers contact, about, and team pages
- Follows those links (up to 5 pages per domain by default)
- Extracts all contact information it finds:
  - Email addresses from `mailto:` links and page content
  - Phone numbers from `tel:` links and formatted numbers in contact sections
  - People's names and job titles from team/about pages
  - Social media links (LinkedIn, Twitter/X, Facebook, Instagram, YouTube)
- Deduplicates and returns one clean record per website
## Use cases

### Lead generation from Google Maps
Scrape businesses from Google Maps Scraper first, then feed their website URLs into this Actor to get direct email addresses and decision-maker names. This gives you a complete lead: business info + contact data.
**Example workflow:**

1. Run Google Maps Scraper for "dentists in Austin, TX" → get 200 business websites
2. Feed those 200 URLs into Website Contact Scraper → get emails, names, titles
3. Export to Google Sheets or your CRM → start outreach
### Sales prospecting
Find the right person to contact at target companies. Instead of guessing generic info@ emails, discover actual team members with their job titles — then reach out to the decision-maker directly.
### CRM enrichment
Have a list of company domains but missing contact fields? Upload your URLs and fill in emails, phone numbers, social profiles, and key team members in bulk.
### Market research
Build a structured database of companies in any industry with their full contact details, team size, and social media presence.
## Input
| Field | Type | Description | Default |
|---|---|---|---|
| `urls` | Array of strings | Website URLs to scrape (required) | — |
| `maxPagesPerDomain` | Integer (1-20) | Max pages to crawl per website | `5` |
| `includeNames` | Boolean | Extract people's names and job titles | `true` |
| `includeSocials` | Boolean | Extract social media profile links | `true` |
| `proxyConfiguration` | Object | Proxy settings (recommended for 50+ sites) | Apify Proxy |
### Example input

```json
{
  "urls": [
    "https://stripe.com",
    "https://basecamp.com",
    "https://buffer.com"
  ],
  "maxPagesPerDomain": 5,
  "includeNames": true,
  "includeSocials": true
}
```
## Output
Each website produces one record in the dataset. Here are real results from the example input above:
### Website with emails found (Stripe)

```json
{
  "url": "https://stripe.com",
  "domain": "stripe.com",
  "emails": ["jane.diaz@stripe.com", "sales@stripe.com"],
  "phones": [],
  "contacts": [],
  "socialLinks": {},
  "pagesScraped": 3,
  "scrapedAt": "2026-02-06T23:48:25.250Z"
}
```
### Website with team members found (Buffer — 48 people extracted)

```json
{
  "url": "https://buffer.com",
  "domain": "buffer.com",
  "emails": [],
  "phones": [],
  "contacts": [
    {"name": "Joel Gascoigne", "title": "Founder CEO"},
    {"name": "Caro Kopprasch", "title": "Chief of Staff"},
    {"name": "Jenny Terry", "title": "VP of Finance & Operations"}
  ],
  "socialLinks": {
    "linkedin": "https://www.linkedin.com/company/bufferapp",
    "twitter": "https://x.com/buffer",
    "facebook": "https://www.facebook.com/bufferapp",
    "instagram": "https://www.instagram.com/buffer"
  },
  "pagesScraped": 2,
  "scrapedAt": "2026-02-06T23:48:25.255Z"
}
```

48 team members were extracted from Buffer's about page — only 3 are shown above for brevity.
### Output fields
| Field | Type | Description |
|---|---|---|
| `url` | String | The original website URL |
| `domain` | String | Domain name (without www) |
| `emails` | Array | Discovered email addresses (deduplicated) |
| `phones` | Array | Discovered phone numbers |
| `contacts` | Array | Named contacts with `name`, `title`, and optionally `email` |
| `socialLinks` | Object | Social media profile URLs (linkedin, twitter, facebook, instagram, youtube) |
| `pagesScraped` | Integer | Number of pages crawled on this domain |
| `scrapedAt` | String | ISO 8601 timestamp of when the scrape completed |
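To work with these records in your own code, you will often want to flatten each one into a single spreadsheet-style row. A minimal sketch (the helper name and column choices are illustrative, not part of the Actor):

```python
# Illustrative helper: flatten one dataset record (shaped as in the table
# above) into a flat dict that maps cleanly onto a CSV or spreadsheet row.
def flatten_record(item: dict) -> dict:
    contacts = item.get("contacts", [])
    return {
        "domain": item.get("domain", ""),
        "emails": "; ".join(item.get("emails", [])),
        "phones": "; ".join(item.get("phones", [])),
        # keep only the first named contact for a one-row-per-site sheet
        "top_contact": contacts[0].get("name", "") if contacts else "",
        "top_contact_title": contacts[0].get("title", "") if contacts else "",
        "linkedin": item.get("socialLinks", {}).get("linkedin", ""),
        "pages_scraped": item.get("pagesScraped", 0),
    }
```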
## How to use the API
You can run this Actor programmatically and integrate it into your own applications.
### Python

```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

run = client.actor("ryanclinton/website-contact-scraper").call(
    run_input={
        "urls": [
            "https://stripe.com",
            "https://basecamp.com",
            "https://buffer.com",
        ],
        "maxPagesPerDomain": 5,
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['domain']}: {item['emails']}")
```
### JavaScript / Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('ryanclinton/website-contact-scraper').call({
    urls: [
        'https://stripe.com',
        'https://basecamp.com',
        'https://buffer.com',
    ],
    maxPagesPerDomain: 5,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    console.log(`${item.domain}: ${item.emails}`);
});
```
### cURL

```bash
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~website-contact-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"urls": ["https://stripe.com", "https://basecamp.com"], "maxPagesPerDomain": 5}'
```
## How it finds contacts
The Actor is smart about where it looks — it doesn't crawl the entire site.
**Page discovery** — Reads the homepage and specifically follows links to pages that matter: contact, about, team, people, staff, leadership, and management pages. This keeps scraping fast and focused.
**Email extraction** — Checks `mailto:` links first (most reliable), then scans page content for email patterns. Filters out common junk like `noreply@`, `example@`, and image-filename false positives.
**Phone extraction** — Prioritizes `tel:` links (reliable and intentionally published). For text-based numbers, only looks in contact sections and footers, and only matches numbers with clear formatting (not random digit sequences).
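For intuition, here is a minimal, self-contained sketch of the extraction approach described above (not the Actor's actual source), using BeautifulSoup plus a regex with junk filtering:

```python
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
JUNK_PREFIXES = ("noreply@", "no-reply@", "example@")
IMAGE_SUFFIXES = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp")

def extract_emails(html: str) -> list:
    soup = BeautifulSoup(html, "html.parser")
    found = []
    # 1) mailto: links first -- intentionally published, most reliable
    for a in soup.select('a[href^="mailto:"]'):
        found.append(a["href"][len("mailto:"):].split("?")[0].strip().lower())
    # 2) regex over the visible text as a fallback
    found += [m.lower() for m in EMAIL_RE.findall(soup.get_text(" "))]
    # 3) drop junk senders and image-filename false positives, then dedupe
    return sorted({
        e for e in found
        if not e.startswith(JUNK_PREFIXES) and not e.endswith(IMAGE_SUFFIXES)
    })

def extract_phones(html: str) -> list:
    # tel: links only -- the reliable subset the Actor prioritizes
    soup = BeautifulSoup(html, "html.parser")
    return sorted({a["href"][len("tel:"):].strip()
                   for a in soup.select('a[href^="tel:"]')})
```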
**Name extraction** — Uses three strategies (a sketch of the first one follows below):

- Schema.org `Person` markup (structured data, most reliable)
- Common team card CSS patterns (`.team-member`, `.staff-member`, etc.)
- Heading + paragraph pairs where the paragraph contains job title keywords
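As an illustration of the first strategy, pulling `Person` entries out of JSON-LD structured data might look like the hypothetical helper below (a simplified sketch; real pages may also nest entities inside `@graph`):

```python
import json
from bs4 import BeautifulSoup

def extract_people_jsonld(html: str) -> list:
    """Collect name/jobTitle pairs from schema.org Person markup (JSON-LD)."""
    soup = BeautifulSoup(html, "html.parser")
    people = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        # a page may embed a single object or a list of them
        for obj in data if isinstance(data, list) else [data]:
            if isinstance(obj, dict) and obj.get("@type") == "Person" and obj.get("name"):
                people.append({"name": obj["name"], "title": obj.get("jobTitle", "")})
    return people
```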
**Social links** — Extracts from `<a>` tags linking to LinkedIn, Twitter/X, Facebook, Instagram, and YouTube.
## Combine with other Apify Actors

### Website Contact Scraper → Email Pattern Finder pipeline
Found team member names but no emails? Feed them into Email Pattern Finder to detect the company's email format and generate email addresses for every person.
```
Website Contact Scraper              Email Pattern Finder
┌────────────────────┐            ┌──────────────────────┐
│ buffer.com         │            │ Known emails +       │
│                    │  emails +  │ team member names    │
│ 48 team members    │ ─────────> │                      │ ──> Email list
│ names + titles     │   names    │ Detect: first.last@  │
│ 4 social links     │            │ Generate 48 emails   │
└────────────────────┘            └──────────────────────┘
```
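For intuition, a toy sketch of the detect-and-generate idea (this is not Email Pattern Finder's actual implementation; the pattern names and helpers are made up for illustration):

```python
# Hypothetical helpers illustrating the detect-and-generate idea.
PATTERNS = {
    "first.last": lambda f, l: f"{f}.{l}",
    "firstlast":  lambda f, l: f"{f}{l}",
    "flast":      lambda f, l: f"{f[0]}{l}",
    "first":      lambda f, l: f,
}

def split_name(full_name):
    parts = full_name.lower().split()
    return parts[0], parts[-1]

def detect_pattern(known_email, full_name):
    """Infer the company's format from one known email and its owner's name."""
    first, last = split_name(full_name)
    local = known_email.split("@")[0].lower()
    for name, build in PATTERNS.items():
        if build(first, last) == local:
            return name
    return None

def generate_email(pattern, full_name, domain):
    first, last = split_name(full_name)
    return f"{PATTERNS[pattern](first, last)}@{domain}"

# detect_pattern("jane.diaz@stripe.com", "Jane Diaz")          -> "first.last"
# generate_email("first.last", "Joel Gascoigne", "buffer.com") -> "joel.gascoigne@buffer.com"
```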
### Google Maps to Emails pipeline
The most powerful use case: combine Google Maps Scraper with this Actor to build a complete lead database.
```
Google Maps Scraper                Website Contact Scraper
┌──────────────────┐             ┌────────────────────────┐
│ "plumbers in LA" │ ──────────> │ 500 website URLs       │ ──────> Full lead database
│                  │  websites   │                        │  emails with contact info
│ 500 businesses   │             │ emails, phones, names  │
└──────────────────┘             └────────────────────────┘
```
**Python example — full pipeline:**

```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

# Step 1: Get businesses from Google Maps
maps_run = client.actor("compass/crawler-google-places").call(
    run_input={"searchStringsArray": ["plumbers in Los Angeles"]}
)

# Step 2: Extract website URLs
websites = []
for biz in client.dataset(maps_run["defaultDatasetId"]).iterate_items():
    if biz.get("website"):
        websites.append(biz["website"])

print(f"Found {len(websites)} business websites")

# Step 3: Scrape contact info from those websites
contacts_run = client.actor("ryanclinton/website-contact-scraper").call(
    run_input={"urls": websites, "maxPagesPerDomain": 5}
)

# Step 4: Get enriched results
for item in client.dataset(contacts_run["defaultDatasetId"]).iterate_items():
    print(f"{item['domain']}: {item['emails']} | {len(item['contacts'])} contacts")
```
### Score and rank leads
After extracting contacts, score them with B2B Lead Qualifier. It analyzes each company's website for 30+ business quality signals and tells you which leads are worth contacting first.
### All-in-one pipeline
Want all three steps in a single run? Use B2B Lead Generation Suite — it chains Website Contact Scraper, Email Pattern Finder, and B2B Lead Qualifier automatically. One input, one output, no manual data piping.
## Performance and cost
This Actor uses CheerioCrawler (fast HTTP requests, no browser), which keeps costs very low:
| Websites | Estimated time | Estimated platform cost |
|---|---|---|
| 10 | ~10 seconds | < $0.01 |
| 100 | ~2 minutes | ~$0.05 |
| 1,000 | ~15 minutes | ~$0.50 |
| 10,000 | ~2.5 hours | ~$5.00 |
Estimates based on 5 pages per domain with datacenter proxies. Actual costs vary by site complexity and response times.
## Tips for best results
- **Start small** — Test with 10-20 URLs to check data quality before running large batches.
- **Use proxies for large runs** — Enable Apify Proxy when scraping 50+ sites to avoid rate limiting.
- **Increase `maxPagesPerDomain` for team-heavy sites** — Companies with large "Meet the Team" pages may need 10-15 pages to find all contacts.
- **Check the `pagesScraped` field** — If it's only 1, the site may not have discoverable contact pages (the contact info might be JS-rendered or behind a contact form).
- **Normalize your URLs** — Include the `https://` prefix. The Actor handles redirects, but clean URLs work best (see the sketch after this list).
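The normalization helper referenced in the last tip could be as simple as this illustrative sketch:

```python
def normalize_urls(urls):
    """Illustrative pre-processing: trim whitespace, ensure an https:// prefix,
    and drop duplicates while preserving order."""
    seen, cleaned = set(), []
    for url in urls:
        url = url.strip()
        if not url:
            continue
        if not url.startswith(("http://", "https://")):
            url = "https://" + url
        if url not in seen:
            seen.add(url)
            cleaned.append(url)
    return cleaned

# normalize_urls(["stripe.com ", "https://buffer.com", "stripe.com"])
# -> ["https://stripe.com", "https://buffer.com"]
```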
## Integrations
Export your results directly to:
- **Google Sheets** — One-click export from the dataset view
- **CSV / JSON / Excel** — Download in any format from the Apify Console
- **Zapier / Make / n8n** — Automate workflows triggered when scraping completes
- **API** — Access results programmatically via the Apify API (see code examples above)
- **Webhooks** — Get notified when a run finishes and process results automatically
## FAQ
### How is this different from manually checking websites?
This Actor automates what a researcher would do by hand: visit the homepage, find the contact page, write down emails and names. But it does it for hundreds of websites simultaneously, in minutes instead of hours.
### Why didn't it find emails on some websites?
Some sites only have contact forms (no visible email address), render content with JavaScript (this Actor uses HTTP requests, not a browser), or hide emails behind CAPTCHAs. Check the `pagesScraped` count — if it's 1, the site might not have discoverable contact page links from the homepage.
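If you run via the API, you can flag those sites for a browser-based follow-up. A sketch reusing the `client` and `run` objects from the Python example above:

```python
# Collect sites that came back empty after only one page -- likely candidates
# for a re-scrape with a browser-based (Playwright) Actor.
needs_browser = []
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if not item["emails"] and not item["contacts"] and item["pagesScraped"] <= 1:
        needs_browser.append(item["url"])

print(f"{len(needs_browser)} sites may need a browser-based scraper")
```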
### Is the extracted data accurate?
Emails from `mailto:` links are highly reliable (the site intentionally published them). Regex-matched emails from page text are filtered to remove false positives. Names are extracted from structured team pages and validated against common patterns.
### Can I scrape thousands of websites?
Yes. The Actor is designed for bulk scraping. For 50+ sites, enable Apify Proxy to avoid rate limiting. For 10,000+ sites, consider running with higher memory allocation for faster processing.
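One way to keep huge jobs manageable is to split the URL list into batches and run them sequentially. A sketch, with the batch size and structure purely illustrative:

```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")
all_urls = [...]  # your full URL list goes here
batch_size = 1000

results = []
for i in range(0, len(all_urls), batch_size):
    run = client.actor("ryanclinton/website-contact-scraper").call(
        run_input={"urls": all_urls[i:i + batch_size], "maxPagesPerDomain": 5}
    )
    results.extend(client.dataset(run["defaultDatasetId"]).iterate_items())

print(f"Scraped {len(results)} websites total")
```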
### Does this work on JavaScript-heavy sites?
This Actor uses CheerioCrawler (HTTP requests only, no browser). Sites that render contact info entirely with JavaScript won't yield results. For those sites, you'd need a Playwright-based scraper. Most business websites serve contact info in static HTML, so this works for the majority of sites.
### Is web scraping legal?
Scraping publicly available information from websites is generally lawful in the US; in hiQ Labs v. LinkedIn (2022), the Ninth Circuit held that scraping publicly accessible data does not violate the Computer Fraud and Abuse Act. This Actor only accesses public pages — it cannot bypass logins or access private data. Always review and comply with each website's Terms of Service.
## Responsible use
This Actor extracts publicly available contact information from websites. By using it, you agree to:
- Comply with all applicable laws, including GDPR, CAN-SPAM, and CCPA
- Respect each website's Terms of Service and `robots.txt`
- Use extracted data only for legitimate business purposes (lead generation, market research, CRM enrichment)
- Not use this tool for unsolicited bulk email or spam
The Actor only accesses publicly available pages — it cannot bypass logins, CAPTCHAs, or any access controls.
## Limitations
- **JavaScript-rendered content** — Sites that render contact info purely with client-side JavaScript won't be extracted. Most business sites serve contact info as static HTML, but modern SPAs may not work.
- **Login-protected pages** — Cannot access pages behind authentication.
- **Contact forms only** — Sites that only have contact forms (no visible email/phone) won't yield email results.
- **Name accuracy** — Name extraction works best on sites with structured team pages. Unstructured "About us" paragraphs may not yield names.
## Pricing
- $5 per 1,000 websites scraped
- First 100 websites free — try it risk-free
- Only pay for successful results (no charge for sites that return empty)
| Websites | Price |
|---|---|
| 100 | Free |
| 1,000 | $5 |
| 5,000 | $25 |
| 10,000 | $50 |
| 50,000 | $250 |
## Changelog

### v1.0.0 (2026-02-06)
- Initial release
- Email, phone, name, and social media extraction
- Smart page discovery (contact, about, team pages)
- CheerioCrawler for fast, cost-effective scraping
- False positive filtering for emails, phones, and names