Website Contact Scraper
Extract emails, phone numbers, and social media links from any website — scrape multiple sites in a single run using a real Chromium browser so JavaScript-rendered content is fully captured.
What you get
- Email addresses — every unique email visible on the site, deduplicated across all pages
- Social media links — auto-detected and categorised by platform: Facebook, Twitter/X, LinkedIn, Instagram, YouTube, TikTok, Pinterest, Snapchat, Telegram, WhatsApp, Reddit, Discord, GitHub, and more
- Phone numbers — extracted from visible page text in any format (international, domestic, extensions)
- Scanned pages list — see exactly which pages were crawled for each site
- Works on JS-heavy sites (React, Vue, Angular) — uses a real Chromium browser, not just static HTML
Use cases
- Lead generation — find contact details on company websites for outreach campaigns
- Sales prospecting — build contact lists from industry directories or partner pages
- Social media discovery — find all social profiles associated with a business in one pass
- Due diligence — audit what contact information a company exposes publicly
- Market research — collect contact data from multiple competitor or supplier sites at once
- Recruitment — surface contact pages and team directories at target organisations
How to use
- Paste one or more website URLs into the Website URLs field
- Set Max Crawl Depth (default 3 — covers most contact pages; increase for deep directories)
- Set Max Pages Per Site (default 50 — enough for most company sites)
- Run the actor — results appear in the Dataset tab
- Export to JSON, CSV, Excel, or Google Sheets from the Apify console
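The console steps above can also be run from code. Here is a minimal sketch using the Apify Python client (`pip install apify-client`); the actor ID below is a placeholder, so copy the real one from this actor's page:

```python
# Minimal sketch: run the actor via the Apify Python client and read
# its dataset. "andrew/website-contact-scraper" is a placeholder actor
# ID; substitute the ID shown on this actor's page.
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run = client.actor("andrew/website-contact-scraper").call(
    run_input={
        "urls": ["https://www.example.com"],
        "maxDepth": 3,   # the defaults from the Parameters table below
        "maxPages": 50,
    }
)

# Each input URL yields one record in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["url"], item.get("emails"))
```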
Scrape a single site
{"urls": ["https://www.example.com"]}
Scrape multiple sites in one run
{"urls": ["https://www.acme.com","https://www.globex.com","https://www.initech.com"],"maxDepth": 2,"maxPages": 20}
Each URL is crawled independently and produces its own output record. Contact pages (/contact, /about, /team, /staff) are automatically prioritised in the crawl queue.
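To illustrate what that prioritisation can look like (a sketch only, not the actor's published implementation):

```python
# Illustrative sketch of contact-page prioritisation in a crawl queue;
# the actor's actual heuristic is not published.
from urllib.parse import urlparse

PRIORITY_PATHS = ("/contact", "/about", "/team", "/staff")

def crawl_priority(url: str) -> int:
    """Lower value = visited earlier; likely contact pages jump the queue."""
    path = urlparse(url).path.lower()
    return 0 if path.startswith(PRIORITY_PATHS) else 1

# Contact pages sort ahead of other discovered links.
queue = sorted(
    ["https://example.com/blog", "https://example.com/contact"],
    key=crawl_priority,
)
```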
Parameters
| Field | Default | Description |
|---|---|---|
| Website URLs | (required) | One or more URLs to scrape. Pass a homepage to scan the whole site, or a specific path (e.g. /contact) to target a section |
| Max Crawl Depth | 3 | Link-hops from the start URL per site. 1 = homepage only; 3 covers most contact pages |
| Max Pages Per Site | 50 | Maximum pages to visit per URL. Increase for large site directories |
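Read together, the two limits bound a per-site breadth-first crawl. A sketch of the assumed semantics (the actor's internals are not published; `get_links` is a hypothetical stand-in for fetching a page and parsing its links):

```python
# Sketch of the assumed crawl-limit semantics: depth 1 = the start URL
# only (matching the table above), and the crawl stops once max_pages
# URLs have been visited. get_links is a hypothetical stand-in.
from collections import deque
from urllib.parse import urlparse

def plan_crawl(start_url, get_links, max_depth=3, max_pages=50):
    host = urlparse(start_url).hostname
    seen, visited = {start_url}, []
    queue = deque([(start_url, 1)])       # (url, depth)
    while queue and len(visited) < max_pages:
        url, depth = queue.popleft()
        visited.append(url)
        if depth >= max_depth:
            continue                      # at the hop limit; go no deeper
        for link in get_links(url):
            # Stay on the same hostname (see Notes below) and skip repeats.
            if urlparse(link).hostname == host and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited
```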
Output format
One dataset record per input URL:
{"url": "https://www.example.com","emails": ["hello@example.com","sales@example.com"],"social_links": {"linkedin": ["https://linkedin.com/company/example"],"twitter": ["https://x.com/example"],"instagram": ["https://instagram.com/example"],"facebook": ["https://facebook.com/example"]},"phone_numbers": ["+1 (555) 123-4567","555-987-6543"],"scanned_pages": ["https://www.example.com","https://www.example.com/contact","https://www.example.com/about"],"status": "success","error": null}
If a site fails to load, status is set to "error" and error contains the reason — the run continues with the remaining URLs.
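For downstream tooling, these per-site records flatten naturally into one row per email. A sketch against a JSON export of the dataset (`dataset.json` is an assumed filename):

```python
# Sketch: flatten a JSON export of the dataset into one CSV row per
# email. Field names match the output format above; "dataset.json" is
# an assumed export filename.
import csv
import json

with open("dataset.json") as f:
    records = json.load(f)

with open("contacts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "email", "phone_numbers"])
    for rec in records:
        if rec["status"] == "error":
            continue  # failed sites carry no contact data
        for email in rec["emails"]:
            writer.writerow([rec["url"], email, "; ".join(rec["phone_numbers"])])
```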
Notes
- The scraper stays on the same hostname as each input URL and will not follow links to external domains
- All results (emails, phones, social links) are deduplicated per site
- Common false positives, such as CSS identifiers and image filenames containing @, are filtered out automatically
- Phone number extraction uses visible page text, so numbers hidden behind JavaScript interactions may not be captured
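To illustrate the kind of filtering the first note describes (a sketch only; the actor's real rules are not published):

```python
# Illustrative email extraction that filters @-containing filename
# false positives such as "logo@2x.png"; not the actor's published code.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp")

def extract_emails(text: str) -> set[str]:
    emails = set()
    for candidate in EMAIL_RE.findall(text):
        if candidate.lower().endswith(IMAGE_EXTS):
            continue  # retina-image filenames like "logo@2x.png"
        emails.add(candidate.lower())
    return emails

print(extract_emails("Reach sales@example.com or see logo@2x.png"))
# -> {'sales@example.com'}
```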