Website Email Scraper
Crawl any website up to 20 levels deep and extract all visible email addresses, with proxy support and same-domain link following.
Pricing: from $4.50 / 1,000 results
Developer: Andrew
Extract every visible email address from any website. The scraper crawls up to 20 levels deep and 1,000 pages using a real browser, so JavaScript-rendered content is fully captured. It stays within the same domain and deduplicates results automatically.
Pricing
You only pay per email address found.
What you get
- Every unique email address visible on the site, deduplicated across all pages
- The exact page URL where each email was first found
- The page title alongside each result for easy context
- Works on JS-heavy sites (React, Vue, Angular) — uses a real Chromium browser, not just HTML parsing
Use cases
- Lead generation — find contact emails on competitor or partner sites
- Sales prospecting — build contact lists from industry directories or association member pages
- Due diligence — audit what email addresses a company exposes publicly
- Recruitment — find department or team contact pages at target organisations
- Compliance checks — identify exposed email addresses on your own domain before a security review
How to use
Scan a whole website
Enter the homepage URL to crawl the entire site:
https://www.example.com
The scraper follows all internal links on the same hostname, up to 20 levels deep and 1,000 pages by default.
Target a specific section
To focus on one part of a site (e.g. a contact directory or staff page), enter that path directly:
https://www.example.com/about/contact
https://www.university.edu/faculty/science
https://www.company.com/team
Only pages reachable by following links from that starting URL will be crawled — so entering a subdirectory effectively scopes the run to that section.
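The same-hostname scoping described above can be sketched in a few lines. This is a minimal illustration of the documented behaviour (same hostname, any path), not the actor's actual implementation, which is not published:

```python
from urllib.parse import urlparse

def in_scope(link: str, start_url: str) -> bool:
    """Return True if `link` shares the hostname of `start_url`.

    Illustrative approximation of the actor's documented scoping rule:
    links on the same hostname are followed, external domains are not.
    """
    return urlparse(link).hostname == urlparse(start_url).hostname

print(in_scope("https://www.example.com/team", "https://www.example.com/about/contact"))  # True
print(in_scope("https://partner.example.org/", "https://www.example.com"))  # False
```

Note that because the check is on the hostname, starting from a subdirectory does not hard-limit the crawl to that path; it limits it to pages reachable by links from there.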
Parameters
| Field | Default | Description |
|---|---|---|
| Starting URL | (required) | Homepage or specific path to start from |
| Max Crawl Depth | 20 | Link-hops from the start URL. Most sites are fully covered at depth 20. Reduce to 2–3 for a quick surface scan |
| Max Pages | 1000 | Total pages to visit. Increase for very large sites |
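Put together, a run input for a quick surface scan of a large site might look like the fragment below. The field names (`startUrl`, `maxCrawlDepth`, `maxPages`) are inferred from the table above; check the actor's input schema in the Apify console for the exact keys:

```json
{
  "startUrl": "https://www.example.com",
  "maxCrawlDepth": 3,
  "maxPages": 200
}
```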
Example results
Running on a mid-sized company website (50 pages, depth 3) might return:
| Email | Found on | Page title |
|---|---|---|
| hello@example.com | /contact | Contact Us |
| sales@example.com | /contact | Contact Us |
| support@example.com | /help | Help Centre |
| press@example.com | /about | About Us |
| careers@example.com | /careers | Join Our Team |
Output format
Each dataset record:
```json
{
  "email": "contact@example.com",
  "sourceUrl": "https://www.example.com/about",
  "pageTitle": "About Us — Example Company"
}
```
Export to JSON, CSV, Excel, or Google Sheets directly from the Apify console.
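Once exported, the records are easy to post-process. A small sketch, using sample records in the shape shown above, that groups addresses by the page they were found on:

```python
from collections import defaultdict

# Sample records in the dataset shape documented above.
records = [
    {"email": "hello@example.com", "sourceUrl": "https://www.example.com/contact", "pageTitle": "Contact Us"},
    {"email": "sales@example.com", "sourceUrl": "https://www.example.com/contact", "pageTitle": "Contact Us"},
    {"email": "support@example.com", "sourceUrl": "https://www.example.com/help", "pageTitle": "Help Centre"},
]

# Group emails by source page for a quick per-page summary.
by_page = defaultdict(list)
for rec in records:
    by_page[rec["sourceUrl"]].append(rec["email"])

for url, emails in sorted(by_page.items()):
    print(url, emails)
```

The same pattern works on a full JSON export: load the file with `json.load` and iterate over the records identically.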
Notes
- The scraper stays on the same hostname as your starting URL and will not follow links to external domains
- Emails are deduplicated globally — each address appears once, from the first page it was found on
- Common false positives (image filenames, CSS class names containing @) are filtered out automatically
- No login or proxy configuration required for public sites; Apify Proxy is used automatically when running on the platform
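The false-positive filtering can be approximated as below. This is an illustrative sketch, not the actor's actual filter: it matches email-like strings and drops ones that end in an image extension (e.g. retina asset names like "logo@2x.png", which look like addresses to a naive regex):

```python
import re

# Simple email-shaped pattern; real-world extractors are more careful.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp")

def extract_emails(text: str) -> set[str]:
    """Extract email-like strings, skipping image-filename false positives."""
    found = set()
    for match in EMAIL_RE.findall(text):
        if match.lower().endswith(IMAGE_EXTS):
            continue  # e.g. "logo@2x.png" matches the pattern but is a filename
        found.add(match.lower())
    return found

print(extract_emails("Contact hello@example.com or see logo@2x.png"))  # {'hello@example.com'}
```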