Website Contact Extractor (HTTP)
Extract contacts from any company website: names, emails, phones, LinkedIn. Offer targeting mode ranks decision-makers by relevance to your pitch so you always reach the right person first. AI-powered, multilingual, no browser needed.
Pricing: from $5.00 / 1,000 contacts extracted
Developer: Ale
Last modified: 6 days ago
Website Contact Extractor
Extract contact information from any website — emails, phone numbers, social media links, and company address — in a single run. Give it a list of URLs and get back one consolidated contact record per site. No API keys needed.
How It Works
For each URL you provide, the scraper:
- Visits the homepage and captures the page title and meta description
- Parses schema.org JSON-LD structured data to extract the company address
- Discovers internal links — contact, about, impressum, team, and privacy pages are always crawled first
- Extracts email addresses from page text and `mailto:` links
- Extracts phone numbers from `tel:` links and international-format (+XX) text
- Finds social media profile links (LinkedIn, Twitter/X, Facebook, Instagram, YouTube, Xing)
- Returns one record per website with all contact details consolidated
Challenge pages (bot-protection walls) are skipped automatically so the run keeps going.
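The email and phone extraction steps above can be sketched roughly like this. This is an illustrative approximation, not the actor's actual code; the regexes and the `extract_contacts` helper are assumptions.

```python
# Sketch of the per-page email/phone extraction described above.
# Regexes and helper names are illustrative, not the actor's real code.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+\d[\d\s()-]{7,}\d")  # international-format (+XX) numbers
MAILTO_RE = re.compile(r'href=["\']mailto:([^"\'?]+)', re.I)
TEL_RE = re.compile(r'href=["\']tel:([^"\']+)', re.I)

def extract_contacts(html: str, text: str) -> dict:
    """Collect unique, lowercased emails and phone numbers from one page."""
    emails = {m.lower() for m in EMAIL_RE.findall(text)}
    emails |= {m.lower() for m in MAILTO_RE.findall(html)}  # mailto: links
    phones = set(PHONE_RE.findall(text)) | set(TEL_RE.findall(html))  # tel: links
    return {"emails": sorted(emails), "phones": sorted(phones)}
```

Running the same logic over every crawled page and merging the sets is what yields the single consolidated record per site.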
Use with AI Agents (MCP)
Connect this actor to any MCP-compatible AI client — Claude Desktop, Claude.ai, Cursor, VS Code, LangChain, LlamaIndex, or custom agents.
Apify MCP server URL:
https://mcp.apify.com?tools=nanoscrape/website-contact-extractor
Example prompt once connected:
"Use `website-contact-extractor` to extract the contact email, phone number, and LinkedIn page for acme-corp.com."
Clients that support dynamic tool discovery (Claude.ai, VS Code) will receive the full input schema automatically via add-actor.
Input Example
{
  "urls": [
    "acme-corp.com",
    "https://www.another-company.de",
    "https://startup.io/contact"
  ],
  "maxPagesPerUrl": 15
}
Bare domains (acme-corp.com) and full URLs are both accepted.
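Normalizing a bare domain before crawling could look like the following sketch. The `normalize_url` helper and its HTTPS assumption are hypothetical; the actor's real logic may differ.

```python
# Hypothetical helper: turn a bare domain or full URL into a start URL
# plus the normalized `domain` output field.
from urllib.parse import urlparse

def normalize_url(raw: str) -> tuple[str, str]:
    """Return (start_url, domain) for a bare domain or a full URL."""
    if "://" not in raw:
        raw = "https://" + raw  # bare domain -> assume HTTPS (assumption)
    parsed = urlparse(raw)
    domain = parsed.netloc.lower().removeprefix("www.")
    return f"{parsed.scheme}://{parsed.netloc}", domain
```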
Output Example
[
  {
    "url": "https://acme-corp.com",
    "domain": "acme-corp.com",
    "title": "Acme Corp — Business Solutions",
    "description": "We help businesses automate their operations worldwide.",
    "emails": ["info@acme-corp.com", "sales@acme-corp.com"],
    "phones": ["+1 555 123 4567"],
    "address": "123 Main St, San Francisco, CA 94105, US",
    "social_links": {
      "linkedin": "https://linkedin.com/company/acme-corp",
      "twitter": "https://twitter.com/acmecorp",
      "facebook": "https://facebook.com/acmecorp"
    },
    "pages_crawled": 9,
    "scraped_at": "2026-05-06T10:00:00Z"
  }
]
Pricing
You pay per website processed — one charge per contact record regardless of how many emails or phone numbers were found.
| Event | Price | Description |
|---|---|---|
| Actor start | $0.15 | Covers container startup |
| Contact result | $0.001 | Per website contact record produced |
Example costs:
| Websites | Cost |
|---|---|
| 1 website | $0.151 |
| 100 websites | $0.25 |
| 1,000 websites | $1.15 |
| 10,000 websites | $10.15 |
No monthly fees. No minimum spend.
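The pricing model reduces to simple arithmetic: one actor-start fee plus a per-record charge, using the rates from the tables above.

```python
# Run cost = actor start fee + one event charge per website processed
# (USD rates copied from the pricing tables above).
ACTOR_START = 0.15
PER_RECORD = 0.001

def run_cost(websites: int) -> float:
    """Total cost of a single run that processes `websites` sites."""
    return round(ACTOR_START + websites * PER_RECORD, 3)
```

For example, `run_cost(1000)` gives $1.15, matching the example-costs table.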
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `urls` | string[] | required | Website URLs or bare domains to scrape |
| `maxPagesPerUrl` | integer | 15 | Max pages to crawl per site (1–50) |
| `proxyConfiguration` | object | Apify proxy | Proxy settings |
Output Fields
| Field | Type | Description |
|---|---|---|
| `url` | string | Input URL that was scraped |
| `domain` | string | Normalized domain name |
| `title` | string | Page title from the homepage |
| `description` | string | Meta description from the homepage |
| `emails` | string[] | All unique email addresses found (lowercase) |
| `phones` | string[] | Phone numbers from `tel:` links and international-format text |
| `address` | string | Company address from schema.org JSON-LD structured data |
| `social_links` | object | Social profiles found (linkedin, twitter, facebook, instagram, youtube, xing) |
| `pages_crawled` | integer | Number of pages visited |
| `scraped_at` | string | ISO timestamp of when the record was produced |
Tips
- Contact and impressum pages are checked first — the scraper prioritizes `/contact`, `/kontakt`, `/impressum`, `/about`, `/team`, and similar paths
- 15 pages covers most SMB and mid-market sites — most businesses expose all contact info within the first 15 pages
- Increase to 30–50 pages for large corporate sites or to find role-specific emails (hr@, press@)
- One record per website — all emails, phones, and social links from a site are consolidated into a single row, making it easy to use in spreadsheets or CRMs
- Address from structured data — address is only populated when the site includes schema.org JSON-LD markup (common on SMB and local business sites)
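Because every site yields exactly one consolidated record, mapping a dataset item to a spreadsheet or CRM row is straightforward. A sketch, using the field names from the Output Fields table; the `to_csv_row` helper and the semicolon-joining convention are assumptions, not part of the actor:

```python
# Flatten one consolidated contact record into a flat spreadsheet row.
# Field names follow the Output Fields table; joining with "; " is an
# illustrative convention, not something the actor prescribes.
def to_csv_row(item: dict) -> dict:
    row = {
        "domain": item.get("domain", ""),
        "title": item.get("title", ""),
        "emails": "; ".join(item.get("emails", [])),
        "phones": "; ".join(item.get("phones", [])),
        "address": item.get("address", ""),
    }
    # One extra column per social network found (linkedin, twitter, ...)
    for network, link in (item.get("social_links") or {}).items():
        row[network] = link
    return row
```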
Related Actors
Enrich with more data
- Contact Details Scraper — same core extraction, alternative entry point
- Google Maps Email Scraper — find businesses via Maps then extract emails
Lead generation sources
- B2B Leads Scraper — Clutch and G2 company directories
- Yellow Pages Scraper — US local business listings
- Y Combinator Scraper — startup company leads
Issues & Feature Requests
If something is not working or you're missing a feature, please open an issue and we'll look into it.