Contact Info Scraper: Pay Per Result

Extract emails, phone numbers, and social media links from any website. Supports 14 platforms including Facebook, Instagram, LinkedIn, Twitter/X, YouTube, TikTok, WhatsApp, and more. Powerful extraction, simple setup — just enter a URL and go.

  • Pricing: $6.00 / 1,000 results
  • Rating: 3.1 (21 reviews)
  • Developer: ВAH (Maintained by Community)
  • Actor stats: 25 bookmarks · 396 total users · 19 monthly active users · last modified a day ago

Contact Info Scraper — Extract Emails, Phone Numbers & Social Media from Any Website

Enter any website URL, and this Actor will automatically browse through its pages to collect all available contact information — email addresses, phone numbers, and social media links. No technical knowledge required.


What does Contact Info Scraper do?

Contact Info Scraper is a powerful tool that visits any website you specify, navigates through its pages (up to a configurable depth), and extracts all publicly available contact details. It organizes the results by page, and at the end, provides a deduplicated summary of all unique contact information found across the entire site.

Whether you need to find a company's email, collect phone numbers from a directory, or gather social media profiles for outreach, this Actor handles it all automatically.
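The Actor's crawler itself isn't published, but the depth-limited traversal it describes can be sketched in a few lines. The link graph below is a toy stand-in for a real website (the real Actor fetches and parses live pages):

```python
from collections import deque

# Hypothetical link graph used only for illustration.
LINKS = {
    "/": ["/about", "/contact"],
    "/about": ["/team"],
    "/contact": [],
    "/team": [],
}

def crawl(start, max_depth):
    """Breadth-first, depth-limited traversal, analogous to the Depth setting."""
    seen, order = {start}, []
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        order.append((url, depth))
        if depth == max_depth:
            continue  # depth limit reached: record the page, follow no links
        for nxt in LINKS.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return order

print(crawl("/", 1))  # the start page plus pages one link away
```

With `max_depth=0` only the starting page is visited, matching the documented behavior of `Depth: 0`.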


Key Features

  • Email Extraction — Discovers all email addresses on a website, with smart filtering to exclude invalid or placeholder addresses.

  • Phone Number Extraction — Identifies phone numbers across 200+ countries and regions. Results are split into two confidence levels:

    • phone_numbers — High-confidence, validated phone numbers
    • uncertain_phone_numbers — Potential phone numbers that may need manual verification
  • 14 Social Media Platforms — Automatically detects and categorizes links to: Twitter/X, Facebook, Instagram, LinkedIn, YouTube, TikTok, Pinterest, GitHub, Reddit, Snapchat, WhatsApp, Telegram, Medium, and Discord.

  • Auto Deduplication & Summary — After scraping completes, a summary record is automatically generated containing all unique emails, phone numbers, and social media links found across every page — no manual cleanup needed.

  • Concurrent Processing — Multiple pages are processed simultaneously (configurable 1-10 workers), significantly reducing scraping time.

  • URL Filtering — Focus only on the pages you care about (e.g., /contact, /about, /team) and skip irrelevant ones (e.g., /blog, /cart). Saves time and cost.

  • Domain Locking — Restrict scraping to the same domain as your starting URL, preventing the crawler from following external links.

  • Auto Retry — If a page fails to load, the Actor automatically retries up to 2 times to ensure data completeness.
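The URL-filtering feature above boils down to keyword matching against the URL path. This is a sketch of the documented include/exclude behavior, not the Actor's actual code:

```python
from urllib.parse import urlparse

def should_crawl(url, include=(), exclude=()):
    """Mimic url_include_patterns / url_exclude_patterns as documented:
    exclude wins, and an empty include list means 'crawl everything'."""
    path = urlparse(url).path
    if any(pat in path for pat in exclude):
        return False
    if include and not any(pat in path for pat in include):
        return False
    return True

print(should_crawl("https://example.com/contact", include=["/contact", "/about"]))
```

For example, with `include=["/contact"]` a `/pricing` page is skipped, and with `exclude=["/blog"]` every blog post is skipped regardless of the include list.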


Use Cases

  • Sales & Lead Generation — Quickly build contact lists of potential clients by scraping their company websites.
  • Market Research — Collect publicly available contact information from industry websites to understand your competitive landscape.
  • Recruitment — Gather team member and HR contact details from company career pages and about pages.
  • PR & Media Outreach — Find contact information for journalists, bloggers, and influencers for partnership and promotion.
  • Directory Building — Build industry-specific contact databases from multiple websites at scale.

Input Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `Urls` | string[] | Target website URLs to scrape. You can provide multiple URLs. |
| `Depth` | integer | How many layers of links to follow from the starting URL. `0` = only the starting page. |
| `Total_num` | integer | Maximum number of page results to collect (up to 10,000). The summary record is always appended at the end. |
| `Lock_domain` | boolean | When enabled, only scrapes pages within the same domain as the starting URL. |
| `Concurrency` | integer | Number of pages to process in parallel (1-10). Higher values are faster but use more resources. |
| `Max_urls_per_depth` | integer | Maximum number of URLs to process at each depth level. |
| `url_include_patterns` | string[] | Only crawl URLs whose path contains at least one of these keywords. Example: `["/contact", "/about", "/team"]`. Leave empty to crawl all. |
| `url_exclude_patterns` | string[] | Skip URLs whose path contains any of these keywords. Example: `["/blog", "/news", "/login"]`. |
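Put together, a typical run input might look like the following (field names as documented above; the URL and the specific values are example choices, not defaults):

```json
{
  "Urls": ["https://wikimediafoundation.org/contact/"],
  "Depth": 1,
  "Total_num": 100,
  "Lock_domain": true,
  "Concurrency": 3,
  "Max_urls_per_depth": 50,
  "url_include_patterns": ["/contact", "/about", "/team"],
  "url_exclude_patterns": ["/blog", "/news"]
}
```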

Output Example

Each scraped page produces a record with "type": "page". After all pages are processed, a final "type": "summary" record is appended with deduplicated data from all pages.

Page Record

```json
{
  "type": "page",
  "start_url": "https://wikimediafoundation.org/contact/",
  "domain": "wikimediafoundation.org",
  "depth": 0,
  "referrer_url": "https://wikimediafoundation.org/contact/",
  "current_url": "https://wikimediafoundation.org/contact/",
  "page_title": "Contact – Wikimedia Foundation",
  "emails": [
    "donate@wikimedia.org",
    "info@wikimedia.org",
    "legal@wikimedia.org",
    "answers@wikimedia.org",
    "business@wikimedia.org"
  ],
  "phone_numbers": [
    "+1 415-839-6885",
    "+1 415-347-8540"
  ],
  "uncertain_phone_numbers": [
    "+84 912 032 336",
    "+34 912 03 23 36"
  ],
  "twitter": [],
  "facebook": [
    "https://www.facebook.com/wikimediafoundation/",
    "https://www.facebook.com/wikipedia"
  ],
  "instagram": [
    "https://www.instagram.com/wikimediafoundation/",
    "https://www.instagram.com/wikipedia/"
  ],
  "linkedin": [
    "https://www.linkedin.com/company/wikimedia-foundation",
    "https://www.linkedin.com/company/wikipedia-the-free-encyclopedia/"
  ],
  "youtube": [
    "https://www.youtube.com/@wikipedia"
  ],
  "tiktok": [
    "https://www.tiktok.com/@wikipedia"
  ],
  "pinterest": [],
  "github": [],
  "reddit": [],
  "snapchat": [],
  "whatsapp": [],
  "telegram": [],
  "medium": [],
  "discord": []
}
```

Summary Record

The last record in the dataset. Contains all unique contact information collected across every page:

```json
{
  "type": "summary",
  "start_url": "https://wikimediafoundation.org/contact/",
  "domain": "wikimediafoundation.org",
  "pages_crawled": 100,
  "emails": [
    "accessibility@wikimedia.org",
    "answers@wikimedia.org",
    "business@wikimedia.org",
    "donate@wikimedia.org",
    "info@wikimedia.org",
    "legacy@wikimedia.org",
    "legal@wikimedia.org",
    "press@wikimedia.org",
    "privacy@wikimedia.org",
    "publicpolicy@lists.wikimedia.org",
    "recruiting@wikimedia.org",
    "wikipedialibrary@wikimedia.org",
    "wmfbequest@cck-law.com"
  ],
  "phone_numbers": [
    "+1 201-920-2020",
    "+1 415-347-8540",
    "+1 415-839-6885",
    "+49 202 32024",
    "+49 2903 7109",
    "+82 2-903-7109",
    "+86 20 1920 2020",
    "+91 40 2023 2024"
  ],
  "uncertain_phone_numbers": [
    "+34 873 02 44 88",
    "+34 912 03 23 36",
    "+358 20 232024",
    "+358 29 037109",
    "+39 02 023 2024",
    "+39 02 903 7109",
    "+45 20 10 20 11",
    "+45 20 11 20 12",
    "+45 20 12 20 13",
    "+45 20 12 20 14",
    "+45 20 13 10 12",
    "+45 20 13 20 14",
    "+45 20 14 20 15",
    "+45 20 15 20 16",
    "+45 20 16 20 15",
    "+45 20 16 20 17",
    "+45 20 17 20 16",
    "+45 20 17 20 18",
    "+45 20 18 20 17",
    "+45 20 18 20 19",
    "+45 20 19 05 18",
    "+45 20 19 20 18",
    "+45 20 19 20 20",
    "+45 20 20 20 19",
    "+45 20 20 20 21",
    "+45 20 20 20 24",
    "+45 20 21 20 20",
    "+45 20 21 20 22",
    "+45 20 22 20 21",
    "+45 20 22 20 23",
    "+45 20 22 20 25",
    "+45 20 22 40 20",
    "+45 20 23 20 22",
    "+45 20 23 20 24",
    "+45 20 24 20 23",
    "+45 20 24 20 25",
    "+45 20 25 20 24",
    "+46 20 23 20 24",
    "+46 290 371 09",
    "+47 02019",
    "+47 02025",
    "+52 201 920 2020",
    "+66 2 023 2024",
    "+66 2 903 7109",
    "+84 873 024 488",
    "+84 912 032 336",
    "+971 800 00"
  ],
  "twitter": [
    "http://twitter.com/intent/tweet?text=1%20billion%2B%20unique%20devices%20access%20Wikimedia%20sites%20every%20month%20https://wikimediafoundation.org/technology?pagename=technology",
    "http://twitter.com/intent/tweet?text=40%20million%20articles%20across%20nearly%20300%20languages%20on%20Wikipedia%20https://wikimediafoundation.org/research?pagename=research",
    "http://twitter.com/intent/tweet?text=6,000%20views%20per%20second%20across%20Wikimedia%20sites%20https://wikimediafoundation.org/technology?pagename=technology"
  ],
  "facebook": [
    "http://www.facebook.com/sharer/sharer.php?s=100&p%5Burl%5D=https://wikimediafoundation.org/news/2025/11/09/7-reasons-you-should-donate-to-wikipedia/&&p%5Btitle%5D=",
    "http://www.facebook.com/sharer/sharer.php?s=100&p%5Burl%5D=https://wikimediafoundation.org/news/2026/01/15/wikipedia-celebrates-25years/&&p%5Btitle%5D=",
    "http://www.facebook.com/sharer/sharer.php?s=100&p%5Burl%5D=https://wikimediafoundation.org/news/2026/02/02/open-knowledge-journalism-awards-applications-open/&&p%5Btitle%5D=",
    "http://www.facebook.com/sharer/sharer.php?s=100&p%5Burl%5D=https://wikimediafoundation.org/news/2026/03/02/the-winners-of-wiki-loves-earth-2025/&&p%5Btitle%5D=",
    "https://www.facebook.com/wikimediafoundation/",
    "https://www.facebook.com/wikipedia"
  ],
  "instagram": [
    "https://www.instagram.com/wikimediafoundation/",
    "https://www.instagram.com/wikipedia/"
  ],
  "linkedin": [
    "https://www.linkedin.com/company/wikimedia-foundation",
    "https://www.linkedin.com/company/wikipedia-the-free-encyclopedia/",
    "https://www.linkedin.com/shareArticle?mini=true&url=https://wikimediafoundation.org/news/2025/11/09/7-reasons-you-should-donate-to-wikipedia/&title=",
    "https://www.linkedin.com/shareArticle?mini=true&url=https://wikimediafoundation.org/news/2026/01/15/wikipedia-celebrates-25years/&title=",
    "https://www.linkedin.com/shareArticle?mini=true&url=https://wikimediafoundation.org/news/2026/02/02/open-knowledge-journalism-awards-applications-open/&title=",
    "https://www.linkedin.com/shareArticle?mini=true&url=https://wikimediafoundation.org/news/2026/03/02/the-winners-of-wiki-loves-earth-2025/&title=",
    "https://www.linkedin.com/showcase/wikimediapolicy"
  ],
  "youtube": [
    "https://www.youtube.com/@wikipedia",
    "https://www.youtube.com/watch?v=lnsBPcmbFLM&embeds_referring_euri=https%3A%2F%2Fwikimediafoundation.org%2F&source_ve_path=OTY3MTQ"
  ],
  "tiktok": [
    "https://www.tiktok.com/@wikipedia"
  ],
  "pinterest": [],
  "github": [
    "https://github.com/slaporte"
  ],
  "reddit": [],
  "snapchat": [],
  "whatsapp": [],
  "telegram": [],
  "medium": [
    "https://medium.com/wikimedia-policy/bills-that-claim-to-advance-safety-online-will-cause-more-online-harm-its-time-to-pay-attention-48746b6dc7e7?source=friends_link&sk=7fdfd8900cc36d41bf353dd26c957de2",
    "https://medium.com/wikimedia-policy/gonzalez-vs-google-whats-at-stake-for-wikipedia-563162797553?source=friends_link&sk=b587ea9afd38932649d4f7312208c3c2",
    "https://medium.com/wikimedia-policy/what-happened-in-the-c%C3%A9sar-do-pa%C3%A7o-lawsuit-f91b7fb5e54b",
    "https://medium.com/wikimedia-policy/wikimedia-foundation-tells-ftc-to-rein-in-commercial-surveillance-e2ef85bddb34?source=friends_link&sk=59f499c4db688b2b4c96615f1fc7f902"
  ],
  "discord": []
}
```

Tip: Filter the dataset by type = "summary" to instantly get all deduplicated contact information, or by type = "page" to view per-page details.
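Once the dataset is downloaded as JSON, that split is a couple of lines of Python. The `records` list below is a small hypothetical stand-in for your downloaded items:

```python
# Hypothetical downloaded dataset items, trimmed to a few fields.
records = [
    {"type": "page", "current_url": "https://example.com/", "emails": ["a@example.com"]},
    {"type": "page", "current_url": "https://example.com/contact", "emails": ["b@example.com"]},
    {"type": "summary", "pages_crawled": 2, "emails": ["a@example.com", "b@example.com"]},
]

# The summary is the single deduplicated record; pages are everything else.
summary = next(r for r in records if r["type"] == "summary")
pages = [r for r in records if r["type"] == "page"]

print(summary["emails"])
```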


Output Fields

| Field | Description |
| --- | --- |
| `type` | Record type: `page` for per-page results, `summary` for the deduplicated summary. |
| `start_url` | The original URL you provided as input. |
| `domain` | The domain of the start URL. |
| `depth` | The scraping depth at which this page was found (`0` = starting page). |
| `referrer_url` | The URL that linked to this page. |
| `current_url` | The URL of the scraped page. |
| `page_title` | The title of the scraped page. |
| `emails` | Email addresses found on the page (or all unique emails in the summary). |
| `phone_numbers` | High-confidence phone numbers, validated for correctness. |
| `uncertain_phone_numbers` | Potential phone numbers that may require manual verification. |
| `twitter`, `facebook`, `instagram`, ... | Social media profile URLs found, categorized by platform (14 platforms supported). |
| `pages_crawled` | Total number of pages scraped (only present in the summary record). |
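The summary record's deduplication can also be reproduced client-side, for instance to merge page records from several separate runs into one contact list. A minimal sketch over hypothetical page records:

```python
def merge_pages(pages, fields=("emails", "phone_numbers")):
    """Collect the unique values per field across page records, sorted for
    stable output — the same idea as the Actor's summary record."""
    merged = {f: sorted({v for p in pages for v in p.get(f, [])}) for f in fields}
    merged["pages_crawled"] = len(pages)
    return merged

pages = [
    {"emails": ["info@example.com", "sales@example.com"], "phone_numbers": []},
    {"emails": ["info@example.com"], "phone_numbers": ["+1 202-555-0100"]},
]
print(merge_pages(pages))
```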

How to Use

  1. Enter your target URLs — Provide one or more website URLs you want to scrape.
  2. Adjust settings — Set the scraping depth, result limit, and optionally add URL filters to focus on specific pages.
  3. Run and download — Click Start, wait for the Actor to finish, and download your results in JSON, CSV, or Excel format.
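The same steps can be driven programmatically. The sketch below only builds the run input; the actual call via the `apify-client` package is left as a comment because it needs an API token, and the Actor ID placeholder is not the real identifier:

```python
# Build the run input that the console form produces for you.
run_input = {
    "Urls": ["https://wikimediafoundation.org/contact/"],
    "Depth": 1,
    "Total_num": 100,
    "Lock_domain": True,
}

# Illustrative only — "<ACTOR_ID>" is a placeholder, not the real ID:
# from apify_client import ApifyClient
# client = ApifyClient("<YOUR_API_TOKEN>")
# run = client.actor("<ACTOR_ID>").call(run_input=run_input)
# items = client.dataset(run["defaultDatasetId"]).list_items().items

print(run_input)
```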

Tips for Best Results

  • Start small — Test with Depth: 5 and Total_num: 100 first to preview the results before running a large-scale scrape.
  • Use URL filters — Set url_include_patterns to ["/contact", "/about", "/team", "/people"] to focus on pages most likely to contain contact information. This saves time and reduces cost.
  • Check the summary — Filter by type = "summary" in the dataset to get a clean, deduplicated list of all contacts found across the entire website.
  • Adjust concurrency — If you're on a lower-memory plan, reduce Concurrency to 2 or 3 for stable performance.

Notes

Phone number and email formats vary significantly across countries and regions. This Actor does its best to accurately identify and validate contact information, but cannot guarantee 100% accuracy for every result. The phone_numbers field contains high-confidence results, while uncertain_phone_numbers may include entries that require manual verification.
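The Actor's actual validation logic is not published, but the two-bucket split can be illustrated with a toy heuristic. This is emphatically not the real classifier (a production validator would check country-specific numbering plans, e.g. via a library such as `phonenumbers`); it only shows why some matches land in `uncertain_phone_numbers`:

```python
import re

def bucket_phone(candidate):
    """Toy two-bucket classifier, for illustration only — NOT the Actor's
    real validation. International prefix + enough digits -> high confidence;
    a plausible digit count without one -> uncertain; otherwise rejected."""
    digits = re.sub(r"\D", "", candidate)
    if not 7 <= len(digits) <= 15:            # outside E.164 length limits
        return None                            # not a phone number at all
    if candidate.strip().startswith("+") and len(digits) >= 10:
        return "phone_numbers"                 # internationally formatted
    return "uncertain_phone_numbers"           # plausible but ambiguous

print(bucket_phone("+1 415-839-6885"))
```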

Additionally, this Actor can only access publicly available pages. Pages that require a login, account registration, or any form of authentication cannot be scraped.