
LinkedIn Contact Data Enrichment Tool

Pricing

from $10.00 / 1,000 enriched leads


Enrich any LinkedIn profile URL into a comprehensive contact record. Returns 500+ structured data points — full experience history, education, skills, certifications — plus verified email. Built for data teams, researchers, and enrichment pipelines.


Rating: 0.0 (0 reviews)

Developer: Raised Pro (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 6
  • Monthly active users: 3
  • Last modified: 6 days ago

Convert LinkedIn profile URLs into comprehensive, structured contact records. This actor retrieves 500+ data points per profile — complete employment history, education, skills, certifications, languages, and more — and appends a verified email address sourced and confirmed independently.

Designed for data engineers, market researchers, and analysts who need reliable, structured LinkedIn data in bulk without managing cookies, sessions, or LinkedIn API access.

Data Coverage

Identity & Contact

  • Full name (first, middle, last parsed)
  • Verified email + deliverability status
  • LinkedIn URL + vanity slug
  • Location (city, region, country)
  • Languages spoken

Professional

  • Current title and employer
  • Complete employment history: company name, industry, headcount, start/end dates, description, location
  • Seniority indicators from title parsing

Company (Current Employer)

  • Company LinkedIn URL
  • Website / company domain
  • Industry classification
  • Employee count range
  • Headquarters location

Education

  • Institution name and URL
  • Degree and field of study
  • Start and end year

Content & Influence

  • LinkedIn follower count
  • Connection count
  • Profile summary / about section
  • Featured content links

Signals

  • Also-viewed profiles (useful for building audience graphs)
  • Skills list (up to 50)
  • Certifications with issuing organization and dates
  • Volunteer activity

Technical Specs

  • Input format: JSON array of LinkedIn profile URLs (standard or Sales Navigator)
  • Output format: Structured JSON per profile, exportable as CSV from Apify dataset
  • Rate: ~40 profiles/minute
  • Pagination: Not applicable (profile endpoint returns complete record in one call)
  • Caching: Optional — use cached data for profiles recently enriched, or force fresh pull

Input Schema

{
  "profile_urls": ["https://www.linkedin.com/in/example/"],
  "find_email": true,
  "include_skills": true,
  "include_extra": true,
  "use_cache": "if-recent"
}

use_cache options:

  • if-present — return cached data if any exists (fastest, lowest cost)
  • if-recent — return cached data only if refreshed within 7 days (default)
  • never — always fetch live data (freshest, highest cost)
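As a sketch, the three cache modes slot into the run input like this (the helper function and its defaults are illustrative, not part of the actor):

```python
def build_run_input(profile_urls, mode="if-recent", find_email=True):
    """Assemble a run input matching the schema above.

    `mode` must be one of the use_cache options: "if-present",
    "if-recent" (the default), or "never".
    """
    assert mode in ("if-present", "if-recent", "never")
    return {
        "profile_urls": list(profile_urls),
        "find_email": find_email,
        "include_skills": True,
        "include_extra": True,
        "use_cache": mode,
    }

# Force a live pull (freshest data, highest cost):
live_input = build_run_input(
    ["https://www.linkedin.com/in/example/"], mode="never"
)
```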

Output Record Structure

{
  "full_name": "Alexandra Chen",
  "first_name": "Alexandra",
  "last_name": "Chen",
  "headline": "Data Engineering Lead at Stripe",
  "location": "New York, NY",
  "country": "US",
  "linkedin_url": "https://www.linkedin.com/in/alexandrachen/",
  "email": "a.chen@stripe.com",
  "email_status": "valid",
  "email_confidence": 94,
  "millionverifier_result": "ok",
  "millionverifier_quality": "high",
  "current_company": "Stripe",
  "current_title": "Data Engineering Lead",
  "company_domain": "stripe.com",
  "company_linkedin_url": "https://www.linkedin.com/company/stripe/",
  "industry": "Financial Services",
  "employee_count": "5001-10000",
  "followers": 8300,
  "connections": 500,
  "about": "Building data infrastructure at scale...",
  "skills": ["Apache Spark", "dbt", "Snowflake", "Python"],
  "experience": [
    {
      "title": "Data Engineering Lead",
      "company": "Stripe",
      "start_date": "2021-09",
      "end_date": null,
      "location": "New York, NY",
      "description": "..."
    }
  ],
  "education": [...],
  "certifications": [...],
  "also_viewed": [...]
}
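Downstream, the email_status and email_confidence fields in each record make it easy to keep only deliverable addresses. A minimal sketch (the 90-point threshold is a working assumption, not a recommendation from the actor):

```python
def deliverable(records, min_confidence=90):
    """Keep records whose email was verified as valid with
    confidence at or above the threshold."""
    return [
        r for r in records
        if r.get("email_status") == "valid"
        and r.get("email_confidence", 0) >= min_confidence
    ]

records = [
    {"email": "a.chen@stripe.com", "email_status": "valid", "email_confidence": 94},
    {"email": "b@example.com", "email_status": "catch_all", "email_confidence": 60},
]
good = deliverable(records)  # keeps only the first record
```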

Enrichment Pipeline Integration

This actor is designed to sit inside larger data pipelines:

Apify API call (Python):

import apify_client

client = apify_client.ApifyClient("YOUR_APIFY_TOKEN")
run = client.actor("ACTOR_SLUG").call(run_input={
    "profile_urls": profile_url_list,
    "find_email": True,
    "include_extra": True,
})
dataset = client.dataset(run["defaultDatasetId"]).list_items()

Works natively with Clay (Apify integration), n8n (HTTP Request node), Make, and Zapier.
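For tools without a native integration, the dataset items can be flattened to CSV in a few lines. A sketch that keeps a handful of top-level fields (the field selection is an example; nested fields like experience are left out here):

```python
import csv
import io

def items_to_csv_text(items, fields=("full_name", "email",
                                     "current_company", "current_title")):
    """Render selected top-level fields of each enriched record as CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields),
                            extrasaction="ignore")
    writer.writeheader()
    for item in items:
        writer.writerow({k: item.get(k, "") for k in fields})
    return buf.getvalue()

sample = [{"full_name": "Alexandra Chen", "email": "a.chen@stripe.com",
           "current_company": "Stripe",
           "current_title": "Data Engineering Lead"}]
csv_text = items_to_csv_text(sample)
```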

Pricing

Pay-per-event — charged only on successful record delivery:

  • Profile + verified email: $0.020/record ($20/1,000)
  • Profile only (no email): $0.008/record ($8/1,000)
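A back-of-envelope spend estimate follows from the two rates, assuming records where an email is found bill at the higher rate and the rest at profile-only (that split, and the 65% find rate, are assumptions; the FAQ cites 55-75% for typical B2B profiles):

```python
def estimated_cost(n_profiles, email_find_rate=0.65,
                   with_email=0.020, profile_only=0.008):
    """Expected spend in dollars for a batch of n_profiles."""
    found = n_profiles * email_find_rate
    return round(found * with_email + (n_profiles - found) * profile_only, 2)

cost = estimated_cost(1000)  # 650 * $0.020 + 350 * $0.008 = $15.80
```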

Data Quality Notes

  • Emails are found using domain-matched lookup against the company domain extracted from the profile, then independently verified by MillionVerifier
  • Profiles with incomplete company data (freelancers, students, self-employed) have lower email find rates
  • include_extra: true adds a gender signal, birthdate (where public), full industry classification, and extended experience descriptions at +1 enrichment credit per profile
  • also_viewed is useful as a starting point for building audience lookalike graphs
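One way to use also_viewed is to fold the enriched records into an adjacency map. A sketch, assuming also_viewed comes back as a list of profile URLs (adjust if the actor returns richer objects):

```python
from collections import defaultdict

def build_audience_graph(records):
    """Map each profile URL to the set of profiles LinkedIn surfaced
    alongside it, as a starting point for a lookalike graph."""
    graph = defaultdict(set)
    for r in records:
        src = r.get("linkedin_url")
        for neighbor in r.get("also_viewed", []):
            if src and neighbor:
                graph[src].add(neighbor)
    return dict(graph)

records = [{"linkedin_url": "https://www.linkedin.com/in/a/",
            "also_viewed": ["https://www.linkedin.com/in/b/"]}]
graph = build_audience_graph(records)
```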

FAQ

How fresh is the profile data? By default the actor uses cached data refreshed within 7 days. Set use_cache: never to force live pulls on every profile.

What's the email find rate? Typically 55–75% for B2B profiles at companies with standard domain structures. Lower for freelancers, students, and profiles with incomplete employment sections.

Can I export to BigQuery or Snowflake? Yes — export the Apify dataset as JSON or NDJSON and load via your standard ingestion pipeline.
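Converting the dataset items to NDJSON for a warehouse load is a one-liner; a sketch:

```python
import json

def to_ndjson(items):
    """One JSON object per line -- the newline-delimited format that
    BigQuery and Snowflake loaders accept."""
    return "\n".join(json.dumps(item, ensure_ascii=False) for item in items)

ndjson = to_ndjson([{"full_name": "Alexandra Chen"}, {"full_name": "B"}])
```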

Is there a bulk input method? Yes. The profile_urls field accepts an array. You can submit up to 500 URLs per run. For larger batches, call the actor multiple times or use the Apify API to queue runs programmatically.
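For batches above the 500-URL cap, splitting the list and queueing one run per chunk looks roughly like this (the queueing loop is illustrative and assumes an authenticated apify_client as in the example above):

```python
def chunk(urls, size=500):
    """Split a URL list into batches no larger than the per-run cap."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

batches = chunk([f"https://www.linkedin.com/in/user{i}/" for i in range(1200)])
# 1200 URLs -> 3 batches; each batch becomes its own run, e.g.:
# for batch in batches:
#     client.actor("ACTOR_SLUG").call(run_input={"profile_urls": batch})
```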