Linkedin Employee Extractor

Extract employee profiles from LinkedIn company people search using authenticated requests (requires session cookies)

Pricing: from $5.00 / 1,000 results
Rating: 0.0 (0 reviews)
Developer: Bhojraj Pilaniya (Maintained by Community)

Actor stats: 0 bookmarked · 6 total users · 3 monthly active users · last modified 7 days ago

LinkedIn Employee Extractor & API

This project is unofficial and not affiliated with LinkedIn.

A lightweight LinkedIn employee extractor that uses your session cookies to fetch people profiles (no browser automation needed).

Features

  • Authenticated Requests: Uses li_at and JSESSIONID cookies.
  • Fast & Lightweight: No Selenium/Puppeteer overhead.
  • API Server: Built-in HTTP server to trigger scrapes remotely.
  • Pagination Support: Fetch all available results for a company/role.
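The cookie-based authentication above can be sketched as follows. The exact headers the actor sends are not published; LinkedIn's web API conventionally expects the JSESSIONID value (minus its quotes) repeated as a `csrf-token` header, but treat the names and values below as illustrative assumptions, not the actor's actual code:

```python
# Illustrative sketch (not the actor's source): the li_at and JSESSIONID
# cookies are attached to each request, and JSESSIONID doubles as the
# CSRF token LinkedIn expects in a separate header.

def build_auth_headers(li_at: str, jsessionid: str) -> dict:
    """Build cookie and CSRF headers for an authenticated LinkedIn request."""
    csrf = jsessionid.strip('"')  # DevTools often shows the value wrapped in quotes
    return {
        "cookie": f'li_at={li_at}; JSESSIONID="{csrf}"',
        "csrf-token": csrf,
        "accept": "application/json",
    }

headers = build_auth_headers("AQED...", '"ajax:1234567890"')
```

Because only plain HTTP headers are involved, no browser automation is needed, which is what keeps the actor lightweight.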

Setup

1. Install dependencies

$ pip install -r requirements.txt

2. Get your LinkedIn cookies

  1. Open Chrome and go to linkedin.com
  2. Make sure you're logged in
  3. Open DevTools (F12 or Cmd+Option+I)
  4. Go to the Application tab → Cookies → https://www.linkedin.com
  5. Copy these two values:
    • li_at: Your session token.
    • JSESSIONID: Your CSRF token (remove quotes if present, e.g., ajax:...).

3. Configure .env

Create a .env file based on the example:

GEO_URN="102713980" # India (optional)
DELAY_BETWEEN_REQUESTS=2
PORT=8000
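For illustration, here is how the three entries above parse into settings. The project may well use python-dotenv or similar instead, so this parser is only a sketch of the file format:

```python
# Hypothetical .env parser, included only to show how the example entries
# above map to settings; the actor's real loading mechanism may differ.
def parse_env(text: str) -> dict:
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    return settings

example = '''
GEO_URN="102713980" # India (optional)
DELAY_BETWEEN_REQUESTS=2
PORT=8000
'''
cfg = parse_env(example)
```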

Usage

Option 1: Run as API Server

Start the server:

$ python server.py

Send a scraping request:

curl -X POST http://localhost:8000/scrape \
  -H "Content-Type: application/json" \
  -d '{
    "company_url": "https://www.linkedin.com/company/google",
    "tags": ["software engineer"],
    "max_pages": 0,
    "li_at": "YOUR_LI_AT_COOKIE",
    "jsessionid": "YOUR_JSESSIONID_COOKIE"
  }'

Note: If you provide li_at and jsessionid in the request body, they will override the environment variables.
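The same request can be issued from Python instead of curl. The sketch below uses only the standard library to stay dependency-free; it is a client-side example, not the actor's own code:

```python
# Python equivalent of the curl call above, using stdlib urllib.
import json
import urllib.request

payload = {
    "company_url": "https://www.linkedin.com/company/google",
    "tags": ["software engineer"],
    "max_pages": 0,  # 0 = fetch all available pages
    "li_at": "YOUR_LI_AT_COOKIE",
    "jsessionid": "YOUR_JSESSIONID_COOKIE",
}

req = urllib.request.Request(
    "http://localhost:8000/scrape",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# To send it (server.py must be running):
#     with urllib.request.urlopen(req) as resp:
#         results = json.load(resp)
```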

Option 2: Run as CLI Script

Edit .env with your target COMPANY_URLS and TAGS, then run:

$ python scraper.py
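The CLI mode presumably iterates over each configured company and tag, pausing between requests per DELAY_BETWEEN_REQUESTS. A minimal sketch, assuming comma-separated COMPANY_URLS and TAGS values (the real .env format and scraper entry point may differ):

```python
# Hypothetical outline of the CLI loop; scrape_people is a placeholder name.
import os
import time

companies = [u.strip() for u in os.environ.get("COMPANY_URLS", "").split(",") if u.strip()]
tags = [t.strip() for t in os.environ.get("TAGS", "").split(",") if t.strip()]
delay = float(os.environ.get("DELAY_BETWEEN_REQUESTS", "2"))

for company in companies:
    for tag in tags:
        # scrape_people(company, tag)  # hypothetical entry point in scraper.py
        time.sleep(delay)  # throttle to avoid triggering rate limits
```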

Deployment

Docker

Build and run the container:

docker build -t linkedin-scraper .
docker run -p 8000:8000 --env-file .env linkedin-scraper

Cloud (Render/Railway/Heroku)

  1. Push to GitHub (see instructions below).
  2. Connect your repo to a cloud provider.
  3. Add your Environment Variables (LI_AT, JSESSIONID, etc.) in the cloud dashboard.
  4. Deploy!