# 🏢 LinkedIn Company Scraper Pro - Bulk Data \[NO COOKIES] ✅ (`unseenuser/linkedin-company-scraper`) Actor

Bulk-scrape LinkedIn company data - name, industry, employees, HQ, website, founding year - without logging in. No cookies, no Chrome extension, no risk to your account. Built for B2B sales, recruiters, and ABM platforms. Outputs clean JSON for HubSpot, Salesforce, Clay, n8n.

- **URL**: https://apify.com/unseenuser/linkedin-company-scraper.md
- **Developed by:** [Unseen User](https://apify.com/unseenuser) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded, 2 bookmarks
- **User rating**: 5.00 out of 5 stars

## Pricing

$5.50 / 1,000 results

This Actor is priced per event: you are not charged for Apify platform usage, only a fixed fee for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, covering all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is always written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use the official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use the official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# MacOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).
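As a concrete integration example, here is a minimal Python sketch that runs this Actor synchronously through the Apify REST API's `run-sync-get-dataset-items` endpoint and returns the dataset rows. The input shape follows the Quick Start section of the README below; the token is a placeholder you fill in.

```python
import json
import urllib.request

def build_profile_input(companies):
    # Input shape for get_company mode (see the Quick Start section).
    return {"mode": "get_company", "profileCompanies": companies}

def run_actor_sync(token, run_input, actor_id="unseenuser~linkedin-company-scraper"):
    """Run the Actor synchronously and return its dataset items as a list."""
    url = (
        f"https://api.apify.com/v2/acts/{actor_id}"
        f"/run-sync-get-dataset-items?token={token}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For long runs (hundreds of companies per search), prefer starting the Actor asynchronously and polling, since synchronous runs are subject to HTTP timeout limits.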


# README

## 🏢 LinkedIn Company Scraper Pro - Profiles + Search + Posts (No Login)

> **No cookies. No login. No account-ban risk.** Cookie-based tools like PhantomBuster and Dux-Soup hijack your real LinkedIn session - one wrong move and your account gets restricted. This LinkedIn company scraper never touches your account, never asks for cookies, and never burns your personal LinkedIn quota.

Full company intelligence in one Actor: rich profiles with funding data, ICP-filtered company search, and recent company posts. The **3-in-1 LinkedIn company data API** for B2B sales prospecting, ABM target lists, investor research, and competitive intelligence. **Bulk LinkedIn scraper** with up to 1,000 companies per search and pay-per-result pricing.

---

### ⚖️ This vs. cookie-based LinkedIn scrapers

| | **This Actor** | **Cookie-based scrapers** (PhantomBuster, Dux-Soup, etc.) |
|---|---|---|
| **Account safety** | ✅ Zero risk - no login, no cookies, no browser session | ❌ Real risk of account restriction or permanent ban |
| **Setup time** | ✅ ~2 min (paste one API key) | ❌ Browser extension + cookie export + session refresh |
| **Bulk scale** | ✅ 1,000+ companies per search, 50+ posts per company | ❌ Hard rate limits tied to your personal account |
| **Rate limits** | ✅ Server-side shared infra, retry/backoff handled | ❌ Capped by your LinkedIn account quota |
| **Cost per company** | $0.001 (search) / $0.005 (profile) | $0.04-$0.10+ per profile |
| **Funding data (Crunchbase)** | ✅ Included | ❌ Profile only |
| **Concurrency / scheduling** | ✅ Apify-native cron, runs hands-off | ❌ Browser tab must stay open |
| **Output formats** | ✅ JSON, JSONL, CSV, Excel, RSS, XML, HTML | ⚠️ Often CSV-only |

If you want a **no cookies LinkedIn scraper** that won't put your account at risk, this is the cleanest path.

---

### ⚡ Why This Actor?

- **3 Modes in 1** - Profile fetch, advanced search, company posts
- **Funding data included** - Crunchbase integration shows funding rounds, investors, valuation
- **Rich filtering** - Industry, headcount, location, founding year, revenue
- **Up to 1,000 companies per search** - Pagination handled automatically
- **No login required** - Public data only
- **Pay-per-result**

Whether you need to **scrape LinkedIn companies** in bulk, run a **LinkedIn company search** by industry, or build your own **LinkedIn company data API** layer for an internal tool - this Actor covers it. It's also a one-stop **LinkedIn company finder** for sales teams who need fresh ICP-matched accounts on a schedule.

---

### 🎯 Use Cases

- **B2B Sales Prospecting** - Build target account lists matching your ICP. Pair with our LinkedIn Profile Extractor: scrape company → search company employees → enrich top 50 profiles.
- **Investor Research** - Track companies in your investment thesis using `crunchbaseFundingData` (rounds, investors, amounts, last announce date).
- **Competitive Intelligence** - Watch competitor headcount, hiring patterns, and post activity.
- **Account-Based Marketing (ABM prospecting tool)** - Build precise target lists from ICP filters.
- **M&A Research** - Identify acquisition targets by size, industry, geography.
- **Recruiting** - Map target companies before sourcing candidates.
- **B2B company database building** - Bulk-export verified company data into your own warehouse.

---

### 🔗 Pipeline guide - Build a complete LinkedIn intelligence stack

This LinkedIn company scraper is the **company-side anchor** of a four-step intelligence pipeline. Each step's output feeds directly into the next.

```text
┌─────────────────────────┐
│  1. THIS ACTOR          │ search_companies → 200 ICP-matched companies
│     Find target accounts│ get_company      → full profile + funding
└──────────┬──────────────┘
           │ pass linkedinUrl / id
           ▼
┌─────────────────────────┐
│  2. Profile Scraper +   │ Find decision-makers at those companies
│     Email Enrichment    │ → VPs, Heads, Managers + verified emails
└──────────┬──────────────┘
           │
           ▼
┌─────────────────────────┐
│  3. Jobs Scraper        │ See who they're hiring (intent signal)
│                         │ → open roles = budget allocation = buying signals
└──────────┬──────────────┘
           │
           ▼
┌─────────────────────────┐
│  4. Post Search Scraper │ Track their thought leaders
│                         │ → personalization fuel for outreach
└─────────────────────────┘
````

**Concrete workflow:**

1. **Find companies (this Actor, search mode):** `industryIds: ["96"]`, `companySize: ["201-500"]`, `location: "United States"` → 200 matches.
2. **Find people:** Pass each `linkedinUrl` to [LinkedIn Profile Scraper + Email Enrichment](https://apify.com/unseenuser/LinkedIn-Profile) → get the VP-level decision-makers with verified emails.
3. **Read intent:** Pass each `id` to [LinkedIn Jobs Scraper](https://apify.com/unseenuser/LinkedIn-Jobs-Scraper) → an open Director of Sales role = budget for sales tooling.
4. **Personalize:** Pass each top profile to [LinkedIn Post Search Scraper](https://apify.com/unseenuser/LinkedIn-Post-Seach-Scraper) → reference their last post in your cold email.

**[See all 16 scrapers by unseenuser →](https://apify.com/unseenuser)**

***

### 🚀 Quick Start

Each mode has its own dedicated input section in the form. Pick a mode, fill the matching section, ignore the rest.

#### Mode 1: `get_company` - Rich profile + funding

```json
{
  "mode": "get_company",
  "profileCompanies": [
    "https://www.linkedin.com/company/anthropic/",
    "openai",
    "Google"
  ]
}
```

**Smart input routing:**

- `linkedin.com/company/...` URL → fetched by URL
- single lowercase token (e.g. `openai`) → fetched by `universalName` (fastest)
- free text (e.g. `Google`) → search-and-pick-best-match
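The routing above can be sketched as a small classifier. This is an illustrative approximation of the rules, not the Actor's actual code:

```python
import re

def classify_company_input(value: str) -> str:
    """Approximate the smart input routing described above."""
    if "linkedin.com/company/" in value:
        return "url"            # fetched by URL
    if re.fullmatch(r"[a-z0-9][a-z0-9._-]*", value):
        return "universalName"  # single lowercase token: fastest path
    return "search"             # free text: search and pick best match
```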

#### Mode 2: `search_companies` - ICP filters

```json
{
  "mode": "search_companies",
  "searchKeywords": "AI infrastructure",
  "location": "United States",
  "companySize": ["51-200", "201-500", "501-1000"],
  "industryIds": ["96"],
  "maxResults": 200
}
```

`geoId` overrides `location` if both are provided. Industry IDs come from the [LinkedIn V2 industry codes list](https://github.com/HarvestAPI/linkedin-industry-codes-v2/blob/main/linkedin_industry_code_v2_all_eng.csv).

#### Mode 3: `company_posts` - Recent activity

```json
{
  "mode": "company_posts",
  "postsCompanies": ["anthropic", "openai"],
  "maxPostsPerCompany": 50,
  "scrapePostedLimit": "month"
}
```

`scrapePostedLimit` is applied client-side after fetch - it preserves accurate pagination (unlike LinkedIn's native date filter).
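A client-side date filter of this kind might look like the following sketch. The window lengths are illustrative; the Actor's exact cutoffs may differ slightly:

```python
from datetime import datetime, timedelta, timezone

# Illustrative window per scrapePostedLimit value.
WINDOWS = {
    "1h": timedelta(hours=1),
    "24h": timedelta(hours=24),
    "week": timedelta(weeks=1),
    "month": timedelta(days=30),
    "3months": timedelta(days=90),
    "6months": timedelta(days=180),
    "year": timedelta(days=365),
}

def filter_posts(posts, limit, now=None):
    """Keep posts whose postedAt.timestamp (Unix ms) falls inside the window."""
    now = now or datetime.now(timezone.utc)
    cutoff_ms = (now - WINDOWS[limit]).timestamp() * 1000
    return [p for p in posts if p["postedAt"]["timestamp"] >= cutoff_ms]
```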

***

### ⚙️ Inputs

| Field | Type | Used in mode | Notes |
|---|---|---|---|
| `mode` | enum | all | `get_company` \| `search_companies` \| `company_posts` |
| `profileCompanies` | string\[] | `get_company` | URL, universal name, or free text. One per line. |
| `postsCompanies` | string\[] | `company_posts` | URL, universal name, or free text. One per line. |
| `searchKeywords` | string | `search_companies` | Free-text keywords |
| `location` | string | `search_companies` | e.g. "Australia" - ignored if `geoId` is set |
| `geoId` | string | `search_companies` | LinkedIn geo ID (overrides location) |
| `companySize` | string\[] | `search_companies` | `1-10`, `11-50`, `51-200`, `201-500`, `501-1000`, `1001-5000`, `5001-10000`, `10001+` |
| `industryIds` | string\[] | `search_companies` | LinkedIn V2 industry IDs |
| `maxResults` | integer | `search_companies` | Default 100, max 1000 |
| `scrapePostedLimit` | enum | `company_posts` | `1h` \| `24h` \| `week` \| `month` \| `3months` \| `6months` \| `year` |
| `maxPostsPerCompany` | integer | `company_posts` | Default 50 |
| `includeSimilarCompanies` | boolean | all | Default `false`. Keep LinkedIn's "People also viewed" list (slimmed). |

***

### 📤 Output

#### `get_company` - Full company record (every field shown)

After auto-cleanup (nulls and empty fields stripped), a typical record contains:

```json
{
  "id": "1035",
  "universalName": "anthropic",
  "name": "Anthropic",
  "tagline": "AI safety company",
  "website": "https://www.anthropic.com",
  "linkedinUrl": "https://www.linkedin.com/company/anthropic/",
  "logo": "https://media.licdn.com/.../company-logo_400_400/...",
  "logos": [
    { "url": "...", "width": 400, "height": 400, "expiresAt": 1779321600000 },
    { "url": "...", "width": 200, "height": 200, "expiresAt": 1779321600000 },
    { "url": "...", "width": 100, "height": 100, "expiresAt": 1779321600000 }
  ],
  "backgroundCover": "https://media.licdn.com/.../company-background_10000/...",
  "backgroundCovers": [
    { "url": "...", "width": 1584, "height": 396, "expiresAt": 1778626800000 }
  ],
  "foundedOn": { "year": 2021, "month": 1, "day": 1 },
  "employeeCount": 1100,
  "employeeCountRange": { "start": 1001, "end": 5000 },
  "followerCount": 952183,
  "description": "Anthropic is an AI safety company...",
  "headquarter": {
    "country": "US",
    "city": "San Francisco",
    "geographicArea": "CA",
    "line1": "548 Market St",
    "postalCode": "94104",
    "description": "Headquarters",
    "parsed": {
      "text": "San Francisco, CA, United States",
      "countryCode": "US",
      "country": "United States",
      "countryFull": "United States of America",
      "state": "California",
      "city": "San Francisco"
    }
  },
  "locations": [
    {
      "country": "US", "city": "San Francisco", "geographicArea": "CA",
      "line1": "548 Market St", "postalCode": "94104", "headquarter": true,
      "parsed": { "text": "San Francisco, CA, United States", "countryCode": "US", "country": "United States", "state": "California", "city": "San Francisco" }
    }
  ],
  "industries": [
    { "id": "70", "name": "Research", "urn": "urn:li:fsd_industry:70", "title": "Research Services", "hierarchy": "Professional Services > Research Services" }
  ],
  "specialities": ["AI safety", "ML research", "alignment", "interpretability"],
  "active": true,
  "pageVerified": true,
  "pageType": "COMPANY",
  "autoGenerated": false,
  "callToActionUrl": "https://www.anthropic.com",
  "jobSearchUrl": "https://www.linkedin.com/jobs/search?geoId=92000000&f_C=1035",
  "crunchbaseFundingData": {
    "numberOfFundingRounds": 8,
    "lastFundingRound": {
      "localizedFundingType": "Series E",
      "moneyRaised": { "amount": "4000000000", "currencyCode": "USD" },
      "announcedOn": { "year": 2024, "month": 11, "day": 22 },
      "leadInvestors": [{ "name": "Lightspeed Venture Partners" }],
      "numberOfOtherInvestors": 6,
      "fundingRoundUrl": "https://www.crunchbase.com/funding_round/...",
      "investorsUrl": "https://www.crunchbase.com/funding_round/.../investors"
    },
    "organizationUrl": "https://www.crunchbase.com/organization/anthropic",
    "fundingRoundsUrl": "https://www.crunchbase.com/organization/anthropic/funding_rounds",
    "updatedAt": 1730851200
  },
  "inputQuery": "https://www.linkedin.com/company/anthropic/",
  "scrapedAt": "2026-05-05T12:00:00.000Z"
}
```

**Fields you might see when present:** `phone`, `tagline`, `announcement`, `similarOrganizations` (opt-in), `peopleStats`, `repostId`, `repost`, `repostedBy`, `newsletterUrl`, `newsletterTitle`.

#### `search_companies` - Lightweight row

```json
{
  "id": "1035",
  "universalName": "anthropic",
  "name": "Anthropic",
  "industries": "Research Services",
  "followers": "950K followers",
  "summary": "AI safety company...",
  "linkedinUrl": "https://www.linkedin.com/company/anthropic",
  "location": { "linkedinText": "San Francisco, CA" },
  "page": 1,
  "scrapedAt": "2026-05-05T12:00:00.000Z"
}
```

#### `company_posts` - Post record

```json
{
  "id": "urn:li:activity:7193...",
  "content": "We're hiring researchers...",
  "linkedinUrl": "https://www.linkedin.com/feed/update/urn:li:activity:7193...",
  "author": { "name": "Anthropic", "linkedinUrl": "..." },
  "postedAt": { "timestamp": 1714521600000, "postedAgoText": "2 weeks ago" },
  "engagement": { "likes": 1240, "comments": 87, "shares": 33 },
  "companyInput": "anthropic",
  "scrapedAt": "2026-05-05T12:00:00.000Z"
}
```

***

### 📦 Output schema (top-level)

The Actor ships a top-level **output schema** (`.actor/output_schema.json`, `actorOutputSchemaVersion: 1`) that tells Apify Console where to find the run's results:

| Output | Where it lives | Template |
|---|---|---|
| 📁 **Default Dataset** | All scraped records (companies / hits / posts) | `{{links.publicDatasetUrl}}` |
| 📊 **Run Summary** | KV store key `OUTPUT` (mode, counts, durations, per-input breakdown) | `{{links.publicKeyValueStoreUrl}}/records/OUTPUT` |

For the per-record schema and the three pre-configured table views, see the **Dataset Views** section below.

***

### 📊 Dataset Views

The Actor ships with a full **dataset schema** (`.actor/dataset_schema.json`) that defines three pre-configured Apify Console table views - pick the one that matches your mode:

| View | Mode | Columns |
|---|---|---|
| 🏢 **Company Profiles** | `get_company` | Company, Handle, Tagline, Industries, Employees, Followers, HQ City, HQ Country, Website, LinkedIn, Founded, Funding rounds, Last round type, Last $ raised, Specialities, Active, Verified, Scraped at |
| 🔍 **Search Results** | `search_companies` | Company, Handle, Industries, Location, Followers, Summary, LinkedIn, Page, Scraped at |
| 📰 **Company Posts** | `company_posts` | Company, Posted by, Content, Posted (relative), Date, Likes, Comments, Shares, Post URL, Linked article, Newsletter, Scraped at |

Nested fields (e.g. `headquarter.city`, `crunchbaseFundingData.lastFundingRound.moneyRaised.amount`, `engagement.likes`) are **flattened automatically** for clean tabular display and CSV export.

You can also export the dataset in any standard Apify format: **JSON, JSONL, CSV, Excel, HTML, RSS, XML**.
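The flattening behaves roughly like this sketch (dotted keys for nested dicts; the Actor's actual column mapping lives in `.actor/dataset_schema.json`):

```python
def flatten(record, prefix=""):
    """Flatten nested dicts into dotted keys,
    e.g. {"engagement": {"likes": 3}} -> {"engagement.likes": 3}."""
    out = {}
    for key, value in record.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path + "."))
        else:
            out[path] = value
    return out
```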

***

### 🗃️ Key-Value Store Layout

The Actor organizes the run's KV store using prefixed collections (defined in `.actor/key_value_store_schema.json`):

| Collection | Key prefix | What's inside |
|---|---|---|
| 📥 **Run Input** | `INPUT` | The exact input record this run was started with. |
| 📊 **Run Summary** | `OUTPUT` | End-of-run summary: mode, total records, success/error counts, per-input breakdown, duration, notes. Written automatically. |
| ❌ **Errors** | `errors-<slug>` | One JSON record per failed input (bad URL, missing company, upstream error). |
| 🗂️ **Raw search pages** *(opt-in)* | `search-page-<n>` | Raw API response for each search page. Only written when env var `DEBUG_RAW=1`. Useful for debugging pagination. |
| 🗂️ **Raw post pages** *(opt-in)* | `posts-page-<slug>-<n>` | Raw API response for each company-posts page. Only written when `DEBUG_RAW=1`. |

The `OUTPUT` summary is a great hook for downstream automation - read it after the run finishes to know exactly what happened without scanning the dataset.

Example `OUTPUT` record:

```json
{
  "mode": "get_company",
  "startedAt": "2026-05-05T12:00:00.000Z",
  "finishedAt": "2026-05-05T12:00:14.000Z",
  "durationSeconds": 14,
  "totalRecords": 3,
  "successes": 3,
  "errors": 0,
  "perInput": {
    "anthropic": { "records": 1 },
    "openai": { "records": 1 },
    "Google": { "records": 1 }
  }
}
```
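For example, a downstream job can read the summary and gate on it before syncing the dataset. The record-fetch endpoint below is from the Apify REST API; the "clean run" criterion is an assumption you may want to loosen:

```python
import json
import urllib.request

def fetch_run_summary(token, kv_store_id):
    """GET the OUTPUT record from the run's key-value store."""
    url = (
        f"https://api.apify.com/v2/key-value-stores/{kv_store_id}"
        f"/records/OUTPUT?token={token}"
    )
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def is_clean_run(summary):
    # A run is "clean" when every input produced records and nothing errored.
    return summary["errors"] == 0 and summary["successes"] == summary["totalRecords"]
```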

***

### 🌐 Live API (Standby mode)

The Actor ships with a full **OpenAPI 3 schema** (`.actor/openapi.json`) describing a synchronous request/response API for the same three operations:

| Endpoint | Mode equivalent | Purpose |
|---|---|---|
| `GET /` | - | Health check |
| `GET /company?input=<...>` | `get_company` | Single-company profile lookup |
| `GET /search?search=...&geoId=...&companySize=...` | `search_companies` | Filtered search |
| `GET /posts?input=<...>&max=50` | `company_posts` | Recent posts for one company |

Authenticated via Apify's standard `Authorization: Bearer <APIFY_TOKEN>` header. Apify Console renders this schema as interactive docs once the Actor is started in Standby mode.
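A small helper for building Standby requests might look like this. The hostname is assigned by Apify when the Actor starts in Standby mode; `example.apify.actor` in the test is a placeholder:

```python
from urllib.parse import urlencode

def standby_url(base, endpoint, **params):
    """Build a request URL for the Standby endpoints listed above."""
    query = urlencode(params)
    return f"{base}{endpoint}?{query}" if query else f"{base}{endpoint}"
```

Send the resulting URL with the `Authorization: Bearer <APIFY_TOKEN>` header.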

***

### 🔌 Integrations - send results anywhere

The Apify dataset is JSON-native, which means it plugs into every modern automation tool. Replace `<TOKEN>`, `<ACTOR>`, and `<DATASET_ID>` with your values.

#### HubSpot (CRM)

Push each company as a HubSpot company record via the Apify-HubSpot integration, or roll your own with the HubSpot REST API:

```bash
curl https://api.hubapi.com/crm/v3/objects/companies \
  -H "Authorization: Bearer <HUBSPOT_TOKEN>" \
  -H "Content-Type: application/json" \
  -d @- <<'JSON'
{
  "properties": {
    "name": "Anthropic",
    "domain": "anthropic.com",
    "linkedin_company_page": "https://www.linkedin.com/company/anthropic/",
    "numberofemployees": 1100,
    "industry": "Research Services"
  }
}
JSON
```

#### Salesforce (CRM)

Use Salesforce Bulk API 2.0 to upsert in batches. Apify's [Salesforce integration](https://apify.com/integrations) can do this without code:

```bash
curl https://<INSTANCE>.my.salesforce.com/services/data/v60.0/jobs/ingest \
  -H "Authorization: Bearer <SF_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{ "object": "Account", "operation": "upsert", "externalIdFieldName": "LinkedIn_URL__c", "contentType": "CSV" }'
```

Then PUT the dataset CSV (Apify gives you a direct CSV link per dataset).

#### Clay (data enrichment / outbound)

Pull this Actor's dataset into Clay as a data source:

1. In Clay, **+ Add Source → HTTP API**.
2. URL: `https://api.apify.com/v2/datasets/<DATASET_ID>/items?token=<TOKEN>&format=json&clean=1`
3. Map columns: `name`, `linkedinUrl`, `website`, `employeeCount`, `industries[0].name`, `crunchbaseFundingData.lastFundingRound.localizedFundingType`.

#### Airtable

```bash
# Append every dataset row to an Airtable base
curl "https://api.apify.com/v2/datasets/<DATASET_ID>/items?token=<TOKEN>&clean=1" \
  | jq -c '.[] | { fields: { Name: .name, LinkedIn: .linkedinUrl, Employees: .employeeCount, Industry: .industries[0].name } }' \
  | while read -r row; do
      curl -X POST "https://api.airtable.com/v0/<BASE>/<TABLE>" \
        -H "Authorization: Bearer <AIRTABLE_TOKEN>" \
        -H "Content-Type: application/json" \
        -d "{\"records\":[$row]}"
    done
```

#### n8n

Add the **Apify** node → operation `Run Actor and Get Dataset Items` → Actor `unseenuser/linkedin-company-scraper`. Pipe the output node into HubSpot / Slack / Postgres / Sheets nodes - no code needed.

#### Make (Integromat)

Use the Apify module **Run an Actor** → output → connect to any Make app (Sheets, Notion, Pipedrive, Slack, etc.). The Apify module exposes the dataset as iterable items.

#### Zapier

Trigger: **Apify - Run Finished**. Action: **<Your CRM> - Create/Update Record**. Map dataset fields to CRM fields in the visual editor. Useful for slow trickle integrations (one row at a time).

#### Webhooks (any system)

Apify can POST a webhook on `ACTOR.RUN.SUCCEEDED`. The payload includes `defaultDatasetId` - your endpoint then GETs the rows and stores them however you like:

```json
{
  "eventType": "ACTOR.RUN.SUCCEEDED",
  "resource": {
    "id": "<RUN_ID>",
    "actId": "<ACTOR_ID>",
    "defaultDatasetId": "<DATASET_ID>",
    "status": "SUCCEEDED"
  }
}
```
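The receiving endpoint only needs the dataset ID from that payload. A minimal helper (URL format per the Apify dataset items endpoint):

```python
def dataset_items_url(payload, token, fmt="json"):
    """Extract defaultDatasetId from a run webhook payload and build the items URL."""
    dataset_id = payload["resource"]["defaultDatasetId"]
    return (
        f"https://api.apify.com/v2/datasets/{dataset_id}/items"
        f"?token={token}&format={fmt}&clean=1"
    )
```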

***

### 🕐 Schedule examples

- **Daily competitor posts.** Schedule mode `company_posts` with your competitor list and `scrapePostedLimit: "24h"` once per day.
- **Weekly investor watchlist refresh.** Schedule mode `get_company` for your target accounts to capture new funding rounds.
- **One-shot ICP build.** Run mode `search_companies` with filters, then feed results into mode `get_company` for full profiles.

***

### ❓ FAQ

#### Data & coverage

**Q: Does this include Crunchbase funding data?**
A: For companies that have it on their LinkedIn page (most well-funded companies do). When absent, `crunchbaseFundingData` is omitted from the cleaned record.

**Q: Why is `companyId` faster than `universalName`?**
A: The upstream API resolves IDs without an extra lookup. If you've previously fetched a company and have its `id`, prefer that for posts/related calls.

**Q: `location` vs `geoId` - which should I use?**
A: `location` accepts free text (`Australia`, `Tel Aviv`); `geoId` is precise. If both are set, `geoId` wins. Look up geo IDs via Harvest's `/linkedin/geo-id-search` endpoint.

**Q: Why are some image URLs expiring?**
A: All `expiresAt` fields are Unix timestamps. LinkedIn CDN URLs expire (typically 7-30 days). Download and re-host images in your own storage if you need durable links.

**Q: What fields are NOT returned?**
A: This is a public-data-only LinkedIn company scraper. The following are **not** included, by design:

- ❌ Email addresses of employees (use [LinkedIn Profile Scraper + Email Enrichment](https://apify.com/unseenuser/LinkedIn-Profile))
- ❌ Direct phone numbers of individual employees
- ❌ Private connections / network graph
- ❌ Premium / Sales Navigator-only fields (intent data, growth signals)
- ❌ Anything behind login walls (private events, employee surveys, internal comms)
- ❌ Salaries, internal headcount breakdowns
- ❌ Anything LinkedIn has marked private at the page level

If you need any of those, they're either out of scope for a no-cookies scraper, available in a sister Actor, or simply not exposed by LinkedIn publicly.

#### Legality & compliance

**Q: Is scraping LinkedIn legal?**
A: **Scraping publicly accessible data is generally lawful in the United States.** The leading case is *hiQ Labs v. LinkedIn* (9th Circuit, 2019; reaffirmed 2022), which held that scraping data publicly available on the open web does **not** violate the Computer Fraud and Abuse Act (CFAA) because there is no "unauthorized access" to a system that doesn't require authentication.

That said:

- Other jurisdictions vary. Check your local law.
- *hiQ* did not give a free pass for breach-of-contract claims under LinkedIn's User Agreement, copyright, or database rights.
- The Actor accesses **only public LinkedIn pages** via a third-party API (HarvestAPI), without logging into any account or bypassing technical protections.
- **You** remain responsible for how you use the output. See the **Master ToS** below.

**Q: GDPR / UK GDPR compliance?**
A: Company data (firmographics, funding, posts) is largely **not personal data** under GDPR. Where the output incidentally includes personal data (e.g., named post authors), you become the **data controller** for any further processing. Standard GDPR obligations apply: lawful basis (typically legitimate interest with documented balancing test for B2B prospecting), transparent notice if you reach out to data subjects, honor SAR / erasure / objection requests. The Publisher does not store or retain output - we are a software vendor, not a data processor.

**Q: CCPA compliance?**
A: California residents have rights under the CCPA. If you're processing data of California residents (e.g., named US-based employees), apply the standard CCPA workflow: notice at collection, opt-out of sale, deletion on request. Public information is partially exempt from CCPA's definition of "personal information" (Cal. Civ. Code § 1798.140(o)(2)) but consult your counsel for your specific use case.

**Q: Account ban risk - is my LinkedIn account at risk?**
A: **No.** This Actor doesn't touch your LinkedIn account. There's no cookie, no login, no browser session under your name. Cookie-based tools (PhantomBuster, Dux-Soup, LinkedIn Helper, Octopus CRM, etc.) drive your real account at scale - that's how they work, and that's why LinkedIn restricts them. This Actor uses an independent third-party API (HarvestAPI) operating on its own infrastructure. Your account never enters the picture.

**Q: Is this Actor official / affiliated with LinkedIn or Crunchbase?**
A: **No.** This is an unofficial scraper. LinkedIn and Crunchbase are unaffiliated trademarks of their respective owners. Public data only.

**Q: Does HarvestAPI have a Terms of Service I should look at?**
A: Yes - https://harvest-api.com. The Publisher passes your input through their API and returns their response unchanged.

***

### 🔧 Technical Details

- **API Provider:** [HarvestAPI](https://harvest-api.com)
- **Endpoints used:**
  - `GET /linkedin/company`
  - `GET /linkedin/company-search`
  - `GET /linkedin/company-posts`
- **Authentication:** `X-API-Key` header (set the `HARVEST_API_KEY` environment variable on the Actor in Apify Console)
- **Retry policy:** 3 retries, exponential backoff (1s → 2s → 4s) on `429` and `5xx`
- **Pagination:** automatic via `page` and `paginationToken`
- **Schemas shipped:**
  - Input - `.actor/INPUT_SCHEMA.json`
  - Output (top-level) - `.actor/output_schema.json`
  - Dataset (per-record + views) - `.actor/dataset_schema.json`
  - Key-Value Store - `.actor/key_value_store_schema.json`
  - Web-server OpenAPI - `.actor/openapi.json`
- **Optional env vars:**
  - `DEBUG_RAW=1` - also save raw upstream responses to the KV store under `search-page-*` / `posts-page-*` keys (helpful when debugging filters or pagination).
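The retry policy above can be sketched like this. It is an illustration of the described behavior, not the Actor's actual source:

```python
import time
import urllib.error
import urllib.request

RETRYABLE = {429} | set(range(500, 600))

def fetch_with_retry(url, retries=3, base_delay=1.0, opener=urllib.request.urlopen):
    """Retry on 429/5xx with exponential backoff (1s -> 2s -> 4s), then give up."""
    for attempt in range(retries + 1):
        try:
            with opener(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE or attempt == retries:
                raise
            time.sleep(base_delay * 2 ** attempt)
```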

***

### 🔗 Related scrapers

Build a complete LinkedIn intelligence stack:

- **[LinkedIn Profile Scraper + Email Enrichment](https://apify.com/unseenuser/LinkedIn-Profile)** - enrich the people at these companies (decision-makers + verified emails)
- **[LinkedIn Jobs Scraper](https://apify.com/unseenuser/LinkedIn-Jobs-Scraper)** - see who they're hiring (intent signal for sales / recruiting)
- **[LinkedIn Post Search Scraper](https://apify.com/unseenuser/LinkedIn-Post-Seach-Scraper)** - find their thought leaders (personalization fuel)

**[See all 16 scrapers by unseenuser →](https://apify.com/unseenuser)**

***

## 📜 Master Terms of Service V4.0

**Version:** 4.0
**Effective Date:** May 5, 2026

### 0. ACCEPTANCE BY USE - IMPORTANT

**Read this section first.**

These Terms of Service ("Terms") form a binding legal agreement between you ("User," "you," "your") and **UnseenUser**, the Publisher of this Apify actor ("UnseenUser," "the Publisher," "we," "us," "our").

#### 0.1 How You Accept These Terms

You accept these Terms by any of the following actions, each of which constitutes a clear, affirmative act of acceptance:

- **(a) Running the Actor** - Initiating any execution of the Actor on the Apify platform
- **(b) Using any output returned by the Actor** for any purpose
- **(c) Continuing to access the Actor's listing or documentation** after these Terms are visible

#### 0.2 Continuing Acceptance

Each time you run the Actor or use its outputs, you reaffirm your acceptance of the then-current Terms. If you do not agree to these Terms or any subsequent update, you must stop using the Actor immediately.

#### 0.3 No Anonymous Acceptance

You cannot disclaim acceptance by:

- Failing to read these Terms before running the Actor
- Running the Actor through automated systems
- Sharing your Apify account with others who may not have read these Terms

By the act of running the Actor on Apify, you bind yourself, your organization (if applicable), and any individuals or systems acting on your behalf or under your authority.

#### 0.4 If You Do Not Accept

If you do not agree to these Terms, you must not run the Actor. No use is authorized without acceptance.

***

### PREAMBLE - UNDERSTANDING THE ARCHITECTURE

Before using the Actor, please understand the technical architecture of the service:

#### The Data Flow

```
You (User) → Apify Platform → Actor (software) → Third-Party API → Source Platform
                                                       ↓
You (User) ← Apify Platform ← Actor (software) ← Third-Party API
```

#### What Each Party Does

- **You (the User):** Run the Actor on the Apify platform with input parameters you choose
- **Apify:** Operates the cloud infrastructure that hosts and executes Actors. Apify is a Czech-incorporated company (Apify Technologies s.r.o.) governed by its own Terms of Service.
- **The Publisher (us):** Publishes software code (the Actor) on Apify's platform. The Actor is a thin wrapper that translates your input into requests to a third-party API and returns the API's responses to you. The Publisher does not operate scraping infrastructure. The Publisher does not store or retain data returned by the Actor. The Publisher does not see, log, or process the personal data of any individuals returned in the Actor's outputs beyond what is incidental to passing the data through.
- **Third-Party API Provider:** HarvestAPI (https://harvest-api.com) or Scrape Creators (https://scrapecreators.com). These are independent third-party companies that operate scraping infrastructure and return data from source platforms.
- **Source Platform:** LinkedIn, TikTok, YouTube, Reddit, Linktree, etc. These are the platforms whose publicly visible data is accessed by the Third-Party API Providers.

#### Why This Matters

Your relationship with the Publisher is that of a software user to a software vendor. The Publisher has the responsibilities of a software vendor (functional code, accurate documentation) and the limits of one (the Publisher is not responsible for how you use the data you obtain).

***

These Terms operate alongside but do not replace:

- **Apify's Terms of Service** and Acceptable Use Policy (governing your relationship with Apify)
- **HarvestAPI Terms of Service** and **Scrape Creators Terms of Service** (governing the underlying data infrastructure)
- **Source Platform terms** (LinkedIn, TikTok, etc.) governing the public data accessed
- **Applicable law** in your jurisdiction and the jurisdictions of data subjects

These Terms incorporate the Actor-specific addendum published in each Actor's individual listing ("Addendum"). In the event of a conflict, the more restrictive provision applies.

***

### 1. NATURE OF THE SERVICE

#### 1.1 What the Actor Is

The Actor is a software program published on the Apify platform. Each Actor:

- (a) Accepts structured input from you on the Apify platform
- (b) Translates that input into HTTP requests to a third-party API operated by HarvestAPI or Scrape Creators
- (c) Receives HTTP responses from that third-party API
- (d) Returns the response data to you in a structured format on the Apify platform

The Actor's source code is hosted on Apify's infrastructure. The Actor runs in Apify's cloud, not on the Publisher's servers. The Publisher operates no servers running the Actor.

#### 1.2 What the Actor Is Not

The Actor is **not**:

- (a) A scraping tool - the Publisher does not operate scraping infrastructure, proxies, headless browsers, or fake accounts
- (b) A direct connection to any source platform - connections to source platforms are made by HarvestAPI / Scrape Creators
- (c) A data storage or data retention service - the Publisher does not maintain a database of any data the Actor returns
- (d) A licensed access channel to LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta, Linktree, or any other source platform
- (e) Affiliated with, endorsed by, sponsored by, or authorized by any source platform

#### 1.3 The Publisher's Limited Role

The Publisher's role is limited to:

- (a) Designing and writing the Actor's source code
- (b) Publishing the Actor on the Apify Store
- (c) Maintaining the Actor (updating it when API providers change schemas)
- (d) Providing documentation and customer support via Apify's contact mechanism

The Publisher is a software vendor, similar to a developer who publishes an app on the Apple App Store or Google Play Store. The Publisher is not a data provider, data broker, data processor, or data controller for purposes of GDPR, CCPA, Israel's Privacy Protection Law, or equivalent.

#### 1.4 The Third-Party API Providers' Role

HarvestAPI (https://harvest-api.com) and Scrape Creators (https://scrapecreators.com) are independent third-party companies. They:

- (a) Operate the actual data scraping infrastructure
- (b) Maintain relationships with source platforms (or accept the operational risk of accessing public data without such relationships)
- (c) Are themselves Apify publishers (HarvestAPI publishes 9+ Actors directly; Scrape Creators publishes 10+)
- (d) Provide their own Terms of Service governing their operations
- (e) Are responsible for compliance obligations relating to the data collection itself

The Publisher is a customer of these providers. The Publisher is not their agent, partner, or representative.

***

### 2. WHO MAY USE THE ACTOR

#### 2.1 Eligibility

You may use the Actor only if:

- (a) You are at least 18 years old or the age of majority in your jurisdiction
- (b) You have legal capacity to enter into binding contracts
- (c) You are not located in or resident of a country subject to comprehensive sanctions by the United States, European Union, United Kingdom, or Israel
- (d) You are not on any prohibited persons list

#### 2.2 User Representations

By using any Actor, you represent and warrant that:

- (a) **Truthful identity:** Information you provide about your identity and intended use is accurate
- (b) **Lawful intent:** Your intended use complies with applicable law in your jurisdiction
- (c) **Source platform compliance:** You will independently comply with the Terms of Service of any source platform whose data you obtain through the Actor
- (d) **Data subject rights:** Where Actor outputs include personal data, you will respect data subject rights under applicable law
- (e) **No prohibited use:** You will not use the Actor for any of the purposes prohibited in Section 4

These representations are continuous - they must remain true throughout your use.

***

### 3. PERMITTED USES

The Actor may be used for any lawful purpose, including:

- Market research and competitive analysis
- Academic research
- Journalism and investigative reporting
- Internal business intelligence
- Brand monitoring
- Recruitment research where consistent with applicable employment law
- Building products that further process publicly available information lawfully

Specific permitted uses for each Actor are described in that Actor's individual listing and Addendum.

***

### 4. PROHIBITED USES

You may not use the Actor for any of the following:

#### 4.1 Illegal Activity

Activity illegal under the law of your jurisdiction, the User's jurisdiction, or the jurisdiction of any data subjects.

#### 4.2 Harassment, Stalking, and Personal Targeting

- Compiling profiles for harassment, stalking, or doxxing
- Tracking individuals' movements or activities without their knowledge
- Building profiles of journalists, activists, dissidents, or vulnerable populations for retaliatory purposes

#### 4.3 Discrimination

- Using outputs for discriminatory employment, lending, housing, or insurance decisions based on protected characteristics
- Building lists for discriminatory purposes

#### 4.4 Spam and Unsolicited Commercial Communication

- Sending unsolicited marketing in violation of CAN-SPAM, CASL, GDPR, PECR, the Israeli Anti-Spam Law (Section 30A of the Communications Law), or equivalent laws
- Building "lead lists" from scraped contacts without proper consent infrastructure
- Reselling contact data for spam purposes

#### 4.5 Fraud and Deception

- Identity theft or impersonation
- Generation of fake reviews, testimonials, or coordinated inauthentic behavior
- Election interference or political disinformation
- Securities fraud

#### 4.6 Source Platform Abuse

- Using outputs to circumvent technical protection measures of source platforms
- Creating fake accounts on source platforms based on Actor outputs
- Vote manipulation, engagement manipulation, or platform algorithm gaming
- Building services that competitively substitute for source platforms

#### 4.7 Reselling the Actor's Service

- Reselling raw Actor outputs as your own data product or scraping-as-a-service
- Sharing your Apify credentials to provide third parties indirect access
- Building competing API services using Actor outputs

#### 4.8 AI Training Without Authorization

- Using Actor outputs as training data for commercial AI/ML models without separate licensing authority from the source platform

#### 4.9 Sensitive Targeting

- Specifically targeting or profiling based on health conditions, sexual orientation, religious beliefs, political opinions, or other sensitive characteristics
- Targeting children under 16 (or local age of consent for data processing)

#### 4.10 Privacy Law Violations

- Processing personal data of EU/UK/California/Israeli residents without complying with applicable privacy law
- Failing to honor data subject access, deletion, or objection requests
- Processing data for purposes incompatible with its publication context

***

### 5. SOURCE PLATFORM TERMS - YOUR RESPONSIBILITY

#### 5.1 Acknowledgment

The Actor accesses publicly visible data on third-party platforms ("Source Platforms") through the Third-Party API Providers (HarvestAPI / Scrape Creators). Source Platforms include LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta/Facebook, Linktree, Komi, Pillar, Linkbio, Linkme, and Amazon.

#### 5.2 Your Sole Responsibility

You acknowledge:

- (a) You are solely responsible for ensuring your downstream use of data obtained through the Actor complies with the Source Platform's Terms of Service
- (b) The Publisher makes no representation that any specific use is permitted under any Source Platform's terms
- (c) The Third-Party API Providers, not the Publisher, bear responsibility for the lawfulness of the data collection itself
- (d) You should review Source Platform terms before commercial use:
  - LinkedIn: https://www.linkedin.com/legal/user-agreement
  - TikTok: https://www.tiktok.com/legal/page/global/terms-of-service/en
  - YouTube: https://www.youtube.com/static?template=terms
  - X: https://twitter.com/en/tos
  - Reddit: https://www.redditinc.com/policies/user-agreement
  - Meta: https://www.facebook.com/legal/terms
  - Linktree: https://linktr.ee/s/terms/

#### 5.3 Cease-and-Desist Compliance

If you receive a cease-and-desist letter or other legal demand from a Source Platform regarding your use of Actor outputs, you must:

- (a) Cease the contested use immediately
- (b) Notify UnseenUser within 48 hours via UnseenUser's Apify profile contact form (https://apify.com/UnseenUser)
- (c) Cooperate with the Publisher as needed to mitigate
- (d) Not assert against the Publisher any claim arising from your inability to use the Actor for that Source Platform

***

### 6. DATA PROTECTION - REFLECTING ACTUAL ARCHITECTURE

#### 6.1 Roles Under Privacy Law

For purposes of GDPR, UK GDPR, CCPA, Israel's Privacy Protection Law (PPL) including Amendment 13, and equivalents:

- **You (the User)** are the Data Controller of any personal data you obtain through the Actor and subsequently process for your own purposes
- **HarvestAPI and Scrape Creators** are the entities that collect data from source platforms - they bear the responsibilities of data processors or controllers (depending on context) for the collection itself
- **The Publisher** acts solely as a software vendor, not as a data controller or processor, because the Publisher does not store, retain, or substantively process personal data - the Actor merely passes API responses through

#### 6.2 No Data Retention by the Publisher

The Publisher confirms:

- (a) The Publisher does not maintain a database of personal data obtained through the Actor
- (b) The Actor passes data from the Third-Party API directly to you on the Apify platform - data does not flow through the Publisher's infrastructure
- (c) Apify's standard execution and operational logging may include limited information about Actor runs (input parameters, run duration, data volume) - this is governed by Apify's own privacy practices
- (d) The Publisher does not access, view, or analyze your Actor outputs except as needed for technical support if you specifically share them with the Publisher

#### 6.3 Your Obligations as Data Controller

Where your use of the Actor involves processing personal data, you are responsible for:

- (a) Establishing a lawful basis for your processing (consent, legitimate interest with documented balancing test, contract, etc.)
- (b) Providing transparent notice to data subjects as required by applicable law
- (c) Honoring data subject access, rectification, erasure, restriction, and portability requests
- (d) Implementing appropriate security measures
- (e) Conducting Data Protection Impact Assessments where required
- (f) Appointing a Data Protection Officer if your operations require one
- (g) Registering databases with applicable supervisory authorities
- (h) Honoring opt-out requests for direct marketing
- (i) Implementing cross-border transfer safeguards where data crosses borders

#### 6.4 Israel's Amendment 13 - User Compliance

If your use of the Actor involves Israeli residents' personal data, you must comply with the Privacy Protection Law as amended (Amendment 13, effective August 14, 2025). These obligations are yours as the data controller, not the Publisher's as the software vendor.

#### 6.5 Sensitive Data Targeting Restrictions

You will not use the Actor to specifically target, profile, or build datasets focused on:

- Health or medical conditions
- Religious beliefs
- Political opinions
- Sexual orientation or gender identity
- Genetic or biometric data
- Criminal history
- Children under 16

***

### 7. INTELLECTUAL PROPERTY

#### 7.1 Actor Code

The Actor's source code, schemas, documentation, and branding are owned by the Publisher. You receive a limited, non-exclusive, non-transferable, revocable license to use the Actor for permitted purposes during your active subscription/run with Apify.

#### 7.2 Output Data

The Publisher claims no ownership over the public data the Actor returns. Source Platforms may have copyright, database rights, or other rights in their data; data subjects may have copyright in user-generated content. Your use of output data must respect these rights independently.

#### 7.3 Restrictions

You may not reverse engineer, decompile, or reuse the Actor's code in a competing actor.

#### 7.4 Feedback

Feedback you provide may be used by the Publisher to improve products without compensation to you.

***

### 8. PRICING AND PAYMENT

#### 8.1 Apify Platform Billing

Pricing is administered through Apify's pricing models. Apify processes all payments. Apify's payment terms govern refunds and disputes.

#### 8.2 Pricing Changes

The Publisher may change Actor pricing with at least 14 days' notice via the Actor's Apify listing.

#### 8.3 No Refunds for Misuse

If your access is suspended or terminated for breach of these Terms, you forfeit any unused balance and are not entitled to refunds.

***

### 9. SERVICE AVAILABILITY AND CHANGES

#### 9.1 No Uptime Guarantee

The Actor depends on:

- (a) The Apify platform
- (b) Underlying API providers (HarvestAPI, Scrape Creators)
- (c) Source Platforms' continued public accessibility

Any of these may change behavior, restrict access, or become unavailable without notice. The Publisher makes no uptime guarantees.

#### 9.2 Service Discontinuation

The Publisher may discontinue any Actor at any time. Reasonable notice will be provided when feasible.

***

### 10. DISCLAIMERS

#### 10.1 "AS IS" Service

THE ACTOR IS PROVIDED "AS IS" AND "AS AVAILABLE" WITHOUT WARRANTIES OF ANY KIND, INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR PURPOSE, NON-INFRINGEMENT, OR ACCURACY OF DATA.

#### 10.2 No Representation of Lawfulness

The Publisher makes no representation that your specific use of the Actor or the data it returns is lawful in your jurisdiction or under any Source Platform's terms. The burden of determining lawfulness for your use case is yours.

#### 10.3 No Endorsement of Source Content

Content returned by the Actor was created by third parties. The Publisher does not endorse, verify, or take responsibility for it.

***

### 11. LIMITATION OF LIABILITY

#### 11.1 Aggregate Liability Cap

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL THE AGGREGATE LIABILITY OF THE PUBLISHER FOR ALL CLAIMS RELATING TO THE ACTOR EXCEED THE GREATER OF:

- (a) ONE HUNDRED U.S. DOLLARS (US $100), OR
- (b) THE AMOUNTS YOU PAID THROUGH APIFY FOR USE OF THE ACTOR IN THE THREE (3) MONTHS IMMEDIATELY PRECEDING THE EVENT

#### 11.2 Excluded Damages

THE PUBLISHER IS NOT LIABLE FOR INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES, OR FOR LOSS OF PROFITS, REVENUE, OR DATA, EVEN IF ADVISED OF THE POSSIBILITY.

#### 11.3 Time Limit

Any claim must be brought within one (1) year of the event.

***

### 12. INDEMNIFICATION

#### 12.1 Your Indemnification of the Publisher

You agree to defend, indemnify, and hold harmless the Publisher from any:

- Claims arising from your use of the Actor
- Claims arising from your violation of these Terms
- Claims arising from your violation of any law (including privacy law)
- Claims arising from your violation of any Source Platform's Terms of Service
- Claims arising from your processing of personal data obtained through the Actor
- Reasonable attorneys' fees and costs of defending such claims

#### 12.2 Defense

The Publisher may assume defense at your expense. You will cooperate with the Publisher's defense.

#### 12.3 Scope

The indemnification covers reasonable, foreseeable third-party claims arising from your use. It does not extend to:

- Claims arising from the Publisher's gross negligence or willful misconduct
- Claims regarding the Actor's source code itself (those are the Publisher's responsibility)
- Claims regarding the Third-Party API Provider's data collection (those are their responsibility)

***

### 13. SUSPENSION AND TERMINATION

#### 13.1 Termination by the Publisher

The Publisher may terminate your access for material breach, illegal use, breach of warranty, or upon credible legal demand.

#### 13.2 Effects of Termination

Your license ends, you must cease use, and applicable provisions survive.

#### 13.3 Termination by You

You may stop using the Actor at any time on Apify.

***

### 14. DISPUTE RESOLUTION

#### 14.1 Informal Resolution First

Send a detailed written description of the dispute via UnseenUser's Apify profile contact form (https://apify.com/UnseenUser) and wait 60 days for resolution attempt before any formal claim.

#### 14.2 Governing Law

These Terms are governed by the substantive laws of the State of Israel, without regard to conflict of law principles.

#### 14.3 Exclusive Jurisdiction

Any dispute shall be brought exclusively in the competent civil courts of Tel Aviv-Jaffa, Israel.

#### 14.4 No Class Actions

You agree to bring claims only in your individual capacity.

#### 14.5 Attorneys' Fees

The prevailing party recovers reasonable attorneys' fees.

***

### 15. MISCELLANEOUS

#### 15.1 Entire Agreement

These Terms (with Addendum and incorporated documents) are the entire agreement.

#### 15.2 Severability

Unenforceable provisions are reformed to the minimum extent or severed.

#### 15.3 Assignment

You may not assign without the Publisher's consent. The Publisher may assign to affiliates, successors, or acquirers.

#### 15.4 Force Majeure

Neither party is liable for failure due to events beyond reasonable control, including changes by Source Platforms or Third-Party API Providers, or actions by Apify.

#### 15.5 Third-Party Beneficiaries

Apify, HarvestAPI, and Scrape Creators are intended third-party beneficiaries of Sections 4 (Prohibited Uses), 5 (Source Platform Compliance), and 12 (Indemnification).

#### 15.6 Survival

Sections 0 (Acceptance), 4, 5, 6, 7, 10, 11, 12, 14, and 15 survive termination.

#### 15.7 Language

English controls. Translations are for convenience only.

#### 15.8 Publisher Identification for Legal Process

The Publisher operates on the Apify platform under the username **UnseenUser** (apify.com/UnseenUser). The Publisher is a registered legal entity. Upon receipt of valid legal process (subpoena, court order, or equivalent) directed through Apify's official channels, the Publisher's full legal identity may be disclosed as required by law. This Section ensures that you have a valid path to legal recourse if needed.

***

### 16. ACKNOWLEDGMENT

By using any Actor, you acknowledge that:

- (a) You have read these Terms
- (b) You understand the architecture: you are using software (the Actor) on Apify's platform that calls third-party APIs
- (c) You accept responsibility for your use, including for compliance with Source Platform terms
- (d) Your indemnification obligations cover third-party claims arising from your use
- (e) Disputes are resolved in Israeli courts
- (f) The Publisher's identity, while not publicly disclosed in this listing, can be obtained through valid legal process via Apify

For questions, use UnseenUser's Apify profile contact form (https://apify.com/UnseenUser) before running the Actor.

***

## 🛡️ Actor-Specific ToS Addendum - LinkedIn Company Scraper

This addendum supplements the Master Terms of Service V4.0. By running this Actor, you accept both the Master ToS and this addendum.

#### A. Architectural Disclosure

This Actor is a software wrapper. It accepts your input parameters, calls HarvestAPI's `/linkedin/company`, `/linkedin/company-search`, and `/linkedin/company-posts` endpoints, and returns the response data to you on the Apify platform. The Publisher does not store, log, or substantively process the data returned.

#### B. Nature of Data Returned

Company profile data, funding and Crunchbase data, company posts, and company search results. Company data is generally non-personal data under GDPR. However, the Actor may incidentally return names of company representatives in posts, photos of employees, and names of post commenters.

#### C. Permitted Use Cases

B2B sales prospecting (with compliant outreach), competitive intelligence, investor research, ABM target lists, financial analysis and due diligence, academic research on company structures.

#### D. Specifically Prohibited Uses

In addition to Master ToS Section 4 prohibitions, you may NOT:

- Build cold-email tools that automate outreach without proper opt-in or anti-spam compliance
- Republish proprietary funding data in a way that competes with Crunchbase or PitchBook
- Use company data for discriminatory business decisions
- Aggregate and resell company data as a substitute for established business intelligence services
- Track individual employees' activities through company associations

#### E. LinkedIn Platform ToS Considerations

This Actor accesses publicly visible LinkedIn company pages via HarvestAPI. HarvestAPI bears responsibility for the lawfulness of the data collection. LinkedIn restricts use of their data for building competing professional networks. Your sales/marketing communications should not impersonate LinkedIn or imply LinkedIn endorsement. If LinkedIn issues a cease-and-desist, notify UnseenUser within 48 hours via the Apify profile contact form (apify.com/UnseenUser).

#### F. B2B Outreach Compliance

If you use this Actor for sales prospecting, comply with: GDPR (legitimate interest basis, balancing test, opt-out); CCPA; Israeli Anti-Spam Law (prior consent for marketing); CAN-SPAM (US); CASL (Canada).

#### G. Funding Data and Public Markets

You may not use funding data for insider trading or other securities fraud. You may not represent the data as official company disclosures. Where data conflicts with official SEC, ISA, or other regulatory filings, the official filings prevail.

# Actor input Schema

## `mode` (type: `string`):

Pick one. Each option below has its own dedicated section - just scroll down and fill in that section.

## `profileCompanies` (type: `array`):

List the companies you want full profile data for. Accepts ANY mix of:

- LinkedIn URL:  https://www.linkedin.com/company/anthropic/
- Handle (universal name):  anthropic
- Free-text name:  Anthropic AI

Smart routing picks the fastest method automatically. One per line.

## `searchKeywords` (type: `string`):

Free-text keywords matching company names, taglines, or descriptions.

Examples:  AI infrastructure, cybersecurity startups, developer tools.

Leave blank if you want to filter only by location/size/industry.

## `location` (type: `string`):

Filter by country, region, or city as free text.

Examples:  United States, San Francisco Bay Area, Tel Aviv.

Ignored if Geo ID below is set.

## `geoId` (type: `string`):

LinkedIn's numeric geo ID. More precise than text location.

Examples:  103644278 (United States), 101174742 (Israel).

Look up via HarvestAPI's /linkedin/geo-id-search endpoint. Leave blank to use the text Location instead.

## `companySize` (type: `array`):

Pick one or more headcount ranges. Multiple selections are OR'd together.

## `industryIds` (type: `array`):

LinkedIn V2 industry IDs (numeric). Multiple values are OR'd together.

Full list: https://github.com/HarvestAPI/linkedin-industry-codes-v2/blob/main/linkedin_industry_code_v2_all_eng.csv

Common examples:

- 96  Information Technology and Services
- 4   Computer Software
- 6   Internet
- 43  Financial Services
- 14  Hospital and Health Care

## `maxResults` (type: `integer`):

Hard cap on how many companies to return. Pagination is handled automatically.
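
Taken together, the search-mode fields above can be combined into an input like this (a minimal illustrative sketch using the example values from the field descriptions above; real runs may warrant additional filters such as `companySize` or `industryIds`):

```json
{
  "mode": "search_companies",
  "searchKeywords": "cybersecurity startups",
  "geoId": "103644278",
  "maxResults": 50
}
```

Because `geoId` is set here, the free-text `location` field would be ignored, as noted above.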

## `postsCompanies` (type: `array`):

List the companies whose recent posts you want. Same input format as profile mode (URL, handle, or free-text name).

## `scrapePostedLimit` (type: `string`):

Only keep posts newer than this. Applied AFTER fetch for reliable pagination.

Leave blank to keep all posts.

## `maxPostsPerCompany` (type: `integer`):

Cap how many posts to fetch per company. Pagination stops automatically when reached or when no more posts exist.

## `includeSimilarCompanies` (type: `boolean`):

If on, each company profile keeps LinkedIn's 'People also viewed' list (slimmed to id, name, handle, URL, industry, headcount, followers).

Default off - keeps output ~10x smaller.

## Actor input object example

```json
{
  "mode": "get_company",
  "profileCompanies": [
    "https://www.linkedin.com/company/anthropic/",
    "openai",
    "Google"
  ],
  "searchKeywords": "AI infrastructure",
  "location": "United States",
  "companySize": [],
  "industryIds": [],
  "maxResults": 100,
  "postsCompanies": [
    "anthropic",
    "openai"
  ],
  "maxPostsPerCompany": 50,
  "includeSimilarCompanies": false
}
```

# Actor output Schema

## `dataset` (type: `string`):

All scraped records: company profiles (`get_company`), search hits (`search_companies`), or company posts (`company_posts`). Nulls and empty fields are stripped automatically.

## `summary` (type: `string`):

End-of-run JSON record with mode, total record count, success / error counts, durations, and per-input breakdown.
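
For orientation, such a summary record might look roughly like the following. The field names here are illustrative assumptions, not a guaranteed schema - inspect the summary from a real run for the exact keys:

```json
{
  "mode": "get_company",
  "totalRecords": 3,
  "successCount": 3,
  "errorCount": 0,
  "durationSeconds": 12.4,
  "perInput": [
    { "input": "anthropic", "records": 1, "status": "success" }
  ]
}
```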

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "profileCompanies": [
        "https://www.linkedin.com/company/anthropic/",
        "openai",
        "Google"
    ],
    "postsCompanies": [
        "anthropic",
        "openai"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("unseenuser/linkedin-company-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "profileCompanies": [
        "https://www.linkedin.com/company/anthropic/",
        "openai",
        "Google",
    ],
    "postsCompanies": [
        "anthropic",
        "openai",
    ],
}

# Run the Actor and wait for it to finish
run = client.actor("unseenuser/linkedin-company-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```
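
The listing describes the output as clean JSON ready for HubSpot, Salesforce, Clay, and n8n. A minimal sketch of flattening fetched dataset items into a CSV for CRM import - the field names `name`, `industry`, and `website` are assumptions for illustration; check a real run's dataset for the actual keys:

```python
import csv
import io

# Illustrative items only - actual output field names may differ.
items = [
    {"name": "Anthropic", "industry": "Research Services", "website": "https://www.anthropic.com"},
    {"name": "OpenAI", "industry": "Research Services"},  # missing fields are common
]

# Flatten to CSV for CRM import; extrasaction="ignore" silently drops
# any keys not listed in fieldnames, and item.get() fills gaps with "".
fieldnames = ["name", "industry", "website"]
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames, extrasaction="ignore")
writer.writeheader()
for item in items:
    writer.writerow({key: item.get(key, "") for key in fieldnames})

print(buffer.getvalue())
```

In practice you would build `items` from `client.dataset(run["defaultDatasetId"]).iterate_items()` as shown in the Python example above, then upload the resulting CSV to your CRM.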

## CLI example

```bash
echo '{
  "profileCompanies": [
    "https://www.linkedin.com/company/anthropic/",
    "openai",
    "Google"
  ],
  "postsCompanies": [
    "anthropic",
    "openai"
  ]
}' |
apify call unseenuser/linkedin-company-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=unseenuser/linkedin-company-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "🏢 LinkedIn Company Scraper Pro - Bulk Data [NO COOKIES] ✅",
        "description": "Bulk-scrape LinkedIn company data - name, industry, employees, HQ, website, founding year - without logging in. No cookies, no Chrome extension, no risk to your account. Built for B2B sales, recruiters, and ABM platforms. Outputs clean JSON for HubSpot, Salesforce, Clay, n8n.",
        "version": "0.0",
        "x-build-id": "hnIntpQsfqSUwnmGa"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/unseenuser~linkedin-company-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-unseenuser-linkedin-company-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/unseenuser~linkedin-company-scraper/runs": {
            "post": {
                "operationId": "runs-sync-unseenuser-linkedin-company-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/unseenuser~linkedin-company-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-unseenuser-linkedin-company-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "Step 1: What do you want to do?",
                        "enum": [
                            "get_company",
                            "search_companies",
                            "company_posts"
                        ],
                        "type": "string",
                        "description": "Pick one. Each option below has its own dedicated section - just scroll down and fill in that section.",
                        "default": "get_company"
                    },
                    "profileCompanies": {
                        "title": "Companies to fetch profiles for",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "List the companies you want full profile data for. Accepts ANY mix of:\n\n  - LinkedIn URL:  https://www.linkedin.com/company/anthropic/\n  - Handle (universal name):  anthropic\n  - Free-text name:  Anthropic AI\n\nSmart routing picks the fastest method automatically. One per line.",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "searchKeywords": {
                        "title": "Search keywords",
                        "type": "string",
                        "description": "Free-text keywords matching company names, taglines, or descriptions.\n\nExamples:  AI infrastructure, cybersecurity startups, developer tools.\n\nLeave blank if you want to filter only by location/size/industry."
                    },
                    "location": {
                        "title": "Location (text)",
                        "type": "string",
                        "description": "Filter by country, region, or city as free text.\n\nExamples:  United States, San Francisco Bay Area, Tel Aviv.\n\nIgnored if Geo ID below is set."
                    },
                    "geoId": {
                        "title": "Geo ID (precise - overrides Location)",
                        "type": "string",
                        "description": "LinkedIn's numeric geo ID. More precise than text location.\n\nExamples:  103644278 (United States), 101174742 (Israel).\n\nLook up via Harvest's /linkedin/geo-id-search endpoint. Leave blank to use the text Location instead."
                    },
                    "companySize": {
                        "title": "Company size (headcount buckets)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Pick one or more headcount ranges. Multiple selections are OR'd together.",
                        "items": {
                            "type": "string",
                            "enum": [
                                "1-10",
                                "11-50",
                                "51-200",
                                "201-500",
                                "501-1000",
                                "1001-5000",
                                "5001-10000",
                                "10001+"
                            ],
                            "enumTitles": [
                                "1-10  (micro)",
                                "11-50  (small)",
                                "51-200  (small-mid)",
                                "201-500  (mid)",
                                "501-1,000  (mid-large)",
                                "1,001-5,000  (large)",
                                "5,001-10,000  (enterprise)",
                                "10,001+  (mega-enterprise)"
                            ]
                        },
                        "default": []
                    },
                    "industryIds": {
                        "title": "Industry IDs",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "LinkedIn V2 industry IDs (numeric). Multiple values are OR'd together.\n\nFull list: https://github.com/HarvestAPI/linkedin-industry-codes-v2/blob/main/linkedin_industry_code_v2_all_eng.csv\n\nCommon examples:\n  - 96  Information Technology and Services\n  - 4   Computer Software\n  - 6   Internet\n  - 43  Financial Services\n  - 14  Hospital and Health Care",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxResults": {
                        "title": "Max search results",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Hard cap on how many companies to return. Pagination is handled automatically.",
                        "default": 100
                    },
                    "postsCompanies": {
                        "title": "Companies to fetch posts from",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "List the companies whose recent posts you want. Same input format as profile mode (URL, handle, or free-text name).",
                        "default": [],
                        "items": {
                            "type": "string"
                        }
                    },
                    "scrapePostedLimit": {
                        "title": "Posts recency filter",
                        "enum": [
                            "1h",
                            "24h",
                            "week",
                            "month",
                            "3months",
                            "6months",
                            "year"
                        ],
                        "type": "string",
                        "description": "Only keep posts newer than this. Applied AFTER fetch for reliable pagination.\n\nLeave blank to keep all posts."
                    },
                    "maxPostsPerCompany": {
                        "title": "Max posts per company",
                        "minimum": 1,
                        "maximum": 1000,
                        "type": "integer",
                        "description": "Cap how many posts to fetch per company. Pagination stops automatically when reached or when no more posts exist.",
                        "default": 50
                    },
                    "includeSimilarCompanies": {
                        "title": "Include 'similar companies' list?",
                        "type": "boolean",
                        "description": "If on, each company profile keeps LinkedIn's 'People also viewed' list (slimmed to id, name, handle, URL, industry, headcount, followers).\n\nDefault off - keeps output ~10x smaller.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
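The `/acts/unseenuser~linkedin-company-scraper/runs` endpoint above can be exercised with nothing but the Python standard library. A minimal sketch, assuming the standard Apify API base URL `https://api.apify.com/v2` and an `APIFY_TOKEN` environment variable holding your API token; the `run_input` fields come from the `inputSchema` shown in the spec (`mode` is the only required property):

```python
import json
import os
import urllib.request

# Input matching the Actor's inputSchema: "mode" is required,
# everything else is optional and mode-specific.
run_input = {
    "mode": "get_company",
    "profileCompanies": [
        "https://www.linkedin.com/company/anthropic/",  # LinkedIn URL
        "anthropic",                                    # handle (universal name)
    ],
    "includeSimilarCompanies": False,
}

# POST /acts/unseenuser~linkedin-company-scraper/runs starts an
# asynchronous run; the token is passed as a query parameter, as in the spec.
token = os.environ.get("APIFY_TOKEN")
url = (
    "https://api.apify.com/v2/acts/unseenuser~linkedin-company-scraper/runs"
    f"?token={token}"
)
req = urllib.request.Request(
    url,
    data=json.dumps(run_input).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

if token:  # only hit the API when a token is actually configured
    with urllib.request.urlopen(req) as resp:
        run = json.load(resp)["data"]  # shape of runsResponseSchema above
        print(run["id"], run["status"], run["defaultDatasetId"])
```

Once the run finishes, the scraped companies sit in the run's `defaultDatasetId` and can be fetched as JSON from `GET https://api.apify.com/v2/datasets/{datasetId}/items?token=...`. For production code, prefer the official `apify-client` package shown earlier, which handles polling and pagination for you.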
