# LinkedIn Jobs Scraper - Hiring Signals \[NO COOKIES] ✅ (`unseenuser/linkedin-jobs-scraper`) Actor

Bulk-extract LinkedIn job postings by keyword, company, title, or location. Get descriptions, salary ranges, seniority levels, and hiring company data. Built for sales teams using hiring signals for outbound (e.g., 'companies hiring a Head of RevOps in last 14 days') and recruiters.

- **URL**: https://apify.com/unseenuser/linkedin-jobs-scraper.md
- **Developed by:** [Unseen User](https://apify.com/unseenuser) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 6 total users, 2 monthly users, 100.0% runs succeeded, 4 bookmarks
- **User rating**: 5.00 out of 5 stars

## Pricing

$7.50 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; instead, you pay a fixed price for specific events.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).
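As a concrete starting point, here is a minimal Python sketch of calling this Actor with the official `apify-client` library. The Actor ID matches this listing; an Apify API token is assumed to be available in the `APIFY_TOKEN` environment variable, and the input mirrors one of the README's own examples:

```python
"""Sketch: run the LinkedIn Jobs Scraper from Python via apify-client."""
import os

# Input mirroring "Example 2 - Broad search across the US" in the README.
run_input = {
    "mode": "search",
    "searchKeywords": "data scientist",
    "locations": ["United States"],
    "datePosted": "past_week",
    "maxResults": 100,
}

def fetch_jobs(token: str, actor_input: dict) -> list:
    """Start a run, wait for it to finish, and return the dataset items."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor("unseenuser/linkedin-jobs-scraper").call(run_input=actor_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

if __name__ == "__main__" and os.environ.get("APIFY_TOKEN"):
    jobs = fetch_jobs(os.environ["APIFY_TOKEN"], run_input)
    print(f"fetched {len(jobs)} jobs")
```

Each returned item is one fully enriched job record, as described in the *Output* section of the README below.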


# README

## LinkedIn Jobs Scraper

The freshest **LinkedIn jobs scraper** on Apify - turn LinkedIn job postings
into structured **hiring signals** for B2B outbound, **talent intelligence**,
and recruiting market research. No LinkedIn login, no cookies, no scraping
infrastructure to manage. Wraps the
[Harvest LinkedIn jobs API](https://docs.harvest-api.com/linkedin-api-reference/job/search)
and returns a fully-enriched job record (description, salary, applicant count,
parsed location, full company profile) for every result.

> 💡 **The B2B angle.** What if you knew every company that just hired a
> *Head of X* or *Director of Y* in the last 14 days? That's a sales pipeline.
> New LinkedIn job postings are **intent data for hiring** - they tell you
> which teams are growing, who the new decision-maker is, and when to reach
> out. Use this actor as the trigger source for **sales outbound**, competitive
> **talent intelligence**, or **recruiting data** workflows. Pair it with the
> [companion scrapers](#related-scrapers) below for the full pipeline.

> ⚖️ **By running this Actor you accept the [Terms of Service](#apify-actor--terms-of-service)
> and [LinkedIn Jobs Scraper Addendum](#-actor-specific-tos-addendum--linkedin-jobs-scraper)
> at the bottom of this page.** Highlights: not affiliated with LinkedIn /
> Microsoft; you are the data controller for any personal data you collect;
> disputes are resolved under Israeli law in Tel Aviv-Jaffa courts;
> Publisher's aggregate liability is capped at the greater of $100 or 3
> months of fees. Read the full text before use.

---

### Use cases

- **Hiring-signal sales outbound** - turn *"Acme just hired a Head of RevOps"*
  into a same-day outreach trigger. Sync to Salesforce, HubSpot, Apollo, or
  Clay via Apify integrations.
- **Competitive talent intelligence** - track which roles a competitor is
  hiring for by company ID, with salary ranges, seniority, and team-growth
  trends over time.
- **Market trend analysis** - aggregate thousands of postings to spot which
  cities, functions, or employment types are growing or shrinking quarter
  over quarter.
- **Recruiting market research** - benchmark salaries, locations, and
  seniority requirements across an industry before posting your own role.
- **Talent sourcing** - find companies with open roles matching a candidate
  you're representing, plus applicant counts so you know how competitive each
  application is.
- **Job-board aggregation** - fill your own job board with structured,
  fresh LinkedIn data (full description, salary, applicant count, company info).
- **Career research for individual job seekers** - surface under-the-radar
  jobs with fewer than 10 applicants, or filter to LinkedIn Easy Apply only.

---

### Example workflow: from job posting to sales conversation

This is the hiring-signal pipeline the actor unlocks; each step feeds the next:

1. **LinkedIn Jobs Scraper (this actor)** - run a daily search for trigger
   roles you care about (e.g. `"VP Marketing"`, posted in the past week,
   US/UK only, 5,000+ employee companies).
2. **Pull the hiring company** - feed each result's `company.id` into
   [LinkedIn Company Scraper Pro](https://apify.com/unseenuser/LinkedIn-Company-Scraper)
   for full firmographics (revenue band, tech stack, recent funding,
   leadership).
3. **Find the hire's manager** - search
   [LinkedIn Profile Scraper + Email Enrichment](https://apify.com/unseenuser/LinkedIn-Profile)
   for the role one rung up at the same company - that's typically the buyer.
4. **Write the outbound** - reference the exact role they just opened, the
   company's growth signal, and the manager's recent posts (via
   [LinkedIn Post Search Scraper](https://apify.com/unseenuser/LinkedIn-Post-Seach-Scraper)).

Result: a CRM record with role, company, and decision-maker, ready for
personalised outreach the same day the job is posted.

---

### Why LinkedIn (vs. Indeed / Glassdoor scrapers)

| Data point                                     | LinkedIn (this actor)        | Indeed / Glassdoor scrapers |
|------------------------------------------------|------------------------------|------------------------------|
| Freshness of new postings                      | Within minutes               | Hours to days                |
| Real seniority signal (Director, VP, C-level)  | ✅ structured field           | ⚠️ guess from title          |
| Hiring-manager / poster identity               | ✅ recruiter on the post      | ❌ usually anonymous          |
| Decision-maker context (1 click away)          | ✅ via LinkedIn graph         | ❌ no graph                   |
| Salary breakdown                               | Often, employer-provided     | Mostly LinkedIn-style estimates |
| Company firmographics in the same record       | ✅ industry, headcount, HQ    | ❌ separate lookup            |
| Easy Apply / applicant-count signal            | ✅ exposed                    | ❌ no equivalent              |

LinkedIn is where the hire is announced first and where the rest of the
buying committee is one hop away. That makes it the highest-signal source
for **sales outbound triggers** and **talent intelligence** workflows.

---

### Inputs

The input form is grouped into sections so you only see what's relevant.

#### 1. What do you want to do?

| Mode                              | When to use it                                                                                |
|-----------------------------------|-----------------------------------------------------------------------------------------------|
| **Search jobs**                   | Most common. Run a LinkedIn search and pull the full job page for every result.               |
| **Get full details for URLs**     | You already have a list of LinkedIn job URLs and just want them enriched.                     |

Both modes return the same fully enriched output shape (see *Output* below).

#### 2. Search filters

| Field                | What it does                                                                                  |
|----------------------|-----------------------------------------------------------------------------------------------|
| **Job title or keywords** | Free-text search, e.g. *"senior backend engineer"*.                                       |
| **Locations**        | One or more places. Each is searched separately and results are merged. Use cities (`"Tel Aviv"`), countries (`"Germany"`), or `"Remote"`. Empty = worldwide. |
| **Posted within**    | Any time / Past hour / Past 24 hours / Past week / Past month.                                |

#### 3. Job filters

| Field                | Options                                                                                       |
|----------------------|-----------------------------------------------------------------------------------------------|
| **Seniority**        | Internship, Entry, Associate, Mid/Senior, Director, Executive.                                |
| **Employment type**  | Full-time, Part-time, Contract, Internship.                                                   |
| **Workplace**        | Remote, Hybrid, On-site.                                                                      |

Leave any of these empty to skip that filter.

#### 4. Advanced filters

| Field                       | What it does                                                                          |
|-----------------------------|---------------------------------------------------------------------------------------|
| **Sort by**                 | "Best match" (relevance) or "Newest first" (date).                                    |
| **Easy Apply only**         | Only jobs you can apply to with one click on LinkedIn.                                |
| **Less than 10 applicants** | Only fresh listings before they get crowded.                                          |
| **Specific companies**      | Filter to particular LinkedIn company IDs (the number from `linkedin.com/company/<id>`). |

#### 5. Direct job URLs

Used by **Get full details for URLs** mode. You can also paste URLs in
**Search jobs** mode and they'll be enriched alongside the search results.
Use LinkedIn URLs like `https://www.linkedin.com/jobs/view/3899901234/`.

#### 6. Limits

| Field         | Default | What it does                                       |
|---------------|---------|----------------------------------------------------|
| **Max results** | 100   | Stops the actor once this many jobs are collected. |

---

### Input examples

#### Example 1 - Fresh remote backend roles in Israel

```json
{
    "mode": "search",
    "searchKeywords": "backend engineer",
    "locations": ["Tel Aviv", "Remote, Israel"],
    "datePosted": "past_24h",
    "seniorityLevels": ["mid", "associate"],
    "workplaceTypes": ["remote", "hybrid"],
    "sortBy": "date",
    "maxResults": 200
}
````

#### Example 2 - Broad search across the US

```json
{
    "mode": "search",
    "searchKeywords": "data scientist",
    "locations": ["United States"],
    "datePosted": "past_week",
    "jobTypes": ["full_time"],
    "maxResults": 500
}
```

#### Example 3 - Enrich a list of URLs you already collected

```json
{
    "mode": "details",
    "jobUrls": [
        "https://www.linkedin.com/jobs/view/3899901234/",
        "https://www.linkedin.com/jobs/view/3899905678/"
    ]
}
```

#### Example 4 - Track a single company

```json
{
    "mode": "search",
    "companyIds": ["1441"],
    "datePosted": "past_week",
    "maxResults": 50
}
```

#### Example 5 - Find under-the-radar jobs

```json
{
    "mode": "search",
    "searchKeywords": "platform engineer",
    "locations": ["Berlin"],
    "datePosted": "past_24h",
    "easyApplyOnly": true,
    "under10Applicants": true,
    "sortBy": "date"
}
```

***

### Output (dataset views)

Every dataset row is a fully enriched job record - same shape whether the
job came from a search or from a URL you provided. It includes everything
LinkedIn exposes: full description, parsed location, salary breakdown,
applicant counts, full company profile, and more.

The Apify console shows the dataset through several pre-defined views (you
can switch between them with the dropdown above the table):

| View              | What it shows                                                                       |
|-------------------|-------------------------------------------------------------------------------------|
| **Overview**      | Compact list - title, company, location, workplace, seniority, posted date, link.   |
| **With salary**   | Only jobs that include salary info, with min/max/currency/period broken out.        |
| **By company**    | Each job alongside its company profile (industry, headcount, follower count, HQ).   |
| **Remote-friendly** | Jobs flagged as remote-allowed, with apply-link column.                           |

The raw JSON (all fields, in full) is always available via the **JSON** tab
or the dataset API.

```json
{
    "jobId": "3899901234",
    "jobUrl": "https://www.linkedin.com/jobs/view/3899901234/",
    "title": "Senior Backend Engineer",

    "jobState": "LISTED",
    "isNew": true,
    "isApplicationLimitReached": false,
    "postedAt": "2026-05-03T08:14:00.000Z",
    "expiresAt": "2026-06-02T08:14:00.000Z",

    "description": {
        "html": "<p>We're looking for a senior backend engineer...</p>",
        "text": "We're looking for a senior backend engineer..."
    },

    "location": {
        "text": "Tel Aviv, Israel",
        "city": "Tel Aviv",
        "state": "Tel Aviv District",
        "country": "Israel",
        "countryCode": "IL",
        "postalAddress": null
    },
    "workplaceType": "hybrid",
    "isRemoteAllowed": false,

    "seniorityLevel": "mid_senior",
    "employmentType": "full_time",
    "jobFunctions": ["Engineering", "Information Technology"],

    "applyUrl": "https://www.linkedin.com/jobs/view/3899901234/apply",
    "isEasyApply": true,
    "applicantCount": 27,
    "viewCount": 412,
    "applicantTrackingSystem": "Greenhouse",

    "salary": {
        "text": "$120,000.00 - $160,000.00",
        "min": 120000,
        "max": 160000,
        "currency": "USD",
        "period": "yearly",
        "compensationType": "BASE_SALARY",
        "providedByEmployer": true
    },

    "company": {
        "id": "1441",
        "universalName": "acme",
        "name": "Acme Inc.",
        "description": "Acme has been building widgets since 1995...",
        "linkedinUrl": "https://www.linkedin.com/company/acme/",
        "jobSearchUrl": "https://www.linkedin.com/company/acme/jobs/",
        "logoUrl": "https://media.licdn.com/.../logo.png",
        "industry": "Software Development",
        "industries": ["Software Development", "Internet"],
        "employeeCount": 842,
        "employeeCountRange": { "start": 501, "end": 1000 },
        "headcount": "501-1000",
        "followerCount": 152300,
        "headquarters": {
            "city": "San Francisco",
            "state": "California",
            "country": "United States",
            "countryCode": "US",
            "line1": "100 Market Street",
            "line2": null,
            "postalCode": "94105"
        },
        "specialities": ["Cloud", "AI", "Developer tools"]
    },

    "skills": [],
    "benefits": ["Health insurance", "Stock options"],

    "scrapedAt": "2026-05-04T10:21:11.482Z"
}
```

#### Field reference

##### Top-level

| Field | Type | What it is |
|-------|------|------------|
| `jobId` | string | LinkedIn's numeric job ID. |
| `jobUrl` | string | Canonical LinkedIn job URL. |
| `title` | string | Job title. |
| `jobState` | string | E.g. `"LISTED"`, `"CLOSED"`. |
| `isNew` | boolean | LinkedIn's "New" badge. |
| `isApplicationLimitReached` | boolean | True when the job has stopped accepting applications. |
| `postedAt` | string (ISO) | When the job was first posted. |
| `expiresAt` | string (ISO) | When the listing expires. |
| `scrapedAt` | string (ISO) | When this row was scraped. |

##### Description

| Field | Type | What it is |
|-------|------|------------|
| `description.html` | string | Full job description as HTML. |
| `description.text` | string | Same description as plain text. |

##### Location & workplace

| Field | Type | What it is |
|-------|------|------------|
| `location.text` | string | Original location string from LinkedIn. |
| `location.city` | string | Parsed city. |
| `location.state` | string | Parsed state / region. |
| `location.country` | string | Parsed country name. |
| `location.countryCode` | string | ISO 3166-1 alpha-2 country code. |
| `location.postalAddress` | string | Postal address if provided. |
| `workplaceType` | string | `remote`, `hybrid`, or `on_site`. |
| `isRemoteAllowed` | boolean | LinkedIn's "remote allowed" flag. |

##### Role

| Field | Type | What it is |
|-------|------|------------|
| `seniorityLevel` | string | E.g. `mid_senior`, `entry`. |
| `employmentType` | string | `full_time`, `part_time`, `contract`, `internship`. |
| `jobFunctions` | string\[] | LinkedIn job-function tags, e.g. `["Engineering", "Information Technology"]`. |

##### Application

| Field | Type | What it is |
|-------|------|------------|
| `applyUrl` | string | URL to apply: Easy Apply URL when available, otherwise the external apply URL, otherwise the LinkedIn job page itself as a fallback. |
| `isEasyApply` | boolean | Always set. `true` if you can apply with one click on LinkedIn, otherwise `false`. |
| `applicantCount` | number | How many people have applied so far. |
| `viewCount` | number | How many times the job has been viewed. |
| `applicantTrackingSystem` | string | E.g. `"LinkedIn"`, `"Greenhouse"`, `"Lever"` (often `null` for non-Easy-Apply jobs). |
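The fallback chain behind `applyUrl` can be sketched like this. Note the raw field names (`easyApplyUrl`, `externalApplyUrl`) are illustrative stand-ins, not actual API response keys:

```python
def resolve_apply_url(raw: dict) -> str:
    """Pick the best apply link: Easy Apply, else external, else the job page."""
    return (raw.get("easyApplyUrl")
            or raw.get("externalApplyUrl")
            or raw["jobUrl"])

# A job with no apply links falls back to the LinkedIn job page itself.
job_page_only = {"jobUrl": "https://www.linkedin.com/jobs/view/3899901234/"}
print(resolve_apply_url(job_page_only))
# https://www.linkedin.com/jobs/view/3899901234/
```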

##### Compensation (`salary`)

| Field | Type | What it is |
|-------|------|------------|
| `salary.text` | string | The original salary string LinkedIn shows. |
| `salary.min` | number | Minimum of the range. |
| `salary.max` | number | Maximum of the range. |
| `salary.currency` | string | E.g. `"USD"`, `"EUR"`. |
| `salary.period` | string | `yearly`, `monthly`, `hourly`. |
| `salary.compensationType` | string | E.g. `BASE_SALARY`. |
| `salary.providedByEmployer` | boolean | True if the employer (not LinkedIn estimate) provided the range. |

##### Company (`company`)

| Field | Type | What it is |
|-------|------|------------|
| `company.id` | string | Numeric LinkedIn company ID. |
| `company.universalName` | string | Slug from `linkedin.com/company/<slug>`. |
| `company.name` | string | Display name. |
| `company.description` | string | Full "About" text. |
| `company.linkedinUrl` | string | LinkedIn company page URL. |
| `company.jobSearchUrl` | string | LinkedIn URL listing all jobs at the company (derived from `linkedinUrl` when not surfaced directly). |
| `company.logoUrl` | string | Company logo URL. |
| `company.industry` | string | Primary industry name. |
| `company.industries` | string\[] | All industry tags. |
| `company.employeeCount` | number | Exact employee count if known. |
| `company.employeeCountRange.start` / `.end` | number | LinkedIn size range bounds. |
| `company.headcount` | string | Human-readable range, e.g. `"501-1000"`. |
| `company.followerCount` | number | LinkedIn followers. |
| `company.headquarters` | object | City, state, country, countryCode, line1, line2, postalCode. |
| `company.specialities` | string\[] | Tags like `["Cloud", "AI"]`. |

> **Note:** Some fields are `null` when LinkedIn doesn't expose them on a
> particular job (salary, applicant tracking system, expiry date, etc. are
> commonly missing). The actor returns `null` rather than fabricating data.
> The `salary` object is `null` (not an empty husk) when no salary info is
> available.
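Because `salary` can be `null`, downstream code should guard before reading its fields. A small sketch using the record shape shown above:

```python
def salary_summary(job: dict) -> str:
    """Render a compact salary string from a job record, tolerating missing data."""
    salary = job.get("salary")  # None when LinkedIn exposes no salary info
    if not salary:
        return "salary not disclosed"
    src = "employer" if salary.get("providedByEmployer") else "estimate"
    return (f"{salary['currency']} {salary['min']:,}-{salary['max']:,} "
            f"{salary['period']} ({src})")

# Example rows shaped like the output above:
with_salary = {"salary": {"min": 120000, "max": 160000, "currency": "USD",
                          "period": "yearly", "providedByEmployer": True}}
no_salary = {"salary": None}

print(salary_summary(with_salary))  # USD 120,000-160,000 yearly (employer)
print(salary_summary(no_salary))    # salary not disclosed
```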

***

### Schedule examples

Apify supports cron-style schedules. To watch a saved search and surface new
jobs every 6 hours, create a schedule with cron `0 */6 * * *` and the input:

```json
{
    "mode": "search",
    "searchKeywords": "platform engineer",
    "locations": ["Berlin"],
    "datePosted": "past_24h",
    "workplaceTypes": ["remote", "hybrid"],
    "maxResults": 100
}
```

Other useful cadences:

- **Every hour, top of the hour** - `0 * * * *` for very fast-moving markets.
- **Daily at 07:00 UTC** - `0 7 * * *` for a morning recruiting digest.
- **Weekdays at 09:00 UTC** - `0 9 * * 1-5` for office-hours updates.

Combine with Apify integrations (Slack, webhook, Zapier, Make) to push new
postings into the channel of your choice.

***

### FAQ

**Do I need a LinkedIn account or cookies?**
No. The actor calls the Harvest LinkedIn jobs API, which handles LinkedIn for
you. You only need a Harvest API key.

**Why is `mode` the first thing it asks?**
The other fields don't all apply to every mode. Picking the mode first keeps
the form short and obvious.

**How fresh is the data?**
New LinkedIn postings appear in the Harvest index within minutes of being
posted on LinkedIn. The actor reads live each run - there's no cached layer
between you and the latest jobs.

**What's the maximum date range I can look back?**
The `datePosted` filter exposes *Past hour*, *Past 24 hours*, *Past week*, and
*Past month* (plus *Any time*). LinkedIn itself doesn't expose a precise
"more than 30 days ago" filter for active jobs, so *Past month* is the
practical ceiling. For longer historical runs, schedule the actor weekly or
monthly and persist results to a Named Dataset.

**What's the maximum number of results per search?**
The actor lets you set `maxResults` up to 10,000 per run. Practically, a
single keyword + location pair on LinkedIn returns a few hundred to a few
thousand fresh matches. To go bigger, split your search across multiple
keywords or locations and merge the datasets - the actor de-duplicates by
`jobId` within a run, so you can also chain runs into one dataset.

**Salary fill rates - where will I actually see salary data?**

- **United States, California / NY / WA / CO / IL** - very high (40-90%) due
  to pay-transparency laws.
- **United Kingdom, Germany, Netherlands** - moderate (15-35%).
- **Most of EU / LATAM / APAC** - low (under 15%); LinkedIn shows estimates
  more often than employer-provided ranges.
- **Israel** - low to moderate (under 20%).

The `salary.providedByEmployer` boolean tells you whether the range came from
the employer or LinkedIn's own estimator.

**How do I avoid duplicates across runs?**
Use a [Named Dataset](https://docs.apify.com/platform/storage/dataset) and
de-duplicate by `jobId` downstream - or filter outputs where `postedAt >
lastRunAt`. Within a single run, the actor already de-duplicates by `jobId`.
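A sketch of that downstream de-duplication, keeping only `jobId`s not seen in earlier runs:

```python
def new_jobs(rows: list, seen_ids: set) -> list:
    """Return rows whose jobId wasn't seen in earlier runs; update seen_ids."""
    fresh = []
    for row in rows:
        job_id = row.get("jobId")
        if job_id and job_id not in seen_ids:
            seen_ids.add(job_id)
            fresh.append(row)
    return fresh

seen = {"3899901234"}  # IDs collected in earlier runs
run_rows = [
    {"jobId": "3899901234", "title": "Senior Backend Engineer"},  # duplicate
    {"jobId": "3899905678", "title": "Platform Engineer"},        # new
]
print([r["jobId"] for r in new_jobs(run_rows, seen)])  # ['3899905678']
```

Persist the `seen` set (or the whole Named Dataset) between runs and only the new rows reach your CRM or Slack channel.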

**What happens when I hit a rate limit?**
The actor retries `429` and `5xx` responses with exponential backoff (up to
five attempts), respecting `Retry-After` when provided.
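That retry behaviour can be sketched as follows. The `request` callable stands in for one HTTP call returning `(status, headers, body)`; this is an illustration of the described policy, not the Actor's actual implementation:

```python
import time

def with_backoff(request, max_attempts=5, base_delay=1.0):
    """Retry request() on 429/5xx with exponential backoff, honouring Retry-After."""
    for attempt in range(max_attempts):
        status, headers, body = request()
        if status < 400:
            return body
        retryable = status == 429 or status >= 500
        if not retryable or attempt == max_attempts - 1:
            raise RuntimeError(f"request failed with status {status}")
        # Respect Retry-After when present, else back off exponentially.
        delay = float(headers.get("Retry-After", base_delay * 2 ** attempt))
        time.sleep(delay)

# Simulated endpoint: rate-limited, then a server error, then success.
responses = iter([(429, {"Retry-After": "0"}, None),
                  (503, {}, None),
                  (200, {}, {"jobs": ["..."]})])
result = with_backoff(lambda: next(responses), base_delay=0.01)
print(result)  # {'jobs': ['...']}
```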

**Why are some fields `null`?**
LinkedIn doesn't surface every field on every job. Salary, applicant count,
expiry date, and applicant tracking system are commonly missing - the actor
returns `null` rather than fabricating data.

**Can I scrape only specific companies?**
Yes - paste their LinkedIn numeric company IDs in the *Specific companies*
field. Find an ID in the URL `linkedin.com/company/<id>`.
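If you have company-page URLs rather than IDs, a small sketch for pulling the numeric ID out (this only works when the URL uses the numeric form; slug URLs like `/company/acme/` carry no numeric ID):

```python
import re

def company_id(url: str):
    """Extract the numeric LinkedIn company ID from a company-page URL."""
    m = re.search(r"linkedin\.com/company/(\d+)", url)
    return m.group(1) if m else None

print(company_id("https://www.linkedin.com/company/1441/"))  # 1441
print(company_id("https://www.linkedin.com/company/acme/"))  # None (slug, not ID)
```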

**What does each result cost?**
Every result is fully enriched - both pricing events fire - so each job
costs **$0.010** ($0.002 search + $0.008 details).

**How is pagination handled?**
The actor paginates the search until `maxResults` is reached, an empty page
is returned, or pagination metadata says there are no more pages. Multiple
locations are searched sequentially, results merged and de-duplicated.
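That loop can be sketched as follows; the page-fetching callable is a stand-in for one Harvest API page request returning `(items, has_next_page)`:

```python
def collect(search_page, max_results):
    """Paginate until max_results, an empty page, or no next page; dedupe by jobId."""
    rows, seen, page = [], set(), 1
    while len(rows) < max_results:
        items, has_next = search_page(page)
        if not items:
            break
        for item in items:
            if item["jobId"] not in seen:
                seen.add(item["jobId"])
                rows.append(item)
                if len(rows) >= max_results:
                    break
        if not has_next:
            break
        page += 1
    return rows

# Fake two-page search with one duplicate across pages.
pages = {1: ([{"jobId": "1"}, {"jobId": "2"}], True),
         2: ([{"jobId": "2"}, {"jobId": "3"}], False)}
print(len(collect(lambda p: pages[p], max_results=10)))  # 3
```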

***

### Local development

```bash
npm install
HARVEST_API_KEY=... npx apify run -p
```

Compiled output lands in `dist/`. Entry point is `dist/main.js`.

***

### Related scrapers

Build the complete hiring-signal sales pipeline using these companion actors
from the same publisher:

- [**LinkedIn Company Scraper Pro**](https://apify.com/unseenuser/LinkedIn-Company-Scraper) - enrich the hiring company with full firmographics (revenue, tech stack, funding, leadership).
- [**LinkedIn Profile Scraper + Email Enrichment**](https://apify.com/unseenuser/LinkedIn-Profile) - find the hire's manager, map the buying committee, get verified emails.
- [**LinkedIn Post Search Scraper**](https://apify.com/unseenuser/LinkedIn-Post-Seach-Scraper) - see what the hiring team is talking about - perfect for personalised opening lines in your outbound.

[**See all 16 scrapers by unseenuser →**](https://apify.com/unseenuser)

***

### Disclaimer & disclosure

**Third-party data source.** This actor calls the [Harvest API](https://harvest-api.com/),
a third-party service that scrapes LinkedIn. It is **not affiliated with,
endorsed by, or sponsored by LinkedIn, Microsoft, or any of their
subsidiaries**. All trademarks belong to their respective owners.

**Terms of Service.** LinkedIn's [User Agreement](https://www.linkedin.com/legal/user-agreement)
restricts automated collection of data. **You are responsible** for ensuring
your use of this actor and the data it returns complies with:

- LinkedIn's Terms of Service and User Agreement,
- Harvest API's [terms of service](https://harvest-api.com/),
- Apify's [Terms of Service](https://apify.com/terms-of-service),
- Your local data-protection laws (GDPR, UK GDPR, CCPA, LGPD, PIPEDA, etc.).

Use this actor only for **legitimate purposes** - recruiting, your own job
hunt, market or competitive research, building authorised internal tools.
**Do not** redistribute personal data scraped through this actor, send
unsolicited messages to people whose details you collect, or train ML models
on personal data without a lawful basis.

**Data quality.** LinkedIn doesn't surface every field on every job.
Salary, applicant-tracking system, expiry dates, and other fields are
commonly missing - the actor returns `null` rather than fabricating data.
Numbers (applicants, views, follower count) are LinkedIn's own counters as
of the moment of scraping and can change minute-to-minute. Some location
strings are country-only and won't include city or region.

**Pricing.** This actor is paid per event. Each fully-enriched job costs
**$0.010** ($0.002 search match + $0.008 enrichment), in addition to your
Apify compute costs. **You will also need your own Harvest API key** -
[Harvest charges separately](https://harvest-api.com/) for API usage.

**No warranty.** This actor is provided **as-is, without warranty of any
kind**. The author and Apify are **not liable** for any losses, downtime,
ToS issues, business decisions, or legal consequences arising from use of
this actor or the data it returns. The Harvest API may rate-limit, return
partial data, or be unavailable; the actor retries transient failures
(429 / 5xx with exponential backoff) but cannot guarantee 100% delivery.

**Privacy / data subjects.** If your run collects data about identifiable
people (job posters, recruiters), and you are subject to GDPR or similar,
you are the data controller for that data. You must determine your lawful
basis (typically *legitimate interest* under GDPR Art. 6(1)(f)), honour
deletion requests, and apply storage limitation. The actor does **not**
deduplicate across runs - pair it with a [Named Dataset](https://docs.apify.com/platform/storage/dataset)
or your own pipeline if you need that.

**No web server.** This actor runs as a one-shot batch job and does **not**
expose a live-view web server, so no OpenAPI spec is published - runs are
controlled via Apify's standard run / dataset / key-value-store APIs.

***

### Apify Actor - Terms of Service

**Version: 4.0**
**Effective Date: May 5, 2026**

#### 0. ACCEPTANCE BY USE - IMPORTANT

**Read this section first.**

These Terms of Service ("Terms") form a binding legal agreement between you
("User," "you," "your") and **UnseenUser**, the Publisher of this Apify actor
("UnseenUser," "the Publisher," "we," "us," "our").

##### 0.1 How You Accept These Terms

You accept these Terms by any of the following actions, each of which
constitutes a clear, affirmative act of acceptance:

(a) **Running the Actor** - Initiating any execution of the Actor on the Apify platform
(b) **Using any output** returned by the Actor for any purpose
(c) **Continuing to access** the Actor's listing or documentation after these Terms are visible

##### 0.2 Continuing Acceptance

Each time you run the Actor or use its outputs, you reaffirm your acceptance
of the then-current Terms. If you do not agree to these Terms or any
subsequent update, you must stop using the Actor immediately.

##### 0.3 No Anonymous Acceptance

You cannot disclaim acceptance by:

- Failing to read these Terms before running the Actor
- Running the Actor through automated systems
- Sharing your Apify account with others who may not have read these Terms

By the act of running the Actor on Apify, you bind yourself, your organization
(if applicable), and any individuals or systems acting on your behalf or
under your authority.

##### 0.4 If You Do Not Accept

If you do not agree to these Terms, you must not run the Actor. No use is
authorized without acceptance.

***

#### PREAMBLE - UNDERSTANDING THE ARCHITECTURE

Before using the Actor, please understand the technical architecture of the
service:

##### The Data Flow

```
You (User) -> Apify Platform -> Actor (software) -> Third-Party API -> Source Platform
                                                       |
You (User) <- Apify Platform <- Actor (software) <- Third-Party API
```

##### What Each Party Does

- **You (the User):** Run the Actor on the Apify platform with input parameters
  you choose.
- **Apify:** Operates the cloud infrastructure that hosts and executes Actors.
  Apify is a Czech-incorporated company (Apify Technologies s.r.o.) governed
  by its own Terms of Service.
- **The Publisher (us):** Publishes software code (the Actor) on Apify's
  platform. The Actor is a thin wrapper that translates your input into
  requests to a third-party API and returns the API's responses to you. The
  Publisher does not operate scraping infrastructure. The Publisher does not
  store or retain data returned by the Actor. The Publisher does not see, log,
  or process the personal data of any individuals returned in the Actor's
  outputs beyond what is incidental to passing the data through.
- **Third-Party API Provider:** [HarvestAPI](https://harvest-api.com) or
  [Scrape Creators](https://scrapecreators.com). These are independent
  third-party companies that operate scraping infrastructure and return data
  from source platforms.
- **Source Platform:** LinkedIn, TikTok, YouTube, Reddit, Linktree, etc. These
  are the platforms whose publicly visible data is accessed by the Third-Party
  API Providers.

##### Why This Matters

Your relationship with the Publisher is that of a software user to a software
vendor. The Publisher has the responsibilities of a software vendor (functional
code, accurate documentation) and the limits of one (the Publisher is not
responsible for how you use the data you obtain).

These Terms operate alongside but do not replace:

- Apify's Terms of Service and Acceptable Use Policy (governing your relationship with Apify)
- HarvestAPI Terms of Service and Scrape Creators Terms of Service (governing the underlying data infrastructure)
- Source Platform terms (LinkedIn, TikTok, etc.) governing the public data accessed
- Applicable law in your jurisdiction and the jurisdictions of data subjects

These Terms incorporate the actor-specific addendum published in each Actor's
individual listing ("Addendum"). In the event of a conflict, the more
restrictive provision applies.

***

#### 1. NATURE OF THE SERVICE

##### 1.1 What the Actor Is

The Actor is a software program published on the Apify platform. Each Actor:

(a) Accepts structured input from you on the Apify platform
(b) Translates that input into HTTP requests to a third-party API operated by HarvestAPI or Scrape Creators
(c) Receives HTTP responses from that third-party API
(d) Returns the response data to you in a structured format on the Apify platform

The Actor's source code is hosted on Apify's infrastructure. The Actor runs in
Apify's cloud, not on the Publisher's servers. The Publisher operates no
servers running the Actor.

##### 1.2 What the Actor Is Not

The Actor is **not**:

(a) A scraping tool - the Publisher does not operate scraping infrastructure, proxies, headless browsers, or fake accounts
(b) A direct connection to any source platform - connections to source platforms are made by HarvestAPI / Scrape Creators
(c) A data storage or data retention service - the Publisher does not maintain a database of any data the Actor returns
(d) A licensed access channel to LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta, Linktree, or any other source platform
(e) Affiliated with, endorsed by, sponsored by, or authorized by any source platform

##### 1.3 The Publisher's Limited Role

The Publisher's role is limited to:

(a) Designing and writing the Actor's source code
(b) Publishing the Actor on the Apify Store
(c) Maintaining the Actor (updating it when API providers change schemas)
(d) Providing documentation and customer support via Apify's contact mechanism

The Publisher is a software vendor, similar to a developer who publishes an
app on the Apple App Store or Google Play Store. The Publisher is not a data
provider, data broker, data processor, or data controller for purposes of
GDPR, CCPA, Israel's Privacy Protection Law, or equivalent.

##### 1.4 The Third-Party API Providers' Role

[HarvestAPI](https://harvest-api.com) and [Scrape Creators](https://scrapecreators.com)
are independent third-party companies. They:

(a) Operate the actual data scraping infrastructure
(b) Maintain relationships with source platforms (or accept the operational risk of accessing public data without such relationships)
(c) Are themselves Apify publishers (HarvestAPI publishes 9+ Actors directly; Scrape Creators publishes 10+)
(d) Provide their own Terms of Service governing their operations
(e) Are responsible for compliance obligations relating to the data collection itself

The Publisher is a customer of these providers. The Publisher is not their
agent, partner, or representative.

***

#### 2. WHO MAY USE THE ACTOR

##### 2.1 Eligibility

You may use the Actor only if:

(a) You are at least 18 years old or the age of majority in your jurisdiction
(b) You have legal capacity to enter into binding contracts
(c) You are not located in or resident of a country subject to comprehensive sanctions by the United States, European Union, United Kingdom, or Israel
(d) You are not on any prohibited persons list

##### 2.2 User Representations

By using any Actor, you represent and warrant that:

(a) **Truthful identity:** Information you provide about your identity and intended use is accurate
(b) **Lawful intent:** Your intended use complies with applicable law in your jurisdiction
(c) **Source platform compliance:** You will independently comply with the Terms of Service of any source platform whose data you obtain through the Actor
(d) **Data subject rights:** Where Actor outputs include personal data, you will respect data subject rights under applicable law
(e) **No prohibited use:** You will not use the Actor for any of the purposes prohibited in Section 4

These representations are continuous - they must remain true throughout your use.

***

#### 3. PERMITTED USES

The Actor may be used for any lawful purpose, including:

- Market research and competitive analysis
- Academic research
- Journalism and investigative reporting
- Internal business intelligence
- Brand monitoring
- Recruitment research where consistent with applicable employment law
- Building products that further process publicly available information lawfully

Specific permitted uses for each Actor are described in that Actor's individual
listing and Addendum.

***

#### 4. PROHIBITED USES

You may not use the Actor for any of the following:

##### 4.1 Illegal Activity

Activity illegal under the law of your jurisdiction, the User's jurisdiction,
or the jurisdiction of any data subjects.

##### 4.2 Harassment, Stalking, and Personal Targeting

- Compiling profiles for harassment, stalking, or doxxing
- Tracking individuals' movements or activities without their knowledge
- Building profiles of journalists, activists, dissidents, or vulnerable populations for retaliatory purposes

##### 4.3 Discrimination

- Using outputs for discriminatory employment, lending, housing, or insurance decisions based on protected characteristics
- Building lists for discriminatory purposes

##### 4.4 Spam and Unsolicited Commercial Communication

- Sending unsolicited marketing in violation of CAN-SPAM, CASL, GDPR, PECR, Israeli Anti-Spam Law (סעיף 30א לחוק התקשורת), or equivalent laws
- Building "lead lists" from scraped contacts without proper consent infrastructure
- Reselling contact data for spam purposes

##### 4.5 Fraud and Deception

- Identity theft or impersonation
- Generation of fake reviews, testimonials, or coordinated inauthentic behavior
- Election interference or political disinformation
- Securities fraud

##### 4.6 Source Platform Abuse

- Using outputs to circumvent technical protection measures of source platforms
- Creating fake accounts on source platforms based on Actor outputs
- Vote manipulation, engagement manipulation, or platform algorithm gaming
- Building services that competitively substitute for source platforms

##### 4.7 Reselling the Actor's Service

- Reselling raw Actor outputs as your own data product or scraping-as-a-service
- Sharing your Apify credentials to provide third parties indirect access
- Building competing API services using Actor outputs

##### 4.8 AI Training Without Authorization

- Using Actor outputs as training data for commercial AI/ML models without separate licensing authority from the source platform

##### 4.9 Sensitive Targeting

- Specifically targeting or profiling based on health conditions, sexual orientation, religious beliefs, political opinions, or other sensitive characteristics
- Targeting children under 16 (or local age of consent for data processing)

##### 4.10 Privacy Law Violations

- Processing personal data of EU/UK/California/Israeli residents without complying with applicable privacy law
- Failing to honor data subject access, deletion, or objection requests
- Processing data for purposes incompatible with its publication context

***

#### 5. SOURCE PLATFORM TERMS - YOUR RESPONSIBILITY

##### 5.1 Acknowledgment

The Actor accesses publicly visible data on third-party platforms ("Source
Platforms") through the Third-Party API Providers (HarvestAPI / Scrape
Creators). Source Platforms include LinkedIn, TikTok, YouTube, Reddit,
X (Twitter), Meta/Facebook, Linktree, Komi, Pillar, Linkbio, Linkme, and
Amazon.

##### 5.2 Your Sole Responsibility

You acknowledge:

(a) You are solely responsible for ensuring your downstream use of data obtained through the Actor complies with the Source Platform's Terms of Service
(b) The Publisher makes no representation that any specific use is permitted under any Source Platform's terms
(c) The Third-Party API Providers, not the Publisher, bear responsibility for the lawfulness of the data collection itself
(d) You should review Source Platform terms before commercial use:

- LinkedIn: <https://www.linkedin.com/legal/user-agreement>
- TikTok: <https://www.tiktok.com/legal/page/global/terms-of-service/en>
- YouTube: <https://www.youtube.com/static?template=terms>
- X: <https://twitter.com/en/tos>
- Reddit: <https://www.redditinc.com/policies/user-agreement>
- Meta: <https://www.facebook.com/legal/terms>
- Linktree: <https://linktr.ee/s/terms/>

##### 5.3 Cease-and-Desist Compliance

If you receive a cease-and-desist letter or other legal demand from a Source
Platform regarding your use of Actor outputs, you must:

(a) Cease the contested use immediately
(b) Notify UnseenUser within 48 hours via UnseenUser's Apify profile contact form (<https://apify.com/UnseenUser>)
(c) Cooperate with the Publisher as needed to mitigate
(d) Not assert against the Publisher any claim arising from your inability to use the Actor for that Source Platform

***

#### 6. DATA PROTECTION - REFLECTING ACTUAL ARCHITECTURE

##### 6.1 Roles Under Privacy Law

For purposes of GDPR, UK GDPR, CCPA, Israel's Privacy Protection Law (PPL)
including Amendment 13, and equivalents:

- **You (the User)** are the **Data Controller** of any personal data you obtain through the Actor and subsequently process for your own purposes
- **HarvestAPI and Scrape Creators** are the entities that collect data from source platforms - they bear the responsibilities of data processors or controllers (depending on context) for the collection itself
- **The Publisher** acts solely as a software vendor, not as a data controller or processor, because the Publisher does not store, retain, or substantively process personal data - the Actor merely passes API responses through

##### 6.2 No Data Retention by the Publisher

The Publisher confirms:

(a) The Publisher does not maintain a database of personal data obtained through the Actor
(b) The Actor passes data from the Third-Party API directly to you on the Apify platform - data does not flow through the Publisher's infrastructure
(c) Apify's standard execution and operational logging may include limited information about Actor runs (input parameters, run duration, data volume) - this is governed by Apify's own privacy practices
(d) The Publisher does not access, view, or analyze your Actor outputs except as needed for technical support if you specifically share them with the Publisher

##### 6.3 Your Obligations as Data Controller

Where your use of the Actor involves processing personal data, you are
responsible for:

(a) Establishing a lawful basis for your processing (consent, legitimate interest with documented balancing test, contract, etc.)
(b) Providing transparent notice to data subjects as required by applicable law
(c) Honoring data subject access, rectification, erasure, restriction, and portability requests
(d) Implementing appropriate security measures
(e) Conducting Data Protection Impact Assessments where required
(f) Appointing a Data Protection Officer if your operations require one
(g) Registering databases with applicable supervisory authorities
(h) Honoring opt-out requests for direct marketing
(i) Implementing cross-border transfer safeguards where data crosses borders

##### 6.4 Israel's Amendment 13 - User Compliance

If your use of the Actor involves Israeli residents' personal data, you must
comply with the Privacy Protection Law as amended (Amendment 13, effective
August 14, 2025). These obligations are yours as the data controller, not the
Publisher's as the software vendor.

##### 6.5 Sensitive Data Targeting Restrictions

You will not use the Actor to specifically target, profile, or build datasets
focused on:

- Health or medical conditions
- Religious beliefs
- Political opinions
- Sexual orientation or gender identity
- Genetic or biometric data
- Criminal history
- Children under 16

***

#### 7. INTELLECTUAL PROPERTY

##### 7.1 Actor Code

The Actor's source code, schemas, documentation, and branding are owned by
the Publisher. You receive a limited, non-exclusive, non-transferable,
revocable license to use the Actor for permitted purposes during your active
subscription/run with Apify.

##### 7.2 Output Data

The Publisher claims no ownership over the public data the Actor returns.
Source Platforms may have copyright, database rights, or other rights in
their data; data subjects may have copyright in user-generated content. Your
use of output data must respect these rights independently.

##### 7.3 Restrictions

You may not reverse engineer, decompile, or reuse the Actor's code in a
competing Actor.

##### 7.4 Feedback

Feedback you provide may be used by the Publisher to improve products without
compensation to you.

***

#### 8. PRICING AND PAYMENT

##### 8.1 Apify Platform Billing

Pricing is administered through Apify's pricing models. Apify processes all
payments. Apify's payment terms govern refunds and disputes.

##### 8.2 Pricing Changes

The Publisher may change Actor pricing with at least 14 days' notice via the
Actor's Apify listing.

##### 8.3 No Refunds for Misuse

If your access is suspended or terminated for breach of these Terms, you
forfeit any unused balance and are not entitled to refunds.

***

#### 9. SERVICE AVAILABILITY AND CHANGES

##### 9.1 No Uptime Guarantee

The Actor depends on:

(a) The Apify platform
(b) Underlying API providers (HarvestAPI, Scrape Creators)
(c) Source Platforms' continued public accessibility

Any of these may change behavior, restrict access, or become unavailable
without notice. The Publisher makes no uptime guarantees.

##### 9.2 Service Discontinuation

The Publisher may discontinue any Actor at any time. Reasonable notice will
be provided when feasible.

***

#### 10. DISCLAIMERS

##### 10.1 "AS IS" Service

THE ACTOR IS PROVIDED "AS IS" AND "AS AVAILABLE" WITHOUT WARRANTIES OF ANY
KIND, INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, OR ACCURACY OF DATA.

##### 10.2 No Representation of Lawfulness

The Publisher makes no representation that your specific use of the Actor or
the data it returns is lawful in your jurisdiction or under any Source
Platform's terms. The burden of determining lawfulness for your use case is
yours.

##### 10.3 No Endorsement of Source Content

Content returned by the Actor was created by third parties. The Publisher
does not endorse, verify, or take responsibility for it.

***

#### 11. LIMITATION OF LIABILITY

##### 11.1 Aggregate Liability Cap

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL THE
AGGREGATE LIABILITY OF THE PUBLISHER FOR ALL CLAIMS RELATING TO THE ACTOR
EXCEED THE GREATER OF:

(a) ONE HUNDRED U.S. DOLLARS (US $100), OR
(b) THE AMOUNTS YOU PAID THROUGH APIFY FOR USE OF THE ACTOR IN THE THREE (3) MONTHS IMMEDIATELY PRECEDING THE EVENT

##### 11.2 Excluded Damages

THE PUBLISHER IS NOT LIABLE FOR INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL,
EXEMPLARY, OR PUNITIVE DAMAGES, OR FOR LOSS OF PROFITS, REVENUE, OR DATA,
EVEN IF ADVISED OF THE POSSIBILITY.

##### 11.3 Time Limit

Any claim must be brought within one (1) year of the event.

***

#### 12. INDEMNIFICATION

##### 12.1 Your Indemnification of the Publisher

You agree to defend, indemnify, and hold harmless the Publisher from any:

- Claims arising from your use of the Actor
- Claims arising from your violation of these Terms
- Claims arising from your violation of any law (including privacy law)
- Claims arising from your violation of any Source Platform's Terms of Service
- Claims arising from your processing of personal data obtained through the Actor
- Reasonable attorneys' fees and costs of defending such claims

##### 12.2 Defense

The Publisher may assume defense at your expense. You will cooperate with the
Publisher's defense.

##### 12.3 Scope

The indemnification covers reasonable, foreseeable third-party claims arising
from your use. It does not extend to:

- Claims arising from the Publisher's gross negligence or willful misconduct
- Claims regarding the Actor's source code itself (those are the Publisher's responsibility)
- Claims regarding the Third-Party API Provider's data collection (those are their responsibility)

***

#### 13. SUSPENSION AND TERMINATION

##### 13.1 Termination by the Publisher

The Publisher may terminate your access for material breach, illegal use,
breach of warranty, or upon credible legal demand.

##### 13.2 Effects of Termination

Your license ends, you must cease use, and applicable provisions survive.

##### 13.3 Termination by You

You may stop using the Actor at any time on Apify.

***

#### 14. DISPUTE RESOLUTION

##### 14.1 Informal Resolution First

Send a detailed written description of the dispute via UnseenUser's Apify
profile contact form (<https://apify.com/UnseenUser>) and wait 60 days for
resolution attempt before any formal claim.

##### 14.2 Governing Law

These Terms are governed by the substantive laws of the **State of Israel**,
without regard to conflict of law principles.

##### 14.3 Exclusive Jurisdiction

Any dispute shall be brought exclusively in the competent civil courts of
**Tel Aviv-Jaffa, Israel**.

##### 14.4 No Class Actions

You agree to bring claims only in your individual capacity.

##### 14.5 Attorneys' Fees

The prevailing party recovers reasonable attorneys' fees.

***

#### 15. MISCELLANEOUS

##### 15.1 Entire Agreement

These Terms (with Addendum and incorporated documents) are the entire agreement.

##### 15.2 Severability

Unenforceable provisions are reformed to the minimum extent or severed.

##### 15.3 Assignment

You may not assign without the Publisher's consent. The Publisher may assign
to affiliates, successors, or acquirers.

##### 15.4 Force Majeure

Neither party is liable for failure due to events beyond reasonable control,
including changes by Source Platforms or Third-Party API Providers, or
actions by Apify.

##### 15.5 Third-Party Beneficiaries

Apify, HarvestAPI, and Scrape Creators are intended third-party beneficiaries
of Sections 4 (Prohibited Uses), 5 (Source Platform Compliance), and 12
(Indemnification).

##### 15.6 Survival

Sections 0 (Acceptance), 4, 5, 6, 7, 10, 11, 12, 14, and 15 survive termination.

##### 15.7 Language

English controls. Translations are for convenience only.

##### 15.8 Publisher Identification for Legal Process

The Publisher operates on the Apify platform under the username **UnseenUser**
(<https://apify.com/UnseenUser>). The Publisher is a registered legal entity.
Upon receipt of valid legal process (subpoena, court order, or equivalent)
directed through Apify's official channels, the Publisher's full legal
identity may be disclosed as required by law. This Section ensures that you
have a valid path to legal recourse if needed.

***

#### 16. ACKNOWLEDGMENT

By using any Actor, you acknowledge that:

(a) You have read these Terms
(b) You understand the architecture: you are using software (the Actor) on Apify's platform that calls third-party APIs
(c) You accept responsibility for your use, including for compliance with Source Platform terms
(d) Your indemnification obligations cover third-party claims arising from your use
(e) Disputes are resolved in Israeli courts
(f) The Publisher's identity, while not publicly disclosed in this listing, can be obtained through valid legal process via Apify

For questions, use UnseenUser's Apify profile contact form
(<https://apify.com/UnseenUser>) before running the Actor.

These Terms reflect best practices for anonymous Apify Actor publishing as of
May 2026. They are not a substitute for legal advice. Consult qualified
Israeli commercial counsel before deploying.

***

### 🛡️ Actor-Specific ToS Addendum - LinkedIn Jobs Scraper

This addendum supplements the Master Terms of Service V4.0. By running this
Actor, you accept both the Master ToS and this addendum.

#### A. Architectural Disclosure

This Actor is a software wrapper. It accepts your input parameters, calls the
HarvestAPI `/linkedin/job-search` and `/linkedin/job` endpoints, and returns
the response data to you on the Apify platform. The Publisher does not store,
log, or substantively process the data returned. The data flows from
HarvestAPI through Apify's runtime directly to you.

#### B. Nature of Data Returned

The Actor returns:

- **Job listing data** - predominantly business information about open positions
- **Company information** associated with job postings
- **Limited personal data** - typically including names of recruiters, hiring managers, or contact persons listed publicly on LinkedIn job posts

Where the Actor's output includes individual recruiters or hiring managers,
those names constitute personal data subject to GDPR, CCPA, and Israeli
Privacy Protection Law in your downstream processing - but only in your hands
as the data controller, not in the Publisher's hands as the software vendor.

#### C. Permitted Use Cases

You may use this Actor for:

- Job market research and salary analysis
- Recruiter intelligence (understanding which companies hire what roles)
- Building job aggregator websites that respect LinkedIn's terms
- Internal HR competitive analysis
- Career research for individual job seekers
- Academic research on labor markets

#### D. Specifically Prohibited Uses

In addition to Master ToS Section 4 prohibitions, you may **NOT**:

- Send unsolicited recruiting outreach to recruiters or hiring managers identified through this Actor without complying with applicable anti-spam laws (CAN-SPAM, GDPR, Israeli Anti-Spam Law)
- Build "candidate-targeting" tools that match individuals to jobs without their consent
- Republish full job descriptions in a way that competes with LinkedIn's job board
- Aggregate and resell raw job data without adding substantial value
- Use job listings to identify and discriminate against current employees of named companies

#### E. LinkedIn Platform ToS Considerations

LinkedIn's User Agreement governs your use of LinkedIn data. This Actor
accesses publicly visible job posting data via HarvestAPI - HarvestAPI bears
responsibility for the lawfulness of the data collection. Your downstream
use, however, is solely your responsibility:

- LinkedIn may consider commercial use of job data to violate their User Agreement
- If LinkedIn issues a cease-and-desist regarding data obtained via this Actor, notify UnseenUser within 48 hours via the Apify profile contact form (<https://apify.com/UnseenUser>) and cease your use immediately
- The Publisher bears no responsibility for downstream LinkedIn enforcement actions against you

#### F. Data Subject Considerations

Where Actor outputs include names, photos, or contact information of
individuals (recruiters, hiring managers):

- You are the data controller for any subsequent processing
- You must establish a lawful basis for processing under GDPR/CCPA/Israeli law
- You must honor data subject rights (access, deletion, objection)
- You must comply with anti-spam laws for any outreach
- You may not use this data for purposes incompatible with the data's original publication context (i.e., job listings)

# Actor input Schema

## `mode` (type: `string`):

Pick one. This decides which fields below you need to fill in. Each field is labelled with the mode it applies to.

## `searchKeywords` (type: `string`):

Type the job title or keywords, like on linkedin.com/jobs. Examples: "product manager", "senior backend engineer", "AI researcher". Leave empty to search every job.

## `locations` (type: `array`):

Add one or more places to search. Each location is searched separately and the results are merged. Use cities ("Tel Aviv"), countries ("Germany"), or "Remote". Leave empty to search worldwide.

## `datePosted` (type: `string`):

Filter to recently-posted jobs. "Past hour" gives the very latest postings; "Past month" gives the widest net.

## `seniorityLevels` (type: `array`):

Tick one or more. Empty = all seniority levels.

## `jobTypes` (type: `array`):

Tick one or more. Empty = all employment types.

## `workplaceTypes` (type: `array`):

Tick one or more. Empty = remote, hybrid, and on-site jobs all included.

## `sortBy` (type: `string`):

How to order the search results.

## `easyApplyOnly` (type: `boolean`):

If on, only show jobs you can apply to with one click on LinkedIn (no external website).

## `under10Applicants` (type: `boolean`):

If on, only show jobs with fewer than 10 applicants - fresh listings before they get crowded, good for finding hidden opportunities.

## `companyIds` (type: `array`):

Limit the search to particular companies by their LinkedIn numeric ID. Find the ID in the URL of any LinkedIn company page (e.g. linkedin.com/company/1441 -> 1441 is Google). Leave empty to search every company.

## `jobUrls` (type: `array`):

Paste one or more LinkedIn job URLs, one per line. Each will be looked up individually and returned with full details. Example URL format: https://www.linkedin.com/jobs/view/3899901234/

## `maxResults` (type: `integer`):

The maximum number of jobs to return. Each job costs about $0.01, so 100 jobs is around $1. Default is 100.
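The arithmetic above can be wrapped in a one-line estimator for budgeting runs (a sketch using the ~$0.01-per-job figure quoted in this field's description; the headline listing price may differ):

```python
def estimated_cost_usd(max_results: int, price_per_job: float = 0.01) -> float:
    """Rough run-cost estimate at roughly $0.01 per returned job."""
    return round(max_results * price_per_job, 2)

print(estimated_cost_usd(100))   # -> 1.0
print(estimated_cost_usd(1000))  # -> 10.0
```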

## Actor input object example

```json
{
  "mode": "search",
  "searchKeywords": "senior backend engineer",
  "locations": [
    "Remote"
  ],
  "datePosted": "past_week",
  "seniorityLevels": [],
  "jobTypes": [],
  "workplaceTypes": [],
  "sortBy": "relevance",
  "easyApplyOnly": false,
  "under10Applicants": false,
  "companyIds": [],
  "jobUrls": [],
  "maxResults": 100
}
```
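The `mode` field also accepts `"details"`, which looks up specific postings by URL instead of searching. A plausible input for that mode (assembled from the schema's `mode` enum and the `jobUrls` example format above, not a verified sample) might look like:

```json
{
  "mode": "details",
  "jobUrls": [
    "https://www.linkedin.com/jobs/view/3899901234/"
  ]
}
```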

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "searchKeywords": "software engineer",
    "locations": [
        "Remote"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("unseenuser/linkedin-jobs-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "searchKeywords": "software engineer",
    "locations": ["Remote"],
}

# Run the Actor and wait for it to finish
run = client.actor("unseenuser/linkedin-jobs-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```
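Once items are fetched, a common next step for hiring-signal workflows is grouping postings by hiring company. The sketch below assumes items carry `companyName` and `title` keys - these names are illustrative, so check your actual dataset output for the real key names:

```python
from collections import defaultdict

def jobs_by_company(items):
    """Group job titles by hiring company (key names are illustrative)."""
    grouped = defaultdict(list)
    for item in items:
        grouped[item.get("companyName", "unknown")].append(item.get("title"))
    return dict(grouped)

# Hypothetical sample items, for demonstration only
sample = [
    {"companyName": "Acme", "title": "Head of RevOps"},
    {"companyName": "Acme", "title": "RevOps Analyst"},
    {"companyName": "Globex", "title": "Backend Engineer"},
]
print(jobs_by_company(sample))
```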

## CLI example

```bash
echo '{
  "searchKeywords": "software engineer",
  "locations": [
    "Remote"
  ]
}' |
apify call unseenuser/linkedin-jobs-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=unseenuser/linkedin-jobs-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "LinkedIn Jobs Scraper - Hiring Signals [NO COOKIES] ✅",
        "description": "Bulk-extract LinkedIn job postings by keyword, company, title, or location. Get descriptions, salary ranges, seniority levels, and hiring company data. Built for sales teams using hiring signals for outbound (e.g., 'companies hiring a Head of RevOps in last 14 days') and recruiters.",
        "version": "0.0",
        "x-build-id": "4YWadyiLeqQA1Wphi"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/unseenuser~linkedin-jobs-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-unseenuser-linkedin-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/unseenuser~linkedin-jobs-scraper/runs": {
            "post": {
                "operationId": "runs-sync-unseenuser-linkedin-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/unseenuser~linkedin-jobs-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-unseenuser-linkedin-jobs-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "Step 1: What do you want to do?",
                        "enum": [
                            "search",
                            "details"
                        ],
                        "type": "string",
                        "description": "Pick one. This decides which fields below you need to fill in. Each field is labelled with the mode it applies to.",
                        "default": "search"
                    },
                    "searchKeywords": {
                        "title": "Job title or keywords",
                        "type": "string",
                        "description": "Type the job title or keywords, like on linkedin.com/jobs. Examples: \"product manager\", \"senior backend engineer\", \"AI researcher\". Leave empty to search every job."
                    },
                    "locations": {
                        "title": "Locations",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Add one or more places to search. Each location is searched separately and the results are merged. Use cities (\"Tel Aviv\"), countries (\"Germany\"), or \"Remote\". Leave empty to search worldwide.",
                        "items": {
                            "type": "string"
                        },
                        "default": []
                    },
                    "datePosted": {
                        "title": "How recent should the jobs be?",
                        "enum": [
                            "any_time",
                            "past_hour",
                            "past_24h",
                            "past_week",
                            "past_month"
                        ],
                        "type": "string",
                        "description": "Filter to recently-posted jobs. \"Past hour\" gives the very latest postings; \"Past month\" gives the widest net.",
                        "default": "past_week"
                    },
                    "seniorityLevels": {
                        "title": "Seniority level (tick any that apply)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Tick one or more. Empty = all seniority levels.",
                        "items": {
                            "type": "string",
                            "enum": [
                                "internship",
                                "entry",
                                "associate",
                                "mid",
                                "director",
                                "executive"
                            ],
                            "enumTitles": [
                                "Internship",
                                "Entry level",
                                "Associate",
                                "Mid / Senior",
                                "Director",
                                "Executive"
                            ]
                        },
                        "default": []
                    },
                    "jobTypes": {
                        "title": "Employment type (tick any that apply)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Tick one or more. Empty = all employment types.",
                        "items": {
                            "type": "string",
                            "enum": [
                                "full_time",
                                "part_time",
                                "contract",
                                "internship"
                            ],
                            "enumTitles": [
                                "Full-time",
                                "Part-time",
                                "Contract",
                                "Internship"
                            ]
                        },
                        "default": []
                    },
                    "workplaceTypes": {
                        "title": "Where is the job (tick any that apply)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Tick one or more. Empty = remote, hybrid, and on-site jobs all included.",
                        "items": {
                            "type": "string",
                            "enum": [
                                "remote",
                                "hybrid",
                                "on_site"
                            ],
                            "enumTitles": [
                                "Remote",
                                "Hybrid",
                                "On-site (in office)"
                            ]
                        },
                        "default": []
                    },
                    "sortBy": {
                        "title": "Sort the results by",
                        "enum": [
                            "relevance",
                            "date"
                        ],
                        "type": "string",
                        "description": "How to order the search results.",
                        "default": "relevance"
                    },
                    "easyApplyOnly": {
                        "title": "Only \"Easy Apply\" jobs",
                        "type": "boolean",
                        "description": "If on, only show jobs you can apply to with one click on LinkedIn (no external website).",
                        "default": false
                    },
                    "under10Applicants": {
                        "title": "Only jobs with fewer than 10 applicants",
                        "type": "boolean",
                        "description": "If on, only show fresh listings before they get crowded; good for finding hidden opportunities.",
                        "default": false
                    },
                    "companyIds": {
                        "title": "Only specific companies (LinkedIn company IDs)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Limit the search to particular companies by their LinkedIn numeric ID. Find the ID in the URL of any LinkedIn company page (e.g. linkedin.com/company/1441 -> 1441 is Google). Leave empty to search every company.",
                        "items": {
                            "type": "string"
                        },
                        "default": []
                    },
                    "jobUrls": {
                        "title": "LinkedIn job URLs (one per line)",
                        "uniqueItems": true,
                        "type": "array",
                        "description": "Paste one or more LinkedIn job URLs, one per line. Each will be looked up individually and returned with full details. Example URL format: https://www.linkedin.com/jobs/view/3899901234/",
                        "items": {
                            "type": "string"
                        },
                        "default": []
                    },
                    "maxResults": {
                        "title": "Maximum number of jobs",
                        "minimum": 1,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "Each job costs $7.50 per 1,000 results (about $0.0075 each), so 100 jobs is around $0.75. Default is 100.",
                        "default": 100
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
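
The `inputSchema` and the `/acts/unseenuser~linkedin-jobs-scraper/runs` endpoint above can be exercised with a plain HTTP call. Below is a minimal Python sketch using only the standard library: it builds a payload that conforms to the schema (only `mode` is required; `datePosted` and `sortBy` use the schema defaults) and POSTs it with the `token` query parameter, as the spec describes. The helper names `build_input` and `run_actor` are illustrative, not part of any official client.

```python
import json
from urllib import request, parse

API_BASE = "https://api.apify.com/v2"
ACTOR_ID = "unseenuser~linkedin-jobs-scraper"

def build_input(keywords, locations=None, max_results=100):
    """Build a payload matching the Actor's inputSchema in "search" mode."""
    return {
        "mode": "search",            # the only required field per the schema
        "searchKeywords": keywords,  # e.g. "product manager"
        "locations": locations or [],
        "datePosted": "past_week",   # schema default
        "sortBy": "relevance",       # schema default
        "maxResults": max_results,
    }

def run_actor(token, payload):
    """POST the input to the /runs endpoint and return the parsed run object."""
    url = f"{API_BASE}/acts/{ACTOR_ID}/runs?" + parse.urlencode({"token": token})
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["data"]  # matches runsResponseSchema above

payload = build_input("product manager", ["Remote"])
print(json.dumps(payload, indent=2))
# run = run_actor("<YOUR_APIFY_TOKEN>", payload)  # uncomment with a real token
```

For production use, prefer the official `apify-client` package from the integration section above; it handles polling the run status and fetching the resulting dataset for you.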
