# 🔍 LinkedIn Posts Scraper - Keyword & Hashtag \[NO COOKIES] ✅ (`unseenuser/linkedin-post-seach-scraper`) Actor

Search LinkedIn's public post graph by keyword, hashtag, or author. Extract content, engagement metrics, reaction breakdowns, author profile data, and timestamps. No login, no cookies. Built for social listening, B2B competitive intelligence, and thought-leader research.

- **URL**: https://apify.com/unseenuser/linkedin-post-seach-scraper.md
- **Developed by:** [Unseen User](https://apify.com/unseenuser) (community)
- **Categories:** Lead generation, News, Social media
- **Stats:** 3 total users, 2 monthly users, 100.0% runs succeeded, 4 bookmarks
- **User rating**: 5.00 out of 5 stars

## Pricing

$5.50 / 1,000 results

This Actor is paid per event: you pay a fixed price for specific events rather than for Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform and cover all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
The word "Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## 🔍 LinkedIn Post Search - LinkedIn Posts Scraper for Brand & Competitor Monitoring (No Login)

> **Find every post mentioning your competitor, your brand, or your target keyword - and engage where it matters.**

A drop-in **LinkedIn posts scraper** and **social-listening** tool. Search public LinkedIn posts by keyword, hashtag, @mentions, content type, or author's company, then export rich engagement data for analysis, alerts, or downstream automation. Built for B2B teams that need a reliable **LinkedIn post search** API for brand mention tracking, competitor monitoring, content benchmarking, and thought-leader research.

`linkedin posts scraper` · `linkedin post search` · `scrape linkedin posts` · `linkedin posts api` · `linkedin keyword search` · `linkedin social listening` · `linkedin content research` · `linkedin hashtag scraper` · `b2b social listening`

---

### ⚡ Why This Actor?

- **Search by keyword or hashtag** - Track topic and hashtag performance across LinkedIn
- **Filter by author's company** - See what your competitors' employees publish
- **Mention search** - Catch every post @-mentioning your brand or a target person
- **Content type filtering** - Restrict to videos, images, documents, articles, polls, jobs
- **Engagement signals** - Likes, comments, shares, full reaction breakdown (like / celebrate / support / love / insightful / funny)
- **No login required** - No cookies, no session tokens to babysit
- **Pay-per-result** - $0.001 per post, $0.005 enriched
- **Webhook-friendly** - Plug straight into Slack, Make, n8n, Zapier
- **Schemas included** - Pre-built dataset views for Overview, Engagement leaderboard, Media-only

---

### 🎯 Use Cases

#### 🛡️ B2B Competitor Monitoring
Track every post published by employees of your top competitors. Set `authorsCompany` to a list of competitor LinkedIn pages, schedule the Actor daily with `datePosted: "24h"`, and pipe the dataset into Slack or Notion. You'll see new product launches, hiring sprees, and PR moves the moment they go live.

#### 📣 Brand Mention Tracking
Set `mentioningCompanies` to your own LinkedIn page. Find every customer testimonial, job-seeker tag, partner shout-out, and crisis mention in real time. Pair with `datePosted: "1h"` and a Slack webhook for near-instant alerts.

#### 🧠 Thought-Leader Research
Combine `searchKeywords` with `authorKeywords` (e.g. `"VP Engineering"`, `"Head of AI"`) to surface how senior practitioners talk about your topic. Great for identifying speakers, hires, partnership targets, and content collaborators.

#### 📊 Content Benchmarking
Sort by `date`, filter by `contentType: videos` (or `documents`, `collaborative_articles`), and see which formats earn engagement in your niche before you commit a content budget.

#### 🔥 Viral / Trending Discovery
`sortBy: "date"` + `datePosted: "24h"` + a high `maxResults` = a daily firehose of what's breaking in your industry today.

#### 🎯 Lead Generation Signals
Find prospects engaging with competitor posts or specific keywords - the warmest outbound signals available without paid-ad spend. Pair with a profile-enrichment scraper to add verified emails.

#### 📰 PR Crisis Detection
Schedule a `mentioningCompanies` run every 15 minutes during a launch. Negative chatter surfaces before it lands on the front page of a publication.

---

### 🚀 Quick Start

#### Mode: `search` - keyword search
```json
{
  "mode": "search",
  "searchKeywords": "AI safety",
  "datePosted": "week",
  "sortBy": "relevance",
  "maxResults": 100
}
```

#### Mode: `search` - hashtag + content type

```json
{
  "mode": "search",
  "searchKeywords": "#GenAI",
  "contentType": "videos",
  "datePosted": "month",
  "sortBy": "date",
  "maxResults": 200
}
```

#### Mode: `get_details` - fetch full post details for a list of URLs

```json
{
  "mode": "get_details",
  "postUrls": [
    "https://www.linkedin.com/posts/example-activity-1234567890",
    "https://www.linkedin.com/posts/example-activity-0987654321"
  ]
}
```

#### Mode: `search_and_enrich` - search then enrich every result with full details

```json
{
  "mode": "search_and_enrich",
  "searchKeywords": "developer experience",
  "authorsCompany": ["https://www.linkedin.com/company/stripe"],
  "maxResults": 50
}
```
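
From Python, the quick-start inputs above can be passed straight to the Actor with the official `apify-client` package. A sketch, not a definitive implementation: the Actor ID comes from this listing, `APIFY_TOKEN` is assumed to be set in your environment, and the client is imported lazily so the input builder also works without it installed:

```python
import os

ACTOR_ID = "unseenuser/linkedin-post-seach-scraper"  # ID from this listing

def build_search_input(keywords: str, max_results: int = 100,
                       date_posted: str = "week", sort_by: str = "relevance") -> dict:
    """Assemble a `search`-mode input payload like the examples above."""
    return {
        "mode": "search",
        "searchKeywords": keywords,
        "datePosted": date_posted,
        "sortBy": sort_by,
        "maxResults": max_results,
    }

def run_search(keywords: str, **kwargs) -> list:
    """Run the Actor and collect dataset items (needs `pip install apify-client`)."""
    from apify_client import ApifyClient  # lazy import: the builder above works without it
    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor(ACTOR_ID).call(run_input=build_search_input(keywords, **kwargs))
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

`run_search("AI safety", date_posted="24h")` then returns records shaped like the Sample Output section further down.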

***

### ⚙️ Inputs

**Basic search (only ones most users need):**

| Input | Required | Notes |
|-------|----------|-------|
| `mode` | Yes | `search`, `get_details`, or `search_and_enrich` |
| `searchKeywords` | Conditional | Free text or hashtags (`#tag`). Required for search modes unless another filter is provided. |
| `maxResults` | Optional | Default 100. |
| `datePosted` | Optional | One of `1h`, `24h`, `week`, `month`, `3months`, `6months`, `year`. |
| `sortBy` | Optional | `relevance` (default) or `date`. |
| `contentType` | Optional | `videos`, `images`, `live_videos`, `documents`, `collaborative_articles`, `jobs`. |

**Filter by who posted (optional):**

| Input | Notes |
|-------|-------|
| `authorsCompany` | Posts by employees of these companies (URLs or IDs). |
| `authorProfiles` | Posts by these specific people (URLs or IDs, auto-detected). |
| `companies` | Posts from these company pages (URLs or IDs, auto-detected). |
| `authorKeywords` | Author's profile must contain these words. |
| `authorsIndustryIds` | LinkedIn industry IDs of the author's company. |
| `group` | LinkedIn group URL or ID. |

**Track mentions (optional):**

| Input | Notes |
|-------|-------|
| `mentioningCompanies` | Posts that mention these companies. |
| `mentioningMembers` | Posts that @mention these people. |

**Get-details mode only:**

| Input | Notes |
|-------|-------|
| `postUrls` | Array of LinkedIn post URLs. |

> Power users hitting the API directly can still pass `postedLimit` / `scrapePostedLimit` / `authorProfileIds` / `companyIds` - they continue to work for backward compatibility.
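
The conditional requirements in the tables above can be checked before a run is launched. A sketch of a pre-flight validator; the rules are my reading of the tables, not logic shipped with the Actor:

```python
SEARCH_MODES = {"search", "search_and_enrich"}
FILTER_FIELDS = ("authorsCompany", "authorProfiles", "companies", "authorKeywords",
                 "authorsIndustryIds", "group", "mentioningCompanies", "mentioningMembers")

def validate_input(run_input: dict) -> list[str]:
    """Return a list of problems; an empty list means the input looks runnable."""
    problems = []
    mode = run_input.get("mode")
    if mode not in SEARCH_MODES | {"get_details"}:
        problems.append(f"unknown mode: {mode!r}")
    if mode in SEARCH_MODES:
        # searchKeywords is required unless at least one other filter is set
        if not run_input.get("searchKeywords") and not any(run_input.get(f) for f in FILTER_FIELDS):
            problems.append("search modes need searchKeywords or another filter")
    if mode == "get_details" and not run_input.get("postUrls"):
        problems.append("get_details needs a non-empty postUrls array")
    return problems
```

Running the validator in a scheduler or workflow tool catches a bad payload before it burns a run.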

***

### 📤 Sample Output

Each dataset record contains the post body, the author profile, the timestamp, attached media, and the full engagement breakdown. Realistic example:

```json
{
  "id": "urn:li:activity:7224930412345678901",
  "content": "Just shipped our new AI safety framework after 18 months of research. Three things every CTO needs to know before adopting LLM agents in production... 🧵",
  "linkedinUrl": "https://www.linkedin.com/posts/janedoe_ai-safety-activity-7224930412345678901",
  "author": {
    "id": "ACoAAA...",
    "urn": "urn:li:fsd_profile:ACoAAA...",
    "publicIdentifier": "janedoe",
    "name": "Jane Doe",
    "linkedinUrl": "https://www.linkedin.com/in/janedoe",
    "avatar": {
      "url": "https://media.licdn.com/dms/image/.../profile.jpg",
      "width": 400,
      "height": 400
    }
  },
  "postedAt": {
    "timestamp": 1746446400000,
    "date": "2025-05-05T12:00:00.000Z",
    "postedAgoShort": "1d",
    "postedAgoText": "1 day ago"
  },
  "postImages": [
    { "url": "https://media.licdn.com/.../image1.jpg", "width": 1280, "height": 720 }
  ],
  "postVideo": null,
  "article": {
    "title": "The CTO's Guide to LLM Safety",
    "subtitle": "What 18 months of red-teaming taught us",
    "link": "https://example.com/llm-safety",
    "image": { "url": "https://example.com/og.jpg", "width": 1200, "height": 630 }
  },
  "engagement": {
    "likes": 1247,
    "comments": 184,
    "shares": 76,
    "reactions": [
      { "type": "like", "count": 820 },
      { "type": "celebrate", "count": 240 },
      { "type": "insightful", "count": 187 }
    ]
  },
  "socialContent": {
    "shareUrl": "https://www.linkedin.com/posts/janedoe_..."
  },
  "scrapedAt": "2025-05-06T12:34:56.000Z"
}
```
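
For spreadsheet or BI export it often helps to flatten each record into one row. A minimal sketch against the field names shown above; the choice of columns is illustrative:

```python
def flatten_post(item: dict) -> dict:
    """Reduce one dataset record to a flat row for CSV/spreadsheet export."""
    eng = item.get("engagement") or {}
    reactions = {r["type"]: r["count"] for r in eng.get("reactions", [])}
    return {
        "url": item.get("linkedinUrl"),
        "author": (item.get("author") or {}).get("name"),
        "posted": (item.get("postedAt") or {}).get("date"),
        "likes": eng.get("likes", 0),
        "comments": eng.get("comments", 0),
        "shares": eng.get("shares", 0),
        "insightful": reactions.get("insightful", 0),
        "text": (item.get("content") or "")[:120],  # trim long bodies for spreadsheet cells
    }
```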

The dataset ships with three pre-built **views** in the Apify Console:

- **Overview** - one row per post, key fields only (great for export)
- **Engagement leaderboard** - sort by likes / comments / shares for viral analysis
- **Media-only** - posts with images, video, or attached articles

***

### 🔌 Integrations & Automation

The output is plain JSON in an Apify dataset, so it slots into any workflow tool:

#### Slack: real-time brand-mention alerts

1. Schedule this Actor every 15-30 minutes with `mentioningCompanies: ["https://www.linkedin.com/company/yourbrand"]` and `datePosted: "1h"`.
2. Use Apify's [Slack integration](https://docs.apify.com/platform/integrations/slack) (or Make / n8n / a Slack incoming-webhook).
3. Get a ping with author + post URL the moment someone tags your brand.

#### Make.com or n8n: daily competitor monitoring

- **Trigger:** scheduled run (daily, 09:00).
- **Step 1:** run the Actor with `authorsCompany: [...competitor LinkedIn URLs...]`, `datePosted: "24h"`.
- **Step 2:** filter items where `engagement.likes > 100`.
- **Step 3:** drop into Notion / Airtable / Google Sheets / a CRM.
- **Step 4:** notify the marketing team.
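
If you run the filter in a code node rather than a visual one, Step 2 is a one-liner over the dataset items (a sketch; the 100-like threshold matches the recipe above):

```python
def high_engagement(items: list, min_likes: int = 100) -> list:
    """Keep only dataset items whose like count clears the threshold (Step 2)."""
    return [i for i in items if (i.get("engagement") or {}).get("likes", 0) > min_likes]
```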

#### Zapier / Pipedream

Trigger a Zap on "new dataset item" and route the post to any of 5,000+ apps in your stack.

#### Native webhooks

Every Apify run can fire a webhook on completion - point it at your own server for direct ingestion into a data warehouse or BI tool.

#### CSV / Excel export

Open the dataset in the Apify Console and download the **Engagement leaderboard** view as CSV / XLSX for pivot-table analysis.

***

### ❓ FAQ

**Q: How recent are the posts?**
A: As fresh as LinkedIn surfaces them in public post search - typically within minutes of publication. Use `datePosted: "1h"` for near-real-time monitoring, or `"24h"` for daily sweeps.

**Q: Are sponsored / promoted posts included?**
A: No. The Actor returns only organic posts that appear in LinkedIn's public post search. Paid ad placements are excluded.

**Q: How fresh are the engagement counts (likes, comments, shares)?**
A: Counts are captured live at the moment the Actor fetches each post - they're a snapshot, not a continuous feed. For trended engagement, schedule the Actor periodically (e.g. every 6 hours) and store the snapshots.
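
Turning stored snapshots into a trend line is a matter of diffing two runs keyed on the post `id`. A minimal sketch under the output schema above:

```python
def engagement_deltas(earlier: list, later: list) -> dict:
    """Map post id -> likes gained between two snapshot runs (posts in both runs only)."""
    before = {p["id"]: (p.get("engagement") or {}).get("likes", 0) for p in earlier}
    return {
        p["id"]: (p.get("engagement") or {}).get("likes", 0) - before[p["id"]]
        for p in later
        if p["id"] in before
    }
```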

**Q: What's the maximum number of posts per search?**
A: Up to **10,000 per run** via `maxResults`. Need more? Slice your query by date window (`datePosted`) or by a list of `authorsCompany` / `mentioningCompanies` and run multiple times - then aggregate downstream. LinkedIn itself caps deep pagination, so very wide single queries plateau before 10k.
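
The slicing suggestion can be scripted: fan one base input out into several smaller runs, each covering a chunk of companies. A sketch; the chunk size of 5 is arbitrary:

```python
def sliced_inputs(base_input: dict, companies: list, chunk_size: int = 5):
    """Yield one run input per chunk of authorsCompany URLs, so no single
    search has to paginate past LinkedIn's depth cap."""
    for i in range(0, len(companies), chunk_size):
        yield {**base_input, "authorsCompany": companies[i:i + chunk_size]}
```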

**Q: Should I use post-search to grab one author's full post history?**
A: No. For one user's full history, use a dedicated `profile-posts` Actor; for one company's full history, use `company-posts`. LinkedIn returns fewer per-author results inside search than via dedicated endpoints.

**Q: `postedLimit` vs `scrapePostedLimit` vs `datePosted` - what's the difference?**
A: The simple field is **`datePosted`** (recommended). It accepts every window from `1h` to `year` plus `any`, and the Actor routes it to the correct underlying API param. `postedLimit` (LinkedIn-side, coarse) and `scrapePostedLimit` (server-side, granular) are still accepted for power users who hit the API directly.

**Q: Can I get private / connection-only posts?**
A: No. Only publicly-visible posts.

**Q: Is this Actor affiliated with LinkedIn?**
A: No. This is an independent tool that calls a third-party data provider (HarvestAPI). It is not endorsed by, sponsored by, or authorized by LinkedIn.

**Q: Will this break LinkedIn's terms of service?**
A: The actor accesses publicly-visible data only. Your downstream use is your responsibility - read LinkedIn's User Agreement and the Master ToS in this README before commercial use, especially for cold outreach.

**Pair this with:** a Comment Reactions actor for deep engagement analysis on the posts you find.

***

### 📐 Schemas Shipped With This Actor

This Actor exposes formal schemas so the Apify Console renders rich UI for inputs, dataset views, and API documentation:

| File | What it does |
|------|--------------|
| `.actor/INPUT_SCHEMA.json` | Renders the input form with sectioned groupings, prefills, and select dropdowns |
| `.actor/output_schema.json` | Top-level **Output** map: where to find scraped posts (JSON / CSV), the run summary, and the original input |
| `.actor/dataset_schema.json` | Defines the post item shape and three pre-built dataset **views**: *Overview*, *Engagement leaderboard*, *Media-only* |
| `.actor/key_value_store_schema.json` | Groups KV-store keys into collections: *Run input*, *Run summary*, *Failed records* |
| `.actor/openapi.json` | OpenAPI 3.0 spec describing how to call the Actor through Apify's run-sync API |

***

### 🔗 Related Scrapers

Go deeper into LinkedIn engagement intelligence:

- [LinkedIn Post Comments & Reactions Scraper](https://apify.com/unseenuser/linkedin-post-comment-reaction-extractor-no-cookies) - extract every commenter and reactor on the posts you find here
- [LinkedIn User Activity Scraper](https://apify.com/unseenuser/LinkedIn-user-comments-reactions) - research what specific users are commenting on and reacting to
- [LinkedIn Profile Scraper + Email Enrichment](https://apify.com/unseenuser/LinkedIn-Profile) - enrich post authors with verified emails for outreach

[See all 16 scrapers by unseenuser →](https://apify.com/unseenuser)

***

## Apify Actor - Terms of Service

**Version:** 4.0
**Effective Date:** May 5, 2026

***

### 0. ACCEPTANCE BY USE - IMPORTANT

**Read this section first.**

These Terms of Service ("Terms") form a binding legal agreement between you ("User," "you," "your") and **UnseenUser**, the Publisher of this Apify actor ("UnseenUser," "the Publisher," "we," "us," "our").

#### 0.1 How You Accept These Terms

You accept these Terms by any of the following actions, each of which constitutes a clear, affirmative act of acceptance:

- **(a) Running the Actor** - Initiating any execution of the Actor on the Apify platform
- **(b) Using any output returned by the Actor for any purpose**
- **(c) Continuing to access the Actor's listing or documentation after these Terms are visible**

#### 0.2 Continuing Acceptance

Each time you run the Actor or use its outputs, you reaffirm your acceptance of the then-current Terms. If you do not agree to these Terms or any subsequent update, you must stop using the Actor immediately.

#### 0.3 No Anonymous Acceptance

You cannot disclaim acceptance by:

- Failing to read these Terms before running the Actor
- Running the Actor through automated systems
- Sharing your Apify account with others who may not have read these Terms

By the act of running the Actor on Apify, you bind yourself, your organization (if applicable), and any individuals or systems acting on your behalf or under your authority.

#### 0.4 If You Do Not Accept

If you do not agree to these Terms, you must not run the Actor. No use is authorized without acceptance.

***

### PREAMBLE - UNDERSTANDING THE ARCHITECTURE

Before using the Actor, please understand the technical architecture of the service:

#### The Data Flow

```
You (User) → Apify Platform → Actor (software) → Third-Party API → Source Platform
                                                       ↓
You (User) ← Apify Platform ← Actor (software) ← Third-Party API
```

#### What Each Party Does

- **You (the User):** Run the Actor on the Apify platform with input parameters you choose
- **Apify:** Operates the cloud infrastructure that hosts and executes Actors. Apify is a Czech-incorporated company (Apify Technologies s.r.o.) governed by its own Terms of Service.
- **The Publisher (us):** Publishes software code (the Actor) on Apify's platform. The Actor is a thin wrapper that translates your input into requests to a third-party API and returns the API's responses to you. The Publisher does not operate scraping infrastructure. The Publisher does not store or retain data returned by the Actor. The Publisher does not see, log, or process the personal data of any individuals returned in the Actor's outputs beyond what is incidental to passing the data through.
- **Third-Party API Provider:** HarvestAPI (`https://harvest-api.com`) or Scrape Creators (`https://scrapecreators.com`). These are independent third-party companies that operate scraping infrastructure and return data from source platforms.
- **Source Platform:** LinkedIn, TikTok, YouTube, Reddit, Linktree, etc. These are the platforms whose publicly visible data is accessed by the Third-Party API Providers.

#### Why This Matters

Your relationship with the Publisher is that of a software user to a software vendor. The Publisher has the responsibilities of a software vendor (functional code, accurate documentation) and the limits of one (the Publisher is not responsible for how you use the data you obtain).

These Terms operate alongside but do not replace:

- Apify's Terms of Service and Acceptable Use Policy (governing your relationship with Apify)
- HarvestAPI Terms of Service and Scrape Creators Terms of Service (governing the underlying data infrastructure)
- Source Platform terms (LinkedIn, TikTok, etc.) governing the public data accessed
- Applicable law in your jurisdiction and the jurisdictions of data subjects

These Terms incorporate the actor-specific addendum published in each Actor's individual listing ("Addendum"). In the event of a conflict, the more restrictive provision applies.

***

### 1. NATURE OF THE SERVICE

#### 1.1 What the Actor Is

The Actor is a software program published on the Apify platform. Each Actor:

- **(a)** Accepts structured input from you on the Apify platform
- **(b)** Translates that input into HTTP requests to a third-party API operated by HarvestAPI or Scrape Creators
- **(c)** Receives HTTP responses from that third-party API
- **(d)** Returns the response data to you in a structured format on the Apify platform

The Actor's source code is hosted on Apify's infrastructure. The Actor runs in Apify's cloud, not on the Publisher's servers. The Publisher operates no servers running the Actor.

#### 1.2 What the Actor Is Not

The Actor is **not**:

- **(a)** A scraping tool - the Publisher does not operate scraping infrastructure, proxies, headless browsers, or fake accounts
- **(b)** A direct connection to any source platform - connections to source platforms are made by HarvestAPI / Scrape Creators
- **(c)** A data storage or data retention service - the Publisher does not maintain a database of any data the Actor returns
- **(d)** A licensed access channel to LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta, Linktree, or any other source platform
- **(e)** Affiliated with, endorsed by, sponsored by, or authorized by any source platform

#### 1.3 The Publisher's Limited Role

The Publisher's role is limited to:

- **(a)** Designing and writing the Actor's source code
- **(b)** Publishing the Actor on the Apify Store
- **(c)** Maintaining the Actor (updating it when API providers change schemas)
- **(d)** Providing documentation and customer support via Apify's contact mechanism

The Publisher is a software vendor, similar to a developer who publishes an app on the Apple App Store or Google Play Store. The Publisher is **not** a data provider, data broker, data processor, or data controller for purposes of GDPR, CCPA, Israel's Privacy Protection Law, or equivalent.

#### 1.4 The Third-Party API Providers' Role

HarvestAPI (`https://harvest-api.com`) and Scrape Creators (`https://scrapecreators.com`) are independent third-party companies. They:

- **(a)** Operate the actual data scraping infrastructure
- **(b)** Maintain relationships with source platforms (or accept the operational risk of accessing public data without such relationships)
- **(c)** Are themselves Apify publishers (HarvestAPI publishes 9+ actors directly; Scrape Creators publishes 10+)
- **(d)** Provide their own Terms of Service governing their operations
- **(e)** Are responsible for compliance obligations relating to the data collection itself

The Publisher is a customer of these providers. The Publisher is not their agent, partner, or representative.

***

### 2. WHO MAY USE THE ACTOR

#### 2.1 Eligibility

You may use the Actor only if:

- **(a)** You are at least 18 years old or the age of majority in your jurisdiction
- **(b)** You have legal capacity to enter into binding contracts
- **(c)** You are not located in or resident of a country subject to comprehensive sanctions by the United States, European Union, United Kingdom, or Israel
- **(d)** You are not on any prohibited persons list

#### 2.2 User Representations

By using any Actor, you represent and warrant that:

- **(a) Truthful identity:** Information you provide about your identity and intended use is accurate
- **(b) Lawful intent:** Your intended use complies with applicable law in your jurisdiction
- **(c) Source platform compliance:** You will independently comply with the Terms of Service of any source platform whose data you obtain through the Actor
- **(d) Data subject rights:** Where Actor outputs include personal data, you will respect data subject rights under applicable law
- **(e) No prohibited use:** You will not use the Actor for any of the purposes prohibited in Section 4

These representations are continuous - they must remain true throughout your use.

***

### 3. PERMITTED USES

The Actor may be used for any lawful purpose, including:

- Market research and competitive analysis
- Academic research
- Journalism and investigative reporting
- Internal business intelligence
- Brand monitoring
- Recruitment research where consistent with applicable employment law
- Building products that further process publicly available information lawfully

Specific permitted uses for each Actor are described in that Actor's individual listing and Addendum.

***

### 4. PROHIBITED USES

You may not use the Actor for any of the following:

#### 4.1 Illegal Activity

Activity illegal under the law of your jurisdiction or the jurisdiction of any data subjects.

#### 4.2 Harassment, Stalking, and Personal Targeting

- Compiling profiles for harassment, stalking, or doxxing
- Tracking individuals' movements or activities without their knowledge
- Building profiles of journalists, activists, dissidents, or vulnerable populations for retaliatory purposes

#### 4.3 Discrimination

- Using outputs for discriminatory employment, lending, housing, or insurance decisions based on protected characteristics
- Building lists for discriminatory purposes

#### 4.4 Spam and Unsolicited Commercial Communication

- Sending unsolicited marketing in violation of CAN-SPAM, CASL, GDPR, PECR, Israeli Anti-Spam Law (סעיף 30א לחוק התקשורת), or equivalent laws
- Building "lead lists" from scraped contacts without proper consent infrastructure
- Reselling contact data for spam purposes

#### 4.5 Fraud and Deception

- Identity theft or impersonation
- Generation of fake reviews, testimonials, or coordinated inauthentic behavior
- Election interference or political disinformation
- Securities fraud

#### 4.6 Source Platform Abuse

- Using outputs to circumvent technical protection measures of source platforms
- Creating fake accounts on source platforms based on Actor outputs
- Vote manipulation, engagement manipulation, or platform algorithm gaming
- Building services that competitively substitute for source platforms

#### 4.7 Reselling the Actor's Service

- Reselling raw Actor outputs as your own data product or scraping-as-a-service
- Sharing your Apify credentials to provide third parties indirect access
- Building competing API services using Actor outputs

#### 4.8 AI Training Without Authorization

- Using Actor outputs as training data for commercial AI/ML models without separate licensing authority from the source platform

#### 4.9 Sensitive Targeting

- Specifically targeting or profiling based on health conditions, sexual orientation, religious beliefs, political opinions, or other sensitive characteristics
- Targeting children under 16 (or local age of consent for data processing)

#### 4.10 Privacy Law Violations

- Processing personal data of EU/UK/California/Israeli residents without complying with applicable privacy law
- Failing to honor data subject access, deletion, or objection requests
- Processing data for purposes incompatible with its publication context

***

### 5. SOURCE PLATFORM TERMS - YOUR RESPONSIBILITY

#### 5.1 Acknowledgment

The Actor accesses publicly visible data on third-party platforms ("Source Platforms") through the Third-Party API Providers (HarvestAPI / Scrape Creators). Source Platforms include LinkedIn, TikTok, YouTube, Reddit, X (Twitter), Meta/Facebook, Linktree, Komi, Pillar, Linkbio, Linkme, and Amazon.

#### 5.2 Your Sole Responsibility

You acknowledge:

- **(a)** You are solely responsible for ensuring your downstream use of data obtained through the Actor complies with the Source Platform's Terms of Service
- **(b)** The Publisher makes no representation that any specific use is permitted under any Source Platform's terms
- **(c)** The Third-Party API Providers, not the Publisher, bear responsibility for the lawfulness of the data collection itself
- **(d)** You should review Source Platform terms before commercial use:
  - LinkedIn: https://www.linkedin.com/legal/user-agreement
  - TikTok: https://www.tiktok.com/legal/page/global/terms-of-service/en
  - YouTube: https://www.youtube.com/static?template=terms
  - X: https://twitter.com/en/tos
  - Reddit: https://www.redditinc.com/policies/user-agreement
  - Meta: https://www.facebook.com/legal/terms
  - Linktree: https://linktr.ee/s/terms/

#### 5.3 Cease-and-Desist Compliance

If you receive a cease-and-desist letter or other legal demand from a Source Platform regarding your use of Actor outputs, you must:

- **(a)** Cease the contested use immediately
- **(b)** Notify UnseenUser within 48 hours via UnseenUser's Apify profile contact form (https://apify.com/UnseenUser)
- **(c)** Cooperate with the Publisher as needed to mitigate
- **(d)** Not assert against the Publisher any claim arising from your inability to use the Actor for that Source Platform

***

### 6. DATA PROTECTION - REFLECTING ACTUAL ARCHITECTURE

#### 6.1 Roles Under Privacy Law

For purposes of GDPR, UK GDPR, CCPA, Israel's Privacy Protection Law (PPL) including Amendment 13, and equivalents:

- **You (the User)** are the Data Controller of any personal data you obtain through the Actor and subsequently process for your own purposes
- **HarvestAPI and Scrape Creators** are the entities that collect data from source platforms - they bear the responsibilities of data processors or controllers (depending on context) for the collection itself
- **The Publisher** acts solely as a software vendor, not as a data controller or processor, because the Publisher does not store, retain, or substantively process personal data - the Actor merely passes API responses through

#### 6.2 No Data Retention by the Publisher

The Publisher confirms:

- **(a)** The Publisher does not maintain a database of personal data obtained through the Actor
- **(b)** The Actor passes data from the Third-Party API directly to you on the Apify platform - data does not flow through the Publisher's infrastructure
- **(c)** Apify's standard execution and operational logging may include limited information about Actor runs (input parameters, run duration, data volume) - this is governed by Apify's own privacy practices
- **(d)** The Publisher does not access, view, or analyze your Actor outputs except as needed for technical support if you specifically share them with the Publisher

#### 6.3 Your Obligations as Data Controller

Where your use of the Actor involves processing personal data, you are responsible for:

- **(a)** Establishing a lawful basis for your processing (consent, legitimate interest with documented balancing test, contract, etc.)
- **(b)** Providing transparent notice to data subjects as required by applicable law
- **(c)** Honoring data subject access, rectification, erasure, restriction, and portability requests
- **(d)** Implementing appropriate security measures
- **(e)** Conducting Data Protection Impact Assessments where required
- **(f)** Appointing a Data Protection Officer if your operations require one
- **(g)** Registering databases with applicable supervisory authorities
- **(h)** Honoring opt-out requests for direct marketing
- **(i)** Cross-border transfer safeguards where data crosses borders

#### 6.4 Israel's Amendment 13 - User Compliance

If your use of the Actor involves Israeli residents' personal data, you must comply with the Privacy Protection Law as amended (Amendment 13, effective August 14, 2025). These obligations are yours as the data controller, not the Publisher's as the software vendor.

#### 6.5 Sensitive Data Targeting Restrictions

You will not use the Actor to specifically target, profile, or build datasets focused on:

- Health or medical conditions
- Religious beliefs
- Political opinions
- Sexual orientation or gender identity
- Genetic or biometric data
- Criminal history
- Children under 16

***

### 7. INTELLECTUAL PROPERTY

#### 7.1 Actor Code

The Actor's source code, schemas, documentation, and branding are owned by the Publisher. You receive a limited, non-exclusive, non-transferable, revocable license to use the Actor for permitted purposes during your active subscription/run with Apify.

#### 7.2 Output Data

The Publisher claims no ownership over the public data the Actor returns. Source Platforms may have copyright, database rights, or other rights in their data; data subjects may have copyright in user-generated content. Your use of output data must respect these rights independently.

#### 7.3 Restrictions

You may not reverse engineer, decompile, or reuse the Actor's code in a competing Actor.

#### 7.4 Feedback

Feedback you provide may be used by the Publisher to improve products without compensation to you.

***

### 8. PRICING AND PAYMENT

#### 8.1 Apify Platform Billing

Pricing is administered through Apify's pricing models. Apify processes all payments. Apify's payment terms govern refunds and disputes.

#### 8.2 Pricing Changes

The Publisher may change Actor pricing with at least 14 days' notice via the Actor's Apify listing.

#### 8.3 No Refunds for Misuse

If your access is suspended or terminated for breach of these Terms, you forfeit any unused balance and are not entitled to refunds.

***

### 9. SERVICE AVAILABILITY AND CHANGES

#### 9.1 No Uptime Guarantee

The Actor depends on:

- **(a)** The Apify platform
- **(b)** Underlying API providers (HarvestAPI, Scrape Creators)
- **(c)** Source Platforms' continued public accessibility

Any of these may change behavior, restrict access, or become unavailable without notice. The Publisher makes no uptime guarantees.

#### 9.2 Service Discontinuation

The Publisher may discontinue any Actor at any time. Reasonable notice will be provided when feasible.

***

### 10. DISCLAIMERS

#### 10.1 "AS IS" Service

THE ACTOR IS PROVIDED "AS IS" AND "AS AVAILABLE" WITHOUT WARRANTIES OF ANY KIND, INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR ACCURACY OF DATA.

#### 10.2 No Representation of Lawfulness

The Publisher makes no representation that your specific use of the Actor or the data it returns is lawful in your jurisdiction or under any Source Platform's terms. The burden of determining lawfulness for your use case is yours.

#### 10.3 No Endorsement of Source Content

Content returned by the Actor was created by third parties. The Publisher does not endorse, verify, or take responsibility for it.

***

### 11. LIMITATION OF LIABILITY

#### 11.1 Aggregate Liability Cap

TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL THE AGGREGATE LIABILITY OF THE PUBLISHER FOR ALL CLAIMS RELATING TO THE ACTOR EXCEED THE GREATER OF:

- **(a)** ONE HUNDRED U.S. DOLLARS (US $100), OR
- **(b)** THE AMOUNTS YOU PAID THROUGH APIFY FOR USE OF THE ACTOR IN THE THREE (3) MONTHS IMMEDIATELY PRECEDING THE EVENT

#### 11.2 Excluded Damages

THE PUBLISHER IS NOT LIABLE FOR INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES, OR FOR LOSS OF PROFITS, REVENUE, OR DATA, EVEN IF ADVISED OF THE POSSIBILITY.

#### 11.3 Time Limit

Any claim must be brought within one (1) year of the event.

***

### 12. INDEMNIFICATION

#### 12.1 Your Indemnification of the Publisher

You agree to defend, indemnify, and hold harmless the Publisher from any:

- Claims arising from your use of the Actor
- Claims arising from your violation of these Terms
- Claims arising from your violation of any law (including privacy law)
- Claims arising from your violation of any Source Platform's Terms of Service
- Claims arising from your processing of personal data obtained through the Actor
- Reasonable attorneys' fees and costs of defending such claims

#### 12.2 Defense

The Publisher may assume defense at your expense. You will cooperate with the Publisher's defense.

#### 12.3 Scope

The indemnification covers reasonable, foreseeable third-party claims arising from your use. It does not extend to:

- Claims arising from the Publisher's gross negligence or willful misconduct
- Claims regarding the Actor's source code itself (those are the Publisher's responsibility)
- Claims regarding the Third-Party API Provider's data collection (those are their responsibility)

***

### 13. SUSPENSION AND TERMINATION

#### 13.1 Termination by the Publisher

The Publisher may terminate your access for material breach, illegal use, breach of warranty, or upon credible legal demand.

#### 13.2 Effects of Termination

Your license ends, you must cease use, and applicable provisions survive.

#### 13.3 Termination by You

You may stop using the Actor at any time on Apify.

***

### 14. DISPUTE RESOLUTION

#### 14.1 Informal Resolution First

Send a detailed written description of the dispute via UnseenUser's Apify profile contact form (https://apify.com/UnseenUser) and wait 60 days for a resolution attempt before filing any formal claim.

#### 14.2 Governing Law

These Terms are governed by the substantive laws of the State of Israel, without regard to conflict of law principles.

#### 14.3 Exclusive Jurisdiction

Any dispute shall be brought exclusively in the competent civil courts of Tel Aviv-Jaffa, Israel.

#### 14.4 No Class Actions

You agree to bring claims only in your individual capacity.

#### 14.5 Attorneys' Fees

The prevailing party recovers reasonable attorneys' fees.

***

### 15. MISCELLANEOUS

#### 15.1 Entire Agreement

These Terms (with Addendum and incorporated documents) are the entire agreement.

#### 15.2 Severability

Unenforceable provisions are reformed to the minimum extent necessary or severed.

#### 15.3 Assignment

You may not assign without the Publisher's consent. The Publisher may assign to affiliates, successors, or acquirers.

#### 15.4 Force Majeure

Neither party is liable for failure due to events beyond reasonable control, including changes by Source Platforms or Third-Party API Providers, or actions by Apify.

#### 15.5 Third-Party Beneficiaries

Apify, HarvestAPI, and Scrape Creators are intended third-party beneficiaries of Sections 4 (Prohibited Uses), 5 (Source Platform Compliance), and 12 (Indemnification).

#### 15.6 Survival

Sections 0 (Acceptance), 4, 5, 6, 7, 10, 11, 12, 14, and 15 survive termination.

#### 15.7 Language

English controls. Translations are for convenience only.

#### 15.8 Publisher Identification for Legal Process

The Publisher operates on the Apify platform under the username **UnseenUser** (`apify.com/UnseenUser`). The Publisher is a registered legal entity. Upon receipt of valid legal process (subpoena, court order, or equivalent) directed through Apify's official channels, the Publisher's full legal identity may be disclosed as required by law. This Section ensures that you have a valid path to legal recourse if needed.

***

### 16. ACKNOWLEDGMENT

By using any Actor, you acknowledge that:

- **(a)** You have read these Terms
- **(b)** You understand the architecture: you are using software (the Actor) on Apify's platform that calls third-party APIs
- **(c)** You accept responsibility for your use, including for compliance with Source Platform terms
- **(d)** Your indemnification obligations cover third-party claims arising from your use
- **(e)** Disputes are resolved in Israeli courts
- **(f)** The Publisher's identity, while not publicly disclosed in this listing, can be obtained through valid legal process via Apify

For questions, use UnseenUser's Apify profile contact form (https://apify.com/UnseenUser) before running the Actor.

***

## 🛡️ Actor-Specific ToS Addendum - LinkedIn Post Search

This addendum supplements the Master Terms of Service V4.0. By running this Actor, you accept both the Master ToS and this addendum.

#### A. Architectural Disclosure

This Actor is a software wrapper. It accepts your input parameters, calls HarvestAPI's `/linkedin/post-search` and related endpoints, and returns the response data to you on the Apify platform. The Publisher does not store, log, or substantively process the data returned.

#### B. Nature of Data Returned

Public LinkedIn posts (content shared publicly by individuals or companies), author information (names, profile URLs, photos), engagement data (likes, comments, shares, reactions), article metadata, and newsletter info. **Post content includes substantial personal data.**

#### C. Permitted Use Cases

Brand mention monitoring, industry trend analysis, influencer identification, content benchmarking, academic research, journalism on public statements by public figures, internal market research and competitive intelligence.

#### D. Specifically Prohibited Uses

In addition to Master ToS Section 4 prohibitions, you may **NOT**:

- Build cold-engagement automation - bots that auto-comment, auto-react, or auto-DM post authors
- Republish full posts in a way that competes with LinkedIn or substitutes for visiting LinkedIn
- Profile individuals based on post content for discriminatory purposes
- Track political views, religious beliefs, or other sensitive opinions expressed in posts
- Build "lead lists" from post engagers without proper consent infrastructure for outreach
- Republish individual users' personal stories (e.g., job loss, illness, family events) in commercial contexts

#### E. LinkedIn Platform ToS Considerations

LinkedIn has aggressively pursued companies that automate engagement or build competing professional networks. Using post data for sales prospecting requires careful anti-spam compliance. If LinkedIn issues a cease-and-desist, notify the Publisher within 48 hours.

#### F. Author Personal Data

Author names and photos are personal data. Cold outreach to post authors based on post content must comply with applicable anti-spam laws. Do not use post engagement data (who liked/commented) to build comprehensive engagement profiles of individuals.

#### G. Sensitive Content and Public Figures

LinkedIn posts often contain personal stories, political/social commentary, or confidential business information shared inadvertently. Do not weaponize personal disclosures against post authors. Do not use sensitive content for discriminatory decision-making.

# Actor input Schema

## `mode` (type: `string`):

Pick 'Search posts' if you want to find posts. Pick 'Get details' if you already have post URLs and want their full data. (A third value, `search_and_enrich`, combines both in one run.)

## `searchKeywords` (type: `string`):

What to search for. Examples: 'AI safety', 'remote work', '#GenAI'. You can leave this blank if you only want to filter by author / company / mentions further down.

## `maxResults` (type: `integer`):

Hard cap. Each post costs $0.001 (search) or $0.005 (with full details).
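
The per-post event prices above can be turned into a quick worst-case cost estimate before you start a run. A minimal sketch, using the $0.001 / $0.005 figures from the description (the helper name is ours, not part of any client library):

```python
def estimate_cost(max_results: int, with_details: bool = False) -> float:
    """Estimate the worst-case charge for a run, using the per-post
    event prices quoted above: $0.001 per search result, or $0.005
    when full details are fetched."""
    if not 1 <= max_results <= 10000:
        raise ValueError("maxResults must be between 1 and 10000")
    per_post = 0.005 if with_details else 0.001
    return round(max_results * per_post, 4)
```

For example, `estimate_cost(100)` returns `0.1` and `estimate_cost(100, with_details=True)` returns `0.5`. The actual charge can be lower if fewer posts match your filters.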

## `datePosted` (type: `string`):

Only return posts from this time window.

## `sortBy` (type: `string`):

Choose 'Relevance' for the best keyword matches, or 'Date' for the newest posts first.

## `contentType` (type: `string`):

Limit to one kind of post. Leave blank to allow all.

## `authorsCompany` (type: `array`):

Most popular advanced filter. Paste LinkedIn company URLs (e.g. https://www.linkedin.com/company/stripe). One per line. URLs or company IDs both work.
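
Since this filter (and `companies` below) accepts either full company URLs or bare IDs, a small normalizer can keep mixed input lists tidy. A sketch (the function name is ours; the Actor accepts both forms, so this step is optional):

```python
from urllib.parse import urlparse

def company_slug(url_or_id: str) -> str:
    """Return the company slug/ID from a LinkedIn company URL,
    or the value unchanged if it already looks like a bare ID."""
    value = url_or_id.strip()
    if "linkedin.com" not in value:
        return value  # already a bare slug or numeric ID
    path = urlparse(value).path.strip("/")  # e.g. "company/stripe"
    parts = path.split("/")
    return parts[1] if len(parts) > 1 and parts[0] == "company" else value
```

For example, `company_slug("https://www.linkedin.com/company/stripe")` returns `"stripe"`, while a bare ID passes through unchanged.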

## `authorProfiles` (type: `array`):

Paste LinkedIn profile URLs (e.g. https://www.linkedin.com/in/satyanadella) or profile IDs. One per line.

## `companies` (type: `array`):

Posts published BY these company pages (not by their employees). Paste company URLs or IDs.

## `authorKeywords` (type: `string`):

Match only authors whose LinkedIn profile contains these words. Examples: 'VP Engineering', 'CFO', 'recruiter'.

## `authorsIndustryIds` (type: `array`):

LinkedIn industry IDs of the author's company (e.g. 4 = Software). Find IDs at https://docs.harvest-api.com/linkedin-api-reference/utility/get-industries.

## `group` (type: `string`):

Restrict to one LinkedIn group. Paste the group URL or its ID.

## `mentioningCompanies` (type: `array`):

The brand-monitoring filter. Paste LinkedIn company URLs or IDs - any post tagging or @-mentioning them is returned.

## `mentioningMembers` (type: `array`):

Paste LinkedIn profile URLs or IDs.

## `postUrls` (type: `array`):

Paste full LinkedIn post URLs, one per line. Example: https://www.linkedin.com/posts/example-activity-1234567890

## Actor input object example

```json
{
  "mode": "search",
  "searchKeywords": "AI safety",
  "maxResults": 100,
  "datePosted": "any",
  "sortBy": "relevance",
  "authorsCompany": [],
  "mentioningCompanies": [],
  "postUrls": []
}
```
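
If you assemble inputs programmatically, it can pay to validate them against the schema's enums and bounds before calling the API. A minimal sketch under those published constraints (`build_input` is our name, not part of any client library):

```python
MODES = {"search", "get_details", "search_and_enrich"}
DATE_WINDOWS = {"any", "1h", "24h", "week", "month", "3months", "6months", "year"}
SORTS = {"relevance", "date"}

def build_input(search_keywords="", mode="search", max_results=100,
                date_posted="any", sort_by="relevance", **filters):
    """Assemble a run input and check it against the published
    schema constraints before sending it to the API."""
    if mode not in MODES:
        raise ValueError(f"mode must be one of {sorted(MODES)}")
    if date_posted not in DATE_WINDOWS:
        raise ValueError(f"datePosted must be one of {sorted(DATE_WINDOWS)}")
    if sort_by not in SORTS:
        raise ValueError("sortBy must be 'relevance' or 'date'")
    if not 1 <= max_results <= 10000:
        raise ValueError("maxResults must be between 1 and 10000")
    return {"mode": mode, "searchKeywords": search_keywords,
            "maxResults": max_results, "datePosted": date_posted,
            "sortBy": sort_by, **filters}
```

Extra filters such as `mentioningCompanies` or `postUrls` pass through via keyword arguments, e.g. `build_input("AI safety", mentioningCompanies=["https://www.linkedin.com/company/stripe"])`.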

# Actor output Schema

## `posts` (type: `string`):

Every matching post returned by this run, one JSON object per item. Includes content, author, engagement metrics, attached media, and timestamps. This is the primary output most users consume.

## `postsCsv` (type: `string`):

Same posts as above, but returned as CSV - convenient for spreadsheet exports.
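
Because `postsCsv` is a plain CSV string, the standard library can load it directly. A sketch (the header names in the sample are illustrative only; check your own export for the actual columns):

```python
import csv
import io

def csv_rows(posts_csv: str) -> list[dict]:
    """Parse the postsCsv string into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(posts_csv)))

# Illustrative sample; real exports will have the Actor's actual columns.
sample = "author,likes\nJane Doe,42\nJohn Roe,7\n"
rows = csv_rows(sample)
# rows[0] == {"author": "Jane Doe", "likes": "42"}
```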

## `summary` (type: `string`):

A small JSON object with mode, started/finished timestamps, and counts (searched, enriched, failed). Stored in the default key-value store under the key `RUN_SUMMARY`.
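
Once fetched, the summary is easy to turn into a one-line run report. A sketch, assuming the count fields named in the description above (adjust the keys to your actual record):

```python
def summarize(run_summary: dict) -> str:
    """Render the RUN_SUMMARY counts as a one-line report.
    Field names follow the description above (mode plus
    searched/enriched/failed counts)."""
    return (f"{run_summary.get('mode', '?')}: "
            f"searched={run_summary.get('searched', 0)} "
            f"enriched={run_summary.get('enriched', 0)} "
            f"failed={run_summary.get('failed', 0)}")

# In a real run you would first fetch the record, e.g. with apify-client:
#   record = client.key_value_store(run["defaultKeyValueStoreId"]).get_record("RUN_SUMMARY")
#   print(summarize(record["value"]))
```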

## `input` (type: `string`):

The exact JSON input the run was started with. Useful for reproducibility and debugging.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "searchKeywords": "AI safety",
    "maxResults": 100,
    "authorsCompany": [],
    "mentioningCompanies": [],
    "postUrls": []
};

// Run the Actor and wait for it to finish
const run = await client.actor("unseenuser/linkedin-post-seach-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "searchKeywords": "AI safety",
    "maxResults": 100,
    "authorsCompany": [],
    "mentioningCompanies": [],
    "postUrls": [],
}

# Run the Actor and wait for it to finish
run = client.actor("unseenuser/linkedin-post-seach-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "searchKeywords": "AI safety",
  "maxResults": 100,
  "authorsCompany": [],
  "mentioningCompanies": [],
  "postUrls": []
}' |
apify call unseenuser/linkedin-post-seach-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=unseenuser/linkedin-post-seach-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "🔍 LinkedIn Posts Scraper - Keyword & Hashtag [NO COOKIES] ✅",
        "description": "Search LinkedIn's public post graph by keyword, hashtag, or author. Extract content, engagement metrics, reaction breakdowns, author profile data, and timestamps. No login, no cookies. Built for social listening, B2B competitive intelligence, and thought-leader research.",
        "version": "0.0",
        "x-build-id": "SLOyvEQOy9aFFs6UR"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/unseenuser~linkedin-post-seach-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-unseenuser-linkedin-post-seach-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/unseenuser~linkedin-post-seach-scraper/runs": {
            "post": {
                "operationId": "runs-sync-unseenuser-linkedin-post-seach-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/unseenuser~linkedin-post-seach-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-unseenuser-linkedin-post-seach-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "mode"
                ],
                "properties": {
                    "mode": {
                        "title": "What do you want to do?",
                        "enum": [
                            "search",
                            "get_details",
                            "search_and_enrich"
                        ],
                        "type": "string",
                        "description": "Pick 'Search posts' if you want to find posts. Pick 'Get details' if you already have post URLs and want their full data.",
                        "default": "search"
                    },
                    "searchKeywords": {
                        "title": "Keyword or hashtag",
                        "type": "string",
                        "description": "What to search for. Examples: 'AI safety', 'remote work', '#GenAI'. You can leave this blank if you only want to filter by author / company / mentions further down."
                    },
                    "maxResults": {
                        "title": "How many posts to return",
                        "minimum": 1,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "Hard cap. Each post costs $0.001 (search) or $0.005 (with full details).",
                        "default": 100
                    },
                    "datePosted": {
                        "title": "Posted within",
                        "enum": [
                            "any",
                            "1h",
                            "24h",
                            "week",
                            "month",
                            "3months",
                            "6months",
                            "year"
                        ],
                        "type": "string",
                        "description": "Only return posts from this time window.",
                        "default": "any"
                    },
                    "sortBy": {
                        "title": "Sort by",
                        "enum": [
                            "relevance",
                            "date"
                        ],
                        "type": "string",
                        "description": "Choose 'Relevance' for the best keyword matches, or 'Date' for the newest posts first.",
                        "default": "relevance"
                    },
                    "contentType": {
                        "title": "Content type",
                        "enum": [
                            "videos",
                            "images",
                            "live_videos",
                            "documents",
                            "collaborative_articles",
                            "jobs"
                        ],
                        "type": "string",
                        "description": "Limit to one kind of post. Leave blank to allow all."
                    },
                    "authorsCompany": {
                        "title": "Posts by employees of these companies",
                        "type": "array",
                        "description": "Most popular advanced filter. Paste LinkedIn company URLs (e.g. https://www.linkedin.com/company/stripe). One per line. URLs or company IDs both work.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "authorProfiles": {
                        "title": "Posts by these specific people",
                        "type": "array",
                        "description": "Paste LinkedIn profile URLs (e.g. https://www.linkedin.com/in/satyanadella) or profile IDs. One per line.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "companies": {
                        "title": "Posts from these company pages",
                        "type": "array",
                        "description": "Posts published BY these company pages (not by their employees). Paste company URLs or IDs.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "authorKeywords": {
                        "title": "Author's profile must contain these words",
                        "type": "string",
                        "description": "Match only authors whose LinkedIn profile contains these words. Examples: 'VP Engineering', 'CFO', 'recruiter'."
                    },
                    "authorsIndustryIds": {
                        "title": "Authors in these industries (advanced)",
                        "type": "array",
                        "description": "LinkedIn industry IDs of the author's company (e.g. 4 = Software). Find IDs at https://docs.harvest-api.com/linkedin-api-reference/utility/get-industries.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "group": {
                        "title": "Inside a LinkedIn group (advanced)",
                        "type": "string",
                        "description": "Restrict to one LinkedIn group. Paste the group URL or its ID."
                    },
                    "mentioningCompanies": {
                        "title": "Posts that mention these companies",
                        "type": "array",
                        "description": "The brand-monitoring filter. Paste LinkedIn company URLs or IDs - any post tagging or @-mentioning them is returned.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "mentioningMembers": {
                        "title": "Posts that @mention these people",
                        "type": "array",
                        "description": "Paste LinkedIn profile URLs or IDs.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "postUrls": {
                        "title": "Post URLs to fetch",
                        "type": "array",
                        "description": "Paste full LinkedIn post URLs, one per line. Example: https://www.linkedin.com/posts/example-activity-1234567890",
                        "items": {
                            "type": "string"
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
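The `usage` object above counts raw platform events (compute units, storage reads/writes, data transfer), while `usageUsd` gives the cost of each event class in USD and `usageTotalUsd` is their sum. A minimal sketch of consuming these fields, using sample values mirroring the schema's `example` entries (a real run object would come from the Apify API, e.g. via the official client):

```python
# Sample run fragment shaped like the schema above; values taken from
# the schema's "example" fields. In practice you would fetch this from
# the Apify API rather than hard-coding it.
run = {
    "usage": {
        "ACTOR_COMPUTE_UNITS": 0,
        "KEY_VALUE_STORE_WRITES": 1,
    },
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
    },
}

def total_usage_usd(run: dict) -> float:
    """Sum the per-event USD costs; should agree with usageTotalUsd."""
    return sum(run.get("usageUsd", {}).values())

# Sanity check: per-event costs add up to the reported total
# (compare with a tolerance, since these are floats).
assert abs(total_usage_usd(run) - run["usageTotalUsd"]) < 1e-12
```

Summing `usageUsd` yourself is mainly useful as a cross-check or for per-event cost breakdowns; for billing, trust the platform's `usageTotalUsd` field.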
