# 🌱 Greenhouse Jobs Scraper (`skootle/greenhouse-jobs`) Actor

Scrape every open job at any Greenhouse company. Title, location, departments, remote flag, comp, seniority, posted date, hiring-velocity signal, drop-into-LLM card. Watchlist mode emits only new jobs. Export, run via API, schedule, or integrate with other tools.

- **URL:** https://apify.com/skootle/greenhouse-jobs.md
- **Developed by:** [Skootle](https://apify.com/skootle) (community)
- **Categories:** Jobs, AI, Lead generation
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating:** No ratings yet

## Pricing

from $1.50 / 1,000 job listing records

This Actor is paid per event: you pay a fixed price for specific events rather than for the underlying Apify platform usage.
Because this Actor supports Apify Store discounts, the price drops as you move to a higher subscription plan.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

![Greenhouse Jobs Scraper](https://raw.githubusercontent.com/kesjam/skootle-actors-assets/main/heroes/greenhouse-jobs.png)

### TL;DR

BD reps and recruiters waste 20 minutes a day clicking through Greenhouse boards at the 200 companies they are tracking. One run pulls every open job at every company on your list, normalized, with a hiring-velocity signal per company. Watchlist mode emits only jobs that appeared since your last run, so a daily schedule replaces a daily manual sweep. Export to your CRM or ATS in one call.

<!-- skootle:review-cta -->
> Try it on a small dataset, then let us know what you think in a [review](https://apify.com/skootle/greenhouse-jobs/reviews).

---

### What does Greenhouse Jobs do?

Greenhouse Jobs pulls every open role from the public Greenhouse job board of any company you point it at and returns the data normalized and ready to use. You hand it a list of board slugs (the part after `boards.greenhouse.io/` on the company's careers page, like `stripe`, `airbnb`, `figma`, `discord`, `vercel`), and it walks each board's full job catalog in one call per company.

Each open job comes back with the title, the resolved company name, the location, the departments and offices it ladders under, whether the role is remote, the seniority bucket parsed from the title, any salary range mentioned in the description, the original posted/updated timestamp, the full job description as plain text and HTML, and a compact drop-into-LLM card you can paste straight into an agent prompt or a Slack channel.

Alongside the job records you also get a one-row hiring-velocity summary per company: how many jobs are open right now, how many were posted or updated in the last 7 days, and a `cold / steady / hot` velocity hint so you can sort your watchlist by who is actually scaling. This is the signal sales-intel and recruitment-marketing platforms charge a five-figure annual contract for; you get it as part of every run.

Switch on watchlist mode and the actor stores a rolling 50,000-job-ID window in the key-value store and emits only the IDs it has not seen before. Schedule the run hourly or daily and the dataset becomes your real-time new-jobs feed for outbound, sourcing, or AI agents.

### Why scrape Greenhouse?

Greenhouse hosts the careers pages for most of the venture-backed, Series A+ companies you care about. The careers page is the single source of truth for what a company is hiring for right now: who they are looking for, where, at what level, and how fast they are growing the team. That signal is fresher than LinkedIn (where postings lag and duplicates dominate), more complete than any third-party aggregator (which only samples a subset of boards), and more accurate than press-release-style "we raised, we're hiring" announcements.

The problem: every Greenhouse board lives at a different URL, paginates differently in the browser, and ships descriptions as entity-encoded HTML. Tracking 100 companies by hand is impossible. Tracking 1,000 with a script means rewriting the same boilerplate every week. We give you a single endpoint that returns the entire catalog, normalized, with the hiring-velocity signal already computed.

### Who needs this?

- **BD reps** prospecting fast-growing startups based on hiring signal, who want a daily list of companies that just opened 5+ new roles.
- **Recruiters** sourcing engineers, designers, or PMs at companies that just raised, with one click to see every open role across their target list.
- **VC scouts** tracking team growth at portfolio companies and competitors, who need a weekly delta of new hires.
- **Competitive intel teams** tracking what stack, regions, and roles competitors are hiring for, so they can predict product direction.
- **Sales-intel platforms** enriching company records with hiring velocity, remote percentage, and department distribution.
- **Recruitment marketing teams** measuring time-to-fill across a portfolio of brands.
- **AI agents** that auto-apply on behalf of candidates, auto-recommend matched roles, or auto-route warm intros based on hiring signal.
- **Talent ops** at engineering-heavy companies, who need to benchmark their open-req mix against direct competitors.

### How to use Greenhouse Jobs

1. Open the actor in [Apify Console](https://console.apify.com/) and click **Try for free**.
2. Paste one or more Greenhouse board slugs into `boardTokens`. The slug is the last path segment on the company's Greenhouse careers page. Example: `boards.greenhouse.io/stripe` -> use `stripe`.
3. Optionally narrow with `keywords` (title substrings), `locations`, `departments`, or `remoteOnly`.
4. Optionally flip `watchlistMode` to `true` for monitoring schedules; the first run is the baseline, every later run emits only new jobs.
5. Set `maxItems`, then click **Start**. Results appear in the dataset within seconds for most boards.

### How much will this cost?

Pricing is per result (one record per job, plus one summary row per company) plus a small per-run start fee. The price per result drops as you upgrade your Apify plan.

| Plan | Per-run start | Per result | 500 results |
|---|---|---|---|
| FREE | $0.001 | $0.003 | $1.50 |
| BRONZE | $0.001 | $0.0025 | $1.25 |
| SILVER | $0.001 | $0.002 | $1.00 |
| GOLD | $0.001 | $0.0015 | $0.75 |
| PLATINUM | $0.001 | $0.0015 | $0.75 |
| DIAMOND | $0.001 | $0.0015 | $0.75 |

Typical day, watching 50 companies (~3,000 open roles total, ~50 new per day with watchlist mode on): about 100 results a day (new jobs plus one summary row per company), roughly $0.30 a day on FREE.
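The per-result arithmetic is simple enough to sanity-check in a few lines. This sketch hard-codes the FREE-plan prices from the table above; swap in your plan's per-result price as needed.

```python
# FREE-plan prices from the pricing table above (illustrative constants).
START_FEE = 0.001    # per-run start fee, USD
PER_RESULT = 0.003   # FREE-plan price per result, USD

def run_cost(results: int, runs: int = 1) -> float:
    """Estimated USD cost for `results` records spread across `runs` runs."""
    return round(runs * START_FEE + results * PER_RESULT, 4)

print(run_cost(500))  # a 500-result run on FREE lands at ~$1.50
```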

### Is it legal to scrape Greenhouse?

The Greenhouse Job Board API returns the same public data that any visitor sees on the company's careers page. No authentication is required, no rate-limit bypasses are used, and the actor honors the published endpoint's response cadence. The data covered (open jobs, locations, departments) is public-by-design - companies publish it specifically so candidates and partners can see it.

For commercial redistribution (reselling the data, embedding it in a public product, or republishing job descriptions verbatim) consult your own counsel; the source companies retain copyright on job descriptions. For internal use - prospecting, sourcing, market research, AI agents working on your behalf - this is exactly the use case Greenhouse's public API was designed for.

### Examples

Track every Stripe role:
```json
{ "boardTokens": ["stripe"] }
```

Engineering-only across three AI-forward companies:

```json
{
  "boardTokens": ["stripe", "anthropic", "scaleai"],
  "departments": ["Engineering"]
}
```

Remote-only senior roles across a watchlist:

```json
{
  "boardTokens": ["airbnb", "figma", "vercel", "linear"],
  "remoteOnly": true,
  "keywords": ["senior", "staff", "principal"]
}
```

Daily monitor mode (new jobs only since last run):

```json
{
  "boardTokens": ["stripe", "airbnb", "figma"],
  "watchlistMode": true,
  "maxItems": 1000
}
```

Location-targeted recruiter list:

```json
{
  "boardTokens": ["stripe", "airbnb"],
  "locations": ["New York", "Brooklyn"],
  "keywords": ["engineer"]
}
```

VC scout sweep across 20 portfolio companies:

```json
{
  "boardTokens": ["companyA", "companyB", "companyC"],
  "maxItems": 5000
}
```

### Input parameters

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `boardTokens` | string\[] | yes | - | Greenhouse board slugs to scrape. |
| `keywords` | string\[] | no | \[] | Case-insensitive title substring match. |
| `locations` | string\[] | no | \[] | Case-insensitive substring match on location string. |
| `departments` | string\[] | no | \[] | Case-insensitive substring match on department names. |
| `remoteOnly` | boolean | no | false | Keep only roles whose location/title/department mentions remote. |
| `watchlistMode` | boolean | no | false | Emit only jobs whose IDs are new since the previous run. |
| `maxItems` | integer | no | 10 | Maximum job records to save (max 5,000). |
| `proxyConfiguration` | object | no | {} | Not required for Greenhouse. Leave at default. |

### Greenhouse job output format

Two record types share the dataset. Filter on `recordType` to separate them.
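Separating the two record types downstream is a one-line filter. A minimal sketch, with inline stand-in rows instead of a live dataset fetch:

```python
# Stand-in for rows fetched from the run's dataset via the Apify client.
items = [
    {"recordType": "greenhouse_job", "jobId": 7532733, "title": "Account Executive"},
    {"recordType": "company_summary", "boardToken": "stripe", "totalCount": 494},
]

jobs = [r for r in items if r["recordType"] == "greenhouse_job"]
summaries = [r for r in items if r["recordType"] == "company_summary"]
```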

#### greenhouse\_job

| Field | Type | Description |
|---|---|---|
| `outputSchemaVersion` | string | Literal `2026-05-11`. Bumps on breaking changes. |
| `recordType` | string | Literal `greenhouse_job`. |
| `jobId` | integer | Greenhouse internal job ID. Stable primary key. |
| `boardToken` | string | The board slug you provided (e.g. `stripe`). |
| `company` | string | Resolved company name from the Greenhouse board root. |
| `title` | string | Job title. |
| `absoluteUrl` | string | Direct link to the public posting. |
| `requisitionId` | string \| null | Company-internal requisition ID, when shipped. |
| `location` | object | `{ name, normalized, isRemote }`. |
| `departments` | string\[] | Flat department names. |
| `offices` | string\[] | Flat office names. |
| `metadata` | object\[] | `{ name, value, valueType }` array as shipped by Greenhouse (custom fields). |
| `descriptionText` | string | Plain-text description, HTML stripped, max 12,000 chars. |
| `descriptionHtml` | string | Original HTML description, decoded, max 30,000 chars. |
| `seniority` | enum \| null | Heuristic-parsed from title: `intern`, `entry`, `mid`, `senior`, `staff`, `principal`, `lead`, `director`, `vp`, `executive`. |
| `compRange` | object | `{ min, max, currency, period }`. Heuristic-parsed from description. |
| `postedAt` | string | ISO 8601 timestamp from Greenhouse's `updated_at`. |
| `scrapedAt` | string | ISO 8601 timestamp of the run. |
| `fieldCompletenessScore` | integer | 0-100. Self-filter sparse rows. |
| `agentMarkdown` | string | 300-500 char drop-into-LLM card. |

```json
{
  "outputSchemaVersion": "2026-05-11",
  "recordType": "greenhouse_job",
  "jobId": 7532733,
  "boardToken": "stripe",
  "company": "Stripe",
  "title": "Account Executive, AI Sales",
  "absoluteUrl": "https://stripe.com/jobs/search?gh_jid=7532733",
  "requisitionId": "See Opening ID",
  "location": { "name": "San Francisco, CA", "normalized": "San Francisco, CA", "isRemote": false },
  "departments": ["Sales"],
  "offices": ["San Francisco"],
  "metadata": [],
  "descriptionText": "Who we are. About Stripe. Stripe is a financial infrastructure platform...",
  "descriptionHtml": "<h2>Who we are</h2>...",
  "seniority": null,
  "compRange": { "min": null, "max": null, "currency": null, "period": null },
  "postedAt": "2026-05-08T17:59:17-04:00",
  "scrapedAt": "2026-05-11T00:00:00.000Z",
  "fieldCompletenessScore": 82,
  "agentMarkdown": "**Account Executive, AI Sales** at Stripe\n- 📍 San Francisco, CA\n- 🧭 Sales\n- 📅 Updated 2026-05-08\n- 🔗 https://stripe.com/jobs/search?gh_jid=7532733"
}
```

#### company\_summary

| Field | Type | Description |
|---|---|---|
| `recordType` | string | Literal `company_summary`. |
| `boardToken` | string | Board slug. |
| `company` | string | Company name. |
| `totalCount` | integer | Open jobs found this run. |
| `openedInLast7Days` | integer | Jobs whose `updated_at` is within the last 7 days. |
| `velocityHint` | enum | `cold` < 5%, `steady` 5-14%, `hot` >= 15% of jobs updated this week. |
| `jobsByDepartment` | object\[] | `[{ name, count }]`, ranked. |
| `jobsByLocation` | object\[] | `[{ name, count }]`, ranked. |
| `scrapedAt` | string | Run timestamp. |

```json
{
  "outputSchemaVersion": "2026-05-11",
  "recordType": "company_summary",
  "boardToken": "stripe",
  "company": "Stripe",
  "totalCount": 494,
  "openedInLast7Days": 38,
  "velocityHint": "steady",
  "jobsByDepartment": [{ "name": "Engineering", "count": 142 }, { "name": "Sales", "count": 76 }],
  "jobsByLocation": [{ "name": "San Francisco, CA", "count": 88 }, { "name": "New York, NY", "count": 71 }],
  "scrapedAt": "2026-05-11T00:00:00.000Z"
}
```
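If you want to re-bucket the velocity signal yourself, the documented thresholds reduce to a small function. This is a sketch of the documented cutoffs, not the actor's internal code:

```python
def velocity_hint(total_count: int, opened_in_last_7_days: int) -> str:
    """Map the weekly update share to the documented cold/steady/hot buckets."""
    if total_count == 0:
        return "cold"
    ratio = opened_in_last_7_days / total_count
    if ratio >= 0.15:
        return "hot"
    if ratio >= 0.05:
        return "steady"
    return "cold"

print(velocity_hint(494, 38))  # Stripe example above: 38/494 ≈ 7.7% -> "steady"
```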

### During the Actor run

The actor talks only to Greenhouse's public Job Board API. No proxy, no browser, no auth. Typical board returns in under 2 seconds and a 50-board sweep finishes in well under a minute. The agent briefing markdown lands at `AGENT_BRIEFING` in the default key-value store, and the rolling watchlist state lives at `WATCHLIST_STATE` so you can clear it from the Console anytime to start a new baseline.

### FAQ

#### How is this different from Greenhouse's careers RSS feeds?

The RSS feed gives you titles and links. Greenhouse Jobs gives you the full description (text and HTML), the parsed seniority bucket, the comp range when shipped, the department and office structure, every metadata field, the hiring-velocity signal per company, and an LLM-ready summary card. RSS is fine for "show me titles"; this is fine for everything else.

#### Can I monitor only new jobs?

Yes. Set `watchlistMode: true`. The first run records every current job ID as the baseline and emits zero dataset rows for boards it is seeing for the first time. Every subsequent run emits only IDs not yet in the rolling 50,000-ID window. The state persists in the actor's key-value store between runs.

#### Will this work with Lever or Workable boards?

Not directly - those are different ATS systems with different APIs. A dedicated Lever scraper is shipping next from the same Skootle portfolio with the same record shape and watchlist pattern; Workable is on the roadmap. Watch the **Other Skootle actors** section below for links.

#### Why does this cost more than free Greenhouse scrapers I see?

The free ones we have audited return a thin slice (title, URL, maybe location) and break when the board ships a new metadata field or when Greenhouse changes the response shape. They do not give you the hiring-velocity signal, the seniority parsing, the comp parsing, or the watchlist mode. If you are using this for one-off curiosity, save the money. If you are feeding it into a CRM, an outbound sequence, or an AI agent, the per-result fee at GOLD ($0.0015) is a rounding error compared to the time you spend rebuilding your pipeline when the free one silently goes empty.

#### Can I use this with Python, n8n, or Make?

Yes. Apify exposes every actor as a REST endpoint and provides SDKs for Python, JavaScript, and CLI. n8n and Make have official Apify nodes; trigger a run, wait for completion, then read the dataset.

#### How many companies can I track in one run?

There is no hard cap on `boardTokens`. We have tested up to 100 boards in a single run; cost scales linearly with results, not with board count. For watchlist mode at scale, lean on `maxItems` to bound a single run and schedule frequently.

#### Why is the seniority field sometimes null?

Seniority is parsed heuristically from the title (`Senior`, `Staff`, `Principal`, `Director`, etc.). Titles like "Software Engineer" with no level word return null on purpose - we would rather give you `null` than a guess. Filter on `seniority IS NOT NULL` in your downstream query when you need only graded roles.
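The downstream filter is trivial. A sketch with stand-in rows:

```python
# Stand-in rows; `seniority` is None when the title carries no level word.
jobs = [
    {"title": "Staff Engineer", "seniority": "staff"},
    {"title": "Software Engineer", "seniority": None},
]

graded = [j for j in jobs if j["seniority"] is not None]
```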

#### Why is `compRange` sometimes empty?

Salary disclosure depends on the company and the job's location (US states like CA, CO, NY require it; many engineering job descriptions in other locations omit it entirely). We parse the most common patterns (`$X - $Y per year`, `USD X-Y`, `$X-$Yk`) but do not synthesize ranges when the description does not state one.
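For intuition, here is a simplified version of one such pattern, the `$X - $Y` range. This is not the actor's actual parser; it handles dollar amounts only and assumes USD, whereas the real one covers more formats and also fills `period`:

```python
import re

# Simplified illustration: "$150,000 - $200,000" style ranges, USD assumed.
RANGE_RE = re.compile(r"\$([\d,]+)\s*[-–]\s*\$([\d,]+)")

def parse_comp(description: str):
    """Return {"min", "max", "currency"} for a "$X - $Y" range, else None."""
    m = RANGE_RE.search(description)
    if not m:
        return None
    low, high = (int(g.replace(",", "")) for g in m.groups())
    return {"min": low, "max": high, "currency": "USD"}
```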

#### Does watchlist mode survive an actor restart?

Yes. The state is written to the default key-value store under the `WATCHLIST_STATE` key. To reset the baseline, delete that key from the Console.

#### Can I get historical jobs (closed/expired)?

The Greenhouse public API only exposes currently open postings. Historical jobs are not reachable through this endpoint. For longitudinal hiring history at a company, run this actor on a schedule and warehouse the dataset; we keep the `jobId` as a stable primary key so backfilling deduplicates cleanly.
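A minimal warehousing sketch of that dedup step, keeping the latest snapshot per `jobId`. It assumes `scrapedAt` stays in the same UTC ISO 8601 format across runs, so plain string comparison orders timestamps correctly:

```python
def dedupe_by_job_id(rows: list[dict]) -> list[dict]:
    """Keep the most recently scraped row per stable jobId."""
    latest: dict[int, dict] = {}
    for row in rows:
        prev = latest.get(row["jobId"])
        # Same-format ISO 8601 UTC strings sort lexicographically.
        if prev is None or row["scrapedAt"] > prev["scrapedAt"]:
            latest[row["jobId"]] = row
    return list(latest.values())
```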

#### What rate limit does Greenhouse enforce?

The public endpoint serves the cached board snapshot and is generous in practice - we routinely scrape 50+ boards back-to-back without throttle. If you point this at 500+ boards in one run we recommend splitting into multiple scheduled runs to be polite.

### Why choose Greenhouse Jobs

- **Hiring-velocity signal per company** in every run - find who is actually scaling, not who has had the same 4 reqs open for 90 days.
- **Watchlist / monitor mode** out of the box - schedule it daily and get a new-jobs delta feed with zero plumbing.
- **One record shape across companies** - downstream code does not care whether you point it at Stripe, Vercel, or a 12-person seed startup; the schema is identical.
- **Sub-minute typical runtime** for single-board sweeps; full 50-board watchlist refresh in under 2 minutes.
- **Seniority and comp parsing** at the record level so you can filter without writing the regex.
- **Agent-ready output** - `agentMarkdown` per job, `AGENT_BRIEFING.md` per run, both drop straight into an LLM context or a Slack channel.
- **Cross-ATS coverage roadmap** - Lever, Workable, Ashby shipping next from the same portfolio with the same record shape, so your pipeline does not fork by ATS.
- **Versioned schema and idempotent primary keys** - `jobId` is stable across runs, and the schema version is on every record so breaking changes never silently corrupt your warehouse.

#### Your feedback

Hit a bug or want a feature? Open an issue on the [Issues tab](https://apify.com/skootle/greenhouse-jobs/issues/open) rather than the reviews page, and we'll fix it fast (typically within 48 hours).

### Other Skootle actors you might want to check

- [Wellfound Jobs Scraper](https://apify.com/skootle/wellfound-jobs-scraper) - hiring data from Wellfound (formerly AngelList Talent), including org updates and team signals.
- Lever Jobs Scraper - shipping soon, same record shape as Greenhouse Jobs.
- [SAM.gov Federal Contracts](https://apify.com/skootle/sam-gov-federal-contracts) - federal contract opportunities for B2G prospecting, enriched with USAspending award history.
- [SEC EDGAR Filings](https://apify.com/skootle/sec-edgar-filings) - 8-K, 10-K, S-1 filings as a funding/scale signal for sales-intel pipelines.

### Support and contact

- Issues and feature requests: [Issues tab](https://apify.com/skootle/greenhouse-jobs/issues/open)
- Reviews and ratings: [reviews](https://apify.com/skootle/greenhouse-jobs/reviews)
- Skootle portfolio: [apify.com/skootle](https://apify.com/skootle)

# Actor input Schema

## `boardTokens` (type: `array`):

List of Greenhouse board slugs to scrape. The board slug is the subpath at boards.greenhouse.io/<slug> (e.g. 'stripe', 'airbnb', 'figma', 'vercel', 'discord'). Add as many as you like.

## `keywords` (type: `array`):

Case-insensitive substring match against job titles. Any match keeps the job. Leave empty to return all titles.

## `locations` (type: `array`):

Case-insensitive substring match against the job's location string (e.g. 'New York', 'San Francisco', 'London'). Leave empty for all locations.

## `departments` (type: `array`):

Case-insensitive substring match against department names (e.g. 'Engineering', 'Sales', 'Design'). Leave empty for all departments.

## `remoteOnly` (type: `boolean`):

When true, keep only jobs whose location, title, or department mentions remote / WFH / distributed / anywhere.

## `watchlistMode` (type: `boolean`):

When true, the actor stores a rolling 50,000-job-ID window in the key-value store and emits only jobs whose IDs are new since the previous run. Ideal for daily / hourly monitoring schedules.

## `maxItems` (type: `integer`):

Maximum job records to save. Conservative default keeps the daily auto-test under the 5-minute window. Raise it for production batches.

## `proxyConfiguration` (type: `object`):

Proxy is not required for Greenhouse. Leave at the default unless you have a reason to override.

## Actor input object example

```json
{
  "boardTokens": [
    "stripe"
  ],
  "keywords": [],
  "locations": [],
  "departments": [],
  "remoteOnly": false,
  "watchlistMode": false,
  "maxItems": 10,
  "proxyConfiguration": {}
}
```

# Actor output Schema

## `datasetItems` (type: `string`):

Normalized greenhouse\_job records plus one company\_summary per board.

## `runSummary` (type: `string`):

Compact OUTPUT object with item counts, errors, and watchlist diagnostics.

## `agentBriefing` (type: `string`):

Per-run digest. Drop into Claude / Codex / Slack as a single document.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "boardTokens": [
        "stripe"
    ],
    "keywords": [],
    "locations": [],
    "departments": []
};

// Run the Actor and wait for it to finish
const run = await client.actor("skootle/greenhouse-jobs").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "boardTokens": ["stripe"],
    "keywords": [],
    "locations": [],
    "departments": [],
}

# Run the Actor and wait for it to finish
run = client.actor("skootle/greenhouse-jobs").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "boardTokens": [
    "stripe"
  ],
  "keywords": [],
  "locations": [],
  "departments": []
}' |
apify call skootle/greenhouse-jobs --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=skootle/greenhouse-jobs",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "🌱 Greenhouse Jobs Scraper",
        "description": "Scrape every open job at any Greenhouse company. Title, location, departments, remote flag, comp, seniority, posted date, hiring-velocity signal, drop-into-LLM card. Watchlist mode emits only new jobs. Export, run via API, schedule, or integrate with other tools.",
        "version": "0.1",
        "x-build-id": "jzn3Fehpu3SA5sGGJ"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/skootle~greenhouse-jobs/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-skootle-greenhouse-jobs",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/skootle~greenhouse-jobs/runs": {
            "post": {
                "operationId": "runs-sync-skootle-greenhouse-jobs",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/skootle~greenhouse-jobs/run-sync": {
            "post": {
                "operationId": "run-sync-skootle-greenhouse-jobs",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "boardTokens"
                ],
                "properties": {
                    "boardTokens": {
                        "title": "Greenhouse board tokens",
                        "type": "array",
                        "description": "List of Greenhouse board slugs to scrape. The board slug is the subpath at boards.greenhouse.io/<slug> (e.g. 'stripe', 'airbnb', 'figma', 'vercel', 'discord'). Add as many as you like.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "keywords": {
                        "title": "Title keywords (optional)",
                        "type": "array",
                        "description": "Case-insensitive substring match against job titles. Any match keeps the job. Leave empty to return all titles.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "locations": {
                        "title": "Locations (optional)",
                        "type": "array",
                        "description": "Case-insensitive substring match against the job's location string (e.g. 'New York', 'San Francisco', 'London'). Leave empty for all locations.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "departments": {
                        "title": "Departments (optional)",
                        "type": "array",
                        "description": "Case-insensitive substring match against department names (e.g. 'Engineering', 'Sales', 'Design'). Leave empty for all departments.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "remoteOnly": {
                        "title": "Remote-only roles",
                        "type": "boolean",
                        "description": "When true, keep only jobs whose location, title, or department mentions remote / WFH / distributed / anywhere.",
                        "default": false
                    },
                    "watchlistMode": {
                        "title": "Watchlist mode (only emit new jobs since last run)",
                        "type": "boolean",
                        "description": "When true, the actor stores a rolling 50,000-job-ID window in the key-value store and emits only jobs whose IDs are new since the previous run. Ideal for daily / hourly monitoring schedules.",
                        "default": false
                    },
                    "maxItems": {
                        "title": "Max items",
                        "minimum": 1,
                        "maximum": 5000,
                        "type": "integer",
                        "description": "Maximum job records to save. Conservative default keeps the daily auto-test under the 5-minute window. Raise it for production batches.",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "Proxy configuration (optional)",
                        "type": "object",
                        "description": "Proxy is not required for Greenhouse. Leave at the default unless you have a reason to override.",
                        "default": {}
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
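
The `inputSchema` above maps to a plain JSON object: only `boardTokens` is required, and the `keywords` / `locations` / `departments` filters are case-insensitive substring matches where any match keeps the job. A minimal sketch of a valid input plus the filter semantics the schema describes (the job objects below are made-up illustrations, not the Actor's exact output shape):

```javascript
// Minimal input matching the schema: only boardTokens is required.
const input = {
  boardTokens: ['stripe', 'figma'], // board slug from boards.greenhouse.io/<slug>
  keywords: ['engineer'],           // case-insensitive substring match on titles
  remoteOnly: false,
  maxItems: 100,
};

// Filter semantics per the schema: an empty list disables the filter;
// otherwise any pattern that is a case-insensitive substring keeps the job.
function matchesAny(value, patterns) {
  if (!patterns || patterns.length === 0) return true;
  const haystack = value.toLowerCase();
  return patterns.some((p) => haystack.includes(p.toLowerCase()));
}

// Hypothetical job records, for illustration only.
const jobs = [
  { title: 'Senior Software Engineer', location: 'New York' },
  { title: 'Account Executive', location: 'London' },
];
const kept = jobs.filter((j) => matchesAny(j.title, input.keywords));
console.log(kept.map((j) => j.title)); // [ 'Senior Software Engineer' ]
```

To run the Actor with this input, pass the object to `client.actor('skootle/greenhouse-jobs').call(input)` using the `apify-client` package shown earlier.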
