# LLM Visibility Tracker — ChatGPT, Claude, Perplexity, Gemini (`khadinakbar/llm-visibility-tracker`) Actor

Track LLM visibility, ranking, share-of-voice, and citations for any brand across ChatGPT, Claude, Perplexity, and Gemini. MCP-ready. $0.090/result.

- **URL**: https://apify.com/khadinakbar/llm-visibility-tracker.md
- **Developed by:** [Khadin Akbar](https://apify.com/khadinakbar) (community)
- **Categories:** AI, SEO tools, MCP servers
- **Stats:** 1 total user, 1 monthly user, 0.0% of runs succeeded
- **User rating**: No ratings yet

## Pricing

From $90.00 / 1,000 keyword × LLM visibility checks

This Actor is paid per event. You are charged a fixed price for specific events plus Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is written with a capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## LLM Visibility Tracker — Brand Rankings in ChatGPT, Claude, Perplexity & Gemini

Track how your brand actually appears inside answers from the four major LLMs. Built for Answer Engine Optimization (AEO), LLM SEO, and competitive AI visibility tracking — with **zero AI API setup required**.

**Compatible with:** Apify MCP Server (Claude, ChatGPT, Cursor, Cline agents), LangChain, LangGraph, OpenAI Agents SDK, Make.com, Zapier, n8n, and any tool that can hit a REST endpoint.

---

### What it does

LLM Visibility Tracker sends realistic prompts to **ChatGPT, Claude, Perplexity, and Gemini** — all with live web search enabled — and analyzes every answer for:

- **Is the brand mentioned?** Per-LLM mention rate.
- **Where does it rank?** Detected position inside numbered/bulleted lists ("1st place", "3rd place", etc.).
- **How early in the answer?** Position score from the very top to the bottom of the response.
- **Share of Voice (%)** — your brand's mentions vs configured competitors, in the same answer.
- **Is it cited as a source?** Domain-level citation detection across native LLM citations.
- **Sentiment** — is the LLM endorsing, neutral, or critical about the brand?
- **Competitor co-mentions** — which rivals show up alongside (or instead of) your brand.

Plus a single **Visibility Index (0–100)** that rolls all four signals into one trendable score.

---

### Why a separate "LLM Visibility Tracker"?

LLM answer engines have very different ranking dynamics from classic SEO. ChatGPT and Claude pull heavily from Reddit, Hacker News, and review sites. Perplexity ranks by citation authority. Gemini grounds on Google. A brand that ranks #1 on Google can be invisible inside Claude's recommendations.

This Actor measures the new ranking surface — the answer itself — across all four LLMs in one run, with the cleanest set of metrics shipped today: **mention rate, in-list rank, position, share-of-voice, citations, and sentiment**.

---

### What you get per check

Each row in the output dataset is one `brand × prompt × LLM` check:

| Field | Type | Example |
|---|---|---|
| `llm` | string | `"chatgpt"` |
| `prompt` | string | `"What's the best note-taking app for engineers in 2026?"` |
| `prompt_intent` | string | `"recommendation"` |
| `is_mentioned` | boolean | `true` |
| `mention_count` | integer | `4` |
| `rank_in_response` | integer | `2` (2nd place in a numbered list) |
| `position_score` | integer (1–10) | `2` (mentioned in the first 20% of the answer) |
| `share_of_voice_pct` | number | `66.7` |
| `is_cited` | boolean | `true` |
| `brand_citation_url` | string\|null | `"https://notion.so/help/..."` |
| `all_citations` | string[] | `["https://notion.so/...", "https://coda.io/..."]` |
| `competitors_mentioned` | string[] | `["Coda", "ClickUp"]` |
| `sentiment` | string | `"positive"` |
| `excerpt` | string | `"...Notion shines for engineering teams thanks to..."` |
| `full_answer` | string | First 1200 chars of the answer |
| `model` | string | `"openai/gpt-4o-search-preview"` |
| `checked_at` | ISO datetime | `"2026-05-01T14:30:00.000Z"` |
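Once the dataset rows are downloaded (see the API examples below), per-LLM aggregates take only a few lines of Python. A minimal sketch — the field names come from the table above, and `mention_rate_by_llm` is an illustrative helper, not something the Actor ships:

```python
from collections import defaultdict

def mention_rate_by_llm(rows):
    """Per-LLM mention rate (0.0-1.0) from dataset rows shaped like the table above."""
    totals, hits = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["llm"]] += 1
        hits[row["llm"]] += 1 if row.get("is_mentioned") else 0
    return {llm: hits[llm] / totals[llm] for llm in totals}

rows = [
    {"llm": "chatgpt", "is_mentioned": True},
    {"llm": "chatgpt", "is_mentioned": False},
    {"llm": "claude", "is_mentioned": True},
]
print(mention_rate_by_llm(rows))  # {'chatgpt': 0.5, 'claude': 1.0}
```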

A run summary with `visibility_index`, per-LLM scores, share-of-voice, and recommendations is also saved to the key-value store (`LAST_RUN_SUMMARY`) and POSTed to your webhook if configured.

---

### Zero setup — all 4 LLMs included

You do **not** bring AI API keys. ChatGPT, Claude, Perplexity, and Gemini access is bundled — every charge covers the upstream model cost.

Pricing is **PAY_PER_EVENT** at **$0.096 per `brand × prompt × LLM` check**.

| Use case | Approx cost |
|---|---|
| Quick mode (3 prompts × 4 LLMs = 12 checks) | ~$1.15 |
| Standard mode (5 prompts × 4 LLMs = 20 checks) | ~$1.92 |
| Deep audit (10 prompts × 4 LLMs = 40 checks) | ~$3.84 |
| Weekly monitoring on standard (4 runs/month) | ~$7.70/month |

A small Actor-start charge applies (≈ $0.00006 per GB-RAM, ~$0.0001 per run).

---

### Quickest possible run

The only required input is `brand`. Everything else has defaults.

```json
{ "brand": "Notion" }
```

This runs **standard mode**: 5 prompts × 4 LLMs = 20 checks, ~$1.92. Visibility Index, share-of-voice, and recommendations land in your dataset and the key-value store.

A more complete run:

```json
{
  "brand": "Notion",
  "domain": "notion.so",
  "competitors": ["Coda", "ClickUp", "Asana"],
  "category": "productivity software",
  "mode": "standard",
  "promptIntents": ["recommendation", "alternatives", "how_to", "use_case"]
}
```

---

### Built for AI agents (MCP, LangChain, OpenAI Agents)

Every input field has multiple natural-language aliases so an LLM agent can call this actor in whichever shape feels native to it. Examples that all work:

- `{ "brand": "Notion" }`
- `{ "brandName": "Notion" }`
- `{ "company": "Notion", "rivals": ["Coda"], "engines": ["chatgpt", "claude"] }`
- `{ "product": "Notion", "questions": ["Should I use Notion or Coda for my engineering team?"] }`

#### Apify MCP Server

Connect via the [Apify MCP Server](https://apify.com/apify/actors-mcp-server) and ask Claude or ChatGPT:

> "Track LLM visibility for Notion vs Coda, ClickUp, and Asana across all four AI search platforms — focus on recommendation and use-case prompts."

> "Run a deep visibility audit on Stripe in fintech and show which LLM is the weakest."

The agent picks `apify--llm-visibility-tracker`, fills the inputs, and returns structured rankings.

---

### Audit modes

| Mode | Prompts per LLM | Total checks (4 LLMs) | Use case |
|---|---|---|---|
| `quick` | 3 | 12 | Scheduled monitoring, fast spot checks |
| `standard` (default) | 5 | 20 | Weekly tracking, share-of-voice |
| `deep` | 10 | 40 | One-time audits, competitive analysis, board reports |

Override with `maxPrompts` for fine-grained control (e.g. `"maxPrompts": 7`).

---

### Prompt intents

The Actor auto-builds prompts in plain user phrasing — the kind real customers actually ask LLMs:

- `recommendation` — "What's the best \[category] to use right now?"
- `alternatives` — "What are alternatives to \[brand]?"
- `how_to` — "How do I choose the right \[category] for my team?"
- `comparison` — "\[brand] vs its top \[category] — which wins?"
- `use_case` — "Which \[category] works best for small teams?"
- `review` — "Is \[brand] actually any good?"
- `pricing` — "Is \[brand] worth the money?"

Pass `customPrompts` to add your own (real customer questions are ideal):

```json
{
  "brand": "Notion",
  "customPrompts": [
    "Should an engineering team move from Confluence to Notion?",
    "What's the cleanest way to manage product specs across Notion and Linear?"
  ]
}
```

---

### Track LLM visibility over time

Schedule the Actor weekly via the [Apify Scheduler](https://docs.apify.com/platform/schedules) and watch:

- **Visibility Index trend** — is your brand gaining or losing ground in answer engines?
- **Per-LLM gaps** — are you missing on Perplexity but strong on ChatGPT?
- **Share-of-voice swings** — when does a competitor's mention rate spike?
- **Citation rate** — how often is your domain making it into LLM source lists?

Every run writes a fresh `LAST_RUN_SUMMARY` to the key-value store, perfect for dashboards.
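If you archive each run's summary, trending the score is a one-liner. A sketch — the `checked_at` and `visibility_index` field names are inferred from the docs here and should be verified against a real `LAST_RUN_SUMMARY` record:

```python
def visibility_trend(summaries):
    """Chronological (timestamp, score) points from a list of stored run summaries.

    Assumes each summary carries 'checked_at' (ISO datetime, which sorts
    chronologically as a string) and 'visibility_index'.
    """
    return sorted((s["checked_at"], s["visibility_index"]) for s in summaries)

history = [
    {"checked_at": "2026-05-08T14:30:00Z", "visibility_index": 58},
    {"checked_at": "2026-05-01T14:30:00Z", "visibility_index": 52},
]
print(visibility_trend(history))
# [('2026-05-01T14:30:00Z', 52), ('2026-05-08T14:30:00Z', 58)]
```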

***

### Visibility Index (0–100)

The single trendable score weighted toward what matters in answer engines:

- **50%** — overall mention rate across all checks
- **30%** — average rank inside ranked lists (or position score if no lists)
- **20%** — citation rate

Practical reading:

- **70–100** — Strong AI presence. The LLM treats you as a primary recommendation.
- **40–69** — Mid-tier. Mentioned, but rivals are usually mentioned first or more often.
- **0–39** — Visibility gap. Most answers don't include your brand at all.
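The 50/30/20 blend above can be sketched in a few lines. Note the rank-to-score normalization below (`max_rank`, linear decay) is an assumption for illustration — the Actor's exact mapping is not documented here:

```python
def visibility_index(mention_rate, avg_rank, citation_rate, max_rank=10):
    """Blend the three signals into a 0-100 score using the stated 50/30/20 weights.

    mention_rate and citation_rate are 0-1; avg_rank uses 1 = top of list
    (lower is better). The linear rank normalization is an assumed stand-in.
    """
    rank_score = max(0.0, (max_rank - avg_rank + 1) / max_rank)  # 1st place -> 1.0
    return round(100 * (0.5 * mention_rate + 0.3 * rank_score + 0.2 * citation_rate), 1)

# Always mentioned, always 1st, always cited -> a perfect score
print(visibility_index(1.0, 1, 1.0))  # 100.0
# Mentioned half the time, mid-list, rarely cited -> mid-tier
print(visibility_index(0.5, 5, 0.2))  # 47.0
```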

---

### Integrations

- **Apify MCP Server** — direct invocation from Claude, ChatGPT, Cursor, Cline.
- **LangChain / LangGraph** — wrap the run as a tool, get structured visibility data back.
- **Make.com / Zapier / n8n** — webhook payload includes `visibility_index`, share-of-voice, per-LLM scores, and recommendations.
- **Slack / Google Sheets / Airtable / Notion** — pipe the dataset into reporting wherever you live.
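If you point `webhookUrl` at your own service, stdlib Python is enough to receive the POST. A sketch — the payload field names are taken from the description above and should be checked against a live payload:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_headline(payload):
    """Pick out the headline metrics (field names as documented above)."""
    return {k: payload.get(k) for k in ("visibility_index", "share_of_voice_pct")}

class SummaryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("run summary:", extract_headline(payload))  # forward to Slack/Sheets here
        self.send_response(200)
        self.end_headers()

# To listen locally:
# HTTPServer(("0.0.0.0", 8000), SummaryHandler).serve_forever()
```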

#### REST API

```bash
curl -X POST "https://api.apify.com/v2/acts/khadinakbar~llm-visibility-tracker/runs" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "brand": "Notion",
    "domain": "notion.so",
    "competitors": ["Coda", "ClickUp", "Asana"],
    "category": "productivity software",
    "mode": "standard"
  }'
```

#### JavaScript

```javascript
import { ApifyClient } from 'apify-client';
const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });
const run = await client.actor('khadinakbar/llm-visibility-tracker').call({
    brand: 'Notion',
    domain: 'notion.so',
    competitors: ['Coda', 'ClickUp', 'Asana'],
    category: 'productivity software',
    mode: 'standard',
});
const { items } = await client.dataset(run.defaultDatasetId).listItems();
// items[0] = { llm: 'chatgpt', is_mentioned: true, rank_in_response: 1, share_of_voice_pct: 75.0, ... }
```

#### Python

```python
from apify_client import ApifyClient
client = ApifyClient('YOUR_APIFY_TOKEN')
run = client.actor('khadinakbar/llm-visibility-tracker').call(run_input={
    'brand': 'Notion',
    'domain': 'notion.so',
    'competitors': ['Coda', 'ClickUp', 'Asana'],
    'mode': 'standard',
})
items = list(client.dataset(run['defaultDatasetId']).iterate_items())
```

---

### FAQ

**Q: Do I need OpenAI / Anthropic / Perplexity / Google API keys?**
A: No. All four LLMs are bundled. You pay one flat per-check price ($0.096) and we cover the upstream model cost.

**Q: Does Claude actually search the web here?**
A: Yes. Claude runs with Anthropic's native `web_search_20250305` server-side tool. ChatGPT uses `gpt-4o-search-preview`. Perplexity uses Sonar. Gemini uses Google Search grounding via the official Google API for authentic native grounding.

**Q: How is this different from a regular SEO rank tracker?**
A: SEO rank trackers measure search engine rankings on Google/Bing. This Actor measures **the answer itself** — what the LLM says when a user asks a category question. That's the new ranking surface for AEO and LLM SEO.

**Q: How is this different from your AI Brand Monitor actor?**
A: This Actor is laser-focused on LLM rankings and AEO with new metrics: in-list `rank_in_response`, `share_of_voice_pct`, and a re-weighted `visibility_index`. It also has a much simpler input — just `brand` is required — plus a `mode` preset and a richer prompt library tuned to natural user phrasing.

**Q: Can I monitor multiple brands?**
A: One brand per run. Schedule a separate run per brand (Apify schedules support unlimited concurrent runs).
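To fan out over a portfolio, build one input per brand and start one run each. A sketch using the Python client (`start` launches a run without waiting); `build_run_inputs` is an illustrative helper:

```python
def build_run_inputs(brands, base=None):
    """One Actor input per brand; shared settings come from `base`."""
    return [{**(base or {}), "brand": b} for b in brands]

inputs = build_run_inputs(["Notion", "Coda", "ClickUp"], {"mode": "quick"})
print(inputs[0])  # {'mode': 'quick', 'brand': 'Notion'}

# from apify_client import ApifyClient
# client = ApifyClient("YOUR_APIFY_TOKEN")
# for run_input in inputs:
#     client.actor("khadinakbar/llm-visibility-tracker").start(run_input=run_input)
```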

**Q: Free tier?**
A: Apify gives every new account $5 in free credits — enough for ~50 LLM Visibility checks across all 4 platforms.

---

### Legal & ethics

This Actor calls official LLM APIs (via OpenRouter for ChatGPT/Claude/Perplexity, direct Google API for Gemini). No scraping, no cookie hijacking, no proxy rotation. All upstream usage stays within each provider's terms of service. Your dataset is private to your Apify account.

---

### Changelog

**v1.0 (May 2026)**

- Initial release — LLM visibility tracking across ChatGPT, Claude, Perplexity, Gemini.
- New metrics: `rank_in_response`, `share_of_voice_pct`, `visibility_index` (0–100).
- Aggressive input aliasing for LLM/MCP/agent calls.
- Mode presets: quick / standard / deep.
- AEO-style prompt library with 7 intent categories.
- Apify MCP-ready: tool name `apify--llm-visibility-tracker`.

# Actor input Schema

## `brand` (type: `string`):

REQUIRED. The brand, product, company, or tool name to track in LLM answers (e.g., 'Notion', 'Ahrefs', 'HubSpot', 'Stripe'). This is the only required input — everything else has sensible defaults. Do NOT put competitor names here; use the 'competitors' field for those. Aliases accepted: brandName, company, product, name, tool.

## `domain` (type: `string`):

Optional but recommended. The brand's primary domain (e.g., 'notion.so', 'ahrefs.com'). Used to detect when an LLM cites the brand's own site as a source. Without this, citation tracking is text-only. Aliases accepted: brandDomain, website, url, homepage.

## `aliases` (type: `array`):

Optional. Other names the brand is known by (e.g., \['Notion', 'Notion HQ', 'Notion AI']). The actor treats any alias mention as a brand mention. Useful for brands with abbreviations or sub-product names. Aliases accepted: brandAliases, alternateNames, otherNames.

## `competitors` (type: `array`):

Optional but high-value. Up to 10 competitor brand names (e.g., \['Coda', 'ClickUp', 'Asana']). Drives the share\_of\_voice\_pct metric and competitor co-mention tracking. If omitted, share-of-voice is not calculated. Aliases accepted: competitorBrands, competitorNames, rivals.

## `category` (type: `string`):

Optional. The category the brand competes in (e.g., 'project management', 'SEO tools', 'CRM', 'AI writing assistant'). Used to make the auto-generated prompts more specific and natural. Leave blank to use generic phrasing. Aliases accepted: industry, niche, vertical.

## `llms` (type: `array`):

Optional. The LLMs to query — defaults to all 4. Valid values: 'chatgpt' (OpenAI gpt-4o-search-preview with web search), 'claude' (Anthropic Claude Sonnet 4.5 with web\_search tool), 'perplexity' (Perplexity Sonar with native citations), 'gemini' (Google Gemini 2.5 Flash with Google Search grounding). Aliases accepted: platforms, engines, aiPlatforms, models.

## `mode` (type: `string`):

Optional. Audit depth preset: 'quick' (3 prompts × selected LLMs ≈ 12 checks), 'standard' (5 prompts × selected LLMs ≈ 20 checks, default), or 'deep' (10 prompts × selected LLMs ≈ 40 checks). Use 'quick' for fast spot-checks and scheduled monitoring. Use 'deep' for one-time audits and competitive analysis. Overridden by maxPrompts if both are set.

## `promptIntents` (type: `array`):

Optional. Categories of prompts to auto-generate. Defaults to a balanced mix. Available intents: 'recommendation' (best/top X for use case), 'alternatives' (alternatives to brand), 'how\_to' (how do I \[task] using \[category]), 'comparison' (X vs Y), 'use\_case' (which X should I pick for Y), 'review' (is X any good?), 'pricing' (is X worth the money?). If you provide 'customPrompts', those are used in addition to the auto-generated ones. Aliases accepted: queryTemplates, intents, promptCategories.

## `customPrompts` (type: `array`):

Optional. Specific prompts to send to each LLM, in the natural phrasing your customers use (e.g., 'What's the best note-taking app for engineers?', 'Should I switch from Evernote to Notion?'). These run IN ADDITION to auto-generated prompts. If you ONLY want to run custom prompts, also set promptIntents to \[]. Aliases accepted: prompts, customQueries, queries, questions.

## `maxPrompts` (type: `integer`):

Optional override. Hard cap on how many unique prompts to send to each LLM. Each prompt × LLM = one paid check. Leave blank to use the 'mode' preset. Aliases accepted: maxQueries, maxQueriesPerPlatform, limit.

## `webhookUrl` (type: `string`):

Optional. HTTPS URL that receives a POST with the full run summary (visibility\_index, share\_of\_voice\_pct, per-LLM scores, recommendations) when the run completes. Use it to push results into Make.com, Zapier, n8n, Slack, or your own dashboard. Aliases accepted: webhook, callbackUrl, notifyUrl.

## `demoMode` (type: `boolean`):

Optional. When true, runs a connectivity check without calling any LLM and exits successfully. Use this to validate scheduling and wiring without spending credits. Aliases accepted: dryRun, healthCheck, testMode.

## Actor input object example

```json
{
  "brand": "Notion",
  "domain": "notion.so",
  "category": "productivity software",
  "llms": [
    "chatgpt",
    "claude",
    "perplexity",
    "gemini"
  ],
  "mode": "standard",
  "promptIntents": [
    "recommendation",
    "alternatives",
    "how_to",
    "use_case"
  ],
  "demoMode": false
}
```

# Actor output Schema

## `results` (type: `string`):

One record per brand × prompt × LLM combination. Use this dataset to track LLM rankings, share-of-voice, and Answer Engine visibility over time.

## `summary` (type: `string`):

Aggregated run results: Visibility Index (0-100), per-LLM mention/rank/citation rates, share-of-voice, competitor mention rates, and actionable recommendations.

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "brand": "Notion",
    "domain": "notion.so",
    "category": "productivity software"
};

// Run the Actor and wait for it to finish
const run = await client.actor("khadinakbar/llm-visibility-tracker").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "brand": "Notion",
    "domain": "notion.so",
    "category": "productivity software",
}

# Run the Actor and wait for it to finish
run = client.actor("khadinakbar/llm-visibility-tracker").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "brand": "Notion",
  "domain": "notion.so",
  "category": "productivity software"
}' |
apify call khadinakbar/llm-visibility-tracker --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=khadinakbar/llm-visibility-tracker",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "LLM Visibility Tracker — ChatGPT, Claude, Perplexity, Gemini",
        "description": "Track LLM visibility, ranking, share-of-voice, and citations for any brand across ChatGPT, Claude, Perplexity, and Gemini MCP-ready. $0.090/result.",
        "version": "1.1",
        "x-build-id": "4IsfThWbTxQ47jRgC"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/khadinakbar~llm-visibility-tracker/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-khadinakbar-llm-visibility-tracker",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/khadinakbar~llm-visibility-tracker/runs": {
            "post": {
                "operationId": "runs-sync-khadinakbar-llm-visibility-tracker",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/khadinakbar~llm-visibility-tracker/run-sync": {
            "post": {
                "operationId": "run-sync-khadinakbar-llm-visibility-tracker",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "brand"
                ],
                "properties": {
                    "brand": {
                        "title": "Brand or product name",
                        "type": "string",
                        "description": "REQUIRED. The brand, product, company, or tool name to track in LLM answers (e.g., 'Notion', 'Ahrefs', 'HubSpot', 'Stripe'). This is the only required input — everything else has sensible defaults. Do NOT put competitor names here; use the 'competitors' field for those. Aliases accepted: brandName, company, product, name, tool."
                    },
                    "domain": {
                        "title": "Brand website domain",
                        "type": "string",
                        "description": "Optional but recommended. The brand's primary domain (e.g., 'notion.so', 'ahrefs.com'). Used to detect when an LLM cites the brand's own site as a source. Without this, citation tracking is text-only. Aliases accepted: brandDomain, website, url, homepage."
                    },
                    "aliases": {
                        "title": "Brand aliases (alternate names)",
                        "type": "array",
                        "description": "Optional. Other names the brand is known by (e.g., ['Notion', 'Notion HQ', 'Notion AI']). The actor treats any alias mention as a brand mention. Useful for brands with abbreviations or sub-product names. Aliases accepted: brandAliases, alternateNames, otherNames.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "competitors": {
                        "title": "Competitor brands to compare",
                        "type": "array",
                        "description": "Optional but high-value. Up to 10 competitor brand names (e.g., ['Coda', 'ClickUp', 'Asana']). Drives the share_of_voice_pct metric and competitor co-mention tracking. If omitted, share-of-voice is not calculated. Aliases accepted: competitorBrands, competitorNames, rivals.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "category": {
                        "title": "Product category or industry",
                        "type": "string",
                        "description": "Optional. The category the brand competes in (e.g., 'project management', 'SEO tools', 'CRM', 'AI writing assistant'). Used to make the auto-generated prompts more specific and natural. Leave blank to use generic phrasing. Aliases accepted: industry, niche, vertical."
                    },
                    "llms": {
                        "title": "Which LLMs to query",
                        "type": "array",
                        "description": "Optional. The LLMs to query — defaults to all 4. Valid values: 'chatgpt' (OpenAI gpt-4o-search-preview with web search), 'claude' (Anthropic Claude Sonnet 4.5 with web_search tool), 'perplexity' (Perplexity Sonar with native citations), 'gemini' (Google Gemini 2.5 Flash with Google Search grounding). Aliases accepted: platforms, engines, aiPlatforms, models.",
                        "items": {
                            "type": "string"
                        },
                        "default": [
                            "chatgpt",
                            "claude",
                            "perplexity",
                            "gemini"
                        ]
                    },
                    "mode": {
                        "title": "Audit depth",
                        "enum": [
                            "quick",
                            "standard",
                            "deep"
                        ],
                        "type": "string",
                        "description": "Optional. Audit depth preset: 'quick' (3 prompts × selected LLMs ≈ 12 checks), 'standard' (5 prompts × selected LLMs ≈ 20 checks, default), or 'deep' (10 prompts × selected LLMs ≈ 40 checks). Use 'quick' for fast spot-checks and scheduled monitoring. Use 'deep' for one-time audits and competitive analysis. Overridden by maxPrompts if both are set.",
                        "default": "standard"
                    },
                    "promptIntents": {
                        "title": "Prompt intent categories",
                        "type": "array",
                        "description": "Optional. Categories of prompts to auto-generate. Defaults to a balanced mix. Available intents: 'recommendation' (best/top X for use case), 'alternatives' (alternatives to brand), 'how_to' (how do I [task] using [category]), 'comparison' (X vs Y), 'use_case' (which X should I pick for Y), 'review' (is X any good?), 'pricing' (is X worth the money?). If you provide 'customPrompts', those are used in addition to the auto-generated ones. Aliases accepted: queryTemplates, intents, promptCategories.",
                        "items": {
                            "type": "string"
                        },
                        "default": [
                            "recommendation",
                            "alternatives",
                            "how_to",
                            "use_case"
                        ]
                    },
                    "customPrompts": {
                        "title": "Custom prompts (real questions users ask)",
                        "type": "array",
                        "description": "Optional. Specific prompts to send to each LLM, in the natural phrasing your customers use (e.g., 'What's the best note-taking app for engineers?', 'Should I switch from Evernote to Notion?'). These run IN ADDITION to auto-generated prompts. If you ONLY want to run custom prompts, also set promptIntents to []. Aliases accepted: prompts, customQueries, queries, questions.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "maxPrompts": {
                        "title": "Max prompts per LLM (overrides mode)",
                        "minimum": 1,
                        "maximum": 30,
                        "type": "integer",
                        "description": "Optional override. Hard cap on how many unique prompts to send to each LLM. Each prompt × LLM = one paid check. Leave blank to use the 'mode' preset. Aliases accepted: maxQueries, maxQueriesPerPlatform, limit."
                    },
                    "webhookUrl": {
                        "title": "Webhook URL (optional)",
                        "type": "string",
                        "description": "Optional. HTTPS URL that receives a POST with the full run summary (visibility_index, share_of_voice_pct, per-LLM scores, recommendations) when the run completes. Use it to push results into Make.com, Zapier, n8n, Slack, or your own dashboard. Aliases accepted: webhook, callbackUrl, notifyUrl."
                    },
                    "demoMode": {
                        "title": "Health check / dry-run mode",
                        "type": "boolean",
                        "description": "Optional. When true, runs a connectivity check without calling any LLM and exits successfully. Use this to validate scheduling and wiring without spending credits. Aliases accepted: dryRun, healthCheck, testMode.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
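The input schema above encodes a few constraints worth checking client-side before spending paid checks: `mode` must be one of three presets, `maxPrompts` is capped at 1–30 and overrides the preset, `promptIntents` draws from a fixed set, and `demoMode` skips all LLM calls. The sketch below is an illustrative pre-flight helper, not part of the Actor (the platform validates input server-side anyway); the 4-LLM default and the cost arithmetic assume all four LLMs are selected at the listed $0.090 per keyword × LLM check, and the estimate ignores any extra `customPrompts`.

```python
PRICE_PER_CHECK = 0.090  # USD per keyword x LLM check, per the listing
MODE_PROMPTS = {"quick": 3, "standard": 5, "deep": 10}  # prompts per LLM
INTENTS = {"recommendation", "alternatives", "how_to", "comparison",
           "use_case", "review", "pricing"}

def validate_input(run_input):
    """Return a list of problems; an empty list means the input passes."""
    errors = []
    mode = run_input.get("mode", "standard")
    if mode not in MODE_PROMPTS:
        errors.append(f"mode must be one of {sorted(MODE_PROMPTS)}, got {mode!r}")
    for intent in run_input.get("promptIntents", []):
        if intent not in INTENTS:
            errors.append(f"unknown promptIntent: {intent!r}")
    max_prompts = run_input.get("maxPrompts")
    if max_prompts is not None and not 1 <= max_prompts <= 30:
        errors.append("maxPrompts must be an integer between 1 and 30")
    url = run_input.get("webhookUrl")
    if url is not None and not url.startswith("https://"):
        errors.append("webhookUrl must be an HTTPS URL")
    return errors

def estimated_cost_usd(run_input, num_llms=4):
    """Rough spend estimate: prompts per LLM x selected LLMs x price per check.

    Excludes customPrompts, which run in addition to auto-generated prompts.
    """
    if run_input.get("demoMode"):
        return 0.0  # dry run makes no paid LLM calls
    prompts = run_input.get("maxPrompts") or MODE_PROMPTS[run_input.get("mode", "standard")]
    return prompts * num_llms * PRICE_PER_CHECK

run_input = {
    "mode": "quick",  # 3 auto-generated prompts per LLM
    "promptIntents": ["recommendation", "alternatives"],
    "customPrompts": ["What's the best note-taking app for engineers?"],
    "demoMode": False,
}
assert validate_input(run_input) == []
print(f"~${estimated_cost_usd(run_input):.2f} for this run")  # 3 prompts x 4 LLMs x $0.090
```

Once the input passes, run it with the Python client, e.g. `ApifyClient("<APIFY_TOKEN>").actor("khadinakbar/llm-visibility-tracker").call(run_input=run_input)`, and read results from the run's default dataset. Setting `demoMode: true` first is a cheap way to confirm scheduling and webhook wiring before a real run.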
