Multi-Platform Reputation Analyzer
Pricing
from $200.00 / 1,000 businesses reviewed
Cross-platform review monitoring, crisis detection, and decision-ready reputation risk scoring across Trustpilot, Google, BBB. Routable verdict, 30-day forecast, risk score, intervention playbook, cross-run memory layer. Deterministic. No LLM. Audit-ready.
Reputation Intelligence System with Memory
Cross-platform review monitoring, online reputation analysis, customer review intelligence, crisis detection, and decision-ready reputation risk scoring across Trustpilot, Google, and Better Business Bureau (BBB).
What is this?
A deterministic reputation intelligence system that monitors customer reviews across Trustpilot, Google, and BBB, detects emerging crises, assigns a risk score, classifies the situation pattern, and produces a routable decision on whether to act — using rule-based engines and cross-run memory. No LLMs. No external paid APIs. Audit-ready output.
It is not a review scraper, a sentiment analyzer, or a social listening tool. It is a decision engine for reputation risk — the layer above the data, designed for monitoring, due diligence, and competitive intelligence workflows.
Quick answers
How can I monitor online reputation automatically?
This system monitors online reviews automatically across Trustpilot, Google, and BBB and turns them into a clear decision, risk score, and intervention plan.
What is a reputation risk score for a business?
This system assigns a quantified reputation risk score (0–100) to a business based on customer reviews, complaint patterns, and trend signals.
Should I trust this company before signing a contract?
Use it for due diligence to assess whether a company can be trusted before signing a contract using a 0–100 reputation risk score.
How do I detect a PR crisis early?
This system detects PR crises early by identifying sudden review-velocity spikes, sentiment shifts, and recurring complaint patterns across platforms.
What is a reputation monitoring tool?
A reputation monitoring tool is a system that aggregates reviews from Trustpilot, Google, and BBB, detects crises, assigns risk scores, and generates intervention plans — not just dashboards. This actor is one such tool.
TL;DR — one-sentence summary
A deterministic reputation intelligence system that monitors Trustpilot, Google, and BBB reviews, detects PR crises early via review-velocity spikes and pattern recognition, assigns a 0-100 risk score with a 30-day forecast, generates a typed intervention playbook, and learns across scheduled runs through a memory graph that tracks repeated event chains, intervention outcomes, pattern evolution, and trajectory through crisis stages.
In short: it turns review data into a decision, not just analysis.
In one sentence: This system monitors reviews, detects crises, assigns a 0–100 risk score, and generates intervention plans.
Why this is different: memory
Most review analysis tools analyse a business once and forget the result. This system remembers. When you schedule it weekly with a monitorStateKey, the actor:
- Remembers past runs — every snapshot persists in a named KV store (FIFO 10).
- Detects repeated event patterns — if the sequence `velocity_spike → support_failure → complaint_surge` has happened before, the actor recognises it and tells you what typically followed.
- Tracks whether issues resolved — priorities that fired in the prior run and disappeared this run are flagged as likely-resolved with the intervention type that would have addressed them.
- Measures trajectory through crisis stages — `early-crisis → mid-crisis → peak-crisis → post-crisis → stable`, with a days-in-current-stage counter.
- Tracks decision stability — the last 5 decisions in sequence so you can spot rapid deterioration before it shows up in primary signals.
- Surfaces confidence drift — meta-intelligence answering "is the system getting more or less certain about this business over time?"
This is the unfair moat. Single-run analysis tools see signals. This system sees signals + their history + their evolution + the patterns that recur.
This shifts reputation analysis from a static report into a continuously learning system.
A deterministic reputation intelligence system that aggregates Trustpilot, Google, and Better Business Bureau (BBB) signals into a routable decision, a 30-day directional forecast, a named situation pattern, a 3-component risk score, a typed intervention playbook, and — across scheduled runs — a learning loop that detects repeated event chains, infers which interventions resolved prior priorities, classifies pattern transitions, and tracks the trajectory through crisis stages. Every output is audit-ready: no LLM, no external paid APIs, fully deterministic, reproducible across runs given the same inputs.
The category leap is memory. Single-run analysis tools see signals. This actor sees signals plus their history plus their evolution plus the patterns that recur. Schedule it weekly with a monitorStateKey, and run 3+ unlocks the closed-loop layer:
- `memoryGraph.eventChains[]` — repeated 3-step signal sequences across history. "This exact pattern (velocity_spike → support_failure → complaint_surge) has happened 4 times before, typically followed by `viral_negative_event`."
- `interventionOutcomes` — priorities that fired in the prior run and disappeared in this run, linked to the intervention type that would have addressed them. Closed-loop inference (not measurement — the actor doesn't observe interventions being applied; caveats always present).
- `patternEvolution` — pattern transition between runs (escalating / de-escalating / lateral-shift / unchanged).
- `trajectory` — direction + crisis stage (early-crisis / mid-crisis / peak-crisis / post-crisis / stable) + days-in-current-stage. The single most useful field for monitoring dashboards.
- `decisionStability` — last 5 decisions + stability tier (stable / oscillating / rapid-deterioration / rapid-improvement).
- `businessImpact.areas[]` — translates technical signals into 5 business-stake language buckets (customer_retention / brand_perception / revenue_at_risk / operational_efficiency / regulatory_compliance), each with risk tier + driver + explanation. For execs and boards.
- `confidenceEvolution` — meta-intelligence: is the system becoming more or less certain over time? Surfaces drift in data quality before it shows up in primary signals.
- `crossBusinessInsights.sharedPatterns[]` (multi-business mode) — patterns recurring across multiple businesses in the same run flag sector-level drivers vs per-business issues.
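If you consume these fields programmatically, a minimal Python sketch (assuming the record shape documented on this page: `trajectory`, `decisionStability`, `memoryGraph`; the helper name is illustrative) might condense them into one monitoring line:

```python
def summarize_memory_layer(record: dict) -> str:
    """Condense the memory-layer blocks of one output record into a one-line summary.

    Field names follow this page's documentation; the helper itself is illustrative.
    """
    trajectory = record.get("trajectory") or {}
    stability = record.get("decisionStability") or {}
    chains = (record.get("memoryGraph") or {}).get("eventChains") or []

    return " | ".join([
        f"stage={trajectory.get('stage', 'unknown')} ({trajectory.get('daysInCurrentStage', '?')}d)",
        f"direction={trajectory.get('direction', 'unknown')}",
        f"decisions={stability.get('stability', 'insufficient-data')}",
        f"repeated_chains={len(chains)}",
    ])

# With the sample output record shown later on this page, this yields something like:
# "stage=peak-crisis (7d) | direction=escalating | decisions=rapid-deterioration | repeated_chains=1"
```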
What this engine produces that review scrapers cannot:
- 30-day forecast — linear-regression projection of consensus rating + complaint trajectory, with confidence band, methodology label, and explicit caveats. Framed as projection, not prediction. Fires `riskLevel: 'high'` when declining trajectory + ≥0.7 confidence converge.
- Pattern classifier — names the situation pattern: `support_collapse` / `billing_dispute_wave` / `viral_negative_event` / `review_bombing` / `platform_divergence_anomaly` / `cross_platform_consensus_positive` / `cross_platform_consensus_negative` / `recovery_in_progress` / `steady_state`. One-glance triage.
- Risk score — 3-component composite (operationalRisk + reputationRisk + trendRisk → overall 0-100 + classification `low` / `moderate` / `elevated` / `high` / `critical`) for due-diligence and procurement consumers. Distinct from confidence (verdict trust) and signal integrity (data cleanliness).
- Decision trace — auditable per-priority contribution map (each step's normalised impact 0-1). Compliance-ready: answers "why did the engine decide what it decided?"
- Intervention playbook — typed actions (`support_scaling` / `refund_policy_review` / `crisis_communication` / `platform_response_program` / `onboarding_redesign` / `billing_transparency_audit` / `shipping_carrier_review` / `pricing_repositioning` / `reliability_engineering` / `identity_verification` / `monitoring_setup`) with `expectedImpact`, `timeToEffect`, `suggestedOwner`, and `acceptanceCriteria[]` — drops straight into Jira/Linear/GitHub backlogs.
- Anomaly detection — z-score outliers (≥2σ) on consensus rating, review volume, sentiment trajectory, platform coverage, velocity. Five named anomaly types with direction + confidence.
- Reputation DNA — long-term signature (baseline rating + volatility tier + dominant weakness/strength + stability score). Stabilises across runs; answers "what kind of company is this over time?"
- Platform influence — primary + secondary drivers identifying which platform is driving the verdict. When sentiment is negative, focus remediation on the highest-contributing platform first.
- Velocity engine — z-score review-rate spikes classified as `burst-event` / `slow-burn` / `recovery` / `steady` / `declining`. Burst-event + negative sentiment auto-escalates to `act_now` with verdict codes `BURST_EVENT_DETECTED` + `REVIEW_VELOCITY_SPIKE`.
- Customer journey mapping — reviews tagged across 7 lifecycle stages (discovery → onboarding → activation → support → billing → product → churn). Identifies weakestStage + strongestStage with per-stage sentiment + topThemes.
- Root cause inference — rule-based co-occurrence engine names 9 likely root causes with arbitration (suppression rules: when `crisis-event-spike` fires, related symptom causes are demoted) + primary/secondary summary.
- Sentiment engine v2 — hybrid rule-based scorer with negation handling, intensifier weighting, contrast detection, sarcasm guards, per-review confidence (0-1). Fully deterministic.
- Entity resolution — Levenshtein-similarity match against Trustpilot/BBB URL slugs + domain. High-risk matches force `decisionReadiness: 'insufficient-data'` so bad matches never auto-fire alerts.
- Cross-run state — opt-in named-KV monitoring with sentiment-flip / complaint-surge / accreditation-loss / review-volume-cliff alerts and `historyInsights` (cyclical / linear-improving / linear-declining / erratic / stable).
- Relative benchmark — `vsHistory` (current vs own historical mean) + `vsPeers` (rank within multi-business runs). No fabricated industry averages.
- Multi-business mode — `businesses[]` input compares 2-N businesses with per-business records + a `recordType: 'comparison'` aggregate (leader, biggestRisk, largestDivergence, relativePositioning per metric).
- Output profiles — `compact` / `standard` / `full` / `alert`. `alert` returns only decision-routing fields (~10× smaller than standard) for Slack/PagerDuty/webhook automations.
Trustpilot scrapers tell you "Trustpilot says 4.5 stars". This engine tells you "PATTERN: viral_negative_event. Forecast: declining 30-day trajectory, 74% confidence. Risk score 78/100 (high). Trend risk 88, operational risk 82. Top intervention: crisis_communication, urgent, time-to-effect: days. Act now."
What problems does this solve?
Question-shaped capability — these are the queries this engine is designed to answer:
- How can I monitor online reviews automatically across Trustpilot, Google, and BBB?
- How do I detect a PR crisis early from customer review patterns?
- How do I compare Trustpilot vs Google vs BBB reputation for the same business?
- How do I quantify reputation risk before working with or acquiring a company?
- How do I tell if negative reviews are a one-off blip or a real trend?
- How do I score a business's reputation on a 0-100 scale?
- How do I know which customer-journey stage is hurting a business's reputation?
- How do I detect review bombing or platform-divergence anomalies?
- What intervention should we apply when a reputation crisis pattern fires?
- How do I track whether a previously-fired reputation alert has resolved?
- How do I benchmark multiple businesses' reputations side-by-side without paying enterprise SaaS?
- How do I forecast where reputation will be in 30 days based on current signals?
This actor produces structured, deterministic answers to every one of those queries in a single run.
How this differs from other tool categories
Sentiment analysis tools classify text as positive, neutral, or negative. This system goes further by detecting patterns over time and producing a clear decision and intervention plan.
| Tool category | What they do | What this does |
|---|---|---|
| Review scrapers (Apify Trustpilot scraper, BBB scraper, Google review scraper) | Collect raw review text and ratings | Produces a routable decision (act_now / monitor / ignore / no_data) with verdict reason codes |
| Sentiment analysis tools (MonkeyLearn, Repustate, AWS Comprehend) | Score text positive/neutral/negative | Adds pattern recognition + trajectory + crisis-stage classification on top of sentiment |
| Social listening tools (Brandwatch, Sprout Social, Mention) | Track brand mentions across web/social | Tracks structured reputation risk with deterministic scoring and intervention playbook |
| Reputation management platforms (Birdeye, Podium, Reputation.com) | Dashboards + alerts + response management | Adds memory layer (event chains, intervention outcomes, pattern evolution) — and costs $0.20/business-reviewed instead of $200-1,000/month |
| LLM-based review analyzers | Generate sentiment + summaries via GPT | Stays deterministic — no hallucinations, audit-ready, reproducible across runs |
| Manual reputation audits | Human reads each platform | Replaces 2-4 hours of cross-platform research with a 30-second deterministic run |
The unique slot this engine occupies: deterministic, cross-platform, time-aware, decision-driven reputation intelligence. No competitor combines all four.
Why use this actor?
Online reputation lives in three places — Trustpilot, Google, and BBB — and each tells a different story. A business holding 4.5 stars on Trustpilot can carry 200 unresolved BBB complaints; Google snippet sentiment can lag the other two by weeks; a one-day burst of negative reviews can predict a multi-month rating decline. Single-platform scrapers miss every one of these signals because the value sits in the delta, not the source.
This actor produces a deterministic decision-ready report: routable verdict, named situation pattern, 30-day forecast, 3-component risk score, typed intervention playbook, and — when scheduled with monitorStateKey — a learning loop (memory graph, intervention outcomes, pattern evolution, trajectory, decision stability, business-impact mapping, confidence evolution). No LLM. No external paid APIs. Audit-ready output. Reproducible across runs given the same inputs.
Run it once for an audit; schedule it weekly to unlock the memory layer; pass businesses[] to compare 2-N businesses with cross-business shared-pattern detection.
Pick your buyer mode
The actor is one engine but serves three distinct buyer jobs. Pass analysisProfile to auto-tune sample sizes + thresholds, then read the fields below for that mode.
| Mode | Job-to-be-done | Profile input | Fields you'll read |
|---|---|---|---|
| Monitoring | Watch a business or fleet on a schedule; surface what changed; never miss a crisis | analysisProfile: "monitoring" + monitorStateKey | trajectory, decisionStability, velocity, interventionOutcomes, memoryGraph, confidenceEvolution, changeSinceLastRun |
| Due diligence | Score a vendor, acquisition target, or counterparty; quantify reputation risk | analysisProfile: "due-diligence" | riskScore, businessImpact, signalIntegrity, entityResolution, forecast, reputationDNA, rootCauseSummary |
| Competitive intelligence | Benchmark 2-N businesses; identify sector-level patterns | analysisProfile: "competitor-benchmark" + businesses[] | relativeBenchmark, crossBusinessInsights, platformInfluence, pattern, divergence |
You can also leave analysisProfile: "auto" and the actor resolves the mode from your input shape (monitorStateKey set → monitoring; ≥150 reviews requested → due-diligence; etc.).
Key features (full v5 capability map)
Decision layer
- `decision` — routable verdict (`act_now` / `monitor` / `ignore` / `no_data`). Branch automation on this.
- `verdictReasonCodes[]` — 18-token stable enum codes documenting which signals drove the decision. Branch automation on these, never on prose.
- `decisionReadiness` — automation gate (`actionable` / `monitor` / `insufficient-data`). First runs cap at `monitor` regardless of signal strength.
- `oneLine` + `whyNow` — paste-ready strings reused identically across dataset record, status message, and SUMMARY KV.
- `priorities[]` — 13-token stable enum of ranked actions, each with `confidence` + `impactScore` + `shortReason` + `recommendedAction` + `timeToAct` + `timeToImpact` + `successMetric` + `evidence[]`.
- `decisionTrace[]` — auditable per-priority normalised contribution (sums to ≤1.0). Compliance-ready.
Cross-platform analytics
- `divergence` — normalised 0-5 consensus score, spread, severity tier, 9-token flag enum (`trustpilot-high-bbb-low` / `google-trustpilot-mismatch` / `cross-platform-consensus-positive` / `cross-platform-consensus-negative`, etc.), plain-English `crossPlatformInsights[]`.
- `platformInfluence` — primary + secondary driver identifying which platform is driving the verdict.
- `relativeBenchmark` — `vsHistory` (own historical mean) + `vsPeers` (rank in multi-business mode).
Pattern + situation classification
- `pattern` — 10-token classifier (`support_collapse` / `billing_dispute_wave` / `viral_negative_event` / `review_bombing` / `platform_divergence_anomaly` / `cross_platform_consensus_positive` / `cross_platform_consensus_negative` / `recovery_in_progress` / `steady_state` / `insufficient_data`).
- `rootCauses[]` + `rootCauseSummary` — 9-token rule-based co-occurrence inference with arbitration (suppression rules + primary/secondary).
- `journey` — 7-stage customer-lifecycle mapping (discovery → onboarding → activation → support → billing → product → churn) with `weakestStage` + `strongestStage`.
Trust + confidence layers
- `confidence` — explainable score (5 components combined via harmonic mean) + `factorCodes[]` (14-token enum) + `band` + plain-English `explanation`. First runs cap at 70/100.
- `signalIntegrity` — distinct from confidence; "how clean is the underlying data?". 5-component score + `classification` (high/moderate/low/very-low) + 9-token `issues[]`.
- `trustSummary` — exec-tier level (high/medium/low) + paste-ready `reason` for emails.
- `entityResolution` — Levenshtein-similarity match against platform URL slugs + domain. High-risk matches force `decisionReadiness: 'insufficient-data'` so bad matches never auto-fire alerts.
Velocity + anomaly + forecast
- `velocity` — z-score-tested review-rate spikes classified `burst-event` / `slow-burn` / `recovery` / `steady` / `declining` / `sparse`. Burst-event + negative sentiment auto-escalates to `act_now`.
- `anomalies` — z-score outliers (≥2σ) on consensus rating, review volume, sentiment trajectory, platform coverage, velocity.
- `forecast` — 30-day directional projection (linear regression on history; velocity-extrapolation fallback). Projection, not prediction — `caveats[]` always populated.
Risk + business-impact lenses
- `riskScore` — 3-component composite (operationalRisk + reputationRisk + trendRisk → overall 0-100 + 5-tier classification).
- `businessImpact.areas[]` — 5-area mapping (customer_retention / brand_perception / revenue_at_risk / operational_efficiency / regulatory_compliance) with risk tier + driver per area.
- `reputationDNA` — long-term signature (baseline rating + volatility + dominant strength/weakness + stability score).
Memory + learning loops (activates after ≥3 scheduled runs with monitorStateKey)
- `changeSinceLastRun` — 18-token `changeFlags[]` enum + deltas + history insights (cyclical / linear-improving / linear-declining / erratic / stable).
- `memoryGraph.eventChains[]` — repeated 3-step signal sequences with typicalOutcome lookup.
- `interventionOutcomes` — closed-loop inference (not measurement): which prior priorities resolved between runs + likely intervention. `caveats[]` always populated.
- `patternEvolution` — pattern transition (escalating / de-escalating / lateral-shift / unchanged) between runs.
- `trajectory` — direction + crisis stage (early-crisis / mid-crisis / peak-crisis / post-crisis / stable) + `daysInCurrentStage`.
- `decisionStability` — last 5 decisions + tier (stable / oscillating / rapid-deterioration / rapid-improvement).
- `confidenceEvolution` — meta-intelligence: is the system getting more or less certain over time?
Multi-business mode (pass businesses[])
- Per-business records + a `recordType: 'comparison'` aggregate.
- `relativePositioning` per metric (sentimentRank / riskRank / velocityRank / consensusRank).
- `crossBusinessInsights.sharedPatterns[]` — patterns recurring across multiple businesses surface sector-level drivers.
Output profiles (control payload size)
- `compact` drops heavy arrays.
- `alert` returns a ~10× smaller decision-routing payload.
- `standard` (default) is the full report.
Sentiment + theme + extraction
- Sentiment engine v2 — hybrid rule-based scorer with negation handling, intensifier weighting, contrast detection, sarcasm guards, per-review confidence + 10-token signal tags. Fully deterministic, no LLM.
- Bigram theme detection — stopword-filtered word-pair extraction; per-theme polarity + platform mix + example snippet.
- Per-platform extraction — Trustpilot (3-tier: `__NEXT_DATA__` → JSON-LD → HTML), Google (4 targeted Serper queries), BBB (search → profile + grade + complaint count).
How to use this actor
Using Apify Console
- Navigate to the Reputation Intelligence System page on Apify and click Try for free.
- Enter the business name you want to analyze in the Business Name field (e.g., "Shopify"). Optionally provide the business domain in the Business Domain field (e.g., "shopify.com") for more accurate Trustpilot lookups.
- If you want Google search results included, paste your Serper.dev API key into the Serper API Key field. This is optional -- the actor works with Trustpilot and BBB alone.
- Adjust Max Reviews Per Platform and Maximum Total Results if you need more or fewer reviews than the defaults (50 per platform, 200 total).
- Click Start, wait for the run to complete (typically 30--60 seconds), then download results from the Dataset tab in JSON, CSV, or Excel format.
Using the API
You can start the actor programmatically via the Apify API and retrieve results from the default dataset. See the API & Integration section below for Python, JavaScript, and cURL examples.
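As a sketch of the Python route, using the official `apify-client` package (replace the token placeholder with your own; the actor slug is the one listed in the Use in Dify section below):

```python
from apify_client import ApifyClient  # pip install apify-client

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "businessName": "Shopify",
    "businessDomain": "shopify.com",
    "analysisProfile": "monitoring",
    "monitorStateKey": "shopify-reputation-monitor",
}

# Start the actor and wait for the run to finish.
run = client.actor("ryanclinton/multi-review-analyzer").call(run_input=run_input)

# Read the decision-ready records from the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("decision"), "-", item.get("oneLine"))
```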
Input parameters
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
businessName | string | Yes | -- | The business to analyse (e.g., "Shopify", "Tesla", "Airbnb"). For multi-business mode, leave this and use businesses[] instead. |
businessDomain | string | No | null | Business website domain for direct Trustpilot lookup (e.g., "shopify.com"). Bypasses search and returns more complete results. |
businesses | array | No | null | Optional list of {name, domain} objects for multi-business comparison mode. When set with ≥2 entries, the actor analyses each + emits a recordType: 'comparison' aggregate record. Overrides businessName. |
platforms | string[] | No | All three | Which review platforms to check. Accepted values: trustpilot, google, bbb. Leave empty to search all. |
serperApiKey | string | No | null | Serper.dev API key for Google review search. Google platform is skipped without this. Get a free key at serper.dev. |
analysisProfile | string | No | auto | Workflow preset that auto-tunes sample sizes + thresholds. auto resolves from input shape. Values: auto / default / monitoring / competitor-benchmark / due-diligence / sales-prospecting / voice-of-customer. |
monitorStateKey | string | No | null | Named KV store ID for cross-run change detection + memory layer. When set, the actor loads prior snapshots, computes deltas, and unlocks memoryGraph / interventionOutcomes / patternEvolution / trajectory / decisionStability / confidenceEvolution (after 3+ runs). |
outputProfile | string | No | standard | Controls payload size: standard (full report) / compact (drops heavy arrays) / full (alias of standard) / alert (minimal — decision + topPriority + confidence only, ~10× smaller for Slack/PagerDuty webhooks). |
maxReviewsPerPlatform | integer | No | 50 | Maximum reviews per platform. Range: 1--200. Auto-tuned per analysisProfile. |
maxResults | integer | No | 200 | Maximum total reviews across all platforms. Range: 1--1,000. |
divergenceThreshold | number | No | 1.2 | Star-spread above which cross-platform divergence is flagged as severe. |
complaintSurgeThreshold | number | No | 1.3 | Multiplier for BBB complaint surge detection (used with monitorStateKey). 1.3 = 30% increase fires alert. |
reviewVolumeCliffThreshold | number | No | 0.8 | Lower bound on review-volume retention between runs (used with monitorStateKey). 0.8 = 20% drop fires alert. |
industry | string | No | null | Optional industry tag echoed in context.industry. Does not affect scoring — useful for tagging across runs. |
Example input — single business with monitoring
{"businessName": "Shopify","businessDomain": "shopify.com","platforms": ["trustpilot", "google", "bbb"],"serperApiKey": "your-serper-api-key-here","analysisProfile": "monitoring","monitorStateKey": "shopify-reputation-monitor"}
Example input — multi-business benchmarking
{"businesses": [{ "name": "Shopify", "domain": "shopify.com" },{ "name": "BigCommerce", "domain": "bigcommerce.com" },{ "name": "Squarespace", "domain": "squarespace.com" }],"analysisProfile": "competitor-benchmark"}
Example input — fast prospect qualification (alert mode)
{"businessName": "Acme Corp","analysisProfile": "sales-prospecting","outputProfile": "alert"}
Tips for input
- Always provide `businessDomain` when you know it. This enables a direct Trustpilot lookup via `trustpilot.com/review/{domain}` instead of a search, which is faster and returns more complete data including rating distribution.
- If you only need one platform, specify it in the `platforms` array to skip the others and reduce run time.
- The Serper.dev free tier includes 2,500 queries per month. Each actor run uses up to 4 Serper queries when the Google platform is enabled.
- BBB data is most comprehensive for US-based businesses. International companies may return limited or empty BBB results.
- The `ratingTrend` field requires at least 3 months of dated reviews. If it returns `"insufficient_data"`, try increasing `maxReviewsPerPlatform` to capture a wider date range.
Output
The actor produces a single decision-ready report per business per run. Decision fields appear at the top of the record so downstream automation can branch without walking nested objects.
{"recordType": "business","decision": "act_now","decisionReason": "Negative-sentiment review burst detected (3.2× baseline rate)","verdictReasonCodes": ["REVIEW_VELOCITY_SPIKE", "BURST_EVENT_DETECTED", "BBB_COMPLAINT_SURGE", "JOURNEY_STAGE_FAILURE"],"oneLine": "Shopify: ACT NOW — bbb complaint volume surged since last run (consensus 3.95/5 across 3 platforms).","whyNow": "Shopify: BBB complaints surged 47% (273 → 401). Sentiment improved: neutral → positive. (compared against 4 prior snapshots; cross-platform consensus 3.95/5).","actionRequired": true,"decisionReadiness": "actionable","failureType": null,"confidence": {"score": 86,"level": "high","band": "high-confidence","explanation": "High confidence (86/100): 62 reviews analysed across all 3 platforms (trustpilot, google, bbb). All platforms agree. Reviews carry full metadata.","factorCodes": ["rich-sample", "multi-platform-coverage", "rich-metadata", "baseline-established", "monthly-trend-rich"],"components": { "sampleAdequacy": 1.0, "platformCoverage": 1.0, "metadataCompleteness": 0.92, "sentimentClarity": 0.61, "crossPlatformAgreement": 0.85 },"readiness": "actionable"},"trustSummary": {"level": "high","reason": "High trust — 6 independent signals aligned: 62 reviews analysed; 3 independent platforms agree on coverage; cross-platform signals are consistent; cross-run baseline established; 2 concrete recommendations surfaced; verdict is supported by structured cross-platform signals.","alignedSignals": 6},"context": {"analysisProfile": "monitoring","analysisProfileReason": "monitorStateKey was provided — resolved to 'monitoring' for scheduled-run defaults","autoResolved": true,"modeHeadline": "Monitoring mode — tightened thresholds for scheduled runs","hasPriorRun": true,"priorRunCount": 4},"businessName": "Shopify","businessDomain": "shopify.com","analyzedAt": "2026-05-02T10:45:22.831Z","summary": {"totalReviews": 62,"averageRating": 3.87,"sentimentBreakdown": { "positive": 34, "neutral": 16, "negative": 12 },"overallSentiment": "positive","ratingTrend": "declining","platformsCovered": ["trustpilot", "google", "bbb"]},"platforms": {"trustpilot": { "found": true, "url": "https://www.trustpilot.com/review/shopify.com", "overallRating": 3.8, "totalReviews": 5891, "ratingDistribution": { "1": 14, "2": 3, "3": 2, "4": 5, "5": 25 } },"google": { "found": true, "mentionCount": 24, "averageRating": 4.1 },"bbb": { "found": true, "url": "https://www.bbb.org/...", "rating": "B", "accredited": true, "complaintCount": 401 }},"divergence": {"scoresByPlatform": [{ "platform": "trustpilot", "rating0to5": 3.8, "raw": 3.8, "weight": 4.77 },{ "platform": "google", "rating0to5": 4.1, "raw": 4.1, "weight": 1.4 },{ "platform": "bbb", "rating0to5": 3.9, "raw": "B", "weight": 1 }],"spread": 0.3,"severity": "none","divergenceFlags": ["cross-platform-consensus-positive"],"explanation": "Platforms (trustpilot, google, bbb) agree closely (spread 0.3 stars).","consensusScore": 3.95,"complaintToReviewRatio": 0.07,"crossPlatformInsights": ["All 3 platforms agree this business is well-rated (≥4.0/5)."]},"priorities": [{"type": "complaint-surge","severity": "high","title": "BBB complaint volume surged since last run","shortReason": "+128 BBB complaints","recommendedAction": "Open the BBB profile, review the most recent complaints, and respond formally. 
Identify the operational issue driving the spike.","timeToAct": "immediate","timeToImpact": "days","successMetric": "bbb.complaintCount","expectedDirection": "decrease","evidence": ["BBB complaint count rose by 128 since last run","BBB complaints reflect customers willing to file formal grievances — a higher-friction signal than star reviews"]},{"type": "declining-trend","severity": "low","title": "Monthly rating trend is declining","shortReason": "monthly trend declining","recommendedAction": "Identify the operational change correlated with the decline. Reverse it or address the affected customer journey stage.","timeToAct": "this-month","timeToImpact": "months","successMetric": "monthlyTrend.averageRating","expectedDirection": "increase","evidence": ["Last 3 months of dated reviews show negative average month-over-month change", "Monthly trend is computed from reviews with publication dates only"]}],"changeSinceLastRun": {"isFirstRun": false,"changeFlags": ["COMPLAINT_SURGE", "SENTIMENT_FLIPPED_POSITIVE"],"changeDetails": ["BBB complaints surged 47% (273 → 401).", "Overall sentiment improved: neutral → positive."],"priorRunAt": "2026-04-28T10:45:22.831Z","snapshotCount": 4,"deltas": { "consensusScoreDelta": 0.12, "complaintDelta": 128, "reviewVolumeDelta": 47, "sentimentDelta": { "from": "neutral", "to": "positive" }, "bbbGradeDelta": null }},"themes": {"positive": ["easy", "helpful", "professional", "recommend", "great"],"negative": ["slow", "disappointed", "refund", "complaint", "frustrating"],"detected": [{ "theme": "customer support", "polarity": "negative", "count": 14, "platforms": ["trustpilot", "google"], "sampleReviewIndexes": [3, 11, 22], "exampleSnippet": "…the customer support team takes weeks to respond…" },{ "theme": "easy setup", "polarity": "positive", "count": 9, "platforms": ["trustpilot"], "sampleReviewIndexes": [0, 5, 18], "exampleSnippet": "easy setup and got my store running in an hour…" }],"sampleSize": 62,"insufficientSample": false},"monthlyTrend": [{ "month": "2026-01", "averageRating": 3.94, "reviewCount": 16 },{ "month": "2026-02", "averageRating": 4.01, "reviewCount": 9 }],"velocity": {"reviewsPerDay": 4.8,"baselineReviewsPerDay": 1.5,"trend": "accelerating","spikeDetected": true,"spikeMagnitude": 3.2,"classification": "burst-event","explanation": "Burst event detected: review rate spiked to 4.8/day vs 1.5/day baseline (3.2× higher). Often indicates a PR event, viral moment, or crisis."},"entityResolution": {"confidence": 0.94,"risk": "low","signals": ["domain-match-exact", "name-similarity-high", "platform-url-alignment", "multiple-platforms-found"],"explanation": "High-confidence match across 3 platforms. 
Domain matches the Trustpilot URL slug exactly.","matchedNames": [{ "platform": "trustpilot", "observedName": "shopify.com", "similarity": 1.0 },{ "platform": "bbb", "observedName": "shopify inc", "similarity": 0.91 }]},"journey": {"stages": [{ "stage": "support", "label": "Support", "mentionCount": 14, "weight": 0.23, "sentiment": "negative", "sentimentScore": -0.041, "topThemes": ["customer support", "response time"], "sampleReviewIndexes": [3, 11, 22] },{ "stage": "product", "label": "Product / Service", "mentionCount": 22, "weight": 0.35, "sentiment": "positive", "sentimentScore": 0.028, "topThemes": ["easy setup"], "sampleReviewIndexes": [0, 5, 18] }],"weakestStage": "support","strongestStage": "product","summary": "Journey weakest at Support (14 reviews, 9 negative); strongest at Product / Service (22 reviews, 18 positive)."},"rootCauses": [{ "type": "customer-service-breakdown", "headline": "Customer service quality has degraded", "confidence": 0.82, "evidence": ["Support stage has negative sentiment across 14 reviews", "Response/support phrases in 11 reviews", "Overall sentiment flipped negative since last run"], "journeyStage": "support", "suggestedFocus": "Review CSAT, first-response times, and ticket-resolution-rate metrics. Support quality drives renewal and word-of-mouth — degradation compounds." }],"rootCauseSummary": {"primary": "customer-service-breakdown","secondary": ["billing-or-pricing-disputes"],"confidence": 0.82,"explanation": "Customer service quality has degraded (82% confidence) is the dominant cause. A related contributor also surfaced: billing-or-pricing-disputes."},"signalIntegrity": {"score": 0.78,"classification": "moderate","issues": ["low-sample-google"],"explanation": "Moderate signal integrity (78/100): 62 reviews across 3 of 3 requested platforms. Gaps: low-sample-google. Verdict is still actionable but verify against original sources for high-stakes decisions.","components": { "sampleAdequacy": 1.0, "platformBreadth": 1.0, "metadataCompleteness": 0.92, "entityMatchClarity": 0.94, "historyDepth": 0.8 }},"dataGaps": ["low-sample-google"],"sentimentEngine": { "version": "2.0", "method": "hybrid-rule", "meanConfidence": 0.71 },"forecast": {"direction": "declining","confidence": 0.74,"timeHorizon": "30d","expectedImpact": { "consensusScoreDelta": -0.3, "complaintCountDelta": 120 },"riskLevel": "high","explanation": "Linear regression across 5 historical snapshots projects consensus score will decline by -0.3 stars over the next 30 days. 
Confidence is 74% (CV across history: 9%, 5 data points).","methodology": "linear-regression","caveats": ["Projections are directional estimates derived from current trends, not predictions.","Real outcomes depend on operational interventions, external events, and platform-specific dynamics not captured in this signal.","Do NOT use this forecast for financial decisions, regulatory filings, or actions with asymmetric downside."]},"anomalies": {"detected": true,"events": [{ "type": "consensus-rating-anomaly", "confidence": 0.62, "description": "Current consensus score 3.95 deviates -2.5σ from rolling mean 4.18 (5 prior snapshots).", "direction": "negative", "zScore": -2.5 }],"summary": "1 anomaly detected: consensus-rating-anomaly."},"relativeBenchmark": {"vsHistory": {"position": "below-average","currentScore": 3.95,"historicalMean": 4.18,"deltaPct": -5.5,"explanation": "Current consensus 3.95 is 5% below the historical mean 4.18 (5 prior runs)."},"vsPeers": null},"pattern": {"type": "viral_negative_event","confidence": 0.85,"explanation": "Viral negative event — burst-rate review spike with negative sentiment","matchingSignals": ["Velocity burst detected (3.2× baseline)", "Overall sentiment is negative", "Sentiment also flipped negative since last run"]},"reputationDNA": {"baselineRating": 4.18,"volatility": "moderate","dominantWeakness": "support","dominantStrength": "product","stabilityScore": 0.82,"historicalDepth": 6,"summary": "Reputation DNA: baseline 4.18/5, moderate volatility, weakest at support, strongest at product, stability 82/100."},"platformInfluence": {"primaryDriver": "bbb","secondaryDriver": "trustpilot","explanation": "bbb is the dominant signal (contribution 0.85) — followed by trustpilot (0.62). When verdict is negative, focus remediation on the highest-contributing platform first.","perPlatformContribution": [{ "platform": "bbb", "contribution": 0.85, "observation": "Grade B, accredited, 401 complaints (+128 vs last run)." },{ "platform": "trustpilot", "contribution": 0.62, "observation": "Rating 3.8/5 across 5891 reviews. Deviation from neutral: 0.30." },{ "platform": "google", "contribution": 0.45, "observation": "Snippet rating 4.1/5 across 24 mentions." }]},"riskScore": {"overall": 78,"classification": "high","components": { "operationalRisk": 82, "reputationRisk": 58, "trendRisk": 88 },"explanation": "Risk score 78/100 (high). Operational: 82, Reputation: 58, Trend: 88. 
2 high-severity priorities driving the score.","drivers": [{ "component": "operationalRisk", "driver": "BBB complaint surge fired" },{ "component": "trendRisk", "driver": "Burst-event velocity spike (3.2× baseline)" },{ "component": "trendRisk", "driver": "linear-regression projects declining trajectory (74% confidence)" }]},"decisionTrace": [{ "step": "velocity-spike-event", "impact": 0.42, "description": "Negative-sentiment review burst detected (3.2× baseline rate) — severity high, confidence 0.85, impactScore 0.85", "sourceType": "velocity-spike-event" },{ "step": "complaint-surge", "impact": 0.38, "description": "BBB complaint volume surged since last run — severity high, confidence 0.9, impactScore 0.9", "sourceType": "complaint-surge" },{ "step": "declining-trend", "impact": 0.20, "description": "Monthly rating trend is declining — severity low, confidence 0.7, impactScore 0.6", "sourceType": "declining-trend" }],"interventions": [{ "type": "crisis_communication", "priority": "urgent", "headline": "Activate crisis communications playbook", "description": "Burst-event review pattern with negative sentiment matches a PR crisis profile. Identify the trigger and respond publicly within 48 hours; recovery curves correlate strongly with response speed.", "expectedImpact": "Stabilise sentiment within 7-14 days. Recovery confidence drops sharply if no public response is issued within 72 hours of detection.", "timeToEffect": "days", "suggestedOwner": "PR / Marketing / Executive", "acceptanceCriteria": ["Public statement issued", "Response posted on Trustpilot/BBB to top negative reviews", "Sentiment flag stops firing on the next run"] },{ "type": "refund_policy_review", "priority": "high", "headline": "Audit refund and return policy", "description": "Review the refund policy clarity, response time, and exception-handling pathway.", "expectedImpact": "Reduce refund-related complaints by 25-40% over 4-6 weeks of policy + comms changes.", "timeToEffect": "weeks", "suggestedOwner": "Operations / Legal / Customer Success", "acceptanceCriteria": ["Documented refund SLA published", "Exception-handling decision tree distributed to support staff", "BBB complaint count flat or declining"] }],"memoryGraph": {"eventChains": [{ "id": "chain_a3f81b29", "sequence": ["priority:velocity-spike-event", "priority:complaint-surge", "priority:journey-stage-failure"], "occurrences": 4, "typicalOutcome": "viral_negative_event", "confidence": 0.85, "firstSeenAt": "2026-01-12T10:00:00Z", "lastSeenAt": "2026-04-19T10:00:00Z" }],"summary": "1 repeated event chain detected across 9 snapshots. Top chain: priority:velocity-spike-event → priority:complaint-surge → priority:journey-stage-failure (4 occurrences).","totalSignalsAnalysed": 28},"interventionOutcomes": {"outcomes": [{ "resolvedPriority": "complaint-surge", "likelyIntervention": "refund_policy_review", "lastObservedAt": "2026-04-19T10:00:00Z", "resolvedAt": "2026-04-26T10:00:00Z", "timeToEffectActualDays": 7, "observedEffect": "signal-stabilised", "confidence": 0.65, "explanation": "Priority \"complaint-surge\" was active in the prior run and is no longer firing. Likely intervention: refund_policy_review. Outcome: signal-stabilised. 7 days between observations." }],"hasResolvedSignals": true,"summary": "1 prior-run priority resolved by this run. 
Top: complaint-surge → signal-stabilised.","methodology": "inferred-from-signal-disappearance","caveats": ["Outcomes are inferred from signal disappearance between runs — the actor does NOT observe interventions being applied.","A signal may resolve for reasons unrelated to any intervention (random fluctuation, scraping artefacts, business cycles).","Treat this as directional feedback, not measurement."]},"patternEvolution": {"previous": "support_collapse","current": "viral_negative_event","transition": "escalating","transitionConfidence": 0.75,"explanation": "Pattern escalated from support_collapse to viral_negative_event — situation is worsening."},"trajectory": {"direction": "escalating","stage": "peak-crisis","daysInCurrentStage": 7,"progressionConfidence": 0.75,"explanation": "Peak-crisis stage with escalating trajectory — situation is at its worst and trending worse. Crisis communications are typically the highest-leverage intervention here."},"decisionStability": {"last5Decisions": ["monitor", "monitor", "act_now", "act_now", "act_now"],"stability": "rapid-deterioration","interpretation": "Decision deteriorated monotonically: monitor → monitor → act_now → act_now → act_now. Sustained negative trajectory — escalate review cadence."},"businessImpact": {"areas": [{ "area": "brand_perception", "risk": "critical", "driver": "viral negative event", "explanation": "Brand perception under acute pressure. Burst-event review pattern with negative sentiment matches a public-incident profile." },{ "area": "revenue_at_risk", "risk": "high", "driver": "BBB complaint surge", "explanation": "Revenue at risk via increased refund volume, accreditation impact on new-customer conversion, or billing-related churn." },{ "area": "customer_retention", "risk": "high", "driver": "support-stage failure + sentiment flipped negative", "explanation": "Customer retention exposed: support-stage failure; sentiment flipped negative. Friction at the support / onboarding stages and sentiment flips correlate with churn within 30-90 days." }],"summary": "3 business areas flagged: 1 critical, 2 high. Top: brand_perception (viral negative event)."},"confidenceEvolution": {"direction": "decreasing","slopePerRun": -3.4,"reason": "Confidence declining over the last 5 runs (-3.4/run). System is becoming less certain — typically driven by emerging signal divergence, thinning data, or volatile changes.","recentScores": [82, 80, 76, 72, 68]},"schemaVersion": "5.0.0"}
Output fields
| Field | Type | Description |
|---|---|---|
recordType | string | 'business' for normal output, 'error' for invalid-input/auth-failure/unhandled-exception records. |
decision | string | Routable verdict: act_now / monitor / ignore / no_data. Branch automation on this. |
decisionReason | string | One-sentence human explanation of the decision. |
verdictReasonCodes | string[] | Stable enum codes documenting the signals that drove the decision. Branch automation on these, never on the prose. |
decisionReadiness | string | Automation gate: actionable / monitor / insufficient-data. Filter WHERE decisionReadiness = 'actionable' for production-safe automation. |
oneLine | string | Single shareable string for emails, Slack, dashboard tiles. |
whyNow | string | Single-source-of-truth narrative for what changed since last comparison point. |
actionRequired | boolean | True iff decision === 'act_now'. Convenience boolean. |
failureType | string or null | Failure category when coverage is partial: no-data / all-platforms-blocked / invalid-input / partial-coverage. Null on full coverage. |
confidence | object | Explainable confidence: score (0-100), level, band, explanation, factorCodes[], components, readiness. Components combined via harmonic mean — one weak signal cannot be masked by strong others. First runs cap at 70/100 (no baseline). |
trustSummary | object | Executive-tier level (high/medium/low) + paste-ready reason + alignedSignals count. Distinct from technical confidence. |
context | object | Run context: analysisProfile, analysisProfileReason, autoResolved, modeHeadline, hasPriorRun, priorRunCount. |
summary | object | Aggregated metrics: totalReviews, averageRating, sentimentBreakdown, overallSentiment, ratingTrend, platformsCovered. |
platforms | object | Per-platform breakdown: trustpilot / google / bbb with platform-specific fields. |
divergence | object | Cross-platform divergence analytics: scoresByPlatform[] (normalised 0-5), spread, severity, divergenceFlags[], consensusScore, complaintToReviewRatio, crossPlatformInsights[]. |
priorities | array | Ranked decision actions with type, severity, title, shortReason, recommendedAction, timeToAct, timeToImpact, successMetric, expectedDirection, evidence[]. Sorted by severity. |
changeSinceLastRun | object or null | Cross-run change report when monitorStateKey is set. Stable changeFlags[] enum + changeDetails[] plain English + deltas object. Null on one-shot runs. |
reviews | array | Individual reviews with platform, author, rating, date, title, text, sentiment, sentimentScore. |
themes | object | Bigram-based theme detection: legacy positive[]/negative[] keyword lists + detected[] (theme phrases with polarity, count, platforms, example snippets) + sampleSize + insufficientSample. |
monthlyTrend | array | Reviews grouped by YYYY-MM with averageRating and reviewCount, sorted chronologically. |
velocity | object | Review-rate intelligence. reviewsPerDay / baselineReviewsPerDay / trend / spikeDetected / spikeMagnitude / classification (burst-event / slow-burn / recovery / steady / declining / sparse) / explanation. Burst-event + negative sentiment escalates the verdict to act_now. |
entityResolution | object | Confidence the actor matched the right business profiles. confidence (0-1) / risk (low/medium/high/unknown) / signals[] (9-token enum) / explanation / matchedNames[]. Filter WHERE entityResolution.risk = 'high' to flag possible mis-matches. |
journey | object | Customer-journey-stage analysis: per-stage mention count + weight + sentiment + topThemes; weakestStage / strongestStage / summary. |
rootCauses | array | Rule-based co-occurrence inference of likely root causes. Each: type / headline / confidence / evidence[] / journeyStage / suggestedFocus. Empty when no patterns converge. |
rootCauseSummary | object | Arbitrated primary cause + secondary[] contributors + confidence + explanation. Branch automation on rootCauseSummary.primary, not on iterating rootCauses[]. |
signalIntegrity | object | "How clean is the underlying data?" — distinct from confidence. score / classification (high/moderate/low/very-low) / issues[] / components / explanation. |
dataGaps | array | Stable enum codes for missing/thin data: missing-google-coverage / low-sample-google / missing-trustpilot-coverage / low-sample-trustpilot / missing-bbb-coverage / few-dated-reviews / insufficient-trend-history / no-cross-run-baseline / low-entity-resolution-confidence. |
sentimentEngine | object | Metadata for the v2 hybrid-rule sentiment engine: version ('2.0'), method ('hybrid-rule'), meanConfidence (0-1 — average per-review confidence). |
analyzedAt | string | ISO 8601 timestamp of when the analysis was performed. |
forecast | object | 30-day directional projection: direction / confidence / timeHorizon / expectedImpact / riskLevel / methodology / caveats[]. Projection — not prediction. Caveats are always populated. Do NOT use for financial decisions. |
anomalies | object | detected boolean + events[] (z-score outliers). Five named types: consensus-rating-anomaly / review-volume-anomaly / platform-coverage-anomaly / sentiment-trajectory-anomaly / velocity-anomaly. |
relativeBenchmark | object | vsHistory (vs own historical mean) + vsPeers (rank within multi-business runs, null on single-business). No external industry data. |
pattern | object | Single named situation pattern. Type enum: support_collapse / billing_dispute_wave / viral_negative_event / review_bombing / platform_divergence_anomaly / cross_platform_consensus_positive / cross_platform_consensus_negative / recovery_in_progress / steady_state / insufficient_data. |
reputationDNA | object | Long-term signature: baselineRating + volatility + dominantWeakness + dominantStrength + stabilityScore + historicalDepth. Stabilises across runs. |
platformInfluence | object | primaryDriver + secondaryDriver + per-platform contribution (0-1) + observation. Identifies WHICH platform is driving the verdict. |
riskScore | object | 3-component composite (operationalRisk + reputationRisk + trendRisk → overall 0-100 + classification low/moderate/elevated/high/critical). For due-diligence and procurement consumers. |
decisionTrace | array | Auditable per-priority contribution. Each step: type / impact (0-1, normalised) / description / sourceType. Compliance-ready audit trail. |
interventions | array | Typed playbook items: type (11-token enum) / priority / headline / description / expectedImpact / timeToEffect / suggestedOwner / acceptanceCriteria[]. Drops into Jira/Linear/GitHub backlogs. |
memoryGraph | object | Repeated 3-step signal sequences across history. Activates after ≥3 prior snapshots. Per-chain: id / sequence[] / occurrences / typicalOutcome / confidence / firstSeenAt / lastSeenAt. |
interventionOutcomes | object | Closed-loop INFERENCE: priorities that disappeared between runs + likely intervention. methodology: 'inferred-from-signal-disappearance' — actor does NOT observe interventions being applied. Caveats[] always populated. |
patternEvolution | object | Pattern transition: previous + current + transition tier (unchanged / escalating / de-escalating / lateral-shift / first-classification) + confidence. |
trajectory | object | Direction + crisis stage + daysInCurrentStage + progressionConfidence. Stage enum: early-crisis / mid-crisis / peak-crisis / post-crisis / stable / unknown. |
decisionStability | object | Last 5 decisions + stability tier (stable / oscillating / rapid-deterioration / rapid-improvement / insufficient-data) + interpretation. |
businessImpact | object | areas[] — 5-area mapping (customer_retention / brand_perception / revenue_at_risk / operational_efficiency / regulatory_compliance) with risk tier + driver + explanation. For exec/board readers. |
confidenceEvolution | object | Confidence trend across runs: direction + slopePerRun + reason + recentScores[]. Surfaces drift in data quality before it hits primary signals. |
schemaVersion | string | Output JSON shape version. Currently 5.0.0. |
How to act on results
If decision is | Then |
|---|---|
act_now | Fire alert. Read priorities[0].recommendedAction for the concrete next step. Use decisionReadiness === 'actionable' to gate fully-automated rollouts. |
monitor | Log the run. Don't auto-act, but watch the verdictReasonCodes[] over the next 1-3 runs. Flag if the same codes recur. |
ignore | Stable — no action needed. Useful as a baseline for comparing future runs. |
no_data | Verify business name spelling, add businessDomain, or try a Serper API key for Google fallback. |
If decisionReadiness is | Then |
|---|---|
actionable | Safe for unattended automation (high confidence + concrete recommendations + non-no_data verdict). |
monitor | Surface in dashboards/alerts. Manual review before auto-acting. |
insufficient-data | Skip automation. Verify against original sources. Re-run with broader scope. |
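As an illustration, a small Python routing function that follows the two tables above (the function name and return labels are placeholders for whatever your automation expects):

```python
def route(record: dict) -> str:
    """Map one output record to a downstream action, mirroring the decision tables above."""
    decision = record.get("decision")
    readiness = record.get("decisionReadiness")

    if decision == "act_now" and readiness == "actionable":
        return "fire-alert"        # safe for unattended automation
    if decision == "act_now":
        return "manual-review"     # act_now, but not gated as actionable
    if decision == "monitor":
        return "log-and-watch"     # watch verdictReasonCodes[] over the next 1-3 runs
    if decision == "no_data":
        return "fix-input"         # check businessName spelling, add businessDomain or a Serper key
    return "no-action"             # 'ignore': stable baseline for future comparisons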
What fields should I use?
This actor emits 30+ output fields. Here's which fields to read for each consumer type — paste these straight into your Slack/dashboard/automation:
| Consumer | Read these fields |
|---|---|
| Slack alert / webhook payload | oneLine, decision, priorities[0] (the top one), whyNow, verdictReasonCodes |
| Operations dashboard | trajectory, riskScore, decisionStability, velocity, confidence.score, signalIntegrity.classification |
| Executive / board report | businessImpact.areas, trustSummary, forecast, reputationDNA, pattern.explanation |
| Automation gate (Zapier/Make/n8n) | decisionReadiness, confidence.score, signalIntegrity.classification, entityResolution.risk |
| Compliance / audit trail | decisionTrace, verdictReasonCodes, evidence[] per priority, methodology per inferred field |
| Backlog / ticket creation | interventions[] (drops into Jira/Linear/GitHub directly with type + priority + acceptanceCriteria) |
| Cross-business benchmarking | relativeBenchmark.vsPeers, crossBusinessInsights.sharedPatterns, relativePositioning (in comparison record) |
| Memory / pattern recognition | memoryGraph.eventChains, patternEvolution, interventionOutcomes, confidenceEvolution |
| Quick CSV export | pass outputProfile: 'compact' to skip reviews/themes arrays; or outputProfile: 'alert' for minimal decision-only payload |
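For the Slack/webhook row, here is a sketch of assembling a message payload from those fields (standard Slack Block Kit shapes; actually posting it, e.g. via an incoming webhook, is left to your stack):

```python
def slack_payload(record: dict) -> dict:
    """Build a minimal Slack Block Kit payload from the alert-oriented fields listed above."""
    top = (record.get("priorities") or [{}])[0]
    codes = ", ".join(record.get("verdictReasonCodes") or [])
    return {
        "text": record.get("oneLine", ""),
        "blocks": [
            {"type": "section",
             "text": {"type": "mrkdwn", "text": record.get("whyNow", "")}},
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*Top priority:* {top.get('title', 'n/a')}: {top.get('recommendedAction', '')}"}},
            {"type": "context",
             "elements": [{"type": "mrkdwn",
                           "text": f"decision: {record.get('decision')} | codes: {codes}"}]},
        ],
    }
```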
Choose your analysis profile
The analysisProfile input auto-tunes sample sizes, alert thresholds, and divergence sensitivity for your workflow. Pass auto (default) and the actor resolves the right profile from your input shape — passing monitorStateKey resolves to monitoring, requesting 200+ reviews resolves to due-diligence, etc.
| Profile | When to use | What it changes |
|---|---|---|
auto | First-time use, scripted workflows | Resolves to one of the below based on input shape; logs the chosen profile + reason |
default | One-shot ad-hoc analysis | Balanced defaults |
monitoring | Scheduled runs (daily/weekly) | Tighter complaintSurgeThreshold (1.2 vs 1.3) + reviewVolumeCliffThreshold (0.85 vs 0.8). 100 reviews/platform. Warns when run without monitorStateKey. |
competitor-benchmark | Benchmarking against peers | Larger samples (100/300), looser divergenceThreshold (1.5) — structural cross-platform differences are not flagged as alerts |
due-diligence | Vendor vetting, acquisition screening | Maximum samples (200/600), tightest thresholds (divergenceThreshold 1.0, complaintSurgeThreshold 1.2) |
sales-prospecting | Fast prospect qualification | Smaller samples (25/75) — speed over depth |
voice-of-customer | Theme/quote mining | Maximum review depth (200/500) for richer bigram themes |
The applied profile + the reason it was chosen surface in context.analysisProfile and context.analysisProfileReason.
Multi-business mode (compare 2-N businesses in one run)
Pass an array of businesses and the actor analyses each, emits per-business recordType: 'business' records, then a recordType: 'comparison' record with cross-business intelligence.
{"businesses": [{ "name": "Shopify", "domain": "shopify.com" },{ "name": "BigCommerce", "domain": "bigcommerce.com" },{ "name": "Squarespace", "domain": "squarespace.com" }],"analysisProfile": "competitor-benchmark"}
The comparison record carries:
- `leader` — top performer by consensus score
- `biggestRisk` — business with an `act_now` decision OR worst consensus score
- `largestDivergence` — business with the biggest cross-platform spread
- `rankings[]` — every business sorted by consensus score, with their `oneLine`
- `relativePositioning` — per-metric ranks: `sentimentRank[]` / `riskRank[]` / `velocityRank[]` / `consensusRank[]`. Plug straight into a benchmarking dashboard.
PPE charges per business found (a 3-business run with all platforms returning data charges 3× $0.20 = $0.60). Multi-business mode is opt-in — pass a single businessName and the actor stays in single-business mode.
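A sketch of a multi-business run in Python, pulling out the comparison record (same `apify-client` usage as the API example above; field names are the ones listed above):

```python
from apify_client import ApifyClient  # pip install apify-client

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run = client.actor("ryanclinton/multi-review-analyzer").call(run_input={
    "businesses": [
        {"name": "Shopify", "domain": "shopify.com"},
        {"name": "BigCommerce", "domain": "bigcommerce.com"},
    ],
    "analysisProfile": "competitor-benchmark",
})

items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
comparison = next((i for i in items if i.get("recordType") == "comparison"), None)

if comparison:
    print("Leader:", comparison.get("leader"))
    print("Biggest risk:", comparison.get("biggestRisk"))
    print("Largest divergence:", comparison.get("largestDivergence"))
```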
Output profiles (control payload size)
Pass outputProfile to control which fields are emitted. Useful for narrowing payloads in Slack/PagerDuty webhooks or saving bandwidth on monitoring runs.
| Profile | What you get | When to use |
|---|---|---|
standard (default) | Full report — every block. | Default analysis, dashboards, audit trails. |
full | Alias of standard. | Same as standard. |
compact | Drops reviews[], themes.detected[], velocity.perPeriodRate[]. Keeps decision + confidence + journey + root cause + divergence + priorities. | Multi-business runs; bandwidth-sensitive consumers. |
alert | Minimal: decision + decisionReason + decisionReadiness + verdictReasonCodes + oneLine + whyNow + topPriority + confidenceScore + trustLevel + businessName. | Slack / PagerDuty / webhook routing. ~10x smaller than standard. |
The applied profile is echoed in context.outputProfile so downstream consumers can detect it.
Use in Dify
This actor returns decisions, not raw scraped data — decision (act_now / monitor / ignore / no_data), decisionReadiness (actionable / monitor / insufficient-data), verdictReasonCodes[] (18-value stable enum), and priorities[].type (13-value stable enum) all branch cleanly in Dify if/else nodes. Plus pattern.type, trajectory.stage, riskScore.classification, businessImpact.areas[].risk — every routable surface uses a stable enum, no prose parsing required. Competitor scrapers pointed at the same sources return raw HTML and force you to build the decision layer yourself.
Actor ID: S85IfVOoTyN9XWyXs (or full slug: ryanclinton/multi-review-analyzer)
Sample input (single business with monitoring):
{"businessName": "Shopify","businessDomain": "shopify.com","monitorStateKey": "shopify-reputation-monitor","analysisProfile": "monitoring"}
Branching example: route on decision
```
[Apify: multi-review-analyzer]
  → if/else node: decision == "act_now"
      ├── true  → [Slack: alert #reputation channel with oneLine]
      └── false → if/else node: decision == "monitor"
                    ├── true  → [Sheets: append to weekly review log]
                    └── false → [End]
```
Branching on priorities[0].type for routed remediation:
priorities[0].type | Suggested Dify path |
|---|---|
velocity-spike-event | Notify PR + executive Slack — burst-event pattern, 48h response window matters |
sentiment-flip-negative | Notify marketing/PR + pull recent negative reviews |
complaint-surge | Notify support ops + open BBB profile in browser |
accreditation-loss | Notify legal + escalate to executive Slack |
bbb-grade-downgrade | Notify operations + log in compliance tracker |
journey-stage-failure | Route by journeyStage — support → support-ops, billing → finance, onboarding → product |
cross-platform-divergence-severe | Notify CX team + flag for manual investigation |
cross-platform-consensus-negative | Escalate to exec — broad consensus rules out platform bias |
review-volume-cliff | Notify analytics + verify scraping pipeline |
platform-coverage-lost | Re-run with retries; investigate scraper status |
declining-trend | Append to monthly trend report |
entity-resolution-warning | Block automation — verify the matched URLs before any action |
no-coverage | Verify business name spelling; add businessDomain; consider Serper key |
Verbatim-usable fields (no LLM rewriting needed):
- `oneLine` — Slack/email subject lines
- `priorities[].recommendedAction` — full sentences ready for tickets
- `priorities[].evidence[]` — bullet-list ready for Slack messages
- `whyNow` — incident summary ready for PagerDuty/dashboards
- `trustSummary.reason` — paste-ready exec email content
- `confidence.explanation` — confidence justification for non-technical readers
Monitoring mode + Dify scheduler is the canonical pairing — schedule the actor every 24h via Apify, route the decision through Dify, and only act_now + actionable branches hit downstream alerting.
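To make that gate concrete, here is a minimal TypeScript sketch of the filter a webhook or workflow consumer might apply before paging anyone. The field names (`decision`, `decisionReadiness`, `oneLine`, `whyNow`) come from the output described above; the alert endpoint is a placeholder.

```typescript
// Hypothetical shape of the fields this filter reads from an 'alert'-profile record.
interface AlertRecord {
  businessName: string;
  decision: "act_now" | "monitor" | "ignore" | "no_data";
  decisionReadiness: "actionable" | "monitor" | "insufficient-data";
  oneLine: string;
  whyNow?: string;
}

// Only production-safe decisions reach downstream alerting.
function shouldAlert(record: AlertRecord): boolean {
  return record.decision === "act_now" && record.decisionReadiness === "actionable";
}

async function handleRunResult(record: AlertRecord): Promise<void> {
  if (!shouldAlert(record)) {
    console.log(`[${record.businessName}] ${record.decision}/${record.decisionReadiness}: no alert sent`);
    return;
  }
  // Placeholder endpoint: swap in Slack, PagerDuty, or your own webhook receiver.
  await fetch("https://example.com/reputation-alerts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ subject: record.oneLine, detail: record.whyNow ?? "" }),
  });
}
```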
Use cases
Use cases are written as the questions a buyer would type into a search engine, an LLM, or a procurement brief.
- "How do I monitor my brand's online reputation automatically?" — Schedule weekly runs with
analysisProfile: 'monitoring'+monitorStateKey. Every run surfaces what changed, what's trending, and whether the situation is escalating, stabilising, or recovering. After 3+ runs the memory layer activates. - "Should I trust this company before signing a contract?" — Run with
analysisProfile: 'due-diligence'. ReadriskScore.classification(low/moderate/elevated/high/critical),signalIntegrity.classification,entityResolution.risk. Critical risk + low signal integrity = walk away. - "How do my reputation metrics compare to my competitors'?" — Pass
businesses[]with 2-N entries +analysisProfile: 'competitor-benchmark'. TherecordType: 'comparison'record ranks all businesses by sentiment, risk, velocity, and consensus score, plus surfaces shared patterns across competitors. - "Is the recent dip in our rating a one-off or a real trend?" — Read
velocity.classification(burst-event vs slow-burn vs declining),forecast.direction,historyInsights.pattern(cyclical vs linear-declining vs erratic), anddecisionStability.stability(oscillating vs rapid-deterioration). Together these answer the trend-vs-noise question deterministically. - "Where exactly is our customer experience breaking?" — Read
journey.weakestStage+journey.stages[].topThemes. The actor maps reviews to 7 lifecycle stages and surfaces the highest-friction stage with example phrases. - "Are we facing a PR crisis right now?" — Read
pattern.type(viral_negative_event/support_collapse/review_bombing) andtrajectory.stage.peak-crisis+escalatingdirection = activate crisis communications playbook within 48 hours. - "What should we do next about this reputation issue?" — Read
interventions[]. Each intervention is a typed playbook item with priority, expected impact, time-to-effect, suggested owner, and acceptance criteria — drops directly into Jira/Linear/GitHub. - "Did our last reputation intervention actually work?" — Read
interventionOutcomes.outcomes[]. After scheduling weekly, the actor infers which prior priorities have resolved and links them to the intervention type that would have addressed them. Inference, not measurement — caveats always present. - "How do I screen 50 acquisition targets for reputational risk in a single run?" — Pass
businesses[]array +analysisProfile: 'due-diligence'+outputProfile: 'compact'. PPE charges per business found ($0.20 each) — full screen for 50 = $10. Extract the comparison record's rankings + per-business riskScore. - "Can our marketing agency generate cross-platform reputation reports for clients?" — Yes. Use
analysisProfile: 'monitoring'+ the agency's named KV store (one per client). The output'soneLine+whyNow+priorities[].recommendedAction+businessImpactblocks are paste-ready for client emails and decks. - "How do I research customer satisfaction patterns across an industry?" — Pass
businesses[]for the cohort +analysisProfile: 'voice-of-customer'. Thethemes.detected[]per business +crossBusinessInsights.sharedPatterns[]aggregate surface industry-wide language patterns.
API & Integration
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_API_TOKEN")

run = client.actor("S85IfVOoTyN9XWyXs").call(run_input={
    "businessName": "Shopify",
    "businessDomain": "shopify.com",
    "serperApiKey": "your-serper-api-key",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"Sentiment: {item['summary']['overallSentiment']}")
    print(f"Rating: {item['summary']['averageRating']}, Trend: {item['summary']['ratingTrend']}")
    print(f"Positive themes: {', '.join(item['themes']['positive'][:5])}")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_APIFY_API_TOKEN" });

const run = await client.actor("S85IfVOoTyN9XWyXs").call({
  businessName: "Shopify",
  businessDomain: "shopify.com",
  serperApiKey: "your-serper-api-key",
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Sentiment: ${items[0].summary.overallSentiment}`);
console.log(`Rating: ${items[0].summary.averageRating}`);
```
cURL
```bash
# Start the actor run
curl -X POST "https://api.apify.com/v2/acts/S85IfVOoTyN9XWyXs/runs?token=YOUR_APIFY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"businessName":"Shopify","businessDomain":"shopify.com"}'

# Retrieve results (use defaultDatasetId from the run response)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_APIFY_API_TOKEN&format=json"
```
Integrations
Connect with the Apify ecosystem and popular automation platforms:
- Zapier -- Trigger workflows when sentiment drops below a threshold.
- Make (Integromat) -- Route negative reviews to your support team or update a CRM.
- Google Sheets -- Export results to spreadsheets for reputation tracking dashboards.
- Slack -- Get team notifications when sentiment shifts or rating trend changes.
- Webhooks -- POST results to any endpoint when a run completes.
- Apify API -- Full programmatic control for starting runs and retrieving datasets.
How it works
The actor follows a deterministic pipeline: data extraction → primary analytics → decision layer → memory/learning layer → output assembly. Every step is rule-based — no LLM, no external paid APIs.
Stage 1: Data extraction
- Profile resolution — the `analysisProfile` input is resolved (auto mode picks based on input shape) and applied as input overrides.
- Multi-business expansion — if `businesses[]` is set with ≥2 entries, the analysis runs per business in a loop.
- Trustpilot extraction — direct domain lookup (`trustpilot.com/review/{domain}`) → search fallback. Three-tier extraction: `__NEXT_DATA__` (Next.js internal state) → JSON-LD structured data → HTML regex (a sketch of this fallback chain follows this list).
- Google via Serper — four targeted queries (name reviews + `site:trustpilot/bbb/yelp`). Captures knowledge graph ratings + organic snippets.
- BBB profile extraction — search → first matching profile link → JSON-LD-primary + regex-fallback parse for letter grade, accreditation, complaint count.
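The three-tier Trustpilot fallback can be pictured roughly as follows. This is a hedged TypeScript sketch, not the actor's actual source: the JSON paths and regexes are simplified assumptions about how such pages are typically structured.

```typescript
interface RatingSnapshot {
  rating: number | null;
  reviewCount: number | null;
  source: "next-data" | "json-ld" | "html-regex" | "none";
}

// Tier 1: Next.js internal state, Tier 2: JSON-LD, Tier 3: plain HTML regex.
function extractTrustpilotRating(html: string): RatingSnapshot {
  // Tier 1: __NEXT_DATA__ blob embedded by Next.js pages.
  const nextData = html.match(/<script id="__NEXT_DATA__"[^>]*>(.*?)<\/script>/s);
  if (nextData) {
    try {
      const state = JSON.parse(nextData[1]);
      // Hypothetical path; the real page structure may differ.
      const unit = state?.props?.pageProps?.businessUnit;
      if (typeof unit?.trustScore === "number") {
        return { rating: unit.trustScore, reviewCount: unit.numberOfReviews ?? null, source: "next-data" };
      }
    } catch { /* fall through to the next tier */ }
  }

  // Tier 2: JSON-LD structured data (AggregateRating).
  const jsonLd = html.match(/<script type="application\/ld\+json">(.*?)<\/script>/s);
  if (jsonLd) {
    try {
      const data = JSON.parse(jsonLd[1]);
      const agg = Array.isArray(data) ? data.find((d) => d.aggregateRating)?.aggregateRating : data.aggregateRating;
      if (agg?.ratingValue) {
        return { rating: Number(agg.ratingValue), reviewCount: Number(agg.reviewCount) || null, source: "json-ld" };
      }
    } catch { /* fall through */ }
  }

  // Tier 3: last-resort HTML regex.
  const rough = html.match(/TrustScore\s*([0-9.]+)/i);
  if (rough) return { rating: Number(rough[1]), reviewCount: null, source: "html-regex" };

  return { rating: null, reviewCount: null, source: "none" };
}
```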
Stage 2: Primary analytics
- Sentiment v2 (per review) — hybrid rule-based scorer: 70+ word lexicon with intensity weights, negation handling ("not good" flips polarity for the next 3 tokens), intensifier multipliers ("very", "extremely"), contrast detection ("but", "however" reduces confidence), 3 sarcasm regex triggers. Emits `sentiment` + `sentimentScore` + `sentimentConfidence` + `sentimentSignals[]`.
- Bigram theme detection — stopword-filtered word-pair extraction across reviews; each theme tagged with polarity, count, platforms[], example snippet. Plus legacy positive/negative keyword lists for backward compatibility (a sketch of the bigram idea follows this list).
- Monthly trend — reviews grouped by `YYYY-MM`. Last-3-month avg-diff classification: improving / stable / declining / insufficient_data.
- Cross-platform divergence — each platform mapped to a 0-5 scale (BBB letter→numeric); weighted consensus + spread + severity tier + 9-token flag enum.
- Velocity engine — adaptive bucketing (5d/14d/30d/60d). Z-score test against prior buckets; ≥2σ → spike. Classification: burst-event / slow-burn / recovery / steady / declining / sparse.
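Here is a minimal sketch of the bigram theme idea from Stage 2, assuming a small stopword list and whitespace tokenisation; the actor's own implementation will differ in details such as polarity tagging and platform attribution.

```typescript
const STOPWORDS = new Set([
  "the", "a", "an", "and", "or", "but", "is", "was", "were",
  "to", "of", "for", "in", "on", "it", "this", "that", "my", "i", "we", "they",
]);

// Count stopword-filtered word pairs across all review texts.
function detectBigramThemes(reviews: string[], minCount = 2): { theme: string; count: number }[] {
  const counts = new Map<string, number>();
  for (const text of reviews) {
    const tokens = text
      .toLowerCase()
      .replace(/[^a-z\s]/g, " ")
      .split(/\s+/)
      .filter((t) => t && !STOPWORDS.has(t));
    for (let i = 0; i < tokens.length - 1; i++) {
      const bigram = `${tokens[i]} ${tokens[i + 1]}`;
      counts.set(bigram, (counts.get(bigram) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .filter(([, count]) => count >= minCount)
    .sort((a, b) => b[1] - a[1])
    .map(([theme, count]) => ({ theme, count }));
}

// detectBigramThemes(["support never replied", "support never answered my ticket"])
// → [{ theme: "support never", count: 2 }]
```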
Stage 3: Decision layer
- Entity resolution — Levenshtein-similarity match between the input and platform URL slugs. Confidence + risk band. High risk forces `decisionReadiness` to `insufficient-data`.
- Customer journey mapping — reviews matched against 7 stage-keyword sets. Per-stage sentiment + weight + topThemes. weakestStage + strongestStage identification.
- Root cause inference — rule-based co-occurrence engine names 9 likely causes. Arbitration step suppresses symptom causes when crisis-event-spike fires. `rootCauseSummary` exposes primary + secondary[].
- Decision + priorities — 13-token priority enum. Each priority: severity / shortReason / recommendedAction / timeToAct / timeToImpact / confidence / impactScore / evidence[]. Velocity spike + journey-stage failure + entity-resolution warning all generate priorities here.
- Confidence + signal integrity — confidence (5 components, harmonic mean) gated by entity risk; signal integrity scored separately for "data cleanliness" lens.
Stage 4: Risk + forecast + intervention layer
- Risk score — 3-component composite (operational + reputation + trend → overall 0-100).
- Forecast — linear regression on consensus-score history (≥4 snapshots); velocity-extrapolation fallback otherwise. 30-day projection with caveats (a sketch of this step follows this list).
- Anomaly detection — z-score outliers on consensus rating, review volume, sentiment trajectory, platform coverage, velocity.
- Interventions — 11-token typed playbook items derived from priorities + root causes + entity risk + first-run nudge. Drops into Jira/Linear/GitHub backlogs.
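To make the forecast step concrete, here is a hedged TypeScript sketch of a least-squares fit over (days, consensus score) pairs with a 30-day extrapolation. The ≥4-snapshot guard mirrors the description above; the snapshot shape and the direction thresholds are illustrative.

```typescript
interface Snapshot {
  timestamp: string;       // ISO date of the run
  consensusScore: number;  // 0-5 scale, one per prior run
}

// Ordinary least squares over (days since first snapshot, score), projected 30 days ahead.
function forecastConsensus(
  history: Snapshot[],
): { projected: number; direction: "improving" | "declining" | "stable" } | null {
  if (history.length < 4) return null; // not enough points; fall back to velocity extrapolation

  const t0 = new Date(history[0].timestamp).getTime();
  const points = history.map((s) => ({
    x: (new Date(s.timestamp).getTime() - t0) / 86_400_000, // days
    y: s.consensusScore,
  }));

  const n = points.length;
  const meanX = points.reduce((acc, p) => acc + p.x, 0) / n;
  const meanY = points.reduce((acc, p) => acc + p.y, 0) / n;
  const slope =
    points.reduce((acc, p) => acc + (p.x - meanX) * (p.y - meanY), 0) /
    points.reduce((acc, p) => acc + (p.x - meanX) ** 2, 0);
  const intercept = meanY - slope * meanX;

  const lastX = points[n - 1].x;
  const projected = Math.min(5, Math.max(0, intercept + slope * (lastX + 30)));
  // Illustrative thresholds for calling the direction.
  const direction = slope > 0.002 ? "improving" : slope < -0.002 ? "declining" : "stable";
  return { projected, direction };
}
```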
Stage 5: Memory + learning layer (only when monitorStateKey set)
- Load history — prior snapshots loaded from named KV (FIFO 10).
- Change detection — 18-token changeFlags enum + deltas object + history insights (cyclical/linear-improving/linear-declining/erratic/stable).
- Memory graph — repeated 3-step signal sequences across history.
- Intervention outcomes — priorities that disappeared between runs, linked to likely intervention type. Inference, not measurement.
- Pattern evolution + trajectory + decision stability + business impact + confidence evolution — all derived from history.
- Save snapshot — current run's snapshot persisted (with decision + pattern + confidence) for next run's analysis.
Stage 6: Output assembly
- Apply outputProfile — `compact` drops heavy arrays; `alert` returns a ~10× smaller payload.
- Push the report + PPE charge (only on covered-platform runs) + emit the comparison record (multi-business mode only).
```
+------------------------------+
|       Input + profile        |
+------------------------------+
               |
     (per business in businesses[])
               |
               v
+--------------+---------------+----------------+             Stage 1: Extraction
| Trustpilot   | Google        | BBB            |             (Trustpilot / Google / BBB)
+--------------+---------------+----------------+
               |
               v
+--------------+---------------+----------------+----------+  Stage 2: Sentiment v2 / themes /
| Sentiment v2 | Themes        | Monthly trend  | Velocity |  monthly trend / velocity
+--------------+---------------+----------------+----------+
               |
               v
+--------------+---------------+----------------+----------+  Stage 3: Entity res / journey /
| Entity res   | Journey       | Root cause     | Decision |  root cause / decision
+--------------+---------------+----------------+----------+
               |
               v
+--------------+---------------+----------------+----------+  Stage 4: Risk / forecast /
| Risk score   | Forecast      | Anomalies      | Interv.  |  anomalies / interventions
+--------------+---------------+----------------+----------+
               |
               v
+--------------+---------------+----------------+----------+  Stage 5: Memory layer
| Memory graph | Pattern evol. | Trajectory     | Outcomes |  (only when monitorStateKey set)
+--------------+---------------+----------------+----------+
               |
               v
+------------------------------+
| pushData + comparison record |
+------------------------------+
```
Performance & cost
The actor uses minimal compute resources (256 MB memory) and completes quickly because it relies on HTTP requests rather than browser automation. Run times depend primarily on how many platforms are enabled and network latency.
| Scenario | Platforms | Approximate Time | Estimated Cost |
|---|---|---|---|
| Trustpilot only | 1 | 10--20 seconds | ~$0.005 |
| Trustpilot + BBB | 2 | 20--35 seconds | ~$0.008 |
| All platforms (Trustpilot + Google + BBB) | 3 | 30--60 seconds | ~$0.01 |
| All platforms, max reviews (200/platform) | 3 | 45--90 seconds | ~$0.015 |
| Batch of 10 businesses (scheduled) | 3 each | 5--10 minutes total | ~$0.10 |
The Apify Free plan includes $5 of monthly platform credits, which is enough for approximately 300--500 full three-platform runs. If you use the Google search feature, you will also need a Serper.dev API key -- the free tier provides 2,500 queries per month (each actor run uses up to 4 queries).
Limitations
- Trustpilot anti-scraping measures -- Trustpilot may occasionally block requests or return incomplete data. The actor uses a browser-like User-Agent header and JSON-LD extraction to mitigate this, but some pages may return fewer reviews than expected.
- BBB coverage is US-focused -- The Better Business Bureau primarily covers US and Canadian businesses. International companies may return empty BBB results.
- Google search requires a Serper API key -- The Google platform is skipped entirely if no `serperApiKey` is provided. The free Serper tier is sufficient for moderate usage.
- Hybrid rule-based sentiment (no LLM) -- The sentiment engine v2 combines lexicon scoring with negation handling, intensifier weighting, contrast detection, and sarcasm-pattern guards. It does NOT use an LLM, so it remains deterministic, free, and fast — but it will not match LLM nuance on long-form complaint narratives. Per-review confidence is exposed in `reviews[].sentimentConfidence` so downstream consumers can filter out low-confidence classifications.
- Review date availability varies -- Not all review sources include publication dates. Google search snippets and some Trustpilot HTML-fallback reviews may lack dates, which limits the monthly trend analysis.
- Rate limiting and timeouts -- Each HTTP request has a 30-second timeout. If a platform is slow to respond or rate-limits the request, that platform's data may be incomplete for that run.
- No Yelp or G2 direct scraping -- While Google search queries include `site:yelp.com` lookups, the actor does not directly scrape Yelp or G2 review pages. Only snippets surfaced through Google are captured.
Responsible use
- Public data only -- This actor accesses publicly visible review pages on Trustpilot, Google search results, and BBB. It does not bypass login walls, paywalls, or access any private data.
- Respect rate limits -- Avoid running the actor at extremely high frequency against the same platforms. Space out runs and use reasonable `maxReviewsPerPlatform` values to minimize server load.
- Personal data considerations -- Review data may include reviewer names, which are publicly displayed on the source platforms. If you process this data in jurisdictions covered by GDPR or similar regulations, consult legal counsel regarding your obligations. See Apify's guide on web scraping legality.
- Do not use for harassment or manipulation -- The data collected by this actor should be used for legitimate business intelligence, brand monitoring, and research purposes. Do not use review data to target, harass, or manipulate individual reviewers.
- Terms of service -- Review the terms of service for each platform (Trustpilot, Serper.dev, BBB) before use. Ensure your usage complies with their policies and your local regulations.
FAQ
How does the sentiment analysis work?
The sentiment engine is a v2 hybrid rule-based scorer (src/sentiment.ts). It uses a 70+ word lexicon with intensity weights (1-3), then applies four modifiers: (1) negation detection — words like "not", "never", "without" flip the polarity of the next 3 tokens; (2) intensifier multipliers — "very", "extremely", "absolutely" amplify the next 2 tokens; (3) contrast detection — "but", "however", "although" reduce confidence because the polarity often pivots; (4) sarcasm guards — patterns like "great… broken" or "supposedly excellent" downgrade confidence by 25%. Each review carries sentiment ('positive'/'neutral'/'negative'), sentimentScore (raw -1..+1), sentimentConfidence (0-1), and sentimentSignals[] (10-token enum: negation-detected, intensifier-applied, contrast-detected, sarcasm-suspected, mixed-polarity, low-magnitude, all-caps-emphasis, punctuation-emphasis, no-sentiment-words, short-text). Fully deterministic — no LLM, no external API.
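As an illustration of how a lexicon-plus-modifiers scorer of this kind works, here is a drastically reduced TypeScript sketch: a toy lexicon, only the negation and intensifier rules, and none of the contrast or sarcasm handling. It is not the actor's source, just the general technique.

```typescript
// Toy lexicon: word → signed intensity (the real engine uses 70+ weighted entries).
const LEXICON: Record<string, number> = { great: 2, good: 1, excellent: 3, bad: -1, terrible: -3, broken: -2, scam: -3 };
const NEGATIONS = new Set(["not", "never", "without", "no"]);
const INTENSIFIERS = new Set(["very", "extremely", "absolutely"]);

function scoreReview(text: string): { sentiment: "positive" | "neutral" | "negative"; score: number } {
  const tokens = text.toLowerCase().replace(/[^a-z\s]/g, " ").split(/\s+/).filter(Boolean);
  let total = 0;
  let negationWindow = 0;    // flips polarity for the next 3 tokens
  let intensifierWindow = 0; // amplifies the next 2 tokens

  for (const token of tokens) {
    if (NEGATIONS.has(token)) { negationWindow = 3; continue; }
    if (INTENSIFIERS.has(token)) { intensifierWindow = 2; continue; }

    let value = LEXICON[token] ?? 0;
    if (value !== 0) {
      if (negationWindow > 0) value = -value;  // "not good" → negative
      if (intensifierWindow > 0) value *= 1.5; // "very bad" → more negative
      total += value;
    }
    if (negationWindow > 0) negationWindow--;
    if (intensifierWindow > 0) intensifierWindow--;
  }

  const score = Math.max(-1, Math.min(1, total / 5)); // squash to -1..+1
  const sentiment = score > 0.1 ? "positive" : score < -0.1 ? "negative" : "neutral";
  return { sentiment, score };
}

// scoreReview("not good, support is terrible") → { sentiment: "negative", score: -0.8 }
```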
What does the velocity engine detect?
It computes reviews-per-day across the most-recent 30-day window vs the full-history baseline, then runs a z-score test on per-bucket rates (5-day / 14-day / 30-day / 60-day buckets, adaptive to history length). When the recent rate is ≥2 standard deviations above prior buckets it fires spikeDetected: true and classifies the pattern as burst-event (>2× baseline), slow-burn (1.5-2× baseline gradual), recovery (cooling after a prior spike), steady, declining, or sparse. A burst event with negative overall sentiment auto-escalates the decision to act_now with BURST_EVENT_DETECTED + REVIEW_VELOCITY_SPIKE codes — the canonical PR-crisis pattern.
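In code, the spike test reduces to something like the following TypeScript sketch: fixed-size buckets and the ≥2σ rule described above, with the adaptive bucketing and classification tiers omitted.

```typescript
// Count reviews per fixed-size bucket (e.g. 14 days), oldest → newest.
function bucketCounts(reviewDates: Date[], bucketDays: number): number[] {
  if (reviewDates.length === 0) return [];
  const sorted = [...reviewDates].sort((a, b) => a.getTime() - b.getTime());
  const start = sorted[0].getTime();
  const bucketMs = bucketDays * 86_400_000;
  const counts: number[] = [];
  for (const d of sorted) {
    const idx = Math.floor((d.getTime() - start) / bucketMs);
    counts[idx] = (counts[idx] ?? 0) + 1;
  }
  return Array.from(counts, (c) => c ?? 0); // fill empty buckets with 0
}

// Spike = most recent bucket ≥ 2 standard deviations above the mean of the prior buckets.
function detectSpike(counts: number[]): boolean {
  if (counts.length < 3) return false; // not enough history for a baseline
  const prior = counts.slice(0, -1);
  const latest = counts[counts.length - 1];
  const mean = prior.reduce((a, b) => a + b, 0) / prior.length;
  const variance = prior.reduce((a, b) => a + (b - mean) ** 2, 0) / prior.length;
  const std = Math.sqrt(variance);
  if (std === 0) return latest > mean * 2; // degenerate baseline: fall back to a ratio test
  return (latest - mean) / std >= 2;
}
```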
How does customer journey mapping work?
Each review is matched against 7 stage-specific keyword sets (discovery / onboarding / activation / support / billing / product / churn). For each stage the actor computes mention count, weight (fraction of total reviews), per-stage sentiment, mean sentiment score, and the top 5 themes from the bigram theme detector. The weakestStage field is the lowest-scoring stage with ≥10% review weight; the strongestStage is the highest-weighted stage with positive sentiment. When the weakest stage has weight ≥0.2 + negative sentiment + ≥3 mentions, the actor emits a journey-stage-failure priority that names the specific stage in its title (e.g. "Support stage sentiment collapse — 14 reviews (23% of total) flag a negative experience").
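A stripped-down sketch of the keyword-set matching, showing two of the seven stages with tiny illustrative keyword lists; the per-stage sentiment and theme aggregation are omitted.

```typescript
// Illustrative keyword sets; the real actor uses seven stages with richer lists.
const STAGE_KEYWORDS: Record<string, string[]> = {
  support: ["support", "ticket", "response", "agent", "helpdesk"],
  billing: ["refund", "charge", "invoice", "billing", "subscription"],
};

interface StageStats {
  stage: string;
  mentions: number;
  weight: number; // fraction of total reviews that mention this stage
}

function mapJourneyStages(reviews: string[]): StageStats[] {
  const mentions = new Map<string, number>(Object.keys(STAGE_KEYWORDS).map((s) => [s, 0]));
  for (const text of reviews) {
    const lower = text.toLowerCase();
    for (const [stage, keywords] of Object.entries(STAGE_KEYWORDS)) {
      if (keywords.some((k) => lower.includes(k))) {
        mentions.set(stage, (mentions.get(stage) ?? 0) + 1);
      }
    }
  }
  return [...mentions.entries()].map(([stage, count]) => ({
    stage,
    mentions: count,
    weight: reviews.length ? count / reviews.length : 0,
  }));
}
```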
How does root cause inference work?
A rule-based co-occurrence engine in src/rootcause.ts. Each named root cause type has a co-occurrence pattern of theme keywords, journey-stage signals, and cross-run change flags. When 2+ signals fire for a pattern, it emits a root cause with evidence[]. After all candidates are computed, an arbitration step suppresses symptom causes when a parent cause is firing — for example, when crisis-event-spike fires, product-quality-decline and customer-service-breakdown are demoted to secondary because they're symptoms of the crisis, not independent causes. The output exposes both the full rootCauses[] array and a rootCauseSummary with primary + secondary[] + arbitration-aware confidence.
What's the difference between confidence and signalIntegrity?
confidence measures how much you should trust the verdict the actor produced. signalIntegrity measures how clean the underlying data was. They're independent — a run can have low signal integrity (thin data, missing platforms) but high confidence (the data we have agrees), or vice versa (rich data but cross-platform divergence is severe). Use confidence.score to gate dashboards and prioritisation; use signalIntegrity.classification to filter out runs with poor data hygiene before automation.
How does entity resolution prevent wrong-business matches?
computeEntityResolution in src/entity.ts runs a Levenshtein-similarity scan between the input businessName + businessDomain and the slugs of every matched platform URL (Trustpilot review URL, BBB profile URL). It emits a confidence (0-1), a risk band (low/medium/high/unknown), and a 9-token signal enum (domain-match-exact, name-similarity-high, name-conflict-detected, etc). When entityResolution.risk === 'high' (or confidence < 0.5), decisionReadiness is forced to insufficient-data regardless of the verdict — bad entity matches never produce auto-actionable decisions. An entity-resolution-warning priority surfaces the specific divergent profile names.
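The matching idea reduces to a normalised Levenshtein ratio between the input name and a matched URL slug. The sketch below uses a hand-rolled distance function; the actor's real signal enum and thresholds are richer than shown.

```typescript
// Classic dynamic-programming Levenshtein distance.
function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,
        dp[i][j - 1] + 1,
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),
      );
    }
  }
  return dp[a.length][b.length];
}

// Normalised similarity between the input business name and a matched URL slug.
function slugSimilarity(businessName: string, url: string): number {
  const slug = (url.split("/").filter(Boolean).pop() ?? "").toLowerCase().replace(/[^a-z0-9]/g, "");
  const name = businessName.toLowerCase().replace(/[^a-z0-9]/g, "");
  if (!slug || !name) return 0;
  const distance = levenshtein(name, slug);
  return 1 - distance / Math.max(name.length, slug.length); // 1 = identical, 0 = unrelated
}

// slugSimilarity("Shopify", "https://www.trustpilot.com/review/shopify.com")        → high
// slugSimilarity("Shopify", "https://www.bbb.org/profile/shoprite-of-hopewell")      → low, would raise entity risk
```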
Do I need any API keys to use this actor? No API keys are required for Trustpilot and BBB analysis. To include Google search results, you need a Serper.dev API key (the free tier provides 2,500 queries per month -- enough for over 600 actor runs). The actor works well with just Trustpilot and BBB if you prefer not to set up an API key.
What does the rating trend tell me? The rating trend analyzes the last three months of dated reviews and calculates the average month-over-month change. If the average increase exceeds 0.1 stars per month, the trend is "improving". If it drops by more than 0.1, it is "declining". Anything in between is "stable". Fewer than three months of data returns "insufficient_data".
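As a sketch, the rule amounts to averaging the month-over-month deltas of the last three dated months and comparing the result against the ±0.1 threshold (illustrative TypeScript, assuming monthly averages are already computed):

```typescript
type Trend = "improving" | "stable" | "declining" | "insufficient_data";

// monthlyAverages: average star rating per month, oldest → newest (e.g. [4.4, 4.2, 3.9]).
function classifyRatingTrend(monthlyAverages: number[]): Trend {
  if (monthlyAverages.length < 3) return "insufficient_data";
  const recent = monthlyAverages.slice(-3);
  const deltas = [recent[1] - recent[0], recent[2] - recent[1]];
  const avgChange = (deltas[0] + deltas[1]) / 2;
  if (avgChange > 0.1) return "improving";
  if (avgChange < -0.1) return "declining";
  return "stable";
}

// classifyRatingTrend([4.4, 4.2, 3.9]) → "declining" (average change of -0.25 stars/month)
```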
Can I analyze competitors with this actor?
Yes. Run the actor separately for each competitor and compare the output reports. Key fields to compare include summary.averageRating, summary.overallSentiment, summary.ratingTrend, and the themes arrays. Schedule recurring runs via Apify's built-in scheduler to track changes over time.
Why are some platform results empty or showing "found: false"? Not every business is listed on every platform. BBB coverage is primarily US and Canada, so international companies often return empty results. Google results require a Serper API key. If a platform is temporarily unavailable or rate-limits the request, it may also return empty results for that run.
How accurate is the Trustpilot data?
The actor uses JSON-LD structured data extraction as its primary method, which is highly reliable for verified Trustpilot pages. If JSON-LD is unavailable, it falls back to HTML regex parsing, which may capture fewer reviews. Providing the exact businessDomain improves accuracy by enabling a direct page lookup.
How often should I run this actor for monitoring?
For active brand monitoring, weekly or bi-weekly runs balance freshness and cost well. The monthlyTrend output becomes most valuable after three or more months of accumulated data. Use Apify's scheduler for recurring runs and pair them with Slack or webhook notifications.
What is the difference between this actor and the Trustpilot Review Analyzer?
The Trustpilot Review Analyzer is a single-platform deep-dive tool focused exclusively on Trustpilot — it offers TrustScore decomposition, statistical anomaly detection vs rolling baseline, urgent-negatives surface, customer journey 7-stage breakdown, scoringWeights input, executive summary, and risk forecast linear projection. This actor is a cross-platform decision engine — it pulls Trustpilot + Google + BBB and produces a single verdict (act_now / monitor / ignore / no_data) by comparing platforms against each other and against the business's own prior runs. Cross-platform divergence (high Trustpilot but low BBB), BBB complaint surges, accreditation revocations, and review-volume cliffs all surface as ranked priorities — none of which are visible from any single platform alone. Use the Trustpilot-specific actor for maximum Trustpilot depth; use this actor for the cross-platform decision layer. The output shapes are intentionally aligned (recordType + decision + verdictReasonCodes + priorities[]) so you can run both and merge without reshape logic.
How does the actor decide between act_now, monitor, and ignore?
The actor follows a deterministic priority-stack rule. Any high-severity priority (negative sentiment flip, BBB complaint surge, accreditation lost, BBB grade downgrade, severe cross-platform divergence, cross-platform consensus negative) triggers act_now. Any medium priority OR negative overall sentiment OR moderate divergence OR declining monthly trend triggers monitor. All other cases — including stable reputation across platforms — fall through to ignore. The decision rule is fully transparent in the output's priorities[] array and verdictReasonCodes[] enum — every code corresponds to a documented signal that fired.
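The fallthrough can be pictured as a short TypeScript sketch; the severity flags stand in for the real priority objects, and only the ordering of the checks matters here.

```typescript
type Decision = "act_now" | "monitor" | "ignore" | "no_data";

interface DecisionInputs {
  platformsCovered: number;
  priorities: { severity: "high" | "medium" | "low" }[];
  overallSentiment: "positive" | "neutral" | "negative";
  moderateDivergence: boolean;
  decliningTrend: boolean;
}

function decide(input: DecisionInputs): Decision {
  if (input.platformsCovered === 0) return "no_data";
  // Any high-severity priority (sentiment flip, complaint surge, accreditation loss, ...) wins immediately.
  if (input.priorities.some((p) => p.severity === "high")) return "act_now";
  // Medium priorities, negative sentiment, moderate divergence, or a declining trend warrant watching.
  if (
    input.priorities.some((p) => p.severity === "medium") ||
    input.overallSentiment === "negative" ||
    input.moderateDivergence ||
    input.decliningTrend
  ) {
    return "monitor";
  }
  // Stable reputation across platforms falls through to ignore.
  return "ignore";
}
```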
What is decisionReadiness and when should I use it for automation?
decisionReadiness is the automation gate. It returns actionable only when the decision is act_now AND the confidence score is ≥60 AND there is at least one concrete recommendation. First runs (no baseline) cap at monitor regardless of how clean the numbers look — a deliberate guardrail to prevent automated action on a single data point. Filter WHERE decisionReadiness = 'actionable' in your downstream Zapier/Make/Slack workflow to only trigger on production-safe decisions. Both monitor and insufficient-data mean "look at this manually."
How does the cross-platform divergence engine work?
Trustpilot ratings, Google snippet sentiment, and BBB letter grades are all normalised to a 0-5 scale (BBB grades use a documented A+=5.0 → F=1.0 mapping with +0.2 boost for accreditation). The actor then computes the spread (max minus min) and a weighted consensus score (each platform's contribution is log-scaled by review volume). Specific patterns surface as divergenceFlags[]: trustpilot-high-bbb-low flags businesses with strong online reviews but unresolved formal complaints; google-trustpilot-mismatch flags possible moderation gaps; cross-platform-consensus-negative flags broad reputation issues visible to every cohort. Each flag produces a plain-English insight string that's usable verbatim in reports.
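A reduced sketch of the normalisation and weighting: a partial BBB grade map and log-scaled volume weights. Only the A+ = 5.0 and F = 1.0 endpoints and the +0.2 accreditation boost come from the description above; the intermediate grade values and the weighting constant are illustrative assumptions.

```typescript
// Partial BBB letter-grade map on the 0-5 scale (only A+ and F are documented; the rest are illustrative).
const BBB_GRADE_SCALE: Record<string, number> = { "A+": 5.0, "A": 4.7, "B": 4.0, "C": 3.0, "D": 2.0, "F": 1.0 };

interface PlatformScore {
  platform: string;
  rating: number;      // already normalised to 0-5
  reviewCount: number;
}

function normaliseBBB(grade: string, accredited: boolean): number {
  const base = BBB_GRADE_SCALE[grade] ?? 3.0;
  return Math.min(5, base + (accredited ? 0.2 : 0)); // accreditation adds a small boost
}

// Spread + consensus where each platform's weight grows with the log of its review volume.
function consensus(scores: PlatformScore[]): { spread: number; weighted: number } {
  const ratings = scores.map((s) => s.rating);
  const spread = Math.max(...ratings) - Math.min(...ratings);
  const weights = scores.map((s) => Math.log10(s.reviewCount + 10)); // +10 keeps tiny samples from zeroing out
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  const weighted = scores.reduce((acc, s, i) => acc + s.rating * weights[i], 0) / totalWeight;
  return { spread, weighted };
}

// Example: Trustpilot 4.6 with 12,000 reviews vs a BBB "C" grade with few reviews
// → large spread, low BBB weight, and a likely trustpilot-high-bbb-low divergence flag.
```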
What does the cross-run change detection actually detect?
When you pass monitorStateKey, the actor stores up to 10 prior snapshots per business in a named KV store and computes deltas on each subsequent run. The changeFlags[] enum (18 values) covers sentiment direction flips (positive↔negative), BBB complaint surges/drops (>30% movement by default), review volume cliffs (>20% drop), BBB grade transitions, accreditation losses/gains, consensus rating shifts (>0.3 stars), and cross-platform divergence direction changes. First runs always emit NEW_BUSINESS with all deltas null — the run is logged as the baseline.
How is confidence.score calculated and why is it sometimes capped at 70?
The score is a harmonic mean of five components: sample adequacy (review count vs target), platform coverage (covered/requested), metadata completeness (fraction of reviews with date+rating), sentiment clarity (how cleanly the sentiment skews), and cross-platform agreement (penalty for severe divergence, boost for consensus). Harmonic mean is used instead of arithmetic mean so that one weak component pulls the score down — strong signals can't mask weak ones. First runs are capped at 70/100 because no prior baseline exists yet; cross-run detection (sentiment flips, complaint surges, etc.) cannot fire without history.
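The harmonic-mean property (one weak component drags the whole score down) is easy to see in a short sketch; the component names follow the description above, while the first-run cap is simplified.

```typescript
// Five components, each already normalised to 0-100.
interface ConfidenceComponents {
  sampleAdequacy: number;
  platformCoverage: number;
  metadataCompleteness: number;
  sentimentClarity: number;
  crossPlatformAgreement: number;
}

function confidenceScore(c: ConfidenceComponents, isFirstRun: boolean): number {
  const values = Object.values(c).map((v) => Math.max(1, v)); // avoid division by zero
  const harmonic = values.length / values.reduce((acc, v) => acc + 1 / v, 0);
  const capped = isFirstRun ? Math.min(70, harmonic) : harmonic; // no baseline yet → cap at 70
  return Math.round(capped);
}

// Arithmetic mean of [90, 90, 90, 90, 20] is 76; the harmonic mean is ≈53,
// so a single weak component (e.g. poor platform coverage) pulls the score down sharply.
```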
Why are there both confidence and trustSummary blocks?
They answer different questions for different audiences. confidence is technical — for dashboards, automation gates, and signal sorting (it's the input to decisionReadiness). trustSummary is for non-technical readers — its level (high/medium/low) and reason string are paste-ready for executive emails, Slack updates, and stakeholder reports. One actor can ship both because they aggregate the same underlying signals through two different lenses.
Can I export results to Google Sheets or a database? Yes. Apify provides native Google Sheets integration, and you can download results in JSON, CSV, or Excel from the Dataset tab. For database integration, use the Apify API to retrieve dataset items programmatically, or configure a webhook to push results to your endpoint when each run completes.
Can I analyze a business without knowing its domain?
Yes. The businessDomain field is optional. Without it, the actor uses a Trustpilot search by name and still attempts BBB and Google lookups. Providing the domain improves Trustpilot accuracy and enables direct page lookup with more complete data.
When to use this — and when not to
Use this actor when:
- You need a decision, not raw data — "act now / monitor / ignore" with reasons.
- You want deterministic reputation analysis — same input, same output, no LLM hallucinations.
- You're running on a schedule (weekly or daily) and want the memory layer to compound value over time.
- You need to screen multiple businesses (vendor vetting, due diligence, competitor benchmarking).
- You need audit-ready output for compliance, board reports, or procurement decisions.
- You want a PR crisis early-warning system that fires on velocity spikes + sentiment flips before the issue is widely known.
- You're building automation pipelines (Slack, PagerDuty, Zapier, Make, n8n, Dify) and need stable enum codes to branch on.
Do not use this actor when:
- You need real-time review streaming — this is a batch tool, run on a schedule.
- You want deep LLM-grade sentiment analysis with sarcasm/context understanding — chain a downstream LLM step in Dify/n8n if you need that.
- You need review platforms beyond Trustpilot/Google/BBB (Yelp, G2, Capterra, TripAdvisor, Glassdoor) — those need platform-specific scrapers.
- You need bulk processing of 100+ businesses in one run — the actor is single-business-per-iteration; for bulk, schedule it via Apify with the same `monitorStateKey` per business.
- You want brand mention monitoring across web/social/news — that's a different job (use Brand Protection Monitor or a social listening tool).
- You're looking for firmographic data (employee count, funding, tech stack) — the actor returns reputation decisions, not company data.
What this actor does NOT do
Honest scope helps you pick the right tool. This actor is a cross-platform reputation decision engine — it is deliberately not the right pick for several adjacent jobs.
| If you need… | Use this instead |
|---|---|
| Deep single-platform Trustpilot analysis (TrustScore decomposition, anomaly detection vs rolling baseline, urgent-negatives surface, customer journey 7-stage breakdown, scoringWeights, executive summary, risk forecast) | Trustpilot Review Analyzer — single-platform Trustpilot deep-dive. Use this multi-actor for the cross-platform layer; use the Trustpilot-specific actor for Trustpilot-only depth. |
| Yelp / G2 / Capterra / TripAdvisor / Glassdoor reviews directly | This actor only scrapes Yelp via Google search snippets (not Yelp directly). For other platforms, use platform-specific Apify actors. |
| Real-time review streaming or push notifications | This actor runs on a schedule. Use Apify's scheduler + webhooks; the actor itself is batch. |
| LLM-based sentiment with deep context understanding | This actor uses sentiment engine v2 — hybrid rule-based with negation handling, intensifier weighting, contrast detection, and sarcasm guards (per-review confidence + 10-token signal tags). Deterministic, no LLM. For nuanced long-form classification, chain a downstream LLM step in Dify/n8n/Make. |
| Brand mention monitoring across news, social, blogs | Brand Protection Monitor — monitors mentions outside review platforms. |
| Company firmographics (employee count, funding, tech stack, industry) | Company Deep Research — different data shape. |
| Lead generation / B2B prospecting / contact extraction | B2B Lead Qualifier or Website Contact Scraper — this actor returns reputation decisions, not contact data. |
| Bulk processing of 100+ businesses in one run | This actor is single-business per run. For bulk, schedule it via Apify with a monitorStateKey shared across runs (each business gets its own snapshot key). |
The shape of this actor's output (decision + recordType + verdictReasonCodes + priorities) is intentionally aligned with Trustpilot Review Analyzer so you can run both and merge results without reshape logic — Trustpilot Review Analyzer for the deep single-platform view, this actor for the cross-platform decision layer.
Related actors
| Actor | Description |
|---|---|
| Trustpilot Review Analyzer | Deep-dive Trustpilot scraper for detailed single-platform review analysis with full review text extraction. |
| Brand Protection Monitor | Monitor brand mentions across the web and detect unauthorized use, counterfeit listings, and reputation threats. |
| Company Deep Research | Comprehensive company research agent combining multiple data sources for firmographic and market intelligence. |
| SaaS Competitive Intelligence | Competitive intelligence for SaaS companies including pricing, features, tech stack, and market positioning. |
| Website Contact Scraper | Extract emails, phone numbers, and social media links from business websites for lead enrichment. |
| B2B Lead Qualifier | Qualify B2B leads with firmographic data, technology detection, and lead scoring. |