Actor Regression Suite
Available on the Apify Store with pay-per-event pricing: $350.00 / 1,000 regression suite runs.
Developer: ryan clinton
Actor stats: Bookmarked: 0 · Total users: 1 · Monthly active users: 0 · Last modified: 6 hours ago
Actor Regression Suite -- Detect Output Regressions Between Builds
Run test suites against any Apify actor and automatically compare results to previous runs. Actor Regression Suite detects regressions (tests that were passing but now fail), resolved issues (tests that were failing but now pass), and new test results. Get a clear view of what changed between actor versions so you can ship with confidence.
Actor Regression Suite extends the test runner pattern with historical comparison. Provide test cases and (optionally) results from a previous run. The suite runs all tests, classifies each as pass, fail, regression, resolved, new_pass, or new_fail, and produces a report showing exactly what changed. Integrate into CI/CD to block deploys when regressions appear.
What data can you extract?
| Data Point | Source | Example |
|---|---|---|
| Regression count | Comparison to previous run | 1 regression |
| Resolved count | Comparison to previous run | 2 resolved |
| Per-test status | Classification | Domain extraction: REGRESSION |
| Previous vs. current | Diff comparison | was: PASS, now: FAIL |
| Assertion details | Per-test evaluation | field 'domain' exists: FAIL |
| Suite version | Auto-generated datestamp | 2026-03-18 |
Why use Regression Suite?
A test suite tells you what's broken right now. A regression suite tells you what broke since last time. This distinction matters:
- A test that has always failed is a known issue. A test that was passing yesterday and fails today is an urgent regression.
- A test that was failing last week but passes now is a resolved issue worth celebrating (or at least noting).
- A brand new test with no prior history needs different attention than an established test.
Without historical comparison, you can't tell the difference. Regression Suite classifies every test result so you can focus on what actually changed.
Features
- Six-state classification for every test case: `pass`, `fail`, `regression` (was passing, now fails), `resolved` (was failing, now passes), `new_pass`, `new_fail`
- Automatic previous result injection -- when used through the ApifyForge dashboard, previous results are automatically loaded from your last run. No manual tracking needed.
- Same assertion engine as Test Runner with six assertion types: `minResults`, `maxResults`, `requiredFields`, `fieldTypes`, `maxDuration`, `noEmptyFields`
- Sequential execution to avoid overwhelming target actors
- Per-case error isolation -- one crashed test doesn't block the rest
- Regression-first reporting -- regressions are highlighted at the top for immediate attention
- Suite versioning with automatic datestamp for tracking changes over time
- Single PPE charge ($0.35) per suite, not per test case
Use cases for regression testing
Post-deploy verification
After pushing a new actor version, run the regression suite with the same test cases used before the deploy. Regressions mean your update broke something. Block the release, fix the issue, re-deploy, re-test.
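The gate itself can be a few lines of CI glue. Here is a minimal sketch in Python, assuming the suite report has already been fetched from the run's default dataset; the `regression_gate` helper name is ours, while the `details[].status` values follow the report format this actor produces:

```python
def regression_gate(report: dict) -> int:
    """Return a CI exit code: 1 if the suite report contains regressions, else 0."""
    # Collect the names of tests that were passing before but fail now.
    regressed = [d["name"] for d in report["details"] if d["status"] == "regression"]
    if regressed:
        print(f"Blocking deploy: {len(regressed)} regression(s): {', '.join(regressed)}")
        return 1
    print("No regressions detected; deploy may proceed.")
    return 0
```

In a pipeline step, call `sys.exit(regression_gate(report))` so any regression fails the build and blocks the release.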
Upstream change detection
Websites change their HTML structure. APIs deprecate endpoints. Government databases update schemas. Schedule the regression suite weekly to catch upstream changes that break your actor's output without any code changes on your end.
Migration validation
Switching from one scraping approach to another (e.g., Cheerio to Playwright)? Run the regression suite before and after. Any regressions indicate the new approach doesn't cover all the same scenarios.
Team coordination
When multiple developers work on the same actor, regressions from one person's changes are immediately visible. The "resolved" status shows when someone fixes a previously broken test.
Release notes automation
The regression report is structured JSON. Parse it in CI/CD to auto-generate release notes: "2 regressions fixed, 1 new test added, 0 regressions introduced."
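For instance, a one-line summary can be built straight from the report's top-level counts. The `release_note` helper below is illustrative, not part of the actor; the field names match the report this actor emits:

```python
def release_note(report: dict) -> str:
    # Summary counts come straight from the suite report's top-level fields.
    return (
        f"{report['resolved']} regression(s) fixed, "
        f"{report['newTests']} new test(s) added, "
        f"{report['regressions']} regression(s) introduced"
    )
```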
How to run a regression suite
- Enter the target actor -- Provide the actor ID or `username/actor-name` slug.
- Define test cases -- Same format as Actor Test Runner: name, input, assertions.
- Provide previous results (optional) -- Pass results from a prior run. If omitted, all results are classified as `new_pass` or `new_fail`.
- Run the suite -- Click "Start" and wait for all test cases to complete.
- Review regressions -- Check the report for regression and resolved statuses.
When used through the ApifyForge dashboard, previous results are automatically injected from your last cached run -- no manual tracking needed.
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `targetActorId` | string | Yes | -- | Actor ID or `username/actor-name` slug to test. |
| `testCases` | array | Yes | -- | Array of test case objects (name, input, assertions). |
| `previousResults` | array | No | `[]` | Previous run results for comparison: `[{ name, passed }]`. |
| `timeout` | integer | No | 120 | Maximum seconds per test case run. |
| `memory` | integer | No | 512 | Memory in MB for each test case run. |
Previous results format
```json
[
  { "name": "Basic search", "passed": true },
  { "name": "Empty input", "passed": false },
  { "name": "Special characters", "passed": true }
]
```
The `name` field must match the test case name exactly. Unmatched previous results are ignored. Test cases without a matching previous result are classified as `new_pass` or `new_fail`.
Input example
```json
{
  "targetActorId": "ryanclinton/website-contact-scraper",
  "testCases": [
    {
      "name": "Basic scan",
      "input": { "urls": ["https://example.com"], "maxPagesPerDomain": 3 },
      "assertions": { "minResults": 1, "requiredFields": ["url", "domain", "emails"] }
    },
    {
      "name": "Multiple domains",
      "input": { "urls": ["https://example.com", "https://httpbin.org"], "maxPagesPerDomain": 2 },
      "assertions": { "minResults": 2, "noEmptyFields": ["url", "domain"] }
    }
  ],
  "previousResults": [
    { "name": "Basic scan", "passed": true },
    { "name": "Multiple domains", "passed": true }
  ]
}
```
Output example
```json
{
  "actorName": "ryanclinton/website-contact-scraper",
  "actorId": "abc123def456",
  "suiteVersion": "2026-03-18",
  "totalTests": 2,
  "passed": 1,
  "failed": 1,
  "regressions": 1,
  "resolved": 0,
  "newTests": 0,
  "totalDuration": 30.5,
  "details": [
    {
      "name": "Basic scan",
      "status": "pass",
      "previousStatus": "pass",
      "currentStatus": "pass",
      "duration": 12.1,
      "resultCount": 1,
      "assertions": [
        { "assertion": "minResults >= 1", "passed": true, "expected": 1, "actual": 1 },
        { "assertion": "field 'url' exists", "passed": true, "expected": "present", "actual": "present" }
      ]
    },
    {
      "name": "Multiple domains",
      "status": "regression",
      "previousStatus": "pass",
      "currentStatus": "fail",
      "duration": 18.4,
      "resultCount": 1,
      "error": null,
      "assertions": [
        { "assertion": "minResults >= 2", "passed": false, "expected": 2, "actual": 1 },
        { "assertion": "field 'url' not empty", "passed": true, "expected": "non-empty", "actual": "non-empty" }
      ]
    }
  ],
  "testedAt": "2026-03-18T14:30:00.000Z"
}
```
Output fields
| Field | Type | Description |
|---|---|---|
| `actorName` | string | Display name of the tested actor |
| `actorId` | string | Apify actor ID |
| `suiteVersion` | string | Date-based version stamp (YYYY-MM-DD) |
| `totalTests` | number | Total test cases run |
| `passed` | number | Test cases that passed |
| `failed` | number | Test cases that failed |
| `regressions` | number | Tests that were passing before but now fail |
| `resolved` | number | Tests that were failing before but now pass |
| `newTests` | number | Tests with no previous result for comparison |
| `totalDuration` | number | Total suite execution time in seconds |
| `details` | array | Per-test regression details (see below) |
| `details[].status` | string | One of: `pass`, `fail`, `regression`, `resolved`, `new_pass`, `new_fail` |
| `details[].previousStatus` | string | Previous status: `pass`, `fail`, or `new` |
| `details[].currentStatus` | string | Current status: `pass` or `fail` |
| `testedAt` | string | ISO 8601 timestamp |
Status classification matrix
| Previous | Current | Classification |
|---|---|---|
| pass | pass | pass (stable) |
| pass | fail | regression |
| fail | pass | resolved |
| fail | fail | fail (stable) |
| (none) | pass | new_pass |
| (none) | fail | new_fail |
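The matrix above can be sketched as a small function. This is our illustration of the classification logic, not the actor's internal code; `previous` is `None` when no prior result matched the test name:

```python
def classify(previous, current):
    """Map a (previous, current) pass/fail pair to the six-state status.

    previous: True (passed), False (failed), or None (no prior result).
    current:  True (passed) or False (failed).
    """
    if previous is None:
        return "new_pass" if current else "new_fail"
    if previous and not current:
        return "regression"
    if not previous and current:
        return "resolved"
    return "pass" if current else "fail"
```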
How much does it cost?
Regression Suite uses pay-per-event pricing at $0.35 per suite run. Target actor runs are billed separately.
| Scenario | Suites | Orchestration Cost |
|---|---|---|
| One-off check | 1 | $0.35 |
| Weekly regression (4/mo) | 4 | $1.40 |
| Daily CI/CD (30/mo) | 30 | $10.50 |
| Post-deploy (10/mo) | 10 | $3.50 |
Run regression suites using the API
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("ryanclinton/actor-regression-suite").call(
    run_input={
        "targetActorId": "ryanclinton/website-contact-scraper",
        "testCases": [
            {
                "name": "Basic scan",
                "input": {"urls": ["https://example.com"], "maxPagesPerDomain": 3},
                "assertions": {"minResults": 1, "requiredFields": ["url", "domain"]},
            },
        ],
        "previousResults": [
            {"name": "Basic scan", "passed": True},
        ],
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"Regressions: {item['regressions']} | Resolved: {item['resolved']}")
    for d in item["details"]:
        print(f"  [{d['status'].upper()}] {d['name']} (was: {d['previousStatus']}, now: {d['currentStatus']})")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const run = await client.actor("ryanclinton/actor-regression-suite").call({
    targetActorId: "ryanclinton/website-contact-scraper",
    testCases: [
        {
            name: "Basic scan",
            input: { urls: ["https://example.com"], maxPagesPerDomain: 3 },
            assertions: { minResults: 1, requiredFields: ["url", "domain"] },
        },
    ],
    previousResults: [{ name: "Basic scan", passed: true }],
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
const report = items[0];
console.log(`Regressions: ${report.regressions} | Resolved: ${report.resolved}`);
```
FAQ
How do I get previous results for comparison?
Three ways: (1) The ApifyForge dashboard auto-injects them from your last run. (2) Save the details array from a prior run and map it to previousResults. (3) On first run, skip previousResults -- all results will be new_pass or new_fail, establishing your baseline.
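Option (2) amounts to a one-line mapping. A sketch, assuming the report shape from the Output example above; the `to_previous_results` name is ours:

```python
def to_previous_results(report: dict) -> list:
    # Each detail entry's current outcome becomes the { name, passed }
    # shape expected by the previousResults input parameter.
    return [
        {"name": d["name"], "passed": d["currentStatus"] == "pass"}
        for d in report["details"]
    ]
```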
What if I add a new test case?
New test cases without a matching entry in previousResults are classified as new_pass or new_fail. They establish a baseline for future comparisons.
What if I rename a test case?
Matching is by exact name. A renamed test case appears as new_pass/new_fail (new name) while the old name is simply absent from the report. Keep names stable for meaningful regression tracking.
What's the difference between Regression Suite and Test Runner?
Test Runner gives you pass/fail per test case. Regression Suite adds historical comparison -- it tells you whether failures are new (regressions) or pre-existing. Use Test Runner for one-off testing, Regression Suite for ongoing quality tracking.
Can I track regressions across multiple actors?
Run a separate regression suite for each actor. The structured output can be aggregated in a dashboard or spreadsheet.
Related actors
| Actor | How to combine |
|---|---|
| Actor Test Runner | Same test engine without regression comparison. Use Test Runner for one-off suites, Regression Suite for ongoing tracking. |
| Cloud Staging Test | Quick single-input validation. Use before Regression Suite for a fast smoke test. |
| Schema Validator | Schema compliance checking. Schema Validator checks structure; Regression Suite checks functional behavior over time. |
| Actor Health Monitor | Runtime failure monitoring. Health Monitor catches crashes; Regression Suite catches output quality degradation. |
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page.