Actor Quality Audit
Pricing
$50.00 / 1,000 quality scans
Score each actor's quality: README, pricing, output schema, reliability, and popularity. Get actionable issues and recommendations to improve your Apify Store rankings.
Developer: ryan clinton
ApifyForge Quality Monitor -- Actor Quality Scorer
Score the quality of every actor in your Apify account across five weighted dimensions: README completeness, pricing configuration, output schema, reliability, and popularity. ApifyForge Quality Monitor produces a 0-100 quality score for each actor, along with specific issues found and actionable recommendations to improve. It also computes a fleet-wide average quality score so you can track improvements over time. Built to power the quality panel of the ApifyForge dashboard, this actor tells you exactly where each actor falls short and what to fix first.
Why use ApifyForge Quality Monitor?
- Objective quality scoring. Every actor gets a 0-100 score based on five measurable dimensions, removing guesswork from quality assessment.
- Specific, actionable feedback. It does not just give you a number -- it tells you exactly what is wrong ("Description too short", "No PPE pricing configured") and what to do about it ("Expand actor description to at least 500 characters").
- Five quality dimensions. Evaluates README/description (25%), pricing setup (20%), output schema (15%), run reliability (30%), and user popularity (10%).
- Worst-first sorting. Results are sorted by quality score ascending so the actors that need the most work appear at the top.
- Fleet-wide tracking. The `fleetQualityScore` gives you a single number to track over time as you improve your actors.
- Schema validation. Checks whether your actors have defined output dataset schemas, which is required for Apify Store listing quality and API documentation.
- Dashboard-ready output. Structured JSON with per-dimension breakdowns designed for visualization in ApifyForge.
Key Features
- Fetches all actors from your account with full pagination support
- Evaluates description/README length and checks for usage examples
- Detects PPE pricing configuration from actor detail endpoint
- Checks for output dataset schema via the latest tagged build
- Samples last 100 runs to compute 30-day reliability score
- Normalizes popularity score against the most popular actor in your fleet
- Produces per-actor breakdown scores across all five dimensions
- Generates specific issue descriptions and fix recommendations
- Sorts results worst-first for efficient quality improvement workflows
How to Use
- Go to ApifyForge Quality Monitor on the Apify Store.
- Click Try for free.
- Enter your Apify API Token (find it at Settings > Integrations).
- Click Start.
- Wait for the run to complete (typically 30-120 seconds depending on fleet size).
- Review per-actor quality scores and recommendations in the Dataset tab.
Input Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `apifyToken` | string | Yes | -- | Your Apify API token. Used to authenticate all API calls. Find it at https://console.apify.com/settings/integrations |
Input Examples
Standard quality scan:
```json
{
  "apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
```
Automated daily quality tracking via API:
```json
{
  "apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
```
Output Example
```json
{
  "fleetQualityScore": 68,
  "actors": [
    {
      "name": "quick-prototype-scraper",
      "id": "low001",
      "qualityScore": 22,
      "breakdown": { "readme": 10, "pricing": 0, "schema": 20, "reliability": 50, "popularity": 0 },
      "issues": [
        "Description very short or missing",
        "No README content",
        "No PPE pricing configured",
        "No tagged build found",
        "No recent runs to assess reliability"
      ],
      "recommendations": [
        "Add a detailed description explaining what the actor does",
        "Create a README with description, usage examples, and output format",
        "Set up Pay-Per-Event pricing to monetize this actor",
        "Define a dataset schema in .actor/dataset_schema.json"
      ]
    },
    {
      "name": "google-maps-scraper",
      "id": "best01",
      "qualityScore": 95,
      "breakdown": { "readme": 100, "pricing": 100, "schema": 100, "reliability": 99, "popularity": 100 },
      "issues": [],
      "recommendations": []
    }
  ],
  "scannedAt": "2026-03-16T14:30:00.000Z"
}
```
Output Fields
| Field | Type | Description |
|---|---|---|
| `fleetQualityScore` | number | Average quality score across all actors (0-100) |
| `actors` | array | Per-actor quality details, sorted by quality score ascending (worst first) |
| `actors[].name` | string | Actor name |
| `actors[].id` | string | Actor ID |
| `actors[].qualityScore` | number | Composite quality score (0-100), weighted sum of five dimensions |
| `actors[].breakdown` | object | Per-dimension scores (each 0-100) |
| `actors[].breakdown.readme` | number | README/description quality score. 100 = 500+ char description with examples. |
| `actors[].breakdown.pricing` | number | Pricing configuration score. 100 = PPE pricing set up, 0 = no PPE pricing. |
| `actors[].breakdown.schema` | number | Output schema score. 100 = dataset schema defined, 30 = build exists but no schema, 20 = no tagged build. |
| `actors[].breakdown.reliability` | number | Run reliability score based on 30-day success rate. Requires 5+ runs for full confidence. 50 = neutral (no data). |
| `actors[].breakdown.popularity` | number | Popularity score normalized against the most popular actor in your fleet (0-100). |
| `actors[].issues` | array | List of specific quality issues found for this actor |
| `actors[].recommendations` | array | Actionable fix recommendations for each issue |
| `scannedAt` | string | ISO 8601 timestamp of when the quality scan was performed |
Programmatic Access
Python
```python
from apify_client import ApifyClient

client = ApifyClient("apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx")

run = client.actor("ryanclinton/apifyforge-quality-monitor").call(
    run_input={"apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
)

dataset_items = client.dataset(run["defaultDatasetId"]).list_items().items
quality = dataset_items[0]

print(f"Fleet quality score: {quality['fleetQualityScore']}/100")

# Show the 10 worst actors
print("\nLowest quality actors (fix these first):")
for actor in quality["actors"][:10]:
    print(f"  {actor['name']}: {actor['qualityScore']}/100")
    for issue in actor["issues"]:
        print(f"    - {issue}")
    for rec in actor["recommendations"]:
        print(f"    > {rec}")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({
  token: "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
});

const run = await client.actor("ryanclinton/apifyforge-quality-monitor").call({
  apifyToken: "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
const quality = items[0];

console.log(`Fleet quality: ${quality.fleetQualityScore}/100`);

// Find actors missing PPE pricing
const noPricing = quality.actors.filter((a) => a.breakdown.pricing === 0);
console.log(`Actors without PPE pricing: ${noPricing.length}`);

// Find actors with poor READMEs
const poorReadme = quality.actors.filter((a) => a.breakdown.readme < 50);
console.log(`Actors with poor documentation: ${poorReadme.length}`);
```
cURL
```shell
# Start the quality scan
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~apifyforge-quality-monitor/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"}'

# Fetch results from the default dataset
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN"
```
How It Works
ApifyForge Quality Monitor evaluates each actor through a five-dimension scoring pipeline:
- Actor enumeration. Calls `GET /v2/acts?my=true` with pagination to retrieve every actor in your account.
- Detail fetching. For each actor, calls `GET /v2/acts/{actorId}` to retrieve description, README, pricing configuration, tagged builds, and user statistics.
- README scoring (25% weight). Evaluates the actor description length:
  - 500+ characters = 100 points
  - 200-499 characters = 70 points
  - 50-199 characters = 40 points
  - Under 50 characters = 10 points
  - Bonus: +10 points if the README contains "example" or "usage" keywords
- Pricing scoring (20% weight). Checks for PAY_PER_EVENT entries in the `pricingInfos` array:
  - PPE pricing configured = 100 points
  - No PPE pricing = 0 points
- Schema scoring (15% weight). Checks the latest tagged build for a dataset schema definition:
  - Dataset schema defined = 100 points
  - Build exists but no schema = 30 points
  - No tagged build found = 20 points
- Reliability scoring (30% weight). Fetches the last 100 runs, filters to the 30-day window, and computes the success rate:
  - 5+ runs: score = success rate percentage (capped at 100)
  - 1-4 runs: score = success rate (flagged as low sample size)
  - 0 runs: score = 50 (neutral -- no data to assess)
- Popularity scoring (10% weight). Normalizes the actor's 30-day user count against the most popular actor in the fleet: score = (actor users / max fleet users) * 100.
- Composite score. Weighted sum: `readme * 0.25 + pricing * 0.20 + schema * 0.15 + reliability * 0.30 + popularity * 0.10`.
- Output. Sorts actors by quality score ascending (worst first), computes the fleet average, pushes results to the dataset, and charges one PPE event.
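The weighting and worst-first sort described above can be sketched in Python. This is an illustrative sketch, not the actor's actual implementation; function names are invented here, and the actor's own rounding may differ slightly.

```python
# Illustrative sketch of the composite scoring step. WEIGHTS mirrors the
# documented dimension weights; composite_score and rank_fleet are
# hypothetical names, not part of the actor's code.

WEIGHTS = {
    "readme": 0.25,
    "pricing": 0.20,
    "schema": 0.15,
    "reliability": 0.30,
    "popularity": 0.10,
}

def composite_score(breakdown: dict) -> int:
    """Weighted sum of the five dimension scores, rounded to an integer."""
    return round(sum(breakdown[dim] * w for dim, w in WEIGHTS.items()))

def rank_fleet(actors: list) -> tuple:
    """Score every actor, sort worst-first, and compute the fleet average."""
    for actor in actors:
        actor["qualityScore"] = composite_score(actor["breakdown"])
    actors.sort(key=lambda a: a["qualityScore"])  # ascending = worst first
    fleet = round(sum(a["qualityScore"] for a in actors) / len(actors))
    return fleet, actors

# 80*0.25 + 100*0.20 + 30*0.15 + 90*0.30 + 40*0.10 = 75.5 -> rounds to 76
breakdown = {"readme": 80, "pricing": 100, "schema": 30, "reliability": 90, "popularity": 40}
print(composite_score(breakdown))
```

Sorting ascending is a deliberate choice here: it puts the actors with the most room for improvement at the top of the dataset.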
How Much Does It Cost?
ApifyForge Quality Monitor uses Pay-Per-Event pricing at $0.05 per scan.
| Scenario | Events | Cost |
|---|---|---|
| One-time quality audit | 1 | $0.05 |
| Weekly monitoring (4x/month) | 4 | $0.20 |
| Daily monitoring (30x/month) | 30 | $1.50 |
Platform compute costs also apply. A typical quality scan of 200 actors completes in under 2 minutes using 256 MB of memory.
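The cost scenarios in the table above follow directly from the per-event price. A minimal sketch, assuming one PPE event per scan at the documented $0.05:

```python
# Cost arithmetic for PPE pricing: one event per scan at $0.05.
# PRICE_PER_SCAN_USD and monthly_cost are illustrative names.
PRICE_PER_SCAN_USD = 0.05

def monthly_cost(scans_per_month: int) -> float:
    """Event charges for a month of scans (excludes platform compute)."""
    return round(scans_per_month * PRICE_PER_SCAN_USD, 2)

for scenario, scans in [("one-time audit", 1), ("weekly", 4), ("daily", 30)]:
    print(f"{scenario}: ${monthly_cost(scans):.2f}")
```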
Tips
- Work from the bottom up. Results are sorted worst-first. Focus on the lowest-scoring actors for the biggest fleet-wide quality improvement.
- README is the easiest win. Adding a 500+ character description with usage examples can improve an actor's quality score by up to 25 points.
- Add PPE pricing to everything. Even a small price ($0.01) gives you 20 points and starts generating revenue. The Quality Monitor flags every actor without pricing.
- Define dataset schemas. Adding `.actor/dataset_schema.json` improves your schema score from 30 to 100 (15 points on the composite score) and makes your actor's output more discoverable.
- Track fleet quality over time. Schedule weekly runs and monitor `fleetQualityScore`. Aim for 80+ across your fleet.
- Cross-reference with revenue. Low-quality actors with high traffic (from Revenue Tracker) are your highest-impact improvement targets.
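To track fleet quality over time, you can append each scan's `fleetQualityScore` to a local history file and watch the delta. A minimal sketch -- the file name and helper functions are arbitrary choices, not part of the actor:

```python
# Sketch of local fleet-quality trend tracking. HISTORY, record_scan,
# and trend are hypothetical names chosen for this example.
import json
from pathlib import Path

HISTORY = Path("fleet_quality_history.json")

def record_scan(scan: dict) -> list:
    """Append one scan's fleet score and timestamp to the history file."""
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
    history.append({
        "scannedAt": scan["scannedAt"],
        "fleetQualityScore": scan["fleetQualityScore"],
    })
    HISTORY.write_text(json.dumps(history, indent=2))
    return history

def trend(history: list) -> int:
    """Change in fleet score between the first and latest recorded scans."""
    if len(history) < 2:
        return 0
    return history[-1]["fleetQualityScore"] - history[0]["fleetQualityScore"]
```

Feed it the first dataset item from each scheduled run and `trend` reports how far the fleet has moved since tracking began.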
Limitations
- Description vs. README distinction. The scoring primarily evaluates description length. A very long README with a short description may score lower than expected. The actor checks both fields but weights description length more heavily.
- Popularity is relative. The popularity score is normalized against the most popular actor in your fleet, not against the entire Apify Store. A score of 100 means "most popular in your fleet," not "most popular globally."
- Build schema check requires a tagged build. If your actor has no "latest" tagged build, the schema score defaults to 20 regardless of whether a schema file exists in the source code.
- Binary pricing score. Pricing is scored as 100 or 0 (PPE configured or not). There is no differentiation between well-priced and poorly-priced actors.
- Run sample cap. Only the last 100 runs are sampled for reliability scoring. High-volume actors may have runs outside this window.
Frequently Asked Questions
Why is reliability weighted the most (30%)? Because actor reliability directly impacts user trust and retention. An actor with a great README but a 50% failure rate will lose users quickly. Reliability is the foundation that all other quality dimensions build on.
How can I improve my fleet quality score the fastest? Focus on three quick wins: (1) Add PPE pricing to all actors without it (+20 points each). (2) Expand descriptions to 500+ characters (+15-25 points each). (3) Fix any actors with high failure rates. These three actions typically move the fleet score by 15-30 points.
What does a quality score of 50 mean for reliability when there are no runs? A score of 50 is a neutral default when there is no run data to evaluate. It means "unknown reliability" rather than "average reliability." Once the actor has 5+ runs, the score will reflect the actual success rate.
Can I set custom weights for the five dimensions?
Not currently. The weights (README 25%, Pricing 20%, Schema 15%, Reliability 30%, Popularity 10%) are fixed. If you need custom weighting, download the raw data and apply your own formula to the breakdown scores.
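Applying your own formula to the downloaded breakdown scores is straightforward. A sketch with an illustrative reliability-heavy weighting (the weights and the `reweight` helper are assumptions for this example, not actor features):

```python
# Re-weight raw breakdown scores with custom weights, as the FAQ suggests.
def reweight(breakdown: dict, weights: dict) -> float:
    """Weighted sum of dimension scores; weights must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return round(sum(breakdown[dim] * w for dim, w in weights.items()), 1)

# Example: emphasize reliability far more than the default 30%.
custom = {"readme": 0.10, "pricing": 0.10, "schema": 0.10,
          "reliability": 0.60, "popularity": 0.10}
score = reweight(
    {"readme": 10, "pricing": 0, "schema": 20, "reliability": 50, "popularity": 0},
    custom,
)
print(score)  # 1 + 0 + 2 + 30 + 0 = 33.0
```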
Integration with ApifyForge Dashboard
This actor powers the quality panel of the ApifyForge dashboard. When connected, quality data is visualized with radar charts showing per-dimension scores, sortable actor tables, and a fleet quality trend line. The dashboard highlights "quick wins" -- actors where a single improvement (like adding pricing) would yield the biggest score jump. Schedule this actor to run weekly and track your fleet quality trajectory as you improve your actors.