Etsy Sales Intelligence
Pricing
from $50.00 / 1,000 listing scrapeds
Etsy Sales Intelligence
Estimate hidden Etsy lifetime sales and revenue from any listing, shop, or keyword. Reverse-engineers per-listing units sold using public signals, AI translation, and adaptive confidence bands. $0.05 per listing analysed. The Etsy seller intelligence estimator.
Pricing
from $50.00 / 1,000 listing scrapeds
Rating
0.0
(0)
Developer
Marielise
Actor stats
0
Bookmarked
2
Total users
1
Monthly active users
9 hours ago
Last modified
Categories
Share
The Etsy seller intelligence estimator. Etsy hides per-listing sales counts; this actor reverse-engineers them from public signals — and bills you $0.05 per listing analysed. No setup. No external accounts. No infrastructure.
💸 $0.05 per listing. All inclusive. Bright Data anti-bot bypass, Claude Haiku translation, multilingual extraction, proxies, retries, compute — every cost is baked in. You pay only when a listing record is successfully delivered to your dataset.
Pricing fine print. The headline rate is $0.05 per listing (event
listing-scraped). Apify auto-adds two tiny platform events: $0.00005 once per run start (first 5 seconds of compute waived) and $0.00001 per dataset record. These add ~$0.00006 extra per listing — a 0.1% uplift, effectively rounding error. The tables below use the headline $0.05.
Worked example
A listing has 5 reviews. The shop reports 1,200 sales / 257 reviews (a 21% review rate). Estimated lifetime units sold = 5 / 0.21 ≈ 23. At $24.50 USD that is roughly $564 lifetime revenue with a low/high band of $282 – $846. Every record carries a confidenceNote explaining the math, plus pre-formatted display strings (estimatedRevenueDisplay: "$564") for spreadsheets and reports.
Built for
- Etsy sellers benchmarking competitors before a price/photo refresh
- Niche researchers hunting hidden winners ahead of the crowd
- Print-on-demand operators sizing demand before paying for a sample run
- Dropshippers validating product ideas with real revenue numbers
- E-commerce analysts feeding raw JSON into BI tools, LLMs, or dashboards
Subscription tools (eRank, EtsyHunt, Sale Samurai) charge $20–$30 per month for the same reverse-engineered estimate locked behind a UI. This actor returns raw structured data into your Apify dataset on demand — pay only for what you scrape.
Just run it
- Open the actor in Apify Console
- Choose a mode:
listing— paste one or more Etsy listing URLs to analyse specific productsshop— paste an Etsy shop URL to audit all listings in that shopsearch— type a keyword to research a niche (top results)
- Set
maxResults(default 50) - Click Run
That's it. No API keys, no Bright Data signup, no Anthropic key. The actor handles every layer of infrastructure for you.
Pricing
| Run size | You pay | Time at default concurrency=5 |
|---|---|---|
| 1 listing | $0.05 | ~30 s |
| 10 listings | $0.50 | ~1 min |
| 50 listings | $2.50 | ~5 min |
| 100 listings | $5.00 | ~10 min |
| 500 listings | $25.00 | ~50 min |
| 1,000 listings | $50.00 | ~1.7 h |
| 5,000 listings | $250.00 | ~8 h |
Linear per-listing — no subscription, no setup fee, no minimum. Your maxResults input directly caps your spend.
What's NOT charged
- Failed scrapes (target page 404, target down) → free
- Listings filtered out by your
minReviews/minEstimatedRevenuethresholds → free - Listings cancelled by
maxResultscap → free - Empty preview runs → free
You pay only for records successfully pushed to your dataset.
Output
One record per analysed listing pushed to your dataset, plus a RUN_SUMMARY.json aggregate written to the run's key-value store.
Units (read this once)
- All
price*and*Revenue*USDnumbers are whole US dollars, NOT cents, rounded to 2 decimals.estimatedRevenueUSD: 84656means $84,656.00. - Every numeric field has a pre-formatted
*Displaysibling for table UIs and reports (estimatedRevenueDisplay: "$84,656"). shopReviewRateis a0..1ratio.shopReviewRatePctis the same value as integer percentage0..100.- Counts are integers. Timestamps are ISO 8601 UTC.
Per-listing fields
| Field | Type | Unit | Description |
|---|---|---|---|
listingId | string | — | Etsy numeric listing ID |
listingUrl | string | URL | Canonical listing URL |
title | string | — | Listing title (auto-translated to English when Etsy serves a non-English variant) |
price | number|null | original currency | Whole units, e.g. 24.50 |
priceDisplay | string|null | — | "€24.50" |
priceUSD | number|null | USD whole dollars | Live FX-converted, 2 dp |
priceUSDDisplay | string|null | — | "$24.50" |
currency | string|null | ISO 4217 | Original currency code |
category | string|null | — | Listing category breadcrumb |
tags | string[] | — | Listing tags. Best-effort: often empty on English-locale renders (Etsy lazy-loads them via XHR). See Limitations. |
images | string[] | URLs | Listing image URLs |
listingReviews | integer | count | Reviews on this specific listing |
viewsLast24h | integer|null | count | When Etsy exposes a counter |
inCarts | integer|null | count | When Etsy shows the cart-count badge |
badges | string[] | — | ["Bestseller", "Star Seller", "Quick Replies"] etc |
reviewDates | string[] | ISO datetime | Recent listing review timestamps |
trending | boolean | — | True when 6+ of the last 10 reviews fall within 90 days |
dormant | boolean | — | True when no review in the last 180 days |
shopName | string | — | Etsy shop slug |
shopUrl | string | URL | Etsy shop URL |
shopTotalSales | integer|null | count | Lifetime sales reported by the shop page |
shopTotalReviews | integer|null | count | Aggregated shop reviews |
shopReviewRate | number|null | ratio 0..1 | shopTotalReviews / shopTotalSales, clamped 0.05–0.50 |
shopReviewRatePct | integer|null | percent 0..100 | Same value × 100 for table views |
effectiveReviewRate | number | ratio 0..1 | Shop rate after niche blend AND price-tier adjustment — the value actually used in estimation |
effectiveReviewRatePct | integer | percent 0..100 | Same × 100 |
nicheBlendApplied | boolean | — | True when the shop rate was blended with the niche median (run had 3+ distinct shops AND shop totals were measured) |
observedReviewRate | number|null | ratio 0..1 | Bayesian observed shop rate from 4+ trackOverTime snapshots. Null until accumulated. |
observedReviewRatePct | integer|null | percent 0..100 | Same × 100 |
observedRateSampleCount | integer | count | How many snapshots produced the observed rate |
plausibility | enum|null | "plausible" / "suspicious" / null | AI sanity-check flag; null when disabled or check failed |
plausibilityReason | string|null | — | Short explanation when plausibility = "suspicious" |
salesPerYearShop | integer|null | count/yr | Shop average annual sales (lifetime ÷ age) |
salesPerYearShopDisplay | string|null | — | "427 / yr" |
listingAgeYears | number|null | years | Listing age inferred from oldest review date |
listingSalesPerYearEstimate | integer|null | count/yr | Per-listing velocity = estimatedUnitsSold / listingAgeYears |
listingSalesPerYearDisplay | string|null | — | "35 / yr" |
shopAge | string|null | — | "6 years" |
shopLocation | string|null | — | Free-form location of the shop, e.g. "Miami, Florida" |
shipsFromCountry | string|null | ISO-3166-1 alpha-2 | Country the listing ships from, e.g. "US" |
shipsFromRegion | string|null | — | Region/state, e.g. "FL" |
estimatedUnitsSold | integer | count | Central lifetime estimate |
estimatedUnitsSoldDisplay | string | — | "1,480" |
estimatedSalesLow / estimatedSalesHigh | integer | count | Adaptive ±25–50% band (see "How the estimate works") |
estimatedSalesRangeDisplay | string | — | "740 – 2,220" |
estimatedRevenueUSD | number | USD whole dollars | Central × priceUSD |
estimatedRevenueDisplay | string | — | "$84,656" |
estimatedRevenueLowUSD / estimatedRevenueHighUSD | number | USD whole dollars | Same adaptive band, applied to revenue |
estimatedRevenueRangeDisplay | string | — | "$42,328 – $126,984" |
confidence | enum | "high" / "medium" / "low" | Sortable confidence tier mirroring the band |
confidenceNote | string | — | Plain-English explanation including price-tier adjustment, trending uplift, and band tier |
competingListings | integer|null | count | Search-mode only |
scrapedAt | string | ISO datetime | Record timestamp |
delta | object|null | — | Populated only when trackOverTime is on and a previous snapshot exists |
Run aggregate (KV RUN_SUMMARY.json)
A single object summarising the run, written every time:
{"mode": "search","scrapedAt": "2026-04-30T08:00:00.000Z","totalCandidates": 50,"successfullyScraped": 48,"pushedToDataset": 42,"skippedFilters": 6,"failedScrapes": 2,"totalEstimatedRevenueUSD": 1284500,"totalEstimatedRevenueDisplay": "$1,284,500","totalEstimatedUnitsSold": 36420,"totalEstimatedUnitsSoldDisplay": "36,420","averagePriceUSD": 28.40,"averagePriceDisplay": "$28","trendingCount": 14,"dormantCount": 7,"topByRevenue": [{"listingId": "1570282475","title": "Custom Hand-Painted Pet Portrait Leather Keyring","shopName": "BeanieBaeArt","estimatedRevenueUSD": 84656,"estimatedRevenueDisplay": "$84,656","listingUrl": "https://www.etsy.com/listing/1570282475/..."}],"topByUnits": [{"listingId": "1234567890","title": "Sticker Pack — 10 Pieces","shopName": "StickerWorld","estimatedUnitsSold": 12500,"estimatedUnitsSoldDisplay": "12,500","listingUrl": "https://www.etsy.com/listing/1234567890/..."}],"shipsFromBreakdown": [{ "country": "US", "listings": 28, "share": 0.667, "sharePct": 67, "sharePctDisplay": "67%", "totalEstimatedRevenueUSD": 856200, "totalEstimatedRevenueDisplay": "$856,200" },{ "country": "GB", "listings": 9, "share": 0.214, "sharePct": 21, "sharePctDisplay": "21%", "totalEstimatedRevenueUSD": 312800, "totalEstimatedRevenueDisplay": "$312,800" },{ "country": "CA", "listings": 5, "share": 0.119, "sharePct": 12, "sharePctDisplay": "12%", "totalEstimatedRevenueUSD": 115500, "totalEstimatedRevenueDisplay": "$115,500" }]}
Caveat on
shipsFromBreakdown. This is a seller-side distribution: which countries the listings ship from. It is not a buyer-country / sales-destination split. Etsy does not expose buyer-country sales mix on public pages; only the shop owner can see that via Etsy Stats. The breakdown is still useful for niche research (e.g. "is this niche dominated by US sellers or international?") — just don't market it as buyer demographics.
Input examples
One specific listing
{"mode": "listing","listingUrls": [{ "url": "https://www.etsy.com/listing/1234567890/example-product" }],"maxResults": 1}
Cost: $0.05.
Audit a competitor's full shop
{"mode": "shop","shopUrl": "https://www.etsy.com/shop/CompetitorShopName","maxResults": 50,"minReviews": 1}
Cost: up to $2.50 (50 × $0.05).
Niche research
{"mode": "search","searchQuery": "minimalist wall art","minReviews": 5,"minEstimatedRevenue": 500,"maxResults": 100}
Cost: up to $5.00. Filters drop low-signal listings before they reach the dataset, so actual charge is usually lower.
Track a shop weekly
{"mode": "shop","shopUrl": "https://www.etsy.com/shop/MyShopToTrack","trackOverTime": true,"maxResults": 50}
Run on a weekly schedule. Each record gains a delta block: reviewsDelta, estimatedSalesDelta, salesPerDay, daysSinceLast. First run establishes the baseline; subsequent runs compute growth.
How the estimate works
-
Fetch the listing page → extract
listingReviews, badges, recent review timestamps, ships-from country. -
Fetch the shop page (cached once per run per shop) → extract
shopTotalSales,shopTotalReviews,shopAge,shopLocation. -
shopReviewRate = shopTotalReviews / shopTotalSales, clamped to[0.05, 0.50]to suppress wild estimates from tiny samples. Falls back to0.20(Etsy-wide observed midpoint) when shop totals are unavailable. -
Price-tier adjustment. Cheap items get fewer reviews per sale than expensive ones, even within the same shop. The shop rate is multiplied by a price-tier coefficient:
Listing price (USD) Coefficient Why < $20 × 0.70 Impulse buyers leave fewer reviews $20 – $100 × 1.00 Baseline $100 – $300 × 1.20 More engaged buyers > $300 × 1.40 High-ticket reviewers most diligent Result is
effectiveReviewRate, also clamped to[0.05, 0.50]. -
estimatedUnitsSold = round(listingReviews / effectiveReviewRate). -
estimatedRevenueUSD = estimatedUnitsSold × priceUSD. -
Adaptive confidence band:
- ±25 % when shop totals are real (measured) AND listing has 50+ reviews →
confidence: "high" - ±35 % when shop totals are real but listing has 10–49 reviews →
confidence: "medium" - ±50 % when shop totals are assumed (20 % default) OR listing has < 10 reviews →
confidence: "low"
The
confidencefield is sortable for filtering;confidenceNoteexplains the math in plain English. - ±25 % when shop totals are real (measured) AND listing has 50+ reviews →
-
Trending uplift. When the listing's review velocity flags it as
trending = true(6+ of the last 10 reviews within 90 days), the band is shifted asymmetrically: the low edge tightens by × 0.95 and the high edge widens by × 1.15. Reflects the empirical observation that trending listings tend to over-perform their historical-average estimate. -
Velocity signals. Two derived fields turn lifetime numbers into per-year flows:
salesPerYearShop = shopTotalSales / parseShopAgeYears— average annual shop pace.listingSalesPerYearEstimate = estimatedUnitsSold / listingAgeYears, wherelistingAgeYearsis derived from the oldest review date in JSON-LD. Tells you whether a high-revenue listing is an active winner or a legacy product winding down.
-
Niche calibration (two-pass estimation). When a run touches 3+ distinct shops, the actor switches to a two-pass estimator:
- Pass 1 scrapes every listing + shop in parallel and stashes the raw data — no estimates yet.
- Between passes, it computes
nicheReviewRate= median shop review rate across distinct shops in the run. - Pass 2 estimates each listing with the shop rate blended toward the niche:
blendedShopRate = 0.7 × shopRate + 0.3 × nicheRate. Outlier shops (e.g. one at 8 % in a niche where median is 22 %) are pulled toward the niche norm, reducing per-listing estimation error. - The blend is only applied when shop totals are measured (not assumed). Per-listing records flag
nicheBlendApplied: trueand theconfidenceNotedescribes the blend. - With fewer than 3 distinct shops (typical for
listingmode with one URL), the blend is skipped — rate stays as-is.
-
Bayesian observed rate (only with
trackOverTime: true, kicks in after 4+ weekly snapshots per shop). Each run appends a per-shop snapshot of(shopTotalSales, shopTotalReviews)to KV. Once 4+ snapshots exist, the actor computesobservedRate = (lastReviews − firstReviews) / (lastSales − firstSales)— i.e. the actual review rate during the tracked period, not the lifetime average. This observed rate is then blended with the lifetime rate asposteriorShopRate = 0.6 × observed + 0.4 × lifetimeand used as the shop's effective rate for the rest of the pipeline. Most accurate signal we have. Per-record fieldsobservedReviewRate,observedReviewRatePct, andobservedRateSampleCountexpose what fired. -
AI plausibility check (default on; toggle via
plausibilityCheck: falseto skip). After estimation, each record is sanity-checked by Claude Haiku 4.5 against red flags like estimatedUnitsSold > shopTotalSales, revenue > $5M, price ≤ 0, etc. Output fieldsplausibility("plausible"/"suspicious"/null) andplausibilityReasonflag outliers for human review without blocking them from the dataset. -
trending = truewhen 6+ of the last 10 reviews fall within 90 days. -
dormant = truewhen no review timestamps fall within the last 180 days.
What we deliberately did NOT build (and why)
Two improvements were considered for v2 but skipped on engineering principle, not laziness:
- Calibration corpus. Validating the algorithm's coefficients against a curated set of Etsy shops with publicly disclosed lifetime sales would let us tune coefficients with statistical confidence. Skipped because the realistic public corpus is ~10–15 shops (interviews, podcasts, indie-maker tweets) — too small for confident tuning, and self-reported numbers are noisy. The right way to do this is to collect anonymised real customer run outputs over months, backtest against any sellers who later publicly disclose, and refine quarterly. Pre-launch this is impossible.
- Category-specific default rates. Same data problem: without ground truth or large samples, hard-coding per-category rates would just be more wrong-confidence numbers stacked on existing ones. Price-tier adjustment (already implemented) captures most of what category would, more reliably.
If you're a researcher with verified shop sales data and would like to help calibrate, the source is open on Apify — open an issue.
Limitations
- Estimates, not facts. Etsy never reveals true per-listing sales. Treat output as directional intelligence with a stated
confidenceNote. - Mixed-price shops skew per-listing estimates. A shop selling one $5 sticker and one $500 sculpture distorts both estimates toward the average review rate. The clamp
[0.05, 0.50]cushions this but does not eliminate it. - Currency rates refresh once per run and are persisted to KV
CURRENCY_RATES.jsonfor transparency. tagsis best-effort and frequently empty. Etsy serves tag data inline only on some locale variants. On the English (US) variant tags are lazy-loaded via XHR after page render, and the underlying scrape captures pre-hydration HTML — so the tags array is empty in that case. The translator never invents tags from the title or description; what you see is what Etsy actually shipped in the initial HTML. Estimates do not depend on tags.listingFavouritesis no longer exposed by Etsy publicly (deprecated by Etsy ~2024). The field has been removed from the actor.- Buyer-country distribution is not available.
shipsFromBreakdownreports where listings ship from, not where buyers are. Etsy only exposes buyer-country sales mix to the shop owner via Etsy Stats.
FAQ
How accurate is the estimate?
The band is adaptive: ±25% when both shop totals are measured and the listing has 50+ reviews (high confidence), ±35% when the listing has 10–49 reviews (medium), and ±50% when shop totals are assumed or the listing has fewer than 10 reviews (low). Real-world tests show the central estimate falls within the reported band ~80% of the time across shops with measured totals. For tiny shops or zero-review listings accuracy collapses; confidenceNote always reports the tier (high / medium / low) so you can filter accordingly.
Why doesn't Etsy show sales counts directly?
Etsy removed per-listing sales counts to discourage clones. Public reviews, shop totals, and badges remain — exactly the signals this actor consumes.
Do I need a Bright Data account or Anthropic key?
No. Everything is included in the $0.05 per-listing price. The publisher absorbs all infrastructure (anti-bot bypass, translation, proxies, compute). You just run.
Why "$0.05" and not a tiny number with a subscription?
The actor uses Apify's Pay-Per-Event model — you pay only when a complete listing record lands in your dataset. No subscription, no setup fee, no compute charges. A failed scrape, a filtered-out listing, or a run that hits its maxResults cap costs you nothing. Compared with subscription tools like eRank ($10/mo) or EtsyHunt ($20/mo), this actor breaks even at ~200-400 listings per month — and gives you raw structured data instead of a locked-in dashboard.
Can I run this against thousands of listings?
Yes. A 5,000-listing run costs $75 flat. Increase the concurrency input (default 5, max 20) to speed up — each step adds parallel scraping requests. Split into batches if a single Apify run exceeds memory or time limits.
Does the actor scrape in parallel?
Yes. Listings are processed in batches of concurrency (default 5) using Promise.allSettled, with shared shop-fetch deduping (concurrent listings from the same shop reuse a single in-flight shop request). At default concurrency a 50-listing run finishes in roughly 4–6 minutes versus ~25 minutes sequentially.
Can the actor tell me which countries are buying most?
No — Etsy does not expose buyer-country sales splits on public pages. Only the shop owner sees that via Etsy Stats. The shipsFromBreakdown aggregate in RUN_SUMMARY.json is a seller-side distribution (where the listings ship from), not a buyer-country mix.
Why is tags sometimes empty?
Etsy A/B tests its render path. On the English (US) variant tags are lazy-loaded via XHR after the page loads; the scrape captures the pre-hydration HTML and tags are missing. On other locale variants Etsy inlines tags. The actor does not invent tags from the title — what you see is what Etsy shipped in the HTML. Estimates do not depend on tags.
How do I track changes over time?
Set trackOverTime: true and re-run with the same input on a schedule. Each record gains a delta block once a previous snapshot exists.
What if a scrape fails?
Failed scrapes are NOT charged. You only pay for records successfully delivered to your dataset.
Can I export to Google Sheets / Airtable / a webhook?
Yes — every Apify dataset can be exported via the Console UI or piped through Apify Integrations to Google Sheets, Airtable, Slack, Make, Zapier, or a custom webhook. See Apify Integrations docs.
How this is built
The actor stack — fully managed, you don't pay or configure any of it:
- Bright Data Web Unlocker for Akamai/Cloudflare/PerimeterX bypass on Etsy listing and shop pages.
- Claude Haiku 4.5 for translating cosmetic fields (title, category, tags, badges) when Etsy serves a non-English variant. Numeric fields are never sent to the LLM.
- Multilingual regex in the shop scraper extracts numeric stats (sales, reviews, shop age) using patterns covering English, Spanish, French, German, Italian, Portuguese, and Dutch — so estimates work regardless of routing locale.
- JSON-LD parsing for
Product,BreadcrumbList, andOrganizationschemas — locale-independent and survives Etsy's frequent DOM redesigns. - Live currency rates fetched once per run from
open.er-api.com(free, no key needed). - Shop-level cache prevents fetching the same shop twice per run, and concurrent listings from the same shop reuse a single in-flight request.
If you're a developer who wants to fork this actor and run your own infrastructure, the source is on Apify and accepts custom brightDataApiKey and anthropicApiKey overrides via the input form (advanced).
Support
Questions, feature requests, or issues: open a ticket in the actor's Issues tab on Apify, or message the publisher directly.