
Etsy Sales Intelligence

Pricing

from $50.00 / 1,000 listings scraped

Estimate hidden Etsy lifetime sales and revenue from any listing, shop, or keyword. Reverse-engineers per-listing units sold using public signals, AI translation, and adaptive confidence bands. $0.05 per listing analysed. The Etsy seller intelligence estimator.

Developer: Marielise (Maintained by Community)


The Etsy seller intelligence estimator. Etsy hides per-listing sales counts; this actor reverse-engineers them from public signals — and bills you $0.05 per listing analysed. No setup. No external accounts. No infrastructure.

💸 $0.05 per listing. All inclusive. Bright Data anti-bot bypass, Claude Haiku translation, multilingual extraction, proxies, retries, compute — every cost is baked in. You pay only when a listing record is successfully delivered to your dataset.

Pricing fine print. The headline rate is $0.05 per listing (event listing-scraped). Apify auto-adds two tiny platform events: $0.00005 once per run start (first 5 seconds of compute waived) and $0.00001 per dataset record. These add ~$0.00006 extra per listing — a ~0.1% uplift, effectively a rounding error. The tables below use the headline $0.05.

Worked example

A listing has 5 reviews. The shop reports 1,200 sales / 257 reviews (a 21.4% review rate). Estimated lifetime units sold = 5 / 0.214 ≈ 23. At $24.50 USD that is roughly $564 lifetime revenue, with a low/high band of $282 – $846. Every record carries a confidenceNote explaining the math, plus pre-formatted display strings (estimatedRevenueDisplay: "$564") for spreadsheets and reports.
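
In code, the arithmetic above looks roughly like this (a minimal sketch; `estimateListing` and `bandPct` are illustrative names, not the actor's internals):

```javascript
// Sketch of the worked example: scale listing reviews by the inverse
// of the shop's review rate, then apply the confidence band.
function estimateListing({ listingReviews, shopTotalSales, shopTotalReviews, priceUSD, bandPct }) {
  // Shop-level review rate (reviews per sale), clamped to [0.05, 0.50].
  const rawRate = shopTotalReviews / shopTotalSales;
  const rate = Math.min(0.50, Math.max(0.05, rawRate));
  const units = Math.round(listingReviews / rate);
  const revenue = Math.round(units * priceUSD);
  return {
    estimatedUnitsSold: units,
    estimatedRevenueUSD: revenue,
    estimatedRevenueLowUSD: Math.round(revenue * (1 - bandPct)),
    estimatedRevenueHighUSD: Math.round(revenue * (1 + bandPct)),
  };
}

// 5 reviews, shop 1,200 sales / 257 reviews, $24.50, ±50% band
// → 23 units, $564 revenue, $282 – $846 band
```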

Built for

  • Etsy sellers benchmarking competitors before a price/photo refresh
  • Niche researchers hunting hidden winners ahead of the crowd
  • Print-on-demand operators sizing demand before paying for a sample run
  • Dropshippers validating product ideas with real revenue numbers
  • E-commerce analysts feeding raw JSON into BI tools, LLMs, or dashboards

Subscription tools (eRank, EtsyHunt, Sale Samurai) charge $10–$30 per month for the same reverse-engineered estimate locked behind a UI. This actor returns raw structured data into your Apify dataset on demand — pay only for what you scrape.

Just run it

  1. Open the actor in Apify Console
  2. Choose a mode:
    • listing — paste one or more Etsy listing URLs to analyse specific products
    • shop — paste an Etsy shop URL to audit all listings in that shop
    • search — type a keyword to research a niche (top results)
  3. Set maxResults (default 50)
  4. Click Run

That's it. No API keys, no Bright Data signup, no Anthropic key. The actor handles every layer of infrastructure for you.

Pricing

| Run size | You pay | Time at default concurrency=5 |
| --- | --- | --- |
| 1 listing | $0.05 | ~30 s |
| 10 listings | $0.50 | ~1 min |
| 50 listings | $2.50 | ~5 min |
| 100 listings | $5.00 | ~10 min |
| 500 listings | $25.00 | ~50 min |
| 1,000 listings | $50.00 | ~1.7 h |
| 5,000 listings | $250.00 | ~8 h |

Linear per-listing — no subscription, no setup fee, no minimum. Your maxResults input directly caps your spend.

What's NOT charged

  • Failed scrapes (target page 404, target down) → free
  • Listings filtered out by your minReviews / minEstimatedRevenue thresholds → free
  • Listings cancelled by maxResults cap → free
  • Empty preview runs → free

You pay only for records successfully pushed to your dataset.

Output

One record per analysed listing pushed to your dataset, plus a RUN_SUMMARY.json aggregate written to the run's key-value store.

Units (read this once)

  • All price* and *Revenue*USD numbers are whole US dollars, NOT cents, rounded to 2 decimals. estimatedRevenueUSD: 84656 means $84,656.00.
  • Every numeric field has a pre-formatted *Display sibling for table UIs and reports (estimatedRevenueDisplay: "$84,656").
  • shopReviewRate is a 0..1 ratio. shopReviewRatePct is the same value as integer percentage 0..100.
  • Counts are integers. Timestamps are ISO 8601 UTC.
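
Two one-liners illustrate the conventions (illustrative helpers, not actor code):

```javascript
// *Display siblings are plain pre-formatted strings; *Pct fields are
// the same ratio multiplied by 100 and rounded to an integer.
const usdDisplay = (n) => '$' + Math.round(n).toLocaleString('en-US');
const ratioToPct = (r) => Math.round(r * 100);

usdDisplay(84656); // "$84,656"
ratioToPct(0.21);  // 21
```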

Per-listing fields

| Field | Type | Unit | Description |
| --- | --- | --- | --- |
| listingId | string | — | Etsy numeric listing ID |
| listingUrl | string | URL | Canonical listing URL |
| title | string | — | Listing title (auto-translated to English when Etsy serves a non-English variant) |
| price | number\|null | original currency | Whole units, e.g. 24.50 |
| priceDisplay | string\|null | — | "€24.50" |
| priceUSD | number\|null | USD whole dollars | Live FX-converted, 2 dp |
| priceUSDDisplay | string\|null | — | "$24.50" |
| currency | string\|null | ISO 4217 | Original currency code |
| category | string\|null | — | Listing category breadcrumb |
| tags | string[] | — | Listing tags. Best-effort: often empty on English-locale renders (Etsy lazy-loads them via XHR). See Limitations. |
| images | string[] | URLs | Listing image URLs |
| listingReviews | integer | count | Reviews on this specific listing |
| viewsLast24h | integer\|null | count | When Etsy exposes a counter |
| inCarts | integer\|null | count | When Etsy shows the cart-count badge |
| badges | string[] | — | ["Bestseller", "Star Seller", "Quick Replies"] etc. |
| reviewDates | string[] | ISO datetime | Recent listing review timestamps |
| trending | boolean | — | True when 6+ of the last 10 reviews fall within 90 days |
| dormant | boolean | — | True when no review in the last 180 days |
| shopName | string | — | Etsy shop slug |
| shopUrl | string | URL | Etsy shop URL |
| shopTotalSales | integer\|null | count | Lifetime sales reported by the shop page |
| shopTotalReviews | integer\|null | count | Aggregated shop reviews |
| shopReviewRate | number\|null | ratio 0..1 | shopTotalReviews / shopTotalSales, clamped 0.05–0.50 |
| shopReviewRatePct | integer\|null | percent 0..100 | Same value × 100 for table views |
| effectiveReviewRate | number | ratio 0..1 | Shop rate after niche blend AND price-tier adjustment — the value actually used in estimation |
| effectiveReviewRatePct | integer | percent 0..100 | Same × 100 |
| nicheBlendApplied | boolean | — | True when the shop rate was blended with the niche median (run had 3+ distinct shops AND shop totals were measured) |
| observedReviewRate | number\|null | ratio 0..1 | Bayesian observed shop rate from 4+ trackOverTime snapshots. Null until accumulated. |
| observedReviewRatePct | integer\|null | percent 0..100 | Same × 100 |
| observedRateSampleCount | integer | count | How many snapshots produced the observed rate |
| plausibility | enum\|null | "plausible" / "suspicious" / null | AI sanity-check flag; null when disabled or check failed |
| plausibilityReason | string\|null | — | Short explanation when plausibility = "suspicious" |
| salesPerYearShop | integer\|null | count/yr | Shop average annual sales (lifetime ÷ age) |
| salesPerYearShopDisplay | string\|null | — | "427 / yr" |
| listingAgeYears | number\|null | years | Listing age inferred from oldest review date |
| listingSalesPerYearEstimate | integer\|null | count/yr | Per-listing velocity = estimatedUnitsSold / listingAgeYears |
| listingSalesPerYearDisplay | string\|null | — | "35 / yr" |
| shopAge | string\|null | — | "6 years" |
| shopLocation | string\|null | — | Free-form location of the shop, e.g. "Miami, Florida" |
| shipsFromCountry | string\|null | ISO-3166-1 alpha-2 | Country the listing ships from, e.g. "US" |
| shipsFromRegion | string\|null | — | Region/state, e.g. "FL" |
| estimatedUnitsSold | integer | count | Central lifetime estimate |
| estimatedUnitsSoldDisplay | string | — | "1,480" |
| estimatedSalesLow / estimatedSalesHigh | integer | count | Adaptive ±25–50% band (see "How the estimate works") |
| estimatedSalesRangeDisplay | string | — | "740 – 2,220" |
| estimatedRevenueUSD | number | USD whole dollars | Central × priceUSD |
| estimatedRevenueDisplay | string | — | "$84,656" |
| estimatedRevenueLowUSD / estimatedRevenueHighUSD | number | USD whole dollars | Same adaptive band, applied to revenue |
| estimatedRevenueRangeDisplay | string | — | "$42,328 – $126,984" |
| confidence | enum | "high" / "medium" / "low" | Sortable confidence tier mirroring the band |
| confidenceNote | string | — | Plain-English explanation including price-tier adjustment, trending uplift, and band tier |
| competingListings | integer\|null | count | Search-mode only |
| scrapedAt | string | ISO datetime | Record timestamp |
| delta | object\|null | — | Populated only when trackOverTime is on and a previous snapshot exists |

Run aggregate (KV RUN_SUMMARY.json)

A single object summarising the run, written every time:

{
  "mode": "search",
  "scrapedAt": "2026-04-30T08:00:00.000Z",
  "totalCandidates": 50,
  "successfullyScraped": 48,
  "pushedToDataset": 42,
  "skippedFilters": 6,
  "failedScrapes": 2,
  "totalEstimatedRevenueUSD": 1284500,
  "totalEstimatedRevenueDisplay": "$1,284,500",
  "totalEstimatedUnitsSold": 36420,
  "totalEstimatedUnitsSoldDisplay": "36,420",
  "averagePriceUSD": 28.40,
  "averagePriceDisplay": "$28",
  "trendingCount": 14,
  "dormantCount": 7,
  "topByRevenue": [
    {
      "listingId": "1570282475",
      "title": "Custom Hand-Painted Pet Portrait Leather Keyring",
      "shopName": "BeanieBaeArt",
      "estimatedRevenueUSD": 84656,
      "estimatedRevenueDisplay": "$84,656",
      "listingUrl": "https://www.etsy.com/listing/1570282475/..."
    }
  ],
  "topByUnits": [
    {
      "listingId": "1234567890",
      "title": "Sticker Pack — 10 Pieces",
      "shopName": "StickerWorld",
      "estimatedUnitsSold": 12500,
      "estimatedUnitsSoldDisplay": "12,500",
      "listingUrl": "https://www.etsy.com/listing/1234567890/..."
    }
  ],
  "shipsFromBreakdown": [
    { "country": "US", "listings": 28, "share": 0.667, "sharePct": 67, "sharePctDisplay": "67%", "totalEstimatedRevenueUSD": 856200, "totalEstimatedRevenueDisplay": "$856,200" },
    { "country": "GB", "listings": 9, "share": 0.214, "sharePct": 21, "sharePctDisplay": "21%", "totalEstimatedRevenueUSD": 312800, "totalEstimatedRevenueDisplay": "$312,800" },
    { "country": "CA", "listings": 5, "share": 0.119, "sharePct": 12, "sharePctDisplay": "12%", "totalEstimatedRevenueUSD": 115500, "totalEstimatedRevenueDisplay": "$115,500" }
  ]
}
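
As a sketch of how the seller-side aggregate relates to per-listing records, a shipsFromBreakdown-style summary can be rebuilt from the dataset (hypothetical helper; the actor computes this for you):

```javascript
// Group per-listing records by shipsFromCountry and derive shares.
function shipsFromBreakdown(records) {
  const byCountry = new Map();
  for (const r of records) {
    const key = r.shipsFromCountry ?? 'unknown';
    const agg = byCountry.get(key) ?? { country: key, listings: 0, totalEstimatedRevenueUSD: 0 };
    agg.listings += 1;
    agg.totalEstimatedRevenueUSD += r.estimatedRevenueUSD;
    byCountry.set(key, agg);
  }
  const total = records.length;
  return [...byCountry.values()]
    .map((a) => ({ ...a, share: a.listings / total, sharePct: Math.round((a.listings / total) * 100) }))
    .sort((a, b) => b.listings - a.listings);
}
```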

Caveat on shipsFromBreakdown. This is a seller-side distribution: which countries the listings ship from. It is not a buyer-country / sales-destination split. Etsy does not expose buyer-country sales mix on public pages; only the shop owner can see that via Etsy Stats. The breakdown is still useful for niche research (e.g. "is this niche dominated by US sellers or international?") — just don't market it as buyer demographics.

Input examples

One specific listing

{
  "mode": "listing",
  "listingUrls": [
    { "url": "https://www.etsy.com/listing/1234567890/example-product" }
  ],
  "maxResults": 1
}

Cost: $0.05.

Audit a competitor's full shop

{
  "mode": "shop",
  "shopUrl": "https://www.etsy.com/shop/CompetitorShopName",
  "maxResults": 50,
  "minReviews": 1
}

Cost: up to $2.50 (50 × $0.05).

Niche research

{
  "mode": "search",
  "searchQuery": "minimalist wall art",
  "minReviews": 5,
  "minEstimatedRevenue": 500,
  "maxResults": 100
}

Cost: up to $5.00. Filters drop low-signal listings before they reach the dataset, so actual charge is usually lower.

Track a shop weekly

{
  "mode": "shop",
  "shopUrl": "https://www.etsy.com/shop/MyShopToTrack",
  "trackOverTime": true,
  "maxResults": 50
}

Run on a weekly schedule. Each record gains a delta block: reviewsDelta, estimatedSalesDelta, salesPerDay, daysSinceLast. First run establishes the baseline; subsequent runs compute growth.
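
Under the hood, a delta block can be derived from two snapshots roughly like this (sketch with field names mirroring the output schema; `computeDelta` is a hypothetical helper):

```javascript
// Diff the current record against the previous snapshot of the same listing.
function computeDelta(prev, curr) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysSinceLast = (Date.parse(curr.scrapedAt) - Date.parse(prev.scrapedAt)) / msPerDay;
  const reviewsDelta = curr.listingReviews - prev.listingReviews;
  const estimatedSalesDelta = curr.estimatedUnitsSold - prev.estimatedUnitsSold;
  return {
    reviewsDelta,
    estimatedSalesDelta,
    daysSinceLast: Math.round(daysSinceLast),
    salesPerDay: daysSinceLast > 0 ? +(estimatedSalesDelta / daysSinceLast).toFixed(2) : null,
  };
}
```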

How the estimate works

  1. Fetch the listing page → extract listingReviews, badges, recent review timestamps, ships-from country.

  2. Fetch the shop page (cached once per run per shop) → extract shopTotalSales, shopTotalReviews, shopAge, shopLocation.

  3. shopReviewRate = shopTotalReviews / shopTotalSales, clamped to [0.05, 0.50] to suppress wild estimates from tiny samples. Falls back to 0.20 (Etsy-wide observed midpoint) when shop totals are unavailable.

  4. Price-tier adjustment. Cheap items get fewer reviews per sale than expensive ones, even within the same shop. The shop rate is multiplied by a price-tier coefficient:

    | Listing price (USD) | Coefficient | Why |
    | --- | --- | --- |
    | < $20 | × 0.70 | Impulse buyers leave fewer reviews |
    | $20 – $100 | × 1.00 | Baseline |
    | $100 – $300 | × 1.20 | More engaged buyers |
    | > $300 | × 1.40 | High-ticket reviewers are the most diligent |

    Result is effectiveReviewRate, also clamped to [0.05, 0.50].

  5. estimatedUnitsSold = round(listingReviews / effectiveReviewRate).

  6. estimatedRevenueUSD = estimatedUnitsSold × priceUSD.

  7. Adaptive confidence band:

    • ±25 % when shop totals are real (measured) AND listing has 50+ reviews → confidence: "high"
    • ±35 % when shop totals are real but listing has 10–49 reviews → confidence: "medium"
    • ±50 % when shop totals are assumed (20 % default) OR listing has < 10 reviews → confidence: "low"

    The confidence field is sortable for filtering; confidenceNote explains the math in plain English.

  8. Trending uplift. When the listing's review velocity flags it as trending = true (6+ of the last 10 reviews within 90 days), the band is shifted asymmetrically: the low edge tightens by × 0.95 and the high edge widens by × 1.15. This reflects the empirical observation that trending listings tend to over-perform their historical-average estimate.

  9. Velocity signals. Two derived fields turn lifetime numbers into per-year flows:

    • salesPerYearShop = shopTotalSales / parseShopAgeYears — average annual shop pace.
    • listingSalesPerYearEstimate = estimatedUnitsSold / listingAgeYears, where listingAgeYears is derived from the oldest review date in JSON-LD. Tells you whether a high-revenue listing is an active winner or a legacy product winding down.
  10. Niche calibration (two-pass estimation). When a run touches 3+ distinct shops, the actor switches to a two-pass estimator:

    • Pass 1 scrapes every listing + shop in parallel and stashes the raw data — no estimates yet.
    • Between passes, it computes nicheReviewRate = median shop review rate across distinct shops in the run.
    • Pass 2 estimates each listing with the shop rate blended toward the niche: blendedShopRate = 0.7 × shopRate + 0.3 × nicheRate. Outlier shops (e.g. one at 8 % in a niche where median is 22 %) are pulled toward the niche norm, reducing per-listing estimation error.
    • The blend is only applied when shop totals are measured (not assumed). Per-listing records flag nicheBlendApplied: true and the confidenceNote describes the blend.
    • With fewer than 3 distinct shops (typical for listing mode with one URL), the blend is skipped — rate stays as-is.
  11. Bayesian observed rate (only with trackOverTime: true, kicks in after 4+ weekly snapshots per shop). Each run appends a per-shop snapshot of (shopTotalSales, shopTotalReviews) to KV. Once 4+ snapshots exist, the actor computes observedRate = (lastReviews − firstReviews) / (lastSales − firstSales) — i.e. the actual review rate during the tracked period, not the lifetime average. This observed rate is then blended with the lifetime rate as posteriorShopRate = 0.6 × observed + 0.4 × lifetime and used as the shop's effective rate for the rest of the pipeline. Most accurate signal we have. Per-record fields observedReviewRate, observedReviewRatePct, and observedRateSampleCount expose what fired.

  12. AI plausibility check (default on; toggle via plausibilityCheck: false to skip). After estimation, each record is sanity-checked by Claude Haiku 4.5 against red flags like estimatedUnitsSold > shopTotalSales, revenue > $5M, price ≤ 0, etc. Output fields plausibility ("plausible" / "suspicious" / null) and plausibilityReason flag outliers for human review without blocking them from the dataset.

  13. trending = true when 6+ of the last 10 reviews fall within 90 days.

  14. dormant = true when no review timestamps fall within the last 180 days.
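
The numbered steps above can be condensed into a small sketch (step numbers refer to the list; function names, boundary handling, and the trending-band interpretation are assumptions, not the actor's actual code):

```javascript
// Step 3: clamp to [0.05, 0.50]; fall back to 0.20 when shop totals are missing.
const clampRate = (r) => Math.min(0.50, Math.max(0.05, r));
function shopReviewRate(shopTotalReviews, shopTotalSales) {
  if (!shopTotalSales) return { rate: 0.20, measured: false };
  return { rate: clampRate(shopTotalReviews / shopTotalSales), measured: true };
}

// Step 4: price-tier coefficient from the table above.
function priceTierCoefficient(priceUSD) {
  if (priceUSD < 20) return 0.70;
  if (priceUSD <= 100) return 1.00;
  if (priceUSD <= 300) return 1.20;
  return 1.40;
}

// Step 10: blend the shop rate toward the niche median (3+ distinct shops).
const blendWithNiche = (shopRate, nicheRate) => 0.7 * shopRate + 0.3 * nicheRate;

// Step 11: posterior rate once 4+ tracked snapshots exist.
const posteriorShopRate = (observedRate, lifetimeRate) =>
  0.6 * observedRate + 0.4 * lifetimeRate;

// Steps 5–8: central estimate plus adaptive band with trending uplift.
function estimate({ listingReviews, priceUSD, rate, measured, trending }) {
  const effectiveRate = clampRate(rate * priceTierCoefficient(priceUSD));
  const units = Math.round(listingReviews / effectiveRate);
  let half, confidence;
  if (measured && listingReviews >= 50) { half = 0.25; confidence = 'high'; }
  else if (measured && listingReviews >= 10) { half = 0.35; confidence = 'medium'; }
  else { half = 0.50; confidence = 'low'; }
  // One reading of step 8: tighten the low half-width, widen the high one.
  const lowHalf = trending ? half * 0.95 : half;
  const highHalf = trending ? half * 1.15 : half;
  return {
    estimatedUnitsSold: units,
    estimatedRevenueUSD: Math.round(units * priceUSD),
    estimatedSalesLow: Math.round(units * (1 - lowHalf)),
    estimatedSalesHigh: Math.round(units * (1 + highHalf)),
    confidence,
  };
}
```

Running the worked example through this sketch (5 reviews, shop 1,200 / 257, $24.50) reproduces the 23-unit, $564 central estimate at low confidence.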

What we deliberately did NOT build (and why)

Two improvements were considered for v2 but skipped on engineering principle, not laziness:

  • Calibration corpus. Validating the algorithm's coefficients against a curated set of Etsy shops with publicly disclosed lifetime sales would let us tune coefficients with statistical confidence. Skipped because the realistic public corpus is ~10–15 shops (interviews, podcasts, indie-maker tweets) — too small for confident tuning, and self-reported numbers are noisy. The right way to do this is to collect anonymised real customer run outputs over months, backtest against any sellers who later publicly disclose, and refine quarterly. Pre-launch this is impossible.
  • Category-specific default rates. Same data problem: without ground truth or large samples, hard-coding per-category rates would just be more wrong-confidence numbers stacked on existing ones. Price-tier adjustment (already implemented) captures most of what category would, more reliably.

If you're a researcher with verified shop sales data and would like to help calibrate, the source is open on Apify — open an issue.

Limitations

  • Estimates, not facts. Etsy never reveals true per-listing sales. Treat output as directional intelligence with a stated confidenceNote.
  • Mixed-price shops skew per-listing estimates. A shop selling one $5 sticker and one $500 sculpture distorts both estimates toward the average review rate. The clamp [0.05, 0.50] cushions this but does not eliminate it.
  • Currency rates refresh once per run and are persisted to KV CURRENCY_RATES.json for transparency.
  • tags is best-effort and frequently empty. Etsy serves tag data inline only on some locale variants. On the English (US) variant tags are lazy-loaded via XHR after page render, and the underlying scrape captures pre-hydration HTML — so the tags array is empty in that case. The translator never invents tags from the title or description; what you see is what Etsy actually shipped in the initial HTML. Estimates do not depend on tags.
  • listingFavourites is no longer publicly exposed (Etsy removed it around 2024). The field has been removed from the actor.
  • Buyer-country distribution is not available. shipsFromBreakdown reports where listings ship from, not where buyers are. Etsy only exposes buyer-country sales mix to the shop owner via Etsy Stats.

FAQ

How accurate is the estimate?

The band is adaptive: ±25% when both shop totals are measured and the listing has 50+ reviews (high confidence), ±35% when the listing has 10–49 reviews (medium), and ±50% when shop totals are assumed or the listing has fewer than 10 reviews (low). Real-world tests show the central estimate falls within the reported band ~80% of the time across shops with measured totals. For tiny shops or zero-review listings accuracy collapses; confidenceNote always reports the tier (high / medium / low) so you can filter accordingly.

Why doesn't Etsy show sales counts directly?

Etsy removed per-listing sales counts to discourage clones. Public reviews, shop totals, and badges remain — exactly the signals this actor consumes.

Do I need a Bright Data account or Anthropic key?

No. Everything is included in the $0.05 per-listing price. The publisher absorbs all infrastructure (anti-bot bypass, translation, proxies, compute). You just run.

Why "$0.05" and not a tiny number with a subscription?

The actor uses Apify's Pay-Per-Event model — you pay only when a complete listing record lands in your dataset. No subscription, no setup fee, no compute charges. A failed scrape, a filtered-out listing, or a run that hits its maxResults cap costs you nothing. Compared with subscription tools like eRank ($10/mo) or EtsyHunt ($20/mo), this actor breaks even at ~200-400 listings per month — and gives you raw structured data instead of a locked-in dashboard.
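
The break-even arithmetic is simple (illustrative one-liner):

```javascript
// Listings per month at which $0.05 pay-per-event matches a flat subscription fee.
const breakEvenListings = (monthlyFeeUSD) => Math.round(monthlyFeeUSD / 0.05);

breakEvenListings(10); // 200
breakEvenListings(20); // 400
```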

Can I run this against thousands of listings?

Yes. A 5,000-listing run costs $250.00 flat (5,000 × $0.05; see the pricing table). Increase the concurrency input (default 5, max 20) to add parallel scraping requests and speed up the run. Split into batches if a single Apify run exceeds memory or time limits.

Does the actor scrape in parallel?

Yes. Listings are processed in batches of concurrency (default 5) using Promise.allSettled, with shared shop-fetch deduping (concurrent listings from the same shop reuse a single in-flight shop request). At default concurrency a 50-listing run finishes in roughly 4–6 minutes versus ~25 minutes sequentially.
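
A minimal sketch of that batching pattern (hypothetical helper names; the actor's internals may differ):

```javascript
// Process items in fixed-size batches with Promise.allSettled so one
// failed scrape never aborts its batch.
async function processInBatches(items, concurrency, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    results.push(...await Promise.allSettled(batch.map(worker)));
  }
  return results;
}

// Dedupe concurrent shop fetches: listings from the same shop share
// a single in-flight promise instead of refetching the shop page.
const inFlight = new Map();
function fetchShopOnce(shopUrl, fetchShop) {
  if (!inFlight.has(shopUrl)) inFlight.set(shopUrl, fetchShop(shopUrl));
  return inFlight.get(shopUrl);
}
```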

Can the actor tell me which countries are buying most?

No — Etsy does not expose buyer-country sales splits on public pages. Only the shop owner sees that via Etsy Stats. The shipsFromBreakdown aggregate in RUN_SUMMARY.json is a seller-side distribution (where the listings ship from), not a buyer-country mix.

Why is tags sometimes empty?

Etsy A/B tests its render path. On the English (US) variant tags are lazy-loaded via XHR after the page loads; the scrape captures the pre-hydration HTML and tags are missing. On other locale variants Etsy inlines tags. The actor does not invent tags from the title — what you see is what Etsy shipped in the HTML. Estimates do not depend on tags.

How do I track changes over time?

Set trackOverTime: true and re-run with the same input on a schedule. Each record gains a delta block once a previous snapshot exists.

What if a scrape fails?

Failed scrapes are NOT charged. You only pay for records successfully delivered to your dataset.

Can I export to Google Sheets / Airtable / a webhook?

Yes — every Apify dataset can be exported via the Console UI or piped through Apify Integrations to Google Sheets, Airtable, Slack, Make, Zapier, or a custom webhook. See Apify Integrations docs.

How this is built

The actor stack — fully managed, you don't pay or configure any of it:

  • Bright Data Web Unlocker for Akamai/Cloudflare/PerimeterX bypass on Etsy listing and shop pages.
  • Claude Haiku 4.5 for translating cosmetic fields (title, category, tags, badges) when Etsy serves a non-English variant. Numeric fields are never sent to the LLM.
  • Multilingual regex in the shop scraper extracts numeric stats (sales, reviews, shop age) using patterns covering English, Spanish, French, German, Italian, Portuguese, and Dutch — so estimates work regardless of routing locale.
  • JSON-LD parsing for Product, BreadcrumbList, and Organization schemas — locale-independent and survives Etsy's frequent DOM redesigns.
  • Live currency rates fetched once per run from open.er-api.com (free, no key needed).
  • Shop-level cache prevents fetching the same shop twice per run, and concurrent listings from the same shop reuse a single in-flight request.
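
For illustration, the once-per-run FX step might look like this (`toUSD` is a hypothetical helper; `rates` maps currency code to units per USD, as in the open.er-api.com response for a USD base):

```javascript
// Convert a listing price to USD, rounded to 2 decimal places.
function toUSD(amount, currency, rates) {
  if (currency === 'USD') return amount;
  const perUSD = rates[currency];
  if (!perUSD) return null; // unknown currency: priceUSD stays null
  return Math.round((amount / perUSD) * 100) / 100;
}
```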

If you're a developer who wants to fork this actor and run your own infrastructure, the source is on Apify and accepts custom brightDataApiKey and anthropicApiKey overrides via the input form (advanced).

Support

Questions, feature requests, or issues: open a ticket in the actor's Issues tab on Apify, or message the publisher directly.