Product Hunt Scraper (w/ EMAILS)

Scrapes Product Hunt's launches for a specific date. Extracts product names, descriptions, maker info (name + links), and emails.

Pricing: from $3.50 / 1,000 scraped products
Rating: 5.0 (2)
Developer: Maxime
Actor stats: 4 bookmarked · 110 total users · 22 monthly active users · 5.3-hour issues response time · last modified 5 hours ago
Product Hunt launch intelligence for daily, category, and topic pages
Product Hunt Scraper turns Product Hunt list pages into a Product Hunt-specific flat launch-intelligence dataset. It supports daily, weekly, monthly, yearly, category, and topic runs, keeps cache-first modes cheap, and can optionally enrich the product website with title, meta description, visible text, and emails.

What this actor is for
- Cheap cached pulls for a single Product Hunt list page such as one daily leaderboard page, one category page, or one topic page.
- Historical runs across daily, weekly, monthly, or yearly Product Hunt periods.
- Broader browsing across Product Hunt categories and topics without collecting raw URLs by hand.
- Deeper launch research with optional comments, reviews, built-with tools, launch history, and website enrichment.
This actor does not use the repository's shared scraper output shape. It emits a Product Hunt-specific flat contract instead.
Fastest way to try it
- Leave every target section empty in the Input tab to scrape today's Product Hunt leaderboard in California time, or fill exactly one target section if you want a different daily, weekly, monthly, yearly, category, or topic run.
- The example input starts with `maxNbItemsToScrape: 25`. Lower it if you want a smaller first run, or leave the field blank if you want all available rows from each resolved page.
- Start with `scrapeMode: cached` for the cheapest single-page pull, or keep the default `fallback` mode if you want cache first but still want the actor to fill gaps live.
Today's daily leaderboard cache is normally topped up each day by a scheduled daily run that fills only the basic Product Hunt fields, not optional enrichments such as comments, reviews, built-with tools, launches, or website title/description/raw text/emails.
Inputs, defaults, and behavior
Leave every target section empty to scrape today's Product Hunt leaderboard in California time. If you fill a target section, fill exactly one and leave the other target sections empty.
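The "exactly one target section" rule above can be sketched in a few lines. This is an illustrative helper using the input field names from this README, not part of the actor's API; the actor performs this validation itself.

```python
# Map each target section to the input fields from this README that activate it.
TARGET_SECTIONS = {
    "daily": ("startDate", "endDate"),
    "weekly": ("startWeek", "endWeek"),
    "monthly": ("startMonth", "endMonth"),
    "yearly": ("startYear", "endYear"),
    "category": ("categorySlugs",),
    "topic": ("topicSlugs",),
}

def resolve_target(run_input: dict) -> str:
    """Return the single active target section, or 'today' when none is set."""
    active = [
        section
        for section, fields in TARGET_SECTIONS.items()
        if any(run_input.get(field) for field in fields)
    ]
    if len(active) > 1:
        raise ValueError(f"Fill exactly one target section, got: {active}")
    return active[0] if active else "today"
```

An empty input resolves to today's leaderboard; mixing `startDate` with `topicSlugs`, for example, would be rejected.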
Target sections
Target: Daily pages
- Use `startDate` and optional `endDate` in `YYYY-MM-DD` format.
- Leave `endDate` blank to scrape one day.

Target: Weekly pages
- Use `startWeek` and optional `endWeek` in `YYYY-WW` format.
- Weekly ranges may cross into a later year, such as `2026-52` through `2027-02`.

Target: Monthly pages
- Use `startMonth` and optional `endMonth` in `YYYY-MM` format.
- Monthly ranges may cross into a later year, such as `2026-11` through `2027-02`.

Target: Yearly pages
- Use integer `startYear` and optional integer `endYear` values such as `2026`.

Target: Category pages
- Use `categorySlugs` with one or more category slugs such as `ai-code-editors`.
- Category pages do not support ranges.

Target: Topic pages
- Use `topicSlugs` with one or more topic slugs such as `developer-tools`.
- Topic pages do not support ranges.
Options section

`maxNbItemsToScrape`
- Defaults to `0`, which means all available items for each resolved Product Hunt page.
- The example input starts at `25` so your first run stays smaller and faster.
- The cap applies per resolved page, not once for the whole run.
- Only items from the main Product Hunt leaderboard feed count toward this cap. Sidebar widgets such as trending or top-reviewed modules are never emitted.

`scrapeMode`
- Defaults to `fallback`.
- `cached`: return reusable cached Product Hunt rows only. This is the cheapest mode for a single page. It never live-crawls Product Hunt rows on a miss, but if `shouldScrapeWebsite` is on it may still fetch missing website fields without revisiting the Product Hunt product page.
- `fallback`: use reusable cached Product Hunt rows first, then fill only the missing work. Missing Product Hunt sections such as comments, reviews, built-with, or launches trigger a Product Hunt live crawl. Missing website enrichment can still be filled without a Product Hunt recrawl.
- `fresh`: ignore the Product Hunt cache for the current result set, crawl live, and refresh the cache for later runs. If `shouldScrapeWebsite` is on, website enrichment is refreshed too.

`shouldIncludePromotedListings`
- Defaults to `false`.
- When off, sponsored Product Hunt placements stay out of the output.

`shouldScrapeComments`
- Defaults to `false`.
- Scrapes all available Product Hunt comment pages.

`shouldScrapeReviews`
- Defaults to `false`.
- Scrapes Product Hunt reviews when available.

`shouldScrapeBuiltWith`
- Defaults to `false`.
- Scrapes the Product Hunt Built With section when the product page exposes it.

`shouldScrapeLaunches`
- Defaults to `false`.
- Scrapes visible Product Hunt launch history for the product.

`shouldScrapeWebsite`
- Defaults to `false`.
- Fetches the external website's title, meta description, visible text, and emails.
- This website enrichment is separate from the Product Hunt crawl path, so cached and fallback runs can still enrich website fields without reloading the Product Hunt product page.
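The documented defaults, collected in one place, can be overlaid on a partial input before a run. This is a local convenience sketch; the actor applies these same defaults on the platform side, and the helper name is not part of its API.

```python
# Defaults as documented in the Options section of this README.
INPUT_DEFAULTS = {
    "maxNbItemsToScrape": 0,   # 0 = all available items per resolved page
    "scrapeMode": "fallback",  # cache first, fill gaps live
    "shouldIncludePromotedListings": False,
    "shouldScrapeComments": False,
    "shouldScrapeReviews": False,
    "shouldScrapeBuiltWith": False,
    "shouldScrapeLaunches": False,
    "shouldScrapeWebsite": False,
}

def with_defaults(run_input: dict) -> dict:
    """Overlay user-provided values on the documented defaults."""
    return {**INPUT_DEFAULTS, **run_input}
```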
Cache behavior in plain English
- Missing non-toggleable Product Hunt fields such as `description` do not make a cached row unusable.
- Missing toggle-controlled Product Hunt sections such as `comments`, `reviews`, `builtWith`, or `launches` do make that cached row unusable for a `cached` run.
- In `fallback`, those Product Hunt-side gaps trigger a live Product Hunt recrawl for those rows.
- Missing website enrichment fields do not require a Product Hunt recrawl. Cached and fallback runs can fill `website.title`, `website.description`, `website.emails`, and `website.rawText` through website enrichment alone.
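The usability rule above boils down to one check. This sketch assumes a section that was never scraped is stored as `null` in the cached snapshot (an assumption about the cache format, not something this README states); website fields are deliberately excluded because they never disqualify a row.

```python
# Toggle inputs mapped to the cached-row sections they control.
TOGGLE_TO_FIELD = {
    "shouldScrapeComments": "comments",
    "shouldScrapeReviews": "reviews",
    "shouldScrapeBuiltWith": "builtWith",
    "shouldScrapeLaunches": "launches",
}

def cached_row_usable(row: dict, run_input: dict) -> bool:
    """True when every requested toggle-controlled section exists in the cached row."""
    return all(
        row.get(field) is not None
        for toggle, field in TOGGLE_TO_FIELD.items()
        if run_input.get(toggle)
    )
```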
Input examples
Example: no-target quick start for today's leaderboard
{"scrapeMode": "cached","shouldIncludePromotedListings": false,"shouldScrapeComments": false,"shouldScrapeReviews": false,"shouldScrapeBuiltWith": false,"shouldScrapeLaunches": false,"shouldScrapeWebsite": false}
Example: cheap same-day daily pull with an explicit date
{"startDate": "2026-03-27","maxNbItemsToScrape": 25,"scrapeMode": "cached","shouldIncludePromotedListings": false,"shouldScrapeComments": false,"shouldScrapeReviews": false,"shouldScrapeBuiltWith": false,"shouldScrapeLaunches": false,"shouldScrapeWebsite": false}
Example: category browsing with deeper research
{"categorySlugs": ["ai-code-editors", "developer-tools"],"maxNbItemsToScrape": 25,"scrapeMode": "fallback","shouldIncludePromotedListings": false,"shouldScrapeComments": true,"shouldScrapeReviews": true,"shouldScrapeBuiltWith": true,"shouldScrapeLaunches": true,"shouldScrapeWebsite": true}
Output shape
This actor emits a Product Hunt-specific flat contract. Every emitted row includes every required field. Missing scalar values are `null`; missing list values are `[]`.
Top-level fields on every row:
`cachedAt`, `isPromoted`, `thumbnailUrl`, `name`, `url`, `tagline`, `categories`, `followers`, `commentsCount`, `reviewsCount`, `launchesCount`, `dayRank`, `weekRank`, `monthRank`, `yearRank`, `launchDate`, `description`, `upvotesCount`, `links`, `xAccountHandle`, `imageUrls`, `tags`, `team`, `builtWith`, `launches`, `comments`, `reviews`, `website`
Nested contract highlights:
- `isPromoted` is always `true` or `false` and reflects whether the row came from a promoted slot in the main Product Hunt feed.
- `team` entries always include `avatarUrl`, `type`, `name`, `username`, `headline`, `about`, `followersCount`, and `links`.
- `builtWith` entries always include `name`, `url`, `tagline`, and `thumbnailUrl`.
- `launches` entries always include `title`, `url`, `tagline`, `launchDate`, rank fields, `upvotesCount`, `commentsCount`, and `thumbnailUrl`.
- `comments[].user` and `reviews[].user` always include avatar URLs, names, usernames, headlines, links, and count fields when available.
- `website` always includes `url`, `title`, `description`, `emails`, and `rawText`.
Output example
[{"cachedAt": "2026-03-27T19:34:12.000Z","isPromoted": false,"thumbnailUrl": "https://ph-files.imgix.net/example.png","name": "Agentation","url": "https://www.producthunt.com/products/agentation","tagline": "The visual feedback tool for AI agents","categories": ["AI Coding Agents"],"followers": 420,"commentsCount": 17,"reviewsCount": 0,"launchesCount": 1,"dayRank": 1,"weekRank": 19,"monthRank": 490,"yearRank": null,"launchDate": "2026-03-27","description": "Readable plain-text Product Hunt description","upvotesCount": 317,"links": ["https://www.agentation.com/"],"xAccountHandle": "@benjitaylor","imageUrls": ["https://ph-files.imgix.net/example-gallery.png"],"tags": ["Productivity", "Developer Tools", "Artificial Intelligence"],"team": [],"builtWith": [],"launches": [],"comments": [],"reviews": [],"website": {"url": "https://www.agentation.com","title": null,"description": null,"emails": [],"rawText": null}}]
Pricing
This actor uses four pay-per-event pricing events:
`scraped-product-cached`, `scraped-product-fresh`, `add-on-email-enrichment`, and `apify-actor-start`.
Pricing semantics:
- `scraped-product-cached` fires for each successfully emitted cached Product Hunt row.
- `scraped-product-fresh` fires for each successfully emitted fresh Product Hunt row.
- `add-on-email-enrichment` fires when `shouldScrapeWebsite` is on and the emitted row ends with one or more email addresses in `website.emails`, including when those emails came entirely from reused cache.
- `apify-actor-start` is a synthetic Apify event charged automatically by the platform when the actor starts. The number of start events scales with the run memory.
| Event name | When it triggers | Repo-configured price |
|---|---|---|
| `scraped-product-cached` | One cached Product Hunt row is emitted | $0.0035 |
| `scraped-product-fresh` | One fresh Product Hunt row is emitted | $0.02 |
| `add-on-email-enrichment` | Website scraping ends with one or more email addresses in `website.emails` | $0.01 |
| `apify-actor-start` | Automatic Apify start event | $0.00005 per event |
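A back-of-the-envelope cost estimate follows directly from the per-row prices in the table. This sketch excludes the `apify-actor-start` event, since its count depends on run memory; the helper is illustrative only.

```python
# Per-row event prices from the pricing table above.
PRICES = {
    "scraped-product-cached": 0.0035,
    "scraped-product-fresh": 0.02,
    "add-on-email-enrichment": 0.01,
}

def estimate_cost(cached_rows: int, fresh_rows: int, email_rows: int) -> float:
    """Sum the per-row event charges for a run, in USD."""
    return round(
        cached_rows * PRICES["scraped-product-cached"]
        + fresh_rows * PRICES["scraped-product-fresh"]
        + email_rows * PRICES["add-on-email-enrichment"],
        6,
    )
```

A fully cached 1,000-row pull with no email enrichment lands at the headline $3.50 / 1,000 products.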
Why run it on Apify?
- Run cheap cached daily pulls or broader category/topic discovery from the Console or API.
- Schedule ongoing Product Hunt monitoring without reopening the site by hand.
- Keep outputs in datasets and inspect logs when a specific Product Hunt page needs review.
- Connect finished runs to webhooks, automations, or your own workers without wrapping the scraper yourself.
FAQ
Which mode should I start with?
Use `cached` for the cheapest single-page pull when reusable cache is likely ready, especially for today's daily leaderboard. Use `fallback` when you want cache first but still want the actor to fill missing rows live. Use `fresh` when you explicitly want a live crawl and accept the higher cost.
Do I need to paste Product Hunt URLs?
No. Normal runs are driven by dates, periods, category slugs, or topic slugs. The actor builds the Product Hunt list-page URLs for you.
What happens if cached rows are missing launches, reviews, comments, or built-with data?
In `cached`, those rows are skipped and the actor warns clearly that their cached snapshot does not include the requested section. In `fallback`, the actor live-crawls Product Hunt for those rows.
What happens if cached rows are missing website fields?
If `shouldScrapeWebsite` is on, the actor can still fetch a missing website title, description, visible text, or emails without reloading the Product Hunt product page.
Does this actor still use the shared scraper output contract?
No. Product Hunt now uses its own flat output contract because it returns much richer Product Hunt-specific data such as ranks, tags, launches, comments, reviews, and a nested website object.
Where do I report a Product Hunt change or missing field?
Open the Issues page with the Product Hunt page you targeted, the input you used, and the output you expected.
Explore the rest of the collection
- Uneed Scraper - daily Uneed ladder scraping with promoted-listing filtering, maker links, and optional email enrichment
- TinySeed Scraper - TinySeed portfolio scraping with company descriptions and optional website emails
- Tiny Startups Scraper - Tiny Startups homepage scraping with promoted-card filtering and email enrichment
- Website Emails Scraper - shallow-crawl any list of URLs and emit one row per unique email found
Missing a feature or data?
File an issue and I'll add it in less than 24h.