Product Finder Crawler & Extractor
Pricing

from $0.50 / 1,000 product details

Developer

Datavault (Maintained by Community)

The Product Finder Crawler & Extractor is a versatile scraper designed to extract product information from virtually any website, with a focus on e-commerce. It leverages structured data formats such as Schema.org markup (JSON-LD and Microdata) and can also read product data embedded directly in inline <script> tags. The crawler prioritizes speed and efficiency by not rendering JavaScript, making it ideal for sites where product data is available in the initial HTML response.

Features

  • Comprehensive Product Discovery: Automatically identifies and extracts all products available on a target website.
  • Up-to-Date Pricing: Tracks and retrieves the latest price updates for products.
  • Multi-Country Price Comparison: Use proxy configuration to analyze product and price differences across countries.
  • Generic Extraction Engine: Compatible with any site using standard structured data (JSON-LD, Microdata) or basic HTML patterns.
  • Deep Crawling Capabilities: Optionally follows internal links to uncover additional products across the domain.
  • Configurable Crawl Limits: Control the maximum number of pages to manage depth and operational costs.
  • Request Delay Management: Adjust crawling speed to minimize server load and prevent throttling.
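As a sketch of the multi-country price comparison: run the actor once per country with a different proxy country, then compare the resulting datasets. The example below follows the usual Apify Proxy input shape; the apifyProxyCountry field is an assumption here, so check this actor's input schema to confirm it is passed through.

```json
{
  "startUrls": [{ "url": "https://www.example-store.com/product/123" }],
  "crawlSubpages": false,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyCountry": "DE"
  }
}
```

A second run with "apifyProxyCountry": "US" would then yield the same products as seen from the United States.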

Input Parameters

  • startUrls: An array of URLs to start the crawl.
  • crawlSubpages: If checked (default: true), the crawler will follow links found on the pages. If unchecked, only the Start URLs will be scraped.
  • maxPagesPerCrawl: The maximum number of pages to visit in a single run. Default is 100.
  • minRequestDelay: Minimum time in milliseconds to wait between requests (rate limiting). Default is 1000ms.
  • roam: If checked (default: false), the crawler will follow links to other domains.
  • allowSubdomains: If checked (default: false), the crawler will follow links to subdomains of the start URLs (e.g., blog.example.com).
  • proxyConfiguration: Apify Proxy configuration. Recommended for most e-commerce sites and crucial for avoiding blocking on sites like Amazon.

Output

The scraper outputs a dataset where each item represents a found product. Fields include:

  • url: The product page URL.
  • name: Product name.
  • description: Product description.
  • sku: Stock Keeping Unit.
  • brand: Brand name.
  • price: Product price.
  • currency: Currency code (e.g., USD, NOK).
  • image: URL of the product image.
  • availability: Availability status (e.g., InStock).
  • gtin: Global Trade Item Number (GTIN) such as EAN, UPC, ISBN.
  • rawSchema: The full extracted JSON-LD object for debugging or extra fields.
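A minimal sketch of post-processing the resulting dataset, assuming items use the field names listed above and that price is a numeric string (both assumptions; inspect your own dataset first):

```python
# Sketch: find the cheapest in-stock offer among scraped dataset items.
# Assumes each item has the "price" and "availability" fields listed above.

def cheapest_in_stock(items):
    """Return the lowest-priced item whose availability is InStock, or None."""
    in_stock = [
        i for i in items
        if i.get("availability") == "InStock" and i.get("price") is not None
    ]
    return min(in_stock, key=lambda i: float(i["price"]), default=None)

# Hypothetical items shaped like this actor's output:
items = [
    {"name": "Mug", "price": "12.50", "currency": "USD", "availability": "InStock"},
    {"name": "Cup", "price": "9.99", "currency": "USD", "availability": "OutOfStock"},
    {"name": "Glass", "price": "11.00", "currency": "USD", "availability": "InStock"},
]
print(cheapest_in_stock(items)["name"])  # Glass
```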

Sample Input

{
  "startUrls": [
    { "url": "https://www.example-store.com" }
  ],
  "crawlSubpages": true,
  "maxPagesPerCrawl": 200,
  "minRequestDelay": 500,
  "proxyConfiguration": {
    "useApifyProxy": true
  }
}

How it works

  1. The crawler visits the startUrls.
  2. It downloads the raw HTML content of the page.
  3. It parses the page content using various strategies:
    • Schema.org (JSON-LD, Microdata)
    • Site-specific HTML selectors (e.g., for Amazon or Temu, when the data is present in the initial HTML)
    • Global JavaScript objects embedded directly in <script> tags
  4. If a product is found, it extracts relevant fields and saves the item to the dataset.
  5. Based on crawlSubpages, roam, and allowSubdomains settings, it finds and adds new links to the crawl queue.
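The JSON-LD strategy from step 3 can be sketched with the standard library alone. This is an illustrative sketch of the general technique, not the actor's actual implementation:

```python
# Sketch: extract Schema.org Product objects from JSON-LD blocks in raw HTML.
import json
from html.parser import HTMLParser

class JsonLdParser(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ld = True

    def handle_endtag(self, tag):
        if tag == "script" and self._in_ld:
            self.blocks.append("".join(self._buf))
            self._buf = []
            self._in_ld = False

    def handle_data(self, data):
        if self._in_ld:
            self._buf.append(data)

def extract_products(html):
    """Parse JSON-LD blocks and return those whose @type is Product."""
    parser = JsonLdParser()
    parser.feed(html)
    products = []
    for block in parser.blocks:
        try:
            data = json.loads(block)
        except ValueError:
            continue
        for obj in data if isinstance(data, list) else [data]:
            if isinstance(obj, dict) and obj.get("@type") == "Product":
                products.append(obj)
    return products

html = """<html><head><script type="application/ld+json">
{"@type": "Product", "name": "Mug", "offers": {"price": "12.50", "priceCurrency": "USD"}}
</script></head><body></body></html>"""
print(extract_products(html)[0]["name"])  # Mug
```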

Common issues when there are no results

  • Many websites require the use of a proxy; without it, requests may be redirected to other pages or blocked entirely.
  • JavaScript rendering: This crawler does not execute client-side JavaScript. If product data is loaded dynamically after the initial page load (e.g., through AJAX calls or complex React/Vue applications that only send a skeleton HTML), this crawler might not find the data.
  • Some sites have strict anti-scraping measures that prevent generic crawlers from working. In these cases, a custom scraper may be necessary; this tool is designed as a general-purpose scraper for the wide range of e-commerce sites that expose product data in standard ways.
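To tell the JavaScript-rendering case apart from a blocking problem, a crude check on the raw (non-rendered) HTML can help. This is a rough heuristic of my own, not part of the actor:

```python
def has_static_product_data(html: str) -> bool:
    """Heuristic: True if raw HTML appears to contain Product JSON-LD.

    A crude substring check only; a page could also expose products via
    Microdata (itemtype="https://schema.org/Product"), which this ignores.
    """
    return "application/ld+json" in html and '"Product"' in html

static_page = '<script type="application/ld+json">{"@type": "Product"}</script>'
spa_shell = '<div id="root"></div><script src="/bundle.js"></script>'
print(has_static_product_data(static_page))  # True
print(has_static_product_data(spa_shell))    # False
```

If the raw HTML you fetch (with the same proxy settings) fails this check, the site most likely loads products client-side and this crawler will not find them.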

Tip

Start by putting a single URL from the site you want to scrape in startUrls and set crawlSubpages to false. Verify that you get a result before scaling up to a full crawl.
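The tip above corresponds to an input like this (the URL is a placeholder; maxPagesPerCrawl is optional here since only one URL is queued):

```json
{
  "startUrls": [{ "url": "https://www.example-store.com/some-product-page" }],
  "crawlSubpages": false,
  "maxPagesPerCrawl": 1
}
```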