
Free Amazon Product Scraper
Pricing
$15.00/month + usage

Scrape Amazon product data using URLs or ASINs. Extract price, stock, reviews, ratings, and more. Ideal for eCommerce research, pricing analysis, and competitor tracking. JSON/CSV output.
Amazon Product Scraper
What is Amazon Product Scraper and How Does It Work?
Amazon Product Scraper is a powerful web scraping tool that extracts product data from Amazon using product URLs. It collects product details like price, stock availability, ratings, reviews, seller info, and more — all exportable in JSON or CSV format.
Why Use Amazon Product Scraper?
Use this scraper to:
- Monitor pricing and stock for competitor products
- Track product ratings and reviews for market analysis
- Benchmark product performance within categories
- Automate data extraction for eCommerce analytics and advertising optimization
How to Use Amazon Product Scraper
- Enter one or more Amazon product URLs.
- Set the maximum number of product URLs to scrape.
- Run the scraper and download the extracted data.
- Optionally, access the data via API without logging into Apify.
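The last step, accessing the data via API, can be sketched with Python's standard library and Apify's generic run-sync-get-dataset-items endpoint. The actor ID and token below are placeholders you would replace with your own:

```python
import json
from urllib import request

# Placeholders -- substitute your own actor ID and Apify API token.
ACTOR_ID = "your-username~free-amazon-product-scraper"
API_TOKEN = "YOUR_APIFY_TOKEN"

# Apify's run-sync-get-dataset-items endpoint starts the actor and
# returns the resulting dataset items in a single response.
endpoint = (
    f"https://api.apify.com/v2/acts/{ACTOR_ID}"
    f"/run-sync-get-dataset-items?token={API_TOKEN}"
)

payload = json.dumps({
    "urls": ["https://www.amazon.com/dp/B08N5KWB9H"],
    "maxReviews": 10,
}).encode("utf-8")

req = request.Request(
    endpoint, data=payload, headers={"Content-Type": "application/json"}
)
# Uncomment to call the live API:
# with request.urlopen(req) as resp:
#     items = json.load(resp)
```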
Features
- Scrapes product details from Amazon product URLs
- Supports both single URL and multiple URLs as input
- Extracts comprehensive product data:
  - Title
  - Price
  - Images (all product images)
  - Variants and their prices
  - Ratings (average rating)
  - Reviews (configurable number of reviews)
  - Product details and specifications
  - Product description
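Scraped fields such as price usually arrive as display strings. A small illustrative helper (not the actor's actual parsing code) for turning them into numbers:

```python
import re

def parse_price(text):
    """Extract a numeric price from a display string like '$1,299.99'.

    Illustrative sketch -- the actor's real parsing logic may differ.
    """
    match = re.search(r"[\d,]+(?:\.\d+)?", text or "")
    if not match:
        return None
    return float(match.group().replace(",", ""))
```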
Usage
Input Configuration
The actor accepts the following input parameters:
{"urls": ["https://www.amazon.com/product-url-1","https://www.amazon.com/product-url-2"],"maxReviews": 10}
Running on Apify
To run this scraper on Apify, follow these steps:
1. Create an Apify account, if you don't have one already, at apify.com
2. Create a new actor in your Apify Console:
   - Go to Apify Console
   - Click on "Actors" in the left sidebar
   - Click the "Create new" button
   - Choose "Custom actor"
3. Upload your code:
   - You can either upload a ZIP file containing all the files in this repository
   - Or connect your GitHub repository if you've pushed this code there
4. Build the actor:
   - Once your code is uploaded, click the "Build" button
   - This will build a Docker image based on your Dockerfile
5. Run the actor:
   - Click the "Run" button
   - Enter your input configuration (URLs and maxReviews)
   - Start the run
6. Access the results:
   - Once the run is complete, you can access the scraped data in the "Dataset" tab
   - You can download the data in various formats (JSON, CSV, Excel, etc.)
Running Locally with Apify CLI
You can also run this actor locally using Apify CLI:
- Install Apify CLI: npm install -g apify-cli
- Navigate to the project directory
- Run apify run to start the actor locally
- Access the results in the apify_storage directory
Input Parameters
- urls (required): One or more Amazon product URLs to scrape. Can be a single URL string or an array of URLs.
- maxReviews (optional): Maximum number of reviews to scrape per product (default: 10, max: 100)
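Because urls accepts either a single string or an array, and maxReviews has a documented default and cap, input handling can be sketched like this (illustrative helper names, not the actor's actual code):

```python
def normalize_urls(value):
    """Accept a single URL string or a list of URLs, per the input spec."""
    if isinstance(value, str):
        return [value]
    return list(value)

def clamp_max_reviews(n, default=10, cap=100):
    """Apply the documented default (10) and maximum (100) for maxReviews."""
    if n is None:
        return default
    return max(0, min(int(n), cap))
```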
Example Input
{"urls": ["https://www.amazon.com/dp/B08N5KWB9H","https://www.amazon.com/dp/B08N5LNQCX"],"maxReviews": 20}
Output
The actor outputs a dataset with detailed information about each product. Each item in the dataset contains:
- url: The URL of the product page
- title: Product title
- price: Current product price
- images: Array of product image URLs
- details: Object containing product specifications and details
- description: Product description
- average_rating: Average product rating (out of 5)
- review_count: Total number of reviews
- variants: Array of product variants with their names and prices
- reviews: Array of product reviews, each containing:
  - title: Review title
  - rating: Individual review rating
  - date: Review date
  - text: Review content
  - reviewer: Reviewer name
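Once downloaded as JSON, the dataset is a list of such items. A short post-processing sketch, using hand-made sample items rather than real scraped data:

```python
def mean_rating(items):
    """Average of average_rating across items that have one."""
    rated = [i["average_rating"] for i in items
             if i.get("average_rating") is not None]
    return sum(rated) / len(rated) if rated else None

def cheapest(items):
    """Item with the lowest numeric price, or None if no item has one."""
    priced = [i for i in items if isinstance(i.get("price"), (int, float))]
    return min(priced, key=lambda i: i["price"]) if priced else None

# Hand-made sample mirroring the dataset fields above.
sample = [
    {"url": "https://www.amazon.com/dp/B08N5KWB9H", "title": "Example A",
     "price": 24.99, "average_rating": 4.5, "review_count": 120, "reviews": []},
    {"url": "https://www.amazon.com/dp/B08N5LNQCX", "title": "Example B",
     "price": 19.99, "average_rating": 4.0, "review_count": 80, "reviews": []},
]
```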
Local Development
Installation
- Clone this repository
- Install the required dependencies:
pip install -r requirements.txt
- Install Playwright browsers:
playwright install
Running Locally
To run the actor locally:
python main.py
You can provide input by creating a file named INPUT.json
in the same directory with your input configuration.
Deployment to Apify
To deploy this actor to the Apify platform:
1. Install the Apify CLI: npm install -g apify-cli
2. Log in to your Apify account: apify login
3. Deploy the actor: apify push
Anti-blocking Measures
This actor uses several techniques to avoid being blocked by Amazon:
- Random user agent rotation
- Headless browser with realistic behavior
- Proper request throttling and timing
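The first and third measures can be sketched as follows; the user-agent pool and delay values are illustrative, not the actor's actual configuration:

```python
import random

# Small illustrative pool -- a production scraper would rotate a larger,
# regularly refreshed list of real browser user agents.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]

def pick_user_agent(rng=random):
    """Pick a random user agent for the next request."""
    return rng.choice(USER_AGENTS)

def next_delay(base=2.0, jitter=1.5, rng=random):
    """Seconds to wait before the next request: a fixed base plus
    random jitter, so request timing does not look machine-regular."""
    return base + rng.uniform(0.0, jitter)
```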
Limitations
- The actor may not work for all Amazon regional domains
- Some product details may vary depending on the product category
- Amazon's website structure changes frequently, which may require updates to the scraping logic
License
This project is licensed under the MIT License - see the LICENSE file for details.