Amazon review crawler

webscrapewizard/amazon-review-crawler

Pay $5.00 for 1,000 results

This Actor is under maintenance and may be unreliable until maintenance is complete.

Extract detailed Amazon product reviews without the API, including ratings, descriptions, and reactions. Enter product URLs to scrape and download data in formats like JSON, CSV, or Excel. Optionally, filter by keywords for comprehensive results.
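
If you would rather run the Actor from code than from the Apify Console, you can call it with the Apify API client for Python. The sketch below is only illustrative: the run_input field names (productUrls, maxReviews, filterKeyword) are assumptions, so check the Actor's input tab for the actual schema.

    from apify_client import ApifyClient

    # Initialize the client with your Apify API token.
    client = ApifyClient('<YOUR_APIFY_TOKEN>')

    # Illustrative input only -- the field names are assumptions,
    # not the Actor's documented input schema.
    run_input = {
        'productUrls': [{'url': 'https://www.amazon.com/dp/B0EXAMPLE'}],
        'maxReviews': 100,
        'filterKeyword': 'battery',
    }

    # Start the Actor and wait for the run to finish.
    run = client.actor('webscrapewizard/amazon-review-crawler').call(run_input=run_input)

    # Read the scraped reviews from the run's default dataset.
    for item in client.dataset(run['defaultDatasetId']).iterate_items():
        print(item)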

BeautifulSoup and HTTPX template

A template for scraping data from web pages enqueued from a starting URL, written in Python. The URL of the web page is passed in via the input, which is defined by the input schema. The template uses HTTPX to get the HTML of the page and Beautiful Soup to parse the data from it. Enqueued URLs are kept in the request queue, and the extracted data are stored in a dataset where you can easily access them.
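
The input defined by the input schema consists of a list of start URLs and a maximum crawl depth (both are described under "How it works" below). As the Python dict returned by Actor.get_input(), it looks roughly like this; the concrete values are only an example, not the template's defaults:

    # Example shape of the Actor input as the template receives it.
    actor_input = {
        'start_urls': [{'url': 'https://apify.com'}],  # URLs to start scraping from
        'max_depth': 1,                                # how many levels of nested links to follow
    }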

Included features

  • Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
  • Input schema - define and easily validate a schema for your Actor's input
  • Request queue - a queue into which you can put the URLs you want to scrape
  • Dataset - store structured data where each object stored has the same attributes
  • HTTPX - library for making asynchronous HTTP requests in Python
  • Beautiful Soup - a Python library for pulling data out of HTML and XML files

How it works

This code is a Python script that uses HTTPX and Beautiful Soup to scrape web pages and extract data from them. Here's a brief overview of how it works:

  • The script reads the input data from the Actor instance, which is expected to contain a start_urls key with a list of URLs to scrape and a max_depth key with the maximum depth of nested links to follow.
  • The script enqueues the starting URLs in the default request queue and sets their depth to 0.
  • The script processes the requests in the queue one by one, fetching the URL using HTTPX and parsing it using BeautifulSoup.
  • If the depth of the current request is less than the maximum depth, the script looks for nested links in the page and enqueues their targets in the request queue with an incremented depth.
  • The script extracts the desired data from the page (in this case, all the links) and pushes it to the default dataset using the push_data method of the Actor instance.
  • The script catches any exceptions that occur during the scraping process and logs an error message using the Actor.log.exception method.
  • This code demonstrates how to use Python and the Apify SDK to scrape web pages and extract specific data from them (a condensed sketch of the script is shown below).
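
A condensed sketch of these steps is shown below. It assumes a recent version of the Apify SDK for Python; the exact names, defaults, and error handling in the actual template code may differ slightly.

    from urllib.parse import urljoin

    from bs4 import BeautifulSoup
    from httpx import AsyncClient

    from apify import Actor, Request


    async def main() -> None:
        async with Actor:
            # Read the Actor input: the start URLs and the maximum crawl depth.
            actor_input = await Actor.get_input() or {}
            start_urls = actor_input.get('start_urls', [])
            max_depth = actor_input.get('max_depth', 1)

            # Enqueue the start URLs in the default request queue with a depth of 0.
            request_queue = await Actor.open_request_queue()
            for start_url in start_urls:
                request = Request.from_url(start_url['url'], user_data={'depth': 0})
                await request_queue.add_request(request)

            # Process the requests in the queue one by one.
            while request := await request_queue.fetch_next_request():
                url = request.url
                depth = request.user_data['depth']
                try:
                    # Fetch the page with HTTPX and parse it with Beautiful Soup.
                    async with AsyncClient() as client:
                        response = await client.get(url, follow_redirects=True)
                    soup = BeautifulSoup(response.text, 'html.parser')

                    # Collect the links on the page; below the maximum depth,
                    # enqueue their targets with an incremented depth.
                    links = [urljoin(url, a['href']) for a in soup.find_all('a', href=True)]
                    if depth < max_depth:
                        for link_url in links:
                            if link_url.startswith(('http://', 'https://')):
                                new_request = Request.from_url(link_url, user_data={'depth': depth + 1})
                                await request_queue.add_request(new_request)

                    # Push the extracted data (here, the page's links) to the default dataset.
                    await Actor.push_data({'url': url, 'links': links})
                except Exception:
                    Actor.log.exception(f'Failed to scrape {url}.')
                finally:
                    # Mark the request as handled so it is not processed again.
                    await request_queue.mark_request_as_handled(request)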

Resources

Getting started

For complete information see this article. In short, you will:

  1. Build the Actor
  2. Run the Actor

Pull the Actor for local development

If you would like to develop locally, you can pull the existing Actor from the Apify Console using the Apify CLI:

  1. Install apify-cli

    Using Homebrew

    brew install apify-cli

    Using NPM

    npm -g install apify-cli
  2. Pull the Actor by its unique <ActorId>, which is one of the following:

    • unique name of the Actor to pull (e.g. "apify/hello-world")
    • or ID of the Actor to pull (e.g. "E2jjCZBezvAZnX8Rb")

    You can find both by clicking on the Actor title at the top of the page, which will open a modal containing both the Actor's unique name and its ID.

    This command will copy the Actor into the current directory on your local machine.

    apify pull <ActorId>
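
    Once pulled, you can run the Actor locally with the Apify CLI (assuming the project's Python dependencies are installed):

    apify run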

Documentation reference

To learn more about Apify and Actors, take a look at the following resources:

Developer
Maintained by Community

Actor Metrics

  • 31 monthly users
  • 2 stars
  • >99% runs succeeded
  • Created in May 2024
  • Modified a month ago