Google People Also Ask Scraper

ib4ngz/google-people-also-ask-scraper

2 hours trial then $10.00/month - No credit card required now
Scrapes the questions that appear in the `People also ask` section of the Google search results page for the keyword you enter, expanding nested questions up to the maximum depth you specify.

Selenium & Chrome template

An example template built with Selenium and a headless Chrome browser that scrapes a website and saves the results to storage. The URL of the web page is passed in via input, which is defined by the input schema. The template uses the Selenium WebDriver to load and process the page. Enqueued URLs are stored in the default request queue, and the extracted data are stored in the default dataset, where you can easily access them.
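For example, a run input might look like the following sketch, using the start_urls and max_depth keys described under "How it works" below (the URL is a placeholder; the exact field names and defaults in the Actor's input schema may differ):

    {
        "start_urls": [{ "url": "https://apify.com" }],
        "max_depth": 1
    }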

Included features

  • Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
  • Input schema - define and easily validate a schema for your Actor's input (a sample schema sketch follows this list)
  • Request queue - a queue into which you can put the URLs you want to scrape
  • Dataset - a store for structured data in which each object has the same attributes
  • Selenium - a browser automation library
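As an illustration, a minimal input schema covering the two inputs above might look like the sketch below. This follows Apify's input schema format, but the titles, descriptions, and required fields are assumptions, not the Actor's actual schema file:

    {
        "title": "Scraper input",
        "type": "object",
        "schemaVersion": 1,
        "properties": {
            "start_urls": {
                "title": "Start URLs",
                "type": "array",
                "editor": "requestListSources",
                "description": "URLs to start scraping from"
            },
            "max_depth": {
                "title": "Maximum depth",
                "type": "integer",
                "editor": "number",
                "description": "How many levels of nested links to follow"
            }
        },
        "required": ["start_urls"]
    }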

How it works

This code is a Python script that uses Selenium to scrape web pages and extract data from them. Here's a brief overview of how it works; a condensed sketch of the whole loop follows the list:

  • The script reads the input data from the Actor instance, which is expected to contain a start_urls key with a list of URLs to scrape and a max_depth key with the maximum depth of nested links to follow.
  • The script enqueues the starting URLs in the default request queue and sets their depth to 1.
  • The script processes the requests in the queue one by one, loading each URL in the headless Chrome browser and parsing the rendered page with Selenium.
  • If the depth of the current request is less than the maximum depth, the script looks for nested links in the page and enqueues their targets in the request queue with an incremented depth.
  • The script extracts the desired data from the page (in this case, the title of each page) and pushes it to the default dataset using the push_data method of the Actor instance.
  • The script catches any exceptions that occur during the web scraping process and logs an error message using the Actor.log.exception method.
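Put together, the loop looks roughly like the following. This is a minimal sketch assuming the Apify SDK for Python and Selenium listed above, using the SDK's dict-style requests; the Actor's actual source may differ in details such as logging, defaults, and error handling:

    import asyncio
    from urllib.parse import urljoin

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    from apify import Actor


    async def main() -> None:
        async with Actor:
            # Read the Actor input defined by the input schema.
            actor_input = await Actor.get_input() or {}
            start_urls = actor_input.get('start_urls', [])
            max_depth = actor_input.get('max_depth', 1)

            # Enqueue the starting URLs at depth 1.
            request_queue = await Actor.open_request_queue()
            for start_url in start_urls:
                await request_queue.add_request({
                    'url': start_url['url'],
                    'userData': {'depth': 1},
                })

            # Launch a headless Chrome browser through Selenium.
            options = Options()
            options.add_argument('--headless')
            driver = webdriver.Chrome(options=options)

            # Process the queued requests one by one.
            while request := await request_queue.fetch_next_request():
                # With apify SDK 1.x requests are dicts; newer versions
                # return a Request model (request.url, request.user_data).
                url = request['url']
                depth = request['userData']['depth']
                try:
                    driver.get(url)

                    # Below the depth limit, enqueue nested links with depth + 1.
                    if depth < max_depth:
                        for link in driver.find_elements(By.TAG_NAME, 'a'):
                            href = link.get_attribute('href')
                            if href:
                                await request_queue.add_request({
                                    'url': urljoin(url, href),
                                    'userData': {'depth': depth + 1},
                                })

                    # Push the extracted data (the page title) to the dataset.
                    await Actor.push_data({'url': url, 'title': driver.title})
                except Exception:
                    Actor.log.exception(f'Cannot extract data from {url}.')
                finally:
                    await request_queue.mark_request_as_handled(request)

            driver.quit()


    if __name__ == '__main__':
        asyncio.run(main())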

Getting started

For complete information, see this article. To run the Actor, use the following command:

apify run

Deploy to Apify

Connect Git repository to Apify

If you've created a Git repository for the project, you can easily connect it to Apify:

  1. Go to the Actor creation page
  2. Click on the Link Git Repository button

Push the project from your local machine to Apify

You can also deploy the project from your local machine to Apify, without needing a Git repository.

  1. Log in to Apify. You will need to provide your Apify API Token to complete this action.

    apify login
  2. Deploy your Actor. This command will deploy and build the Actor on the Apify Platform. You can find your newly created Actor under Actors -> My Actors.

    apify push

Documentation reference

To learn more about Apify and Actors, take a look at the following resources:

  • Apify SDK for Python documentation: https://docs.apify.com/sdk/python
  • Apify platform documentation: https://docs.apify.com/platform

Maintained by Community
Actor metrics
  • 4 monthly users
  • 1 star
  • 100.0% runs succeeded
  • Created in Jun 2024
  • Modified about 2 months ago