Selenium + Chrome
Scraper example built with Selenium and a headless Chrome browser to scrape a website and save the results to storage. A popular alternative to Playwright.
from urllib.parse import urljoin

from apify import Actor
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.common.by import By

# To run this Actor locally, you need to have the Selenium Chromedriver installed.
# https://www.selenium.dev/documentation/webdriver/getting_started/install_drivers/
# When running on the Apify platform, it is already included in the Actor's Docker image.


async def main():
    async with Actor:
        # Read the Actor input
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])
        max_depth = actor_input.get('max_depth', 1)

        if not start_urls:
            Actor.log.info('No start URLs specified in actor input, exiting...')
            await Actor.exit()

        # Enqueue the starting URLs in the default request queue
        default_queue = await Actor.open_request_queue()
        for start_url in start_urls:
            url = start_url.get('url')
            Actor.log.info(f'Enqueuing {url} ...')
            await default_queue.add_request({'url': url, 'userData': {'depth': 0}})

        # Launch a new Selenium Chrome WebDriver
        Actor.log.info('Launching Chrome WebDriver...')
        chrome_options = ChromeOptions()
        if Actor.config.headless:
            chrome_options.add_argument('--headless')
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        driver = webdriver.Chrome(options=chrome_options)

        # Verify that the browser launched correctly by loading a known page
        driver.get('http://www.example.com')
        assert driver.title == 'Example Domain'

        # Process the requests in the queue one by one
        while request := await default_queue.fetch_next_request():
            url = request['url']
            depth = request['userData']['depth']
            Actor.log.info(f'Scraping {url} ...')

            try:
                # Open the URL in the Selenium WebDriver
                driver.get(url)

                # If we haven't reached the max depth,
                # look for nested links and enqueue their targets
                if depth < max_depth:
                    for link in driver.find_elements(By.TAG_NAME, 'a'):
                        link_href = link.get_attribute('href')
                        link_url = urljoin(url, link_href)
                        if link_url.startswith(('http://', 'https://')):
                            Actor.log.info(f'Enqueuing {link_url} ...')
                            await default_queue.add_request({
                                'url': link_url,
                                'userData': {'depth': depth + 1},
                            })

                # Push the title of the page into the default dataset
                title = driver.title
                await Actor.push_data({'url': url, 'title': title})
            except Exception:
                Actor.log.exception(f'Cannot extract data from {url}.')
            finally:
                # Mark the request as handled so it is not processed again
                await default_queue.mark_request_as_handled(request)

        driver.quit()
Selenium & Chrome template
A template example built with Selenium and a headless Chrome browser to scrape a website and save the results to storage. The URL of the web page is passed in via input, which is defined by the input schema. The template uses the Selenium WebDriver to load and process the page. Enqueued URLs are stored in the default request queue, and the scraped data is stored in the default dataset, where you can easily access it.
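For illustration, the input that Actor.get_input() resolves to could look roughly like the dictionary below. This is only a sketch; the exact shape and defaults are defined by the template's input schema, and the second start URL and the max_depth value are purely illustrative.

# A sketch of the input this template expects, assuming the default schema
# with 'start_urls' and 'max_depth' (the second URL is a hypothetical example):
actor_input = {
    'start_urls': [
        {'url': 'https://apify.com'},
        {'url': 'https://example.com'},  # hypothetical second start URL
    ],
    'max_depth': 2,  # follow links up to two levels deep
}

start_urls = actor_input.get('start_urls', [])
max_depth = actor_input.get('max_depth', 1)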
Included features
- Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
- Input schema - define and easily validate a schema for your Actor's input
- Request queue - a queue into which you can put the URLs you want to scrape
- Dataset - store structured data where each object stored has the same attributes (a short sketch of how the queue and dataset work together follows this list)
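As a rough illustration of those last two features, the following minimal sketch shows the request queue and dataset primitives from the Apify SDK for Python on their own, with the Selenium-specific work elided. It is not the full template, only the skeleton of the queue-and-dataset flow.

# A minimal sketch (not the full template) of the request queue and dataset
# primitives from the Apify SDK for Python, without any Selenium code:
from apify import Actor


async def main():
    async with Actor:
        queue = await Actor.open_request_queue()
        await queue.add_request({'url': 'https://apify.com'})

        while request := await queue.fetch_next_request():
            # ... fetch and parse request['url'] here ...
            await Actor.push_data({'url': request['url'], 'title': 'Example title'})
            await queue.mark_request_as_handled(request)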
How it works
This code is a Python script that uses Selenium to scrape web pages and extract data from them. Here's a brief overview of how it works:
- The script reads the input from the Actor instance, which is expected to contain a start_urls key with a list of URLs to scrape and a max_depth key with the maximum depth of nested links to follow.
- The script enqueues the starting URLs in the default request queue and sets their depth to 0.
- The script processes the requests in the queue one by one, loading each URL in the Selenium WebDriver and parsing the rendered page.
- If the depth of the current request is less than the maximum depth, the script looks for nested links on the page and enqueues their targets in the request queue with an incremented depth (see the sketch after this list).
- The script extracts the desired data from the page (in this case, the title of each page) and pushes it to the default dataset using the push_data method of the Actor instance.
- The script catches any exceptions that occur during scraping and logs an error message using the Actor.log.exception method.
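To show the link-enqueueing step on its own, here is a hedged sketch of a helper that collects the absolute link targets from a loaded page. The collect_links name is hypothetical and not part of the template, and the sketch adds a guard for anchors without an href attribute, which the template instead handles via its try/except block.

from urllib.parse import urljoin

from selenium.webdriver.common.by import By


# collect_links is a hypothetical helper, not part of the template.
def collect_links(driver, url):
    """Return absolute http(s) link targets found on the currently loaded page."""
    links = []
    for link in driver.find_elements(By.TAG_NAME, 'a'):
        link_href = link.get_attribute('href')
        if not link_href:
            # Skip anchors without an href attribute
            continue
        link_url = urljoin(url, link_href)  # resolve relative hrefs against the page URL
        if link_url.startswith(('http://', 'https://')):
            links.append(link_url)
    return links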
Resources
- Selenium controlled Chrome example
- Selenium Grid: what it is and how to set it up
- Web scraping with Selenium and Python
- Cypress vs. Selenium for web testing
- Python tutorials in Academy
- Video guide on getting scraped data using Apify API
- A short guide on how to build web scrapers using code templates: web scraper template
Scrape a single page from a provided URL with Requests and extract data from the page's HTML with Beautiful Soup.
Example of a web scraper that uses Python Requests to scrape HTML from URLs provided as input, parses it using BeautifulSoup, and saves the results to storage.
Crawler example that uses headless Chrome driven by Playwright to scrape a website. Headless browsers render JavaScript and can help when you are getting blocked.
This example Scrapy spider scrapes page titles from URLs defined in an input parameter. It shows how to use the Apify SDK for Python and Scrapy pipelines to save results.
Empty template with the basic structure of an Actor built with the Apify SDK, which allows you to easily add your own functionality.
Example of a Puppeteer and headless Chrome web scraper. Headless browsers render JavaScript and are harder to block, but they're slower than plain HTTP.