BeautifulSoup
An example web scraper that uses Python HTTPX to fetch HTML from URLs provided in the input, parses it with BeautifulSoup, and saves the results to storage.
src/main.py
1"""This module defines the main entry point for the Apify Actor.
2
3Feel free to modify this file to suit your specific needs.
4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
6https://docs.apify.com/sdk/python
7"""
8
9from urllib.parse import urljoin
10
11from bs4 import BeautifulSoup
12from httpx import AsyncClient
13
14from apify import Actor, Request
15
16
17async def main() -> None:
18 """Main entry point for the Apify Actor.
19
20 This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.
21 Asynchronous execution is required for communication with Apify platform, and it also enhances performance in
22 the field of web scraping significantly.
23 """
24 async with Actor:
25 # Retrieve the Actor input, and use default values if not provided.
26 actor_input = await Actor.get_input() or {}
27 start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])
28 max_depth = actor_input.get('max_depth', 1)
29
30 # Exit if no start URLs are provided.
31 if not start_urls:
32 Actor.log.info('No start URLs specified in Actor input, exiting...')
33 await Actor.exit()
34
35 # Open the default request queue for handling URLs to be processed.
36 request_queue = await Actor.open_request_queue()
37
38 # Enqueue the start URLs with an initial crawl depth of 0.
39 for start_url in start_urls:
40 url = start_url.get('url')
41 Actor.log.info(f'Enqueuing {url} ...')
42 request = Request.from_url(url, user_data={'depth': 0})
43 await request_queue.add_request(request)
44
45 # Process the URLs from the request queue.
46 while request := await request_queue.fetch_next_request():
47 url = request.url
48 depth = request.user_data['depth']
49 Actor.log.info(f'Scraping {url} ...')
50
51 try:
52 # Fetch the HTTP response from the specified URL using HTTPX.
53 async with AsyncClient() as client:
54 response = await client.get(url, follow_redirects=True)
55
56 # Parse the HTML content using Beautiful Soup.
57 soup = BeautifulSoup(response.content, 'html.parser')
58
59 # If the current depth is less than max_depth, find nested links and enqueue them.
60 if depth < max_depth:
61 for link in soup.find_all('a'):
62 link_href = link.get('href')
63 link_url = urljoin(url, link_href)
64
65 if link_url.startswith(('http://', 'https://')):
66 Actor.log.info(f'Enqueuing {link_url} ...')
67 request = Request.from_url(link_url, user_data={'depth': depth + 1})
68 await request_queue.add_request(request)
69
70 # Extract the desired data.
71 data = {
72 'url': url,
73 'title': soup.title.string if soup.title else None,
74 'h1s': [h1.text for h1 in soup.find_all('h1')],
75 'h2s': [h2.text for h2 in soup.find_all('h2')],
76 'h3s': [h3.text for h3 in soup.find_all('h3')],
77 }
78
79 # Store the extracted data to the default dataset.
80 await Actor.push_data(data)
81
82 except Exception:
83 Actor.log.exception(f'Cannot extract data from {url}.')
84
85 finally:
86 # Mark the request as handled to ensure it is not processed again.
87 await request_queue.mark_request_as_handled(request)
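Each record pushed to the default dataset has the shape of the `data` dictionary built above. A typical record might look like this (all values are illustrative):

```json
{
  "url": "https://apify.com",
  "title": "Example page title",
  "h1s": ["Main heading"],
  "h2s": ["First subheading", "Second subheading"],
  "h3s": []
}
```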
Python BeautifulSoup template
A template for scraping data from websites, starting from a provided URL, using Python. The URL of the web page is passed in via input, which is defined by the input schema. The template uses HTTPX to get the HTML of the page and Beautiful Soup to parse the data from it. Enqueued URLs are stored in the request queue. The data are then stored in a dataset where you can easily access them.
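As the code above shows, the input is expected to provide a `start_urls` list of objects with a `url` key and a numeric `max_depth` (the template falls back to `https://apify.com` and a depth of 1 when they are missing). A minimal input might look like this:

```json
{
  "start_urls": [{ "url": "https://apify.com" }],
  "max_depth": 1
}
```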
Included features
- Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
- Input schema - define and easily validate a schema for your Actor's input
- Request queue - a queue into which you can put the URLs you want to scrape
- Dataset - storage for structured data where each stored object has the same attributes (both are shown in a short sketch after this list)
- HTTPX - library for making asynchronous HTTP requests in Python
- Beautiful Soup - a Python library for pulling data out of HTML and XML files
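To illustrate the Request queue and Dataset features in isolation, here is a minimal sketch using the same Apify SDK calls as the template (the `storage_demo` name and the enqueued URL are just placeholders for the example):

```python
from apify import Actor, Request


async def storage_demo() -> None:
    async with Actor:
        # Open the default request queue and dataset, as the template does.
        queue = await Actor.open_request_queue()
        dataset = await Actor.open_dataset()

        # Enqueue a URL, fetch it back, store a result, and mark it handled.
        await queue.add_request(Request.from_url('https://apify.com'))
        request = await queue.fetch_next_request()
        if request is not None:
            await dataset.push_data({'url': request.url})
            await queue.mark_request_as_handled(request)
```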
How it works
This code is a Python script that uses HTTPX and Beautiful Soup to scrape web pages and extract data from them. Here's a brief overview of how it works:
- The script reads the input data from the Actor instance, which is expected to contain a `start_urls` key with a list of URLs to scrape and a `max_depth` key with the maximum depth of nested links to follow.
- The script enqueues the starting URLs in the default request queue and sets their depth to 0.
- The script processes the requests in the queue one by one, fetching each URL using HTTPX and parsing it using Beautiful Soup.
- If the depth of the current request is less than the maximum depth, the script looks for nested links on the page and enqueues their targets in the request queue with an incremented depth.
- The script extracts the desired data from the page (in this case, the page title and the `h1`, `h2`, and `h3` headings) and pushes it to the default dataset using the `push_data` method of the Actor instance.
- The script catches any exceptions that occur during the scraping process and logs an error message using the `Actor.log.exception` method.

This code demonstrates how to use Python and the Apify SDK to scrape web pages and extract specific data from them; a standalone version of the fetch-and-parse step is sketched below.
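To experiment with the core fetch-and-parse step outside the Apify platform, a standalone sketch using the same HTTPX and Beautiful Soup calls might look like this (the URL is just an example):

```python
import asyncio

from bs4 import BeautifulSoup
from httpx import AsyncClient


async def scrape_one(url: str) -> dict:
    # Fetch the page, following redirects just like the template does.
    async with AsyncClient() as client:
        response = await client.get(url, follow_redirects=True)

    # Parse the HTML and extract the title and top-level headings.
    soup = BeautifulSoup(response.content, 'html.parser')
    return {
        'url': url,
        'title': soup.title.string if soup.title else None,
        'h1s': [h1.text for h1 in soup.find_all('h1')],
    }


if __name__ == '__main__':
    print(asyncio.run(scrape_one('https://apify.com')))
```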
Resources
- BeautifulSoup Scraper
- Beautifulsoup Scraper tutorial
- Python tutorials in Academy
- Web scraping with Beautiful Soup and Requests
- Beautiful Soup vs. Scrapy for web scraping
- Integration with Make, GitHub, Zapier, Google Drive, and other apps
- Video guide on getting scraped data using Apify API
- Video introduction to Python SDK
- A short guide on how to build web scrapers using code templates
Related templates
- This example Scrapy spider scrapes page titles from URLs defined in the input parameter. It shows how to use the Apify SDK for Python and Scrapy pipelines to save results.
- Scrapes a single page at the provided URL with HTTPX and extracts data from the page's HTML with Beautiful Soup.
- A crawler example that uses headless Chrome driven by Playwright to scrape a website. Headless browsers render JavaScript and can help when you are getting blocked.
- A scraper example built with Selenium and a headless Chrome browser to scrape a website and save the results to storage. A popular alternative to Playwright.
- An empty template with a basic structure for an Actor built with the Apify SDK that allows you to easily add your own functionality.
- A template with a basic structure for an Actor using Standby mode that allows you to easily add your own functionality.