BeautifulSoup + HTTPX

An example web scraper that uses Python HTTPX to fetch HTML from the URLs provided on input, parses it with BeautifulSoup, and saves the results to storage.

Language

Python

Tools

BeautifulSoup

Use cases

Web scraping

src/main.py

1"""
2This module defines the `main()` coroutine for the Apify Actor, executed from the `__main__.py` file.
3
4Feel free to modify this file to suit your specific needs.
5
6To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
7https://docs.apify.com/sdk/python
8"""
9
10from urllib.parse import urljoin
11
12from bs4 import BeautifulSoup
13from httpx import AsyncClient
14
15from apify import Actor
16
17
18async def main() -> None:
19    """
20    The main coroutine is being executed using `asyncio.run()`, so do not attempt to make a normal function
21    out of it, it will not work. Asynchronous execution is required for communication with Apify platform,
22    and it also enhances performance in the field of web scraping significantly.
23    """
24    async with Actor:
25        # Read the Actor input
26        actor_input = await Actor.get_input() or {}
27        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])
28        max_depth = actor_input.get('max_depth', 1)
29
30        if not start_urls:
31            Actor.log.info('No start URLs specified in actor input, exiting...')
32            await Actor.exit()
33
34        # Enqueue the starting URLs in the default request queue
35        default_queue = await Actor.open_request_queue()
36        for start_url in start_urls:
37            url = start_url.get('url')
38            Actor.log.info(f'Enqueuing {url} ...')
39            await default_queue.add_request({'url': url, 'userData': {'depth': 0}})
40
41        # Process the requests in the queue one by one
42        while request := await default_queue.fetch_next_request():
43            url = request['url']
44            depth = request['userData']['depth']
45            Actor.log.info(f'Scraping {url} ...')
46
47            try:
48                # Fetch the URL using `httpx`
49                async with AsyncClient() as client:
50                    response = await client.get(url, follow_redirects=True)
51
52                # Parse the response using `BeautifulSoup`
53                soup = BeautifulSoup(response.content, 'html.parser')
54
55                # If we haven't reached the max depth,
56                # look for nested links and enqueue their targets
57                if depth < max_depth:
58                    for link in soup.find_all('a'):
59                        link_href = link.get('href')
60                        link_url = urljoin(url, link_href)
61                        if link_url.startswith(('http://', 'https://')):
62                            Actor.log.info(f'Enqueuing {link_url} ...')
63                            await default_queue.add_request({
64                                'url': link_url,
65                                'userData': {'depth': depth + 1},
66                            })
67
68                # Push the title of the page into the default dataset
69                title = soup.title.string if soup.title else None
70                await Actor.push_data({'url': url, 'title': title})
71            except Exception:
72                Actor.log.exception(f'Cannot extract data from {url}.')
73            finally:
74                # Mark the request as handled so it's not processed again
75                await default_queue.mark_request_as_handled(request)

BeautifulSoup and HTTPX template

A Python template for scraping data from websites enqueued from a starting URL. The URL of the web page to scrape is passed in via input, which is defined by the input schema. The template uses HTTPX to get the HTML of the page and Beautiful Soup to parse the data from it. Enqueued URLs are available in the request queue. The data is then stored in a dataset, where you can easily access it.
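For example, the Actor input this template reads could look like the following JSON; these are the same start_urls and max_depth keys the code above falls back to by default:

{
    "start_urls": [{ "url": "https://apify.com" }],
    "max_depth": 1
}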

Included features

  • Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
  • Input schema - define and easily validate a schema for your Actor's input (a minimal sketch follows this list)
  • Request queue - a queue into which you can put the URLs you want to scrape
  • Dataset - store structured data where each object stored has the same attributes
  • HTTPX - library for making asynchronous HTTP requests in Python
  • Beautiful Soup - a Python library for pulling data out of HTML and XML files
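To give a concrete idea of the input schema feature, a minimal sketch of an INPUT_SCHEMA.json for this Actor might look as follows; the titles and descriptions are illustrative rather than copied from the template's actual file:

{
    "title": "BeautifulSoup + HTTPX scraper input",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "start_urls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs to start scraping from",
            "editor": "requestListSources"
        },
        "max_depth": {
            "title": "Maximum depth",
            "type": "integer",
            "description": "Maximum depth of nested links to follow",
            "default": 1
        }
    },
    "required": ["start_urls"]
}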

How it works

This code is a Python script that uses HTTPX and Beautiful Soup to scrape web pages and extract data from them. Here's a brief overview of how it works:

  • The script reads the input data from the Actor instance, which is expected to contain a start_urls key with a list of URLs to scrape and a max_depth key with the maximum depth of nested links to follow.
  • The script enqueues the starting URLs in the default request queue and sets their depth to 0.
  • The script processes the requests in the queue one by one, fetching the URL using HTTPX and parsing it using BeautifulSoup (a standalone sketch of this fetch-and-parse step follows this list).
  • If the depth of the current request is less than the maximum depth, the script looks for nested links in the page and enqueues their targets in the request queue with an incremented depth.
  • The script extracts the desired data from the page (in this case, the page title) and pushes it to the default dataset using the push_data method of the Actor instance (an example record is shown after this list).
  • The script catches any exceptions that occur during the scraping process and logs an error message using the Actor.log.exception method.

This code demonstrates how to use Python and the Apify SDK to scrape web pages and extract specific data from them.
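As a rough, standalone illustration of the fetch-and-parse core, here is a minimal sketch you can run outside the Apify platform. It assumes only the httpx and beautifulsoup4 packages; the fetch_links name and the target URL are placeholders of our own, not part of the template:

import asyncio
from urllib.parse import urljoin

from bs4 import BeautifulSoup
from httpx import AsyncClient


async def fetch_links(url: str) -> list[str]:
    # Fetch the page with httpx, following redirects like the template does
    async with AsyncClient() as client:
        response = await client.get(url, follow_redirects=True)

    # Parse the HTML and resolve each <a href="..."> against the page URL
    soup = BeautifulSoup(response.content, 'html.parser')
    links = []
    for link in soup.find_all('a'):
        href = link.get('href')
        if not href:
            continue
        link_url = urljoin(url, href)
        if link_url.startswith(('http://', 'https://')):
            links.append(link_url)
    return links


print(asyncio.run(fetch_links('https://apify.com')))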
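The record pushed to the dataset in the last step holds just the page URL and its title. Scraping https://example.com, for instance, would store roughly this object:

{
    "url": "https://example.com",
    "title": "Example Domain"
}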

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.