Start with Python
Scrape a single page from a provided URL with HTTPX and extract data from the page's HTML with Beautiful Soup.
src/main.py
1"""This module defines the main entry point for the Apify Actor.
2
3Feel free to modify this file to suit your specific needs.
4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
6https://docs.apify.com/sdk/python
7"""
8
9# Beautiful Soup - A library for pulling data out of HTML and XML files. Read more at:
10# https://www.crummy.com/software/BeautifulSoup/bs4/doc
11from bs4 import BeautifulSoup
12
13# HTTPX - A library for making asynchronous HTTP requests in Python. Read more at:
14# https://www.python-httpx.org/
15from httpx import AsyncClient
16
17# Apify SDK - A toolkit for building Apify Actors. Read more at:
18# https://docs.apify.com/sdk/python
19from apify import Actor
20
21
22async def main() -> None:
23 """Main entry point for the Apify Actor.
24
25 This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.
26 Asynchronous execution is required for communication with Apify platform, and it also enhances performance in
27 the field of web scraping significantly.
28 """
29 async with Actor:
30 # Retrieve the input object for the Actor. The structure of input is defined in input_schema.json.
31 actor_input = await Actor.get_input() or {'url': 'https://apify.com/'}
32 url = actor_input.get('url')
33
34 # Create an asynchronous HTTPX client for making HTTP requests.
35 async with AsyncClient() as client:
36 # Fetch the HTML content of the page, following redirects if necessary.
37 Actor.log.info(f'Sending a request to {url}')
38 response = await client.get(url, follow_redirects=True)
39
40 # Parse the HTML content using Beautiful Soup and lxml parser.
41 soup = BeautifulSoup(response.content, 'lxml')
42
43 # Extract all headings from the page (tag name and text).
44 headings = []
45 for heading in soup.find_all(['h1', 'h2', 'h3', 'h4', 'h5', 'h6']):
46 heading_object = {'level': heading.name, 'text': heading.text}
47 Actor.log.info(f'Extracted heading: {heading_object}')
48 headings.append(heading_object)
49
50 # Save the extracted headings to the dataset, which is a table-like storage.
51 await Actor.push_data(headings)
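The docstring notes that `main()` is executed using `asyncio.run()`. In the template this happens in `src/__main__.py`; a minimal sketch of that entry point, assuming the standard template layout (the real file may additionally configure logging):

```python
import asyncio

# Import the Actor's main coroutine from the sibling module.
from .main import main

# Run the main coroutine when the package is executed with `python -m src`.
asyncio.run(main())
```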
Scrape single page in Python template
A template for scraping data from a single web page in Python. The URL of the web page is passed in via input, which is defined by the input schema. The template uses HTTPX to get the HTML of the page and Beautiful Soup to parse the data from it. The data are then stored in a dataset where you can easily access them.
The scraped data in this template are page headings, but you can easily edit the code to scrape whatever you want from the page.
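The URL input mentioned above is declared in the Actor's input schema. A minimal sketch of what `.actor/input_schema.json` might look like for this template; the titles, description, and prefill value here are illustrative:

```json
{
    "title": "Scrape data from a web page",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "url": {
            "title": "URL of the page",
            "type": "string",
            "description": "The URL of the web page you want to scrape.",
            "editor": "url",
            "prefill": "https://apify.com/"
        }
    },
    "required": ["url"]
}
```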
Included features
- Apify SDK for Python - a toolkit for building Apify Actors and scrapers in Python
- Input schema - define and easily validate a schema for your Actor's input
- Request queue - a queue into which you can put the URLs you want to scrape (see the sketch after this list)
- Dataset - store structured data where each object stored has the same attributes
- HTTPX - library for making asynchronous HTTP requests in Python
- Beautiful Soup - library for pulling data out of HTML and XML files
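Most of these features appear in the code above; the request queue does not, since scraping a single page needs no crawl frontier. A hedged sketch of how the request queue and an explicitly opened dataset could be used (exact method signatures vary slightly between Apify SDK versions):

```python
from apify import Actor


async def crawl_from_queue() -> None:
    async with Actor:
        # Open the Actor's default request queue and seed it with a URL.
        request_queue = await Actor.open_request_queue()
        await request_queue.add_request('https://apify.com/')

        # Process requests until the queue is empty.
        while request := await request_queue.fetch_next_request():
            Actor.log.info(f'Processing {request.url}')
            # ... fetch and parse the page here ...
            await request_queue.mark_request_as_handled(request)

        # A dataset can also be opened explicitly instead of calling Actor.push_data().
        dataset = await Actor.open_dataset()
        await dataset.push_data({'url': 'https://apify.com/', 'status': 'done'})
```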
How it works
- `Actor.get_input()` gets the input where the page URL is defined
- `httpx.AsyncClient().get(url)` fetches the page
- `BeautifulSoup(response.content, 'lxml')` loads the page data and enables parsing the headings
- `for heading in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):` parses the headings from the page; here you can edit the code to parse whatever you need from the page (see the sketch after this list)
- `Actor.push_data(headings)` stores the headings in the dataset
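To parse something other than headings, swap the `find_all` loop for other Beautiful Soup queries. A small illustrative sketch; the tags and attributes queried here are assumptions, so adjust them to your target page:

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup


def parse_page(html: bytes, base_url: str) -> dict:
    """Extract a few common items from a page's HTML."""
    soup = BeautifulSoup(html, 'lxml')

    # The page title, if the document has one.
    title = soup.title.text.strip() if soup.title else None

    # All links on the page, resolved to absolute URLs.
    links = [urljoin(base_url, a['href']) for a in soup.find_all('a', href=True)]

    # The text content of every paragraph.
    paragraphs = [p.get_text(strip=True) for p in soup.find_all('p')]

    return {'title': title, 'links': links, 'paragraphs': paragraphs}
```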
Resources
- BeautifulSoup Scraper
- Python tutorials in Academy
- Web scraping with Beautiful Soup and Requests
- Beautiful Soup vs. Scrapy for web scraping
- Integration with Make, GitHub, Zapier, Google Drive, and other apps
- Video guide on getting scraped data using Apify API
- A short guide on how to build web scrapers using code templates
Related templates
- This example Scrapy spider scrapes page titles from URLs defined in the input parameter. It shows how to use the Apify SDK for Python and Scrapy pipelines to save results.
- Example of a web scraper that uses Python HTTPX to scrape HTML from URLs provided on input, parses it using BeautifulSoup, and saves results to storage.
- Crawler example that uses headless Chrome driven by Playwright to scrape a website. Headless browsers render JavaScript and can help when getting blocked.
- Scraper example built with Selenium and a headless Chrome browser to scrape a website and save the results to storage. A popular alternative to Playwright.
- Empty template with a basic structure for an Actor using the Apify SDK that allows you to easily add your own functionality.
- Template with a basic structure for an Actor using Standby mode that allows you to easily add your own functionality.