
Crawlee + BeautifulSoup (Quick start)

Crawl and scrape websites using Crawlee and BeautifulSoup. Start from given start URLs and store the results to an Apify dataset.

Language

python

Tools

crawlee

beautifulsoup

Use cases

Starter

Web scraping

src/main.py


1"""Module defines the main entry point for the Apify Actor.
2
3Feel free to modify this file to suit your specific needs.
4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
6https://docs.apify.com/sdk/python
7"""
8
9from __future__ import annotations
10
11from apify import Actor
12from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext
13
14
15async def main() -> None:
16 """Define a main entry point for the Apify Actor.
17
18 This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.
19 Asynchronous execution is required for communication with Apify platform, and it also enhances performance in
20 the field of web scraping significantly.
21 """
22 # Enter the context of the Actor.
23 async with Actor:
24 # Retrieve the Actor input, and use default values if not provided.
25 actor_input = await Actor.get_input() or {}
26 start_urls = [
27 url.get('url')
28 for url in actor_input.get(
29 'start_urls',
30 [{'url': 'https://apify.com'}],
31 )
32 ]
33
34 # Exit if no start URLs are provided.
35 if not start_urls:
36 Actor.log.info('No start URLs specified in Actor input, exiting...')
37 await Actor.exit()
38
39 # Create a crawler.
40 crawler = BeautifulSoupCrawler(
41 # Limit the crawl to max requests. Remove or increase it for crawling all links.
42 max_requests_per_crawl=10,
43 )
44
45 # Define a request handler, which will be called for every request.
46 @crawler.router.default_handler
47 async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
48 url = context.request.url
49 Actor.log.info(f'Scraping {url}...')
50
51 # Extract the desired data.
52 data = {
53 'url': context.request.url,
54 'title': context.soup.title.string if context.soup.title else None,
55 'h1s': [h1.text for h1 in context.soup.find_all('h1')],
56 'h2s': [h2.text for h2 in context.soup.find_all('h2')],
57 'h3s': [h3.text for h3 in context.soup.find_all('h3')],
58 }
59
60 # Store the extracted data to the default dataset.
61 await context.push_data(data)
62
63 # Enqueue additional links found on the current page.
64 await context.enqueue_links()
65
66 # Run the crawler with the starting requests.
67 await crawler.run(start_urls)
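The template also ships src/__main__.py, which is what actually executes main(). Since the docstring notes the coroutine is run with `asyncio.run()`, a minimal sketch of that file, assuming the standard package entry point (the real file may additionally configure logging):

"""Package entry point: executes the main coroutine with asyncio."""
import asyncio

from .main import main

# Run the async main() defined in src/main.py.
asyncio.run(main())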

Python Crawlee & BeautifulSoup Actor Template

This template was built with Crawlee for Python to scrape data from websites using Beautiful Soup, wrapped in Crawlee's BeautifulSoupCrawler.

Quick Start

Once you've installed the dependencies, start the Actor:

apify run

Once your Actor is ready, you can push it to the Apify Console:

apify login # first, you need to log in if you haven't already done so
apify push
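Note that the Actor's Python dependencies must be installed before running locally. In the standard Python templates they are declared in requirements.txt (an assumption; check your generated project):

pip install -r requirements.txt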

Project Structure

.actor/
├── actor.json # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json # Structure and representation of data produced by an Actor
├── input_schema.json # Input validation & Console form definition
└── output_schema.json # Specifies where an Actor stores its output
src/
├── __main__.py # Package entry point (runs main() with asyncio)
└── main.py # Actor entry point and orchestrator
storage/ # Local storage (mirrors Cloud during development)
├── datasets/ # Output items (JSON objects)
├── key_value_stores/ # Files, config, INPUT
└── request_queues/ # Pending crawl requests
Dockerfile # Container image definition

For more information, see the Actor definition documentation.
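After a local apify run, scraped items land in storage/datasets/default as JSON files, one object per item. A minimal sketch for inspecting them locally (the one-file-per-item naming is an assumption about the local storage client):

import json
from pathlib import Path

# Each dataset item is stored as a standalone JSON object file.
dataset_dir = Path('storage/datasets/default')
for item_file in sorted(dataset_dir.glob('*.json')):
    item = json.loads(item_file.read_text(encoding='utf-8'))
    print(item['url'], '->', item['title'])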

How it works

This Actor is a Python script that uses BeautifulSoup to scrape data from websites. For each page, it stores the URL, title, and h1–h3 headings in a dataset.

  • The crawler starts with URLs provided in the start_urls input field defined by the input schema. The number of scraped pages is capped by the max_requests_per_crawl option passed to the crawler.
  • For each URL, the crawler invokes request_handler, which extracts data from the page with the BeautifulSoup library and saves the URL, title, and h1–h3 headings of each page to the dataset, logging each result as it is saved. The handler is easy to customize; see the sketch below.
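For instance, a minimal sketch of a customized handler that also captures a page's meta description and link count (the extra fields are illustrative additions, not part of the template):

@crawler.router.default_handler
async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
    soup = context.soup
    # Meta description, if the page declares one.
    meta = soup.find('meta', attrs={'name': 'description'})
    await context.push_data({
        'url': context.request.url,
        'title': soup.title.string if soup.title else None,
        'description': meta.get('content') if meta else None,
        # Number of anchor tags that carry an href attribute.
        'link_count': len(soup.find_all('a', href=True)),
    })
    # Keep following links found on the page.
    await context.enqueue_links()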


Resources

Creating Actors with templates

Already have a solution in mind?

Sign up for a free Apify account and deploy your code to the platform in just a few minutes! If you want a head start without coding it yourself, browse our Store of existing solutions.