Crawlee + BeautifulSoup (Quick start)
Crawl and scrape websites using Crawlee and BeautifulSoup. Start from a given list of start URLs and store the results in an Apify dataset.
src/main.py
1"""Module defines the main entry point for the Apify Actor.2
3Feel free to modify this file to suit your specific needs.4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:6https://docs.apify.com/sdk/python7"""8
9from __future__ import annotations10
11from apify import Actor12from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext13
14
15async def main() -> None:16 """Define a main entry point for the Apify Actor.17
18 This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.19 Asynchronous execution is required for communication with Apify platform, and it also enhances performance in20 the field of web scraping significantly.21 """22 # Enter the context of the Actor.23 async with Actor:24 # Retrieve the Actor input, and use default values if not provided.25 actor_input = await Actor.get_input() or {}26 start_urls = [27 url.get('url')28 for url in actor_input.get(29 'start_urls',30 [{'url': 'https://apify.com'}],31 )32 ]33
34 # Exit if no start URLs are provided.35 if not start_urls:36 Actor.log.info('No start URLs specified in Actor input, exiting...')37 await Actor.exit()38
39 # Create a crawler.40 crawler = BeautifulSoupCrawler(41 # Limit the crawl to max requests. Remove or increase it for crawling all links.42 max_requests_per_crawl=10,43 )44
45 # Define a request handler, which will be called for every request.46 @crawler.router.default_handler47 async def request_handler(context: BeautifulSoupCrawlingContext) -> None:48 url = context.request.url49 Actor.log.info(f'Scraping {url}...')50
51 # Extract the desired data.52 data = {53 'url': context.request.url,54 'title': context.soup.title.string if context.soup.title else None,55 'h1s': [h1.text for h1 in context.soup.find_all('h1')],56 'h2s': [h2.text for h2 in context.soup.find_all('h2')],57 'h3s': [h3.text for h3 in context.soup.find_all('h3')],58 }59
60 # Store the extracted data to the default dataset.61 await context.push_data(data)62
63 # Enqueue additional links found on the current page.64 await context.enqueue_links()65
66 # Run the crawler with the starting requests.67 await crawler.run(start_urls)Python Crawlee & BeautifulSoup Actor Template
This template example was built with Crawlee for Python to scrape data from a website using Beautiful Soup, wrapped in the BeautifulSoupCrawler class.
Quick Start
Once you've installed the dependencies, start the Actor:
```
apify run
```
Once your Actor is ready, you can push it to the Apify Console:
```
apify login # first, you need to log in if you haven't already done so
apify push
```
Project Structure
```
.actor/
├── actor.json            # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json   # Structure and representation of data produced by an Actor
├── input_schema.json     # Input validation & Console form definition
└── output_schema.json    # Specifies where an Actor stores its output
src/
└── main.py               # Actor entry point and orchestrator
storage/                  # Local storage (mirrors Cloud during development)
├── datasets/             # Output items (JSON objects)
├── key_value_stores/     # Files, config, INPUT
└── request_queues/       # Pending crawl requests
Dockerfile                # Container image definition
```
For more information, see the Actor definition documentation.
How it works
This code is a Python script that uses BeautifulSoup to scrape data from a website. It stores the URL, page title, and h1–h3 headings of each page in a dataset.
- The crawler starts with URLs provided in the `startUrls` field defined by the input schema. The number of scraped pages is limited by the `maxPagesPerCrawl` field from the input schema (a sketch of this wiring follows this list).
- The crawler uses a `requestHandler` for each URL to extract the data from the page with the BeautifulSoup library and to save the title and URL of each page to the dataset. It also logs each result that is being saved.
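The `main.py` shown above hardcodes the limit to 10 requests. A minimal sketch of how the `maxPagesPerCrawl` input field could be wired to that limit follows; the field name and the fallback of 10 are assumptions based on the input schema description, not code taken from the template:

```python
from apify import Actor
from crawlee.crawlers import BeautifulSoupCrawler


async def main() -> None:
    async with Actor:
        actor_input = await Actor.get_input() or {}

        # Hypothetical wiring: read the page limit from the Actor input instead of
        # hardcoding it. The 'maxPagesPerCrawl' field name and the fallback of 10
        # are assumptions for illustration.
        max_pages = actor_input.get('maxPagesPerCrawl', 10)

        crawler = BeautifulSoupCrawler(
            max_requests_per_crawl=max_pages,
        )

        # ...define the request handler and call `await crawler.run(start_urls)`
        # exactly as in main.py above.
```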
What's included
- Apify SDK - toolkit for building Actors
- Crawlee for Python - web scraping and browser automation library
- Input schema - define and easily validate a schema for your Actor's input
- Dataset - store structured data where each object stored has the same attributes
- Beautiful Soup - a library for pulling data out of HTML and XML files
- Proxy configuration - rotate IP addresses to prevent blocking
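For the proxy configuration item above, a minimal sketch of routing the crawler's requests through Apify Proxy could look like this; calling `Actor.create_proxy_configuration()` without arguments assumes your account's default proxy settings are sufficient:

```python
from apify import Actor
from crawlee.crawlers import BeautifulSoupCrawler


async def main() -> None:
    async with Actor:
        # Build a proxy configuration from the Apify platform so requests
        # rotate IP addresses instead of coming from a single machine.
        proxy_configuration = await Actor.create_proxy_configuration()

        crawler = BeautifulSoupCrawler(
            proxy_configuration=proxy_configuration,
            max_requests_per_crawl=10,
        )
```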
Resources
- Quick Start guide for building your first Actor
- Video introduction to Python SDK
- Webinar introducing Crawlee for Python
- Apify Python SDK documentation
- Crawlee for Python documentation
- Python tutorials in Academy
- Integration with Zapier, Make, Google Drive and others
- Video guide on getting data using Apify API
Creating Actors with templates
Empty Python project
Empty template with a basic structure for an Actor built with the Apify SDK, allowing you to easily add your own functionality.
One‑Page HTML Scraper with BeautifulSoup
Scrape a single page at a provided URL with HTTPX and extract data from the page's HTML with Beautiful Soup.
BeautifulSoup
Example of a web scraper that uses Python HTTPX to fetch HTML from URLs provided in the input, parses it using Beautiful Soup, and saves the results to storage.
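As an illustration of the HTTPX + Beautiful Soup approach these two templates describe (not code taken from them), fetching and parsing a single page can be as short as:

```python
import httpx
from bs4 import BeautifulSoup

# Fetch one page and parse its HTML; the URL is only an example.
response = httpx.get('https://apify.com', follow_redirects=True)
soup = BeautifulSoup(response.text, 'html.parser')

print(soup.title.string if soup.title else None)
print([h1.get_text(strip=True) for h1 in soup.find_all('h1')])
```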
Playwright + Chrome
Crawler example that uses headless Chrome driven by Playwright to scrape a website. Headless browsers render JavaScript and can help when getting blocked.
Selenium + Chrome
Scraper example built with Selenium and a headless Chrome browser that scrapes a website and saves the results to storage. A popular alternative to Playwright.
Standby Python project
Template with a basic structure for an Actor using Standby mode, allowing you to easily add your own functionality.