Crawlee + Playwright + Chrome
Crawl and scrape websites using Crawlee and Playwright. Start from the given start URLs and store the results in an Apify dataset.
src/main.py
1"""This module defines the main entry point for the Apify Actor.
2
3Feel free to modify this file to suit your specific needs.
4
5To build Apify Actors, utilize the Apify SDK toolkit, read more at the official documentation:
6https://docs.apify.com/sdk/python
7"""
8
9from apify import Actor, Request
10from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext
11
12
13async def main() -> None:
14 """Main entry point for the Apify Actor.
15
16 This coroutine is executed using `asyncio.run()`, so it must remain an asynchronous function for proper execution.
17 Asynchronous execution is required for communication with Apify platform, and it also enhances performance in
18 the field of web scraping significantly.
19 """
20 async with Actor:
21 # Retrieve the Actor input, and use default values if not provided.
22 actor_input = await Actor.get_input() or {}
23 start_urls = [url.get('url') for url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])]
24
25 # Exit if no start URLs are provided.
26 if not start_urls:
27 Actor.log.info('No start URLs specified in Actor input, exiting...')
28 await Actor.exit()
29
30 # Create a crawler.
31 crawler = PlaywrightCrawler(
32 # Limit the crawl to max requests. Remove or increase it for crawling all links.
33 max_requests_per_crawl=50,
34 headless=True,
35 )
36
37 # Define a request handler, which will be called for every request.
38 @crawler.router.default_handler
39 async def request_handler(context: PlaywrightCrawlingContext) -> None:
40 url = context.request.url
41 Actor.log.info(f'Scraping {url}...')
42
43 # Extract the desired data.
44 data = {
45 'url': context.request.url,
46 'title': await context.page.title(),
47 'h1s': [await h1.text_content() for h1 in await context.page.locator('h1').all()],
48 'h2s': [await h2.text_content() for h2 in await context.page.locator('h2').all()],
49 'h3s': [await h3.text_content() for h3 in await context.page.locator('h3').all()],
50 }
51
52 # Store the extracted data to the default dataset.
53 await context.push_data(data)
54
55 # Enqueue additional links found on the current page.
56 await context.enqueue_links()
57
58 # Run the crawler with the starting requests.
59 await crawler.run(start_urls)
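The request handler is the natural place to customize extraction. As a sketch of one possible extension (not part of the template), the handler body could also pull the page's meta description through the same Playwright page API; `query_selector()` returns `None` when the element is missing, so the lookup is safe on pages without the tag:

```python
# Inside request_handler, after building `data` (hypothetical extra field, not in the template):
meta = await context.page.query_selector('meta[name="description"]')
data['description'] = await meta.get_attribute('content') if meta else None
```

The template also contains a `src/__main__.py` entry file, whose contents are not shown above. In Apify's Python templates it is typically just a small bootstrap that runs the `main()` coroutine with `asyncio.run()`, as the docstring suggests; a minimal sketch, assuming the standard template layout:

```python
import asyncio

from .main import main

# Bootstrap: run the Actor's main coroutine (sketch of the typical __main__.py).
asyncio.run(main())
```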
Python Crawlee with Playwright template
A template for scraping data from websites starting from provided URLs, written in Python. The start URLs are passed in through the Actor input, whose structure is defined by the input schema. The template uses Crawlee for Python for efficient web crawling, making requests via a headless browser managed by Playwright and handling each request through a user-defined handler that uses the Playwright API to extract data from the page. Enqueued URLs are managed in the request queue, and the extracted data is saved in a dataset for easy access.
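Concretely, the Actor input for this template is a JSON object with a `start_urls` array of `{ "url": ... }` objects, matching the `actor_input.get('start_urls', ...)` lookup in the code above:

```json
{
    "start_urls": [
        { "url": "https://apify.com" }
    ]
}
```

If no input is provided, the code falls back to `https://apify.com` as the single start URL.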
Included features
- Apify SDK - a toolkit for building Apify Actors in Python.
- Crawlee for Python - a web scraping and browser automation library.
- Input schema - define and validate a schema for your Actor's input (a sketch follows this list).
- Request queue - manage the URLs you want to scrape in a queue.
- Dataset - store and access structured data extracted from web pages.
- Playwright - a library for managing headless browsers.
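The input schema lives in the `.actor/input_schema.json` file. The sketch below shows what it might look like for this template; the file actually shipped with the template may differ, and only the `start_urls` property is taken from the code above:

```json
{
    "title": "Python Crawlee Playwright Scraper",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "start_urls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs to start the crawl from.",
            "editor": "requestListSources",
            "prefill": [{ "url": "https://apify.com" }]
        }
    },
    "required": ["start_urls"]
}
```

The `requestListSources` editor renders the URL-list widget in the Apify Console, which is the usual choice for start URL inputs.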
Resources
- Video introduction to Python SDK
- Webinar introducing Crawlee for Python
- Apify Python SDK documentation
- Crawlee for Python documentation
- Python tutorials in Academy
- Integration with Make, GitHub, Zapier, Google Drive, and other apps
- Video guide on getting scraped data using Apify API (a minimal client example follows this list)
- A short guide on how to build web scrapers using code templates
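As an example of getting the scraped data via the API, the items pushed by `context.push_data()` end up in the run's default dataset and can be fetched with the `apify-client` package. A minimal sketch; the API token and dataset ID are placeholders you need to supply:

```python
from apify_client import ApifyClient

# Placeholders: your Apify API token and the ID of the dataset produced by the run.
client = ApifyClient('<YOUR_API_TOKEN>')
items = client.dataset('<DATASET_ID>').list_items().items

for item in items:
    print(item['url'], item['title'])
```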