BeautifulSoup Scraper

Pricing

Pay per usage

Developed and maintained by Apify

Crawls websites using raw HTTP requests. It parses the HTML with the BeautifulSoup library and extracts data from the pages using Python code. Supports both recursive crawling and lists of URLs. This Actor is a Python alternative to Cheerio Scraper.

Rating: 4.4 (5)

Total users: 870

Monthly users: 21

Runs succeeded: 99%

Last modified: 16 days ago

You can access the BeautifulSoup Scraper programmatically from your own applications by using the Apify API. To use it, you'll need an Apify account and your API token, which you can find under Integrations settings in Apify Console.

from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [{ "url": "https://crawlee.dev" }],
    "maxCrawlingDepth": 1,
    "requestTimeout": 10,
    "linkSelector": "a[href]",
    "linkPatterns": [".*crawlee\\.dev.*"],
    "pageFunction": """from typing import Any
from crawlee.crawlers import BeautifulSoupCrawlingContext

# See the context section in readme to find out what fields you can access
# https://apify.com/apify/beautifulsoup-scraper#context
def page_function(context: BeautifulSoupCrawlingContext) -> Any:
    url = context.request.url
    title = context.soup.title.string if context.soup.title else None
    return {'url': url, 'title': title}
""",
    "soupFeatures": "html.parser",
    "proxyConfiguration": { "useApifyProxy": True },
}

# Run the Actor and wait for it to finish
run = client.actor("apify/beautifulsoup-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start
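The linkPatterns option in the input above restricts which discovered links get enqueued: a link found by linkSelector is followed only if its URL matches one of the regular expressions. A minimal sketch of that filtering step in plain Python (the use of full-string matching here is an assumption made for illustration; check the Actor's readme for the exact matching semantics):

```python
import re

# Same pattern as in the run_input above: keep only crawlee.dev URLs
link_patterns = [".*crawlee\\.dev.*"]
compiled = [re.compile(p) for p in link_patterns]

# URLs as they might be discovered by the "a[href]" link selector
discovered = [
    "https://crawlee.dev/docs/quick-start",
    "https://example.com/page",
]

# A URL is kept if any pattern matches it in full (assumed semantics)
kept = [url for url in discovered if any(rx.fullmatch(url) for rx in compiled)]
print(kept)  # → ['https://crawlee.dev/docs/quick-start']
```

Note that the patterns are regular expressions, not glob patterns, which is why the dot in crawlee.dev is escaped in the input.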

BeautifulSoup Scraper API in Python

The Apify API client for Python is the official library that allows you to use the BeautifulSoup Scraper API from Python, providing convenience functions and automatic retries on errors.

Install the apify-client package:

$ pip install apify-client

Other API clients include: