BeautifulSoup Scraper

Crawls websites using raw HTTP requests. It parses the HTML with the BeautifulSoup library and extracts data from the pages using Python code. Supports both recursive crawling and lists of URLs. This Actor is a Python alternative to Cheerio Scraper.
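To see what the Actor's parsing step looks like in isolation, here is a minimal standalone sketch of BeautifulSoup with the `html.parser` backend (the same feature the Actor's `soupFeatures` option selects). The sample HTML string is made up for illustration:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Illustrative HTML; in the Actor this comes from a raw HTTP request.
html = "<html><head><title>Crawlee</title></head><body><a href='/docs'>Docs</a></body></html>"

# Parse with Python's built-in parser, as selected by soupFeatures.
soup = BeautifulSoup(html, "html.parser")

# Extract the page title and all links matching the default linkSelector "a[href]".
title = soup.title.string if soup.title else None
links = [a["href"] for a in soup.select("a[href]")]
print(title, links)  # Crawlee ['/docs']
```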

The code example below shows how to run the Actor and get its results. To run the code, you need an Apify account. Replace <YOUR_API_TOKEN> in the code with your API token, which you can find under Settings > Integrations in Apify Console.

from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [{ "url": "https://crawlee.dev" }],
    "maxCrawlingDepth": 1,
    "requestTimeout": 10,
    "linkSelector": "a[href]",
    "linkPatterns": [".*crawlee\\.dev.*"],
    "pageFunction": """from typing import Any

# See the context section in readme to find out what fields you can access
def page_function(context: Context) -> Any:
    url = context.request['url']
    title = context.soup.title.string if context.soup.title else None
    return {'url': url, 'title': title}
""",
    "soupFeatures": "html.parser",
    "proxyConfiguration": { "useApifyProxy": True },
}

# Run the Actor and wait for it to finish
run = client.actor("apify/beautifulsoup-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to →
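The page function above runs inside the Actor, where a `Context` object is injected for you. To sanity-check its logic offline, you can model a minimal stand-in: the `Context` dataclass below is hypothetical and only carries the two fields the function reads (`request` and `soup`), and the HTML is a made-up sample:

```python
from dataclasses import dataclass
from typing import Any

from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical stand-in for the Actor-provided Context; only the fields
# the page function actually reads are modeled here.
@dataclass
class Context:
    request: dict
    soup: BeautifulSoup

# Same body as the pageFunction string in the Actor input above.
def page_function(context: Context) -> Any:
    url = context.request['url']
    title = context.soup.title.string if context.soup.title else None
    return {'url': url, 'title': title}

html = "<html><head><title>Example Domain</title></head><body></body></html>"
ctx = Context(request={"url": "https://example.com"},
              soup=BeautifulSoup(html, "html.parser"))
print(page_function(ctx))
# {'url': 'https://example.com', 'title': 'Example Domain'}
```

Testing the function this way lets you iterate on selectors locally before paying for Actor runs.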
Maintained by Apify
Actor metrics
  • 36 monthly users
  • 0 stars
  • 90.6% runs succeeded
  • Created in Jul 2023
  • Modified 7 months ago