BeautifulSoup Scraper
Pricing: Pay per usage
Crawls websites using raw HTTP requests. It parses the HTML with the BeautifulSoup library and extracts data from the pages using Python code. Supports both recursive crawling and lists of URLs. This Actor is a Python alternative to Cheerio Scraper.
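Inside the Actor, your page function receives a Context object, but the extraction itself is plain BeautifulSoup. A minimal standalone sketch of that pattern (the HTML snippet and variable names here are illustrative, not part of the Actor):

```python
from bs4 import BeautifulSoup

# Illustrative HTML standing in for a page fetched by the crawler.
html = (
    "<html><head><title>Crawlee</title></head>"
    "<body><a href='/docs'>Docs</a></body></html>"
)

# "html.parser" matches the Actor's default soupFeatures setting.
soup = BeautifulSoup(html, "html.parser")

# Extract the title and outgoing links, guarding against a missing <title>.
title = soup.title.string if soup.title else None
links = [a["href"] for a in soup.select("a[href]")]
```

The same title extraction appears in the pageFunction of the example input on this page; the link extraction mirrors what the Actor does with the linkSelector setting.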
Rating: 4.4 (5)
Total users: 826
Monthly users: 18
Runs succeeded: 98%
Last modified: 6 months ago
You can access the BeautifulSoup Scraper programmatically from your own applications by using the Apify API. You can also choose your preferred programming language below. To use the Apify API, you'll need an Apify account and your API token, which you can find under Integrations in Apify Console settings.
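In Python, for example, the official apify-client package wraps the Apify API. A sketch, assuming the API token is supplied via an APIFY_TOKEN environment variable and using input values that mirror the CLI example on this page:

```python
import os

# Run input mirroring the CLI example; the values are illustrative.
run_input = {
    "startUrls": [{"url": "https://crawlee.dev"}],
    "maxCrawlingDepth": 1,
    "requestTimeout": 10,
    "linkSelector": "a[href]",
    "soupFeatures": "html.parser",
}


def run_scraper(run_input):
    # Requires `pip install apify-client` and a valid token; not executed here.
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    # Start the Actor and wait for the run to finish.
    run = client.actor("apify/beautifulsoup-scraper").call(run_input=run_input)
    # Fetch the scraped records from the run's default dataset.
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())
```

Each item returned corresponds to one dictionary produced by your pageFunction.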
echo '{
    "startUrls": [
        {
            "url": "https://crawlee.dev"
        }
    ],
    "maxCrawlingDepth": 1,
    "requestTimeout": 10,
    "linkSelector": "a[href]",
    "linkPatterns": [
        ".*crawlee\\.dev.*"
    ],
    "pageFunction": "from typing import Any\n\n# See the context section in readme to find out what fields you can access\n# https://apify.com/vdusek/beautifulsoup-scraper#context\ndef page_function(context: Context) -> Any:\n    url = context.request['\''url'\'']\n    title = context.soup.title.string if context.soup.title else None\n    return {'\''url'\'': url, '\''title'\'': title}\n",
    "soupFeatures": "html.parser",
    "proxyConfiguration": {
        "useApifyProxy": true
    }
}' | apify call apify/beautifulsoup-scraper --silent --output-dataset
BeautifulSoup Scraper API through CLI
The Apify CLI is the official tool that allows you to use BeautifulSoup Scraper locally, providing convenience functions and automatic retries on errors.
Install the Apify CLI
npm i -g apify-cli
apify login
Other API clients include: