Website Content Crawler

apify/website-content-crawler

Crawl websites and extract text content to feed AI models, LLM applications, vector databases, or RAG pipelines. The Actor supports rich formatting using Markdown, cleans the HTML, downloads files, and integrates well with 🦜🔗 LangChain, LlamaIndex, and the wider LLM ecosystem.


You can access Website Content Crawler programmatically from your own Python applications using the Apify API. To use the API, you'll need an Apify account and your API token, which you can find under Integrations in the Apify Console.

from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "startUrls": [{"url": "https://docs.apify.com/academy/web-scraping-for-beginners"}],
    "useSitemaps": True,
    "crawlerType": "playwright:adaptive",
    "includeUrlGlobs": [],
    "excludeUrlGlobs": [],
    "initialCookies": [],
    "proxyConfiguration": {"useApifyProxy": True},
    "keepElementsCssSelector": "",
    "removeElementsCssSelector": """nav, footer, script, style, noscript, svg, img[src^='data:'],
[role="alert"],
[role="banner"],
[role="dialog"],
[role="alertdialog"],
[role="region"][aria-label*="skip" i],
[aria-modal="true"]""",
    "clickElementsCssSelector": '[aria-expanded="false"]',
}

# Run the Actor and wait for it to finish
run = client.actor("apify/website-content-crawler").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start
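Because the Actor is built to feed LLM frameworks such as LangChain, the crawled pages can also be loaded directly as LangChain documents. The snippet below is a minimal sketch, assuming the langchain-community Apify integration (ApifyWrapper) is installed and the APIFY_API_TOKEN environment variable is set; the item fields "text" and "url" correspond to the Actor's default dataset output.

import os
from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

# ApifyWrapper reads the token from the APIFY_API_TOKEN environment variable.
os.environ["APIFY_API_TOKEN"] = "<YOUR_API_TOKEN>"

apify = ApifyWrapper()

# Run Website Content Crawler and map each dataset item to a LangChain Document.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://docs.apify.com/academy/web-scraping-for-beginners"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text") or "",
        metadata={"source": item.get("url")},
    ),
)

# Load the documents, ready for chunking, embedding, or a RAG pipeline.
docs = loader.load()
print(f"Loaded {len(docs)} documents")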

Website Content Crawler API in Python

The Apify API client for Python is the official library for using the Website Content Crawler API from Python. It provides convenience functions and automatic retries on errors.

Install the apify-client

pip install apify-client
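The client retries failed API calls with exponential backoff out of the box. If you need different behavior, the ApifyClient constructor accepts tuning parameters; the snippet below is an illustrative sketch, and the parameter names (max_retries, min_delay_between_retries_millis, timeout_secs) are taken from the apify-client documentation, so verify them against your installed version.

from apify_client import ApifyClient

# Tune the client's built-in retry and timeout behavior (illustrative values).
client = ApifyClient(
    "<YOUR_API_TOKEN>",
    max_retries=10,                         # retry a failed request up to 10 times
    min_delay_between_retries_millis=1000,  # start with a 1-second backoff between retries
    timeout_secs=300,                       # give up on a single request after 5 minutes
)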

Apify also maintains API clients for other languages, most notably the apify-client package for JavaScript/Node.js.

Developer
Maintained by Apify
Actor metrics
  • 3.8k monthly users
  • 635 stars
  • 100% of runs succeeded
  • 2.6-day response time
  • Created in Mar 2023
  • Modified 7 days ago