AI Web Scraper - Powered by Crawl4AI

raizen/ai-web-scraper

Try for free · Pay $25.00 for 1,000 Results

A blazing-fast AI web scraper powered by Crawl4AI. Perfect for LLMs, AI agents, AI automation, model training, sentiment analysis, and content generation. Supports deep crawling, multiple extraction strategies, and flexible output (Markdown/JSON). Seamlessly integrates with Make.com, n8n, and Zapier.

Developer
Maintained by Community

Actor Metrics

  • 3 monthly users

  • No reviews yet

  • No bookmarks yet

  • Created in Mar 2025

  • Modified 2 hours ago

You can access the AI Web Scraper - Powered by Crawl4AI programmatically from your own applications using the Apify API. To use the Apify API, you'll need an Apify account and your API token, found under Integrations settings in Apify Console.

from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "browserConfig": {
        "browser_type": "chromium",
        "headless": True,
        "verbose_logging": False,
        "ignore_https_errors": True,
        "user_agent": "random",
        "proxy": "",
        "viewport_width": 1280,
        "viewport_height": 720,
        "accept_downloads": False,
        "extra_headers": {},
    },
    "crawlerConfig": {
        "cache_mode": "BYPASS",
        "page_timeout": 20000,
        "simulate_user": True,
        "remove_overlay_elements": True,
        "delay_before_return_html": 1,
        "wait_for": "1",
        "screenshot": False,
        "pdf": False,
        "enable_rate_limiting": False,
        "semaphore_count": 10,
        "memory_threshold_percent": 70,
        "word_count_threshold": 200,
        "css_selector": "",
        "excluded_tags": [],
        "excluded_selector": "",
        "only_text": False,
        "prettify": False,
        "keep_data_attributes": False,
        "remove_forms": False,
        "bypass_cache": False,
        "disable_cache": False,
        "no_cache_read": False,
        "no_cache_write": False,
        "wait_until": "domcontentloaded",
        "wait_for_images": False,
        "check_robots_txt": False,
        "mean_delay": 0.1,
        "max_range": 0.3,
        "js_code": "",
        "js_only": False,
        "ignore_body_visibility": True,
        "scan_full_page": False,
        "scroll_delay": 0.2,
        "process_iframes": False,
        "override_navigator": False,
        "magic": False,
        "adjust_viewport_to_content": False,
        "screenshot_wait_for": 0,
        "screenshot_height_threshold": 20000,
        "image_description_min_word_threshold": 50,
        "image_score_threshold": 3,
        "exclude_external_images": False,
        "exclude_social_media_domains": [],
        "exclude_external_links": False,
        "exclude_social_media_links": False,
        "exclude_domains": [],
        "verbose": True,
        "log_console": False,
        "stream": False,
    },
    "deepCrawlConfig": {
        "max_pages": 100,
        "max_depth": 3,
        "include_external": False,
        "score_threshold": 0.5,
        "filter_chain": [],
        "url_scorer": {},
    },
    "markdownConfig": {
        "ignore_links": False,
        "ignore_images": False,
        "escape_html": True,
        "skip_internal_links": False,
        "include_sup_sub": False,
        "citations": False,
        "body_width": 80,
        "fit_markdown": False,
    },
    "contentFilterConfig": {
        "type": "pruning",
        "user_query": "",
        "threshold": 0.45,
        "min_word_threshold": 5,
        "bm25_threshold": 1.2,
        "apply_llm_filter": False,
        "semantic_filter": "",
        "word_count_threshold": 10,
        "sim_threshold": 0.3,
        "max_dist": 0.2,
        "top_k": 3,
        "linkage_method": "ward",
    },
    "userAgentConfig": {
        "user_agent_mode": "random",
        "device_type": "desktop",
        "browser_type": "chrome",
        "num_browsers": 3,
    },
    "llmConfig": {
        "provider": "groq/deepseek-r1-distill-llama-70b",
        "api_token": "",
        "instruction": "Summarize content in clean markdown.",
        "base_url": "",
        "chunk_token_threshold": 2048,
        "apply_chunking": True,
        "input_format": "markdown",
        "temperature": 0.7,
        "max_tokens": 4096,
    },
    "extractionSchema": {
        "name": "Custom Extraction",
        "baseSelector": "div.article",
        "fields": [
            {
                "name": "title",
                "selector": "h1",
                "type": "text",
            },
            {
                "name": "link",
                "selector": "a",
                "type": "attribute",
                "attribute": "href",
            },
        ],
    },
}

# Run the Actor and wait for it to finish
run = client.actor("raizen/ai-web-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start
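Instead of only printing the dataset items, you will often want to persist them locally. A minimal sketch of that step, assuming you pass it the iterator returned by `iterate_items()` (the `save_items` helper below is illustrative, not part of the Apify client):

```python
import json

def save_items(items, path="results.json"):
    """Collect dataset items into a list and write them as a JSON array.

    Returns the number of items written.
    """
    data = list(items)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
    return len(data)
```

You would then replace the `for item in ...: print(item)` loop with `save_items(client.dataset(run["defaultDatasetId"]).iterate_items())`.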

AI Web Scraper - Crawl4AI for LLMs, AI Agents & Automation API in Python

The Apify API client for Python is the official library for using the AI Web Scraper - Powered by Crawl4AI API from Python. It provides convenience functions and automatic retries on errors.
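The client's automatic retries follow the usual exponential-backoff pattern: wait, retry, and double the delay after each failure. A simplified standalone sketch of that idea (my own illustration, not the client's actual implementation):

```python
import time

def call_with_retries(fn, max_retries=5, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on any exception.

    Waits base_delay, then 2x, 4x, ... between attempts;
    re-raises the last exception after max_retries attempts.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

The real client applies this kind of logic transparently to API calls, so you normally do not need to write it yourself.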

Install the apify-client

pip install apify-client

Other API clients include: