
ScrapeUnblocker – Bypass Anti-Bot Systems & Extract HTML Effortlessly
ScrapeUnblocker is a powerful Apify Actor designed to fetch the full HTML source of virtually any web page — even those protected by advanced anti-bot systems. Just provide a URL and ScrapeUnblocker returns clean, readable HTML, making it ideal for any crawler, scraper, or automation pipeline.
Whether you're targeting classifieds, marketplaces, or social networks, ScrapeUnblocker helps your requests succeed where normal HTTP libraries fail.
🛠️ Features
- ✅ Universal page retriever — input any URL, get the raw HTML
- ✅ Built-in support for modern bot protection systems
- ✅ Minimal setup — input only requires the target URL
- ✅ Premium rotating proxies
- ✅ Returns raw HTML as plain text (not JSON-wrapped)
- ✅ Compatible with Apify API and CLI
🔐 Bypasses Anti-Bot Systems Like
ScrapeUnblocker is designed to handle common and complex anti-bot services, including:
- Cloudflare (JavaScript challenges, CAPTCHA)
- PerimeterX (including behavioral fingerprinting)
- Akamai Bot Manager
- Amazon Bot Detection
- Custom JavaScript or cookie-based protections
- DataDome
⚙️ How It Works
- `Actor.get_input()` reads the target URL
- The URL is passed to a backend API that performs browser-like scraping
- The response is returned as raw HTML
- `Actor.set_value('OUTPUT', html)` saves the result directly
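In code, the flow looks roughly like the sketch below, based on the Python Apify SDK. The scraping backend endpoint URL is a placeholder, not the actual service used by this Actor.

```python
# Rough sketch of the Actor's flow with the Python Apify SDK.
# The scraping backend URL below is a placeholder, not the real endpoint.
import httpx
from apify import Actor

async def main() -> None:
    async with Actor:
        actor_input = await Actor.get_input() or {}
        target_url = actor_input.get('url')
        if not target_url:
            raise ValueError('The "url" input field is required.')

        # A backend API performs the browser-like scraping and returns raw HTML.
        async with httpx.AsyncClient(timeout=120) as client:
            response = await client.post(
                'https://scraping-backend.example.com/render',  # placeholder
                json={'url': target_url},
            )
            response.raise_for_status()
            html = response.text

        # Save the raw HTML directly as the run's OUTPUT record.
        await Actor.set_value('OUTPUT', html, content_type='text/html')
```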
📥 Input
{"url": "https://bot.sannysoft.com/"}
📤 Output
The full HTML of the requested page is returned directly (not wrapped in a JSON object).
Example output:
```html
<!DOCTYPE html><html lang="en"><head>...</head><body>...</body></html>
```
🔁 Use Cases
- Scraping protected product or listing pages
- Feeding raw HTML into BeautifulSoup, Cheerio, or LLMs (see the parsing sketch after this list)
- Monitoring changes on pages protected by JavaScript challenges
- Automating access to sites that typically block bots
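For the parsing use case, here is a minimal, self-contained sketch that feeds the returned HTML into BeautifulSoup (assuming `beautifulsoup4` and `requests` are installed; the API token is a placeholder, as in the examples below):

```python
# Fetch a protected page through ScrapeUnblocker and parse it with BeautifulSoup.
import requests
from bs4 import BeautifulSoup

API_URL = (
    'https://api.apify.com/v2/acts/scrapeunblocker~scrapeunblocker/'
    'run-sync?token=apify_api_token'  # replace with your Apify API token
)

response = requests.post(API_URL, json={'url': 'https://bot.sannysoft.com/'})
response.raise_for_status()

# The response body is the raw HTML itself, so it can go straight into a parser.
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.title.string if soup.title else 'No <title> found')
print(f'{len(soup.find_all("a"))} links found on the page')
```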
🚀 How to Use
✅ Python (with `requests`)
```python
import requests

API_TOKEN = 'apify_api_token'
ACTOR_ID = 'scrapeunblocker~scrapeunblocker'
API_URL = f'https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync?token={API_TOKEN}'

def get_page_source(target_url):
    payload = {"url": target_url}
    response = requests.post(API_URL, json=payload)
    response.raise_for_status()
    return response.text  # Raw HTML

# Example usage
html = get_page_source("https://bot.sannysoft.com/")
print(html[:500])  # Print first 500 characters
```
✅ cURL Example
```bash
curl -X POST "https://api.apify.com/v2/acts/scrapeunblocker~scrapeunblocker/run-sync?token=apify_api_token" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://bot.sannysoft.com/"}'
```
The response will contain the complete HTML of the target page.
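If you prefer the official Apify client library over raw HTTP calls, the same run can also be triggered with `apify-client` for Python. This is a sketch; the token is a placeholder:

```python
# pip install apify-client
from apify_client import ApifyClient

client = ApifyClient('apify_api_token')  # replace with your Apify API token

# Start the Actor run and wait for it to finish.
run = client.actor('scrapeunblocker~scrapeunblocker').call(
    run_input={'url': 'https://bot.sannysoft.com/'},
)

# The Actor saves the raw HTML as the OUTPUT record of the run's key-value store.
record = client.key_value_store(run['defaultKeyValueStoreId']).get_record('OUTPUT')
print(record['value'][:500])  # first 500 characters of the HTML
```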
📚 Resources
🧠 Pro Tip
Need to scrape thousands of URLs? Integrate this Actor with Apify’s request queues and run it in parallel at scale!
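For smaller batches, a plain thread pool around the `get_page_source` helper from the Python example above already goes a long way. The snippet below is only a sketch, and the URL list is illustrative:

```python
# Fetch several URLs in parallel by reusing get_page_source() from the example above.
from concurrent.futures import ThreadPoolExecutor

urls = [
    'https://bot.sannysoft.com/',
    'https://example.com/',  # illustrative placeholder
]

with ThreadPoolExecutor(max_workers=5) as pool:
    pages = dict(zip(urls, pool.map(get_page_source, urls)))

for url, html in pages.items():
    print(url, len(html), 'bytes of HTML')
```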