undrtkr984/web-scraper-task
Developed by Matt
0.0 (0)
1 · 124 · 4
Last modified 3 years ago
Automation
josejet/dynamic-web-scraper
Dynamic Web Scraper is an Apify Actor that gathers information online by simulating user browsing behavior. It reduces scraping time and the number of pages scraped by using a model (ChatGPT) to make decisions about browser navigation and to evaluate results.
Pepa J
215
curious_coder/instant-web-scraper
Scrape data from any public or private website by providing just a URL and, optionally, cookies and proxy information. This scraper is similar to Instant Data Scraper, but it runs in the cloud and can be used as an API too (see the sketch below this card).
Curious Coder
1.7K
3.6
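Since the card above says the Actor can be used as an API, here is a rough sketch of calling it with a single HTTP request against Apify's generic run-sync-get-dataset-items endpoint. The input field names (url, and the optional cookies and proxy) are assumptions based on the description, not the Actor's documented input schema.

```typescript
// Hedged sketch: run the Actor synchronously and get its dataset items back.
const response = await fetch(
    'https://api.apify.com/v2/acts/curious_coder~instant-web-scraper/run-sync-get-dataset-items'
    + `?token=${process.env.APIFY_TOKEN}`,
    {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            url: 'https://example.com/listing',  // assumed field name
            // cookies: [...],                   // optional, per the description
            // proxy: { useApifyProxy: true },   // optional, per the description
        }),
    },
);
const items = await response.json();
console.log(items);
```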
apify/web-scraper
Crawls arbitrary websites using a web browser and extracts structured data from web pages using a provided JavaScript function. The Actor supports both recursive crawling and lists of URLs, and automatically manages concurrency for maximum performance.
Apify
98K
4.5
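The description above covers the core workflow: start URLs plus a JavaScript page function, with optional recursive crawling. A minimal sketch of running it from your own code with the apify-client package follows; the input fields shown (startUrls, linkSelector, globs, pageFunction) match the Actor's commonly documented input, but check its input schema before relying on the exact names.

```typescript
import { ApifyClient } from 'apify-client';

// Sketch only: run apify/web-scraper and read its results.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const run = await client.actor('apify/web-scraper').call({
    startUrls: [{ url: 'https://example.com' }],
    linkSelector: 'a[href]',                      // follow links found on each page
    globs: [{ glob: 'https://example.com/**' }],  // keep the recursive crawl on one site
    // The page function runs in the browser for every loaded page,
    // and whatever it returns becomes one dataset item.
    pageFunction: `async function pageFunction(context) {
        const { request, jQuery: $ } = context;
        return { url: request.url, title: $('title').text() };
    }`,
});

// Structured results land in the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```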
zeeb0t/web-scraping-api---scrape-any-website
Web Scraping API that quickly and reliably scrapes any website—no selectors required. Premium proxies, CAPTCHA solving, JavaScript rendering, and automated structured data extraction are all included. It’s just $2 per 1,000 web pages scraped, with no minimum spend.
Anthony Ziebell
1.4K
5.0
lukaskrivka/website-checker
Check any website you plan to scrape for expected Compute unit consumption, anti-scraping software, and reliability.
Lukáš Křivka
839
eloquent_mountain/ai-web-scraper-extract-data-with-ease
AI Web Scraper enables scraping for everyone, including non-techies! It uses Google's Gemini LLM to scrape websites with natural language commands. It extracts data dynamically with no selector input needed, handles dynamic content and cookie consent, avoids bot detection, and outputs JSON or other formats.
Paco
651
2.0
mnmkng/abort-actor-runs
This Actor aborts all of a user's running Actors with a single click or a single API call. It scans all of the user's Actors and aborts every run in the RUNNING or READY state. It is tuned to minimize compute unit usage at the expense of speed, scanning the user's Actors sequentially to prevent API abuse (see the sketch below this card).
Ondra Urban
48
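The description outlines a simple algorithm: list the account's active runs and abort them one by one. Below is a rough reimplementation of that idea with apify-client, not the Actor's actual source; the status filter on the runs listing is assumed to be available.

```typescript
import { ApifyClient } from 'apify-client';

// Sketch only: fetch runs that are still RUNNING or READY, then abort each one.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const { items: running } = await client.runs().list({ status: 'RUNNING' });
const { items: ready } = await client.runs().list({ status: 'READY' });

for (const run of [...running, ...ready]) {
    // Sequential aborts, mirroring the "prevent API abuse" choice above.
    await client.run(run.id).abort();
}
```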
tri_angle/task-memory-orchestrator
Tri⟁angle
valek.josef/forward-dataset-to-actor-or-task
Forwards the contents of a specified dataset to a specified field on the input of another Actor or task (see the sketch below this card).
Josef Válek
12
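The forwarding described here boils down to two API calls: read the source dataset, then start the target Actor with the items placed on one input field. A minimal sketch with apify-client, using hypothetical IDs and a hypothetical target field name (records):

```typescript
import { ApifyClient } from 'apify-client';

// Sketch only: the dataset ID, target Actor ID, and "records" field are placeholders.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const { items } = await client.dataset('<source-dataset-id>').listItems();
await client.actor('<target-actor-id>').call({ records: items });
// For a task instead of an Actor, client.task('<target-task-id>').call(...) works the same way.
```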
rigelbytes/webcrawler
This web crawler is designed to give users complete flexibility by allowing them to use their **own proxies**. The scraper collects all pages from the website and extracts the **MetaData**, **Title**, and **Content** of each page in Markdown.
Rigel Bytes
43