Designed to be run from an ACTOR.RUN.SUCCEEDED webhook, this actor downloads a task run's default dataset and saves it to an S3 bucket.
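The flow can be sketched as follows. This is a minimal illustration in Python using the apify-client and boto3 libraries, not the actor's actual source: function names, parameters, and the key-naming logic are assumptions. The ACTOR.RUN.SUCCEEDED webhook payload carries the run object, whose defaultDatasetId would identify the dataset to download.

```python
# Illustrative sketch only: the real actor's implementation and helper
# names are assumptions, not its actual source.
import os


def build_object_key(key: str, item_format: str) -> str:
    """Append the format as a file extension unless the key already has one."""
    return key if key.endswith(f".{item_format}") else f"{key}.{item_format}"


def save_dataset_to_s3(
    dataset_id: str,
    bucket: str,
    key: str,
    item_format: str = "json",
    region: str = "us-east-1",
) -> str:
    """Download an Apify dataset and upload it to an S3 bucket.

    Returns the S3 object key the data was saved under.
    """
    # Imported lazily so the module stays importable without these deps.
    import boto3
    from apify_client import ApifyClient

    client = ApifyClient(token=os.environ["APIFY_TOKEN"])
    # Download the run's dataset in the requested format as raw bytes.
    data = client.dataset(dataset_id).get_items_as_bytes(item_format=item_format)

    object_key = build_object_key(key, item_format)
    boto3.client("s3", region_name=region).put_object(
        Bucket=bucket, Key=object_key, Body=data
    )
    return object_key
```

The upload reuses the downloaded bytes as-is, so whatever format the Apify API returns (JSON, CSV, etc.) lands in the bucket unchanged.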
Enter the access key ID for the AWS user.
Enter the secret access key for the AWS user.
Enter the AWS region your S3 bucket is located in.
Enter the name of the S3 bucket to use.
The S3 object key (filename) under which the dataset will be saved.
The data format to download the dataset in.
Allowed values: "json", "jsonl", "xml", "html", "csv", "xlsx", "rss"
If enabled, SSL certificate errors will be ignored.
An object whose properties will be enumerated and added as query parameters to the dataset get-items API request. See https://apify.com/docs/api/v2#/reference/datasets/item-collection/get-items.
If enabled, debug messages will be included in the log. Use context.log.debug('message') to log your own debug messages.
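Put together, an input for this actor might look like the following. The property names are illustrative assumptions based on the fields above, not the actor's actual input schema; the "clean" option is one of the query parameters accepted by the get-items API.

```json
{
  "accessKeyId": "AKIA...",
  "secretAccessKey": "...",
  "region": "eu-west-1",
  "bucket": "my-dataset-exports",
  "key": "latest-run",
  "format": "csv",
  "ignoreSslErrors": false,
  "datasetOptions": { "clean": true },
  "debugLog": false
}
```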