Apify documentation

This is the detailed documentation for the Apify web scraping and automation platform.

Anything missing? Please let us know at support@apify.com.

Table of contents

  • Scraping - Scrape and crawl websites using a few simple lines of JavaScript.
  • Actor - Runs arbitrary web scraping or automation tasks in the Apify cloud.
  • Tasks - Store one or more reusable configurations of an Actor.
  • Scheduler - Executes crawler or actor jobs at specific times.
  • Storage - Key-value store, dataset and request queue that enable storage of actor inputs and results.
  • Proxy - Provides access to proxy services that can be used in crawlers, actors or any other application that supports HTTP proxies.
  • Webhooks - Provides an easy and reliable way to configure the Apify platform to carry out an action when a certain system event occurs.
  • API - REST API that enables integration with external applications.
  • SDK - Open-source libraries that simplify development of local web scraping and automation projects, let you crawl websites with headless Chrome and Puppeteer, speed up development of Apify actors and integrate with the Apify API (see the minimal example after this list).
  • CLI - Command-line interface (CLI) to help you create, develop, run and deploy Apify actors from your local computer.
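
To give a rough sense of how these pieces fit together, below is a minimal sketch of an actor written with the JavaScript SDK: it reads its input, enqueues a start URL into a request queue, crawls the pages with a Cheerio-based crawler and pushes one record per page to the default dataset. The startUrl input field and the title selector are illustrative assumptions, not part of any particular actor.

    const Apify = require('apify');

    Apify.main(async () => {
        // Read the actor input from the default key-value store.
        // The startUrl field is an assumed, example-only input property.
        const input = await Apify.getInput();

        // Persist the URLs to crawl in the request queue storage.
        const requestQueue = await Apify.openRequestQueue();
        await requestQueue.addRequest({ url: input.startUrl });

        // Crawl each page and push one record per page to the default dataset.
        const crawler = new Apify.CheerioCrawler({
            requestQueue,
            handlePageFunction: async ({ request, $ }) => {
                await Apify.pushData({
                    url: request.url,
                    title: $('title').text(),
                });
            },
        });

        await crawler.run();
    });

A project like this can be run locally and deployed to the Apify cloud with the CLI (for example, apify run and apify push), and the resulting actor can then be started via the REST API, from a task, or on a schedule.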