Ultimate Keyword Research Tool
Elevate your keyword research to new heights with our comprehensive keyword research tool that fetches the latest trending keywords from Google, Bing, DuckDuckGo, Yahoo, and Ecosia simultaneously. With this tool you can get data about CPC, Search Volume and Ranking Difficulty.
Pricing
$9.99/month + usage
Rating: 5.0 (2 reviews)
Developer: Eneiro Matos
Actor stats: 24 bookmarked · 377 total users · 21 monthly active users · 31 days issues response time · last modified 5 days ago
TypeScript Crawlee & CheerioCrawler Actor Template
This template was built with Crawlee to scrape data from a website using Cheerio, wrapped in CheerioCrawler.
Quick Start
Once you've installed the dependencies, start the Actor:
$apify run
Once your Actor is ready, you can push it to the Apify Console:
apify login # first, you need to log in if you haven't already done so
apify push
Project Structure
.actor/
├── actor.json            # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json   # Structure and representation of data produced by the Actor
├── input_schema.json     # Input validation & Console form definition
└── output_schema.json    # Specifies where an Actor stores its output
src/
└── main.ts               # Actor entry point and orchestrator
storage/                  # Local storage (mirrors Cloud during development)
├── datasets/             # Output items (JSON objects)
├── key_value_stores/     # Files, config, INPUT
└── request_queues/       # Pending crawl requests
Dockerfile                # Container image definition
For more information, see the Actor definition documentation.
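As an illustration of the config above, a minimal `.actor/actor.json` might look like the sketch below. The field values (name, title, version) are placeholders, not the template's actual settings:

```json
{
    "actorSpecification": 1,
    "name": "my-cheerio-actor",
    "title": "My CheerioCrawler Actor",
    "version": "0.1",
    "buildTag": "latest",
    "dockerfile": "./Dockerfile",
    "input": "./input_schema.json",
    "storages": {
        "dataset": "./dataset_schema.json"
    }
}
```

The `input` and `storages.dataset` paths are how the Actor config references the input and dataset schema files listed in the tree above.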
How it works
This code is a TypeScript script that uses Cheerio to scrape data from a website. It then stores the website titles in a dataset.
- The crawler starts with the URLs provided in the startUrls field defined by the input schema. The number of scraped pages is limited by the maxPagesPerCrawl field from the input schema.
- The crawler uses requestHandler for each URL to extract the data from the page with the Cheerio library and to save the title and URL of each page to the dataset. It also logs each result as it is saved.
What's included
- Apify SDK - toolkit for building Actors
- Crawlee - web scraping and browser automation library
- Input schema - define and easily validate a schema for your Actor's input
- Dataset - store structured data where each object stored has the same attributes
- Cheerio - a fast, flexible & elegant library for parsing and manipulating HTML and XML
- Proxy configuration - rotate IP addresses to prevent blocking
Resources
- Quick Start guide for building your first Actor
- Video tutorial on building a scraper using CheerioCrawler
- Written tutorial on building a scraper using CheerioCrawler
- Web scraping with Cheerio in 2023
- How to scrape a dynamic page using Cheerio
- Integration with Zapier, Make, Google Drive and others
- Video guide on getting data using Apify API
Creating Actors with templates
Getting started
For complete information see this article. To run the Actor use the following command:
$apify run
Deploy to Apify
Connect Git repository to Apify
If you've created a Git repository for the project, you can easily connect to Apify:
- Go to Actor creation page
- Click the Link Git Repository button
Push project on your local machine to Apify
You can also deploy the project on your local machine to Apify without the need for the Git repository.
- Log in to Apify. You will need to provide your Apify API Token to complete this action.
$apify login
- Deploy your Actor. This command will deploy and build the Actor on the Apify Platform. You can find your newly created Actor under Actors -> My Actors.
$apify push
Documentation reference
To learn more about Apify and Actors, take a look at the following resources: