Ultimate Keyword Research Tool

Elevate your keyword research with a comprehensive tool that fetches the latest trending keywords from Google, Bing, DuckDuckGo, Yahoo, and Ecosia simultaneously. For each keyword, the tool also returns CPC, search volume, and ranking-difficulty data.

Pricing: $9.99/month + usage

Rating: 5.0 (2 reviews)

Developer: Eneiro Matos (Maintained by Community)

Actor stats

Bookmarked: 24
Total users: 377
Monthly active users: 21
Issues response: 31 days
Last modified: 5 days ago

TypeScript Crawlee & CheerioCrawler Actor Template

This template was built with Crawlee to scrape data from a website using Cheerio, wrapped in CheerioCrawler.

Quick Start

Once you've installed the dependencies, start the Actor:

$ apify run

Once your Actor is ready, you can push it to the Apify Console:

apify login # first, you need to log in if you haven't already done so
apify push

Project Structure

.actor/
├── actor.json # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json # Structure and representation of data produced by an Actor
├── input_schema.json # Input validation & Console form definition
└── output_schema.json # Specifies where an Actor stores its output
src/
└── main.ts # Actor entry point and orchestrator
storage/ # Local storage (mirrors Cloud during development)
├── datasets/ # Output items (JSON objects)
├── key_value_stores/ # Files, config, INPUT
└── request_queues/ # Pending crawl requests
Dockerfile # Container image definition

For more information, see the Actor definition documentation.
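As a rough sketch, an input schema covering the two fields this template reads (startUrls and maxPagesPerCrawl; the titles, descriptions, and defaults here are illustrative, not the template's exact values) might look like:

```json
{
    "title": "CheerioCrawler Template Input",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "description": "URLs where the crawler begins.",
            "editor": "requestListSources",
            "prefill": [{ "url": "https://apify.com" }]
        },
        "maxPagesPerCrawl": {
            "title": "Max pages per crawl",
            "type": "integer",
            "description": "Upper bound on the number of pages the crawler opens.",
            "default": 50
        }
    },
    "required": ["startUrls"]
}
```

The Apify Console renders this schema as a form and validates the Actor's input against it before each run.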

How it works

This code is a TypeScript script that uses Cheerio to scrape data from a website. It then stores the website titles in a dataset.

  • The crawler starts from the URLs provided in the startUrls input field defined by the input schema. The number of scraped pages is limited by the maxPagesPerCrawl field from the input schema.
  • For each URL, the crawler uses requestHandler to extract data from the page with the Cheerio library and to save the title and URL of each page to the dataset. It also logs each result as it is saved.
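The steps above can be sketched as a condensed main.ts. The API calls (Actor.init, Actor.getInput, Actor.pushData, CheerioCrawler) follow the Apify SDK and Crawlee, but the exact template code may differ; the default values here are placeholders:

```typescript
import { Actor } from 'apify';
import { CheerioCrawler } from 'crawlee';

await Actor.init();

// Read the input defined by input_schema.json
interface Input {
    startUrls: { url: string }[];
    maxPagesPerCrawl: number;
}
const input = (await Actor.getInput<Input>()) ?? ({} as Input);
const { startUrls = [{ url: 'https://apify.com' }], maxPagesPerCrawl = 50 } = input;

const crawler = new CheerioCrawler({
    // The maxPagesPerCrawl input field maps to Crawlee's maxRequestsPerCrawl option
    maxRequestsPerCrawl: maxPagesPerCrawl,
    async requestHandler({ request, $, log }) {
        // Extract the page title with Cheerio and log the result being saved
        const title = $('title').text();
        log.info('Scraped page', { url: request.url, title });
        // Save the title and URL of the page to the default dataset
        await Actor.pushData({ url: request.url, title });
    },
});

await crawler.run(startUrls);
await Actor.exit();
```

Each object pushed with Actor.pushData becomes one item in the run's default dataset.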

What's included

  • Apify SDK - toolkit for building Actors
  • Crawlee - web scraping and browser automation library
  • Input schema - define and easily validate a schema for your Actor's input
  • Dataset - store structured data where each object stored has the same attributes
  • Cheerio - a fast, flexible & elegant library for parsing and manipulating HTML and XML
  • Proxy configuration - rotate IP addresses to prevent blocking
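For the proxy configuration item above, a typical setup (assuming an Apify account with proxy access; the proxy group named here is purely illustrative) passes a proxy configuration into the crawler so requests rotate IP addresses:

```typescript
import { Actor } from 'apify';
import { CheerioCrawler } from 'crawlee';

await Actor.init();

// Rotate IP addresses through Apify Proxy to reduce blocking
const proxyConfiguration = await Actor.createProxyConfiguration({
    groups: ['RESIDENTIAL'], // hypothetical group; omit to use your account's defaults
});

const crawler = new CheerioCrawler({
    proxyConfiguration,
    async requestHandler({ request, $ }) {
        await Actor.pushData({ url: request.url, title: $('title').text() });
    },
});
```

Crawlee then routes each request through a proxy from the configured pool and retires proxies that start getting blocked.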

Resources

Creating Actors with templates

Getting started

For complete information see this article. To run the Actor use the following command:

$ apify run

Deploy to Apify

Connect Git repository to Apify

If you've created a Git repository for the project, you can easily connect to Apify:

  1. Go to the Actor creation page.
  2. Click the Link Git Repository button.

Push project on your local machine to Apify

You can also deploy the project from your local machine to Apify without needing a Git repository.

  1. Log in to Apify. You will need to provide your Apify API Token to complete this action.

    $ apify login
  2. Deploy your Actor. This command will deploy and build the Actor on the Apify Platform. You can find your newly created Actor under Actors -> My Actors.

    $ apify push

Documentation reference

To learn more about Apify and Actors, take a look at the following resources: