Monitoring
Pricing
Pay per usage
This actor monitors your actors' statuses, validates their datasets' data, and displays useful information in an interactive dashboard. And if something happens, you'll get notified via email or Slack.
Select the monitoring mode. The configuration run modes are Create, Update, and Delete. When 'Create configuration' is chosen, the monitoring configuration is created and all connected tools are turned on. Choose 'Update configuration' when the configuration already exists and you only want to change some of its details. 'Delete configuration' turns the monitoring off and removes all tools and storages connected to the monitoring configuration.
Name of your monitoring suite. It will be used in notifications and to identify related targets in the Apify dashboard.
Only one type of target can be monitored by a single monitoring suite. If you want to watch more types, create more monitoring suites.
Regular expressions that will be matched against the selected actors / tasks or datasets under your Apify account. All matching targets will then be monitored by this monitoring suite. This is typically also the fastest way to select a single target: just type its full name. For the dataset target type, datasets are automatically grouped by these patterns when the dashboard statistics are computed; this can be overridden by setting the Group targets by name patterns option in the Statistics dashboard section.
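A minimal sketch of how regex-based target selection works; the account names below are hypothetical, and the actor's actual matching internals are not documented here:

```python
import re

# Hypothetical actor/task names in an Apify account (illustration only).
targets = ["web-scraper-daily", "web-scraper-weekly", "email-sender"]

# Target name patterns are regular expressions; typing a full name
# is the fastest way to select exactly one target.
patterns = [r"web-scraper-.*"]

# Every target matched by any pattern is monitored by the suite.
matched = [t for t in targets if any(re.search(p, t) for p in patterns)]
print(matched)  # ['web-scraper-daily', 'web-scraper-weekly']
```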
If for whatever reason the Target name pattern option does not suit you, targets can also be specified by providing their IDs, as found in your Apify dashboard.
This is the simplest checker we have. As the name suggests, it works only for actors and tasks: it checks for runs that FAILED, TIMED-OUT, or ABORTED, so you'll never miss a problem again.
When multiple notifications would arrive within a 5-minute window, they are grouped into a single notification report instead of being sent separately. This is useful when several actor/task runs finish at once or close to each other.
Collects statistics and produces a link to a dashboard with visualisations.
Choose how often the dashboard statistics should update. There are two basic options: updating after each monitored run finishes, or on a pre-set schedule. To update with every run, type: per run, each run, or every run. To schedule updates, use plain English sentences such as every day at 13:30, every Monday at noon, or at 8pm every 1st day of the month. For more examples, see: natural cron.
Regular expressions or name patterns that will be used to group your selected targets by name. Named datasets are grouped by targetPatternList by default; setting this option overrides that. All targets matched by a pattern will be displayed as one data line in the dashboard charts. For example, if you use the same group of scraping actors for different countries, such as actor-1-cz, actor-2-cz and actor-1-us, actor-2-us, your patterns can be cz and us, and your dashboard will display 2 data lines, one for each country.
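The grouping described above can be sketched like this; the dataset names are the hypothetical ones from the example, and the exact matching logic inside the actor may differ:

```python
import re

# Hypothetical dataset names from the example above (illustration only).
datasets = ["actor-1-cz", "actor-2-cz", "actor-1-us", "actor-2-us"]

# Each pattern becomes one data line in the dashboard charts.
group_patterns = ["cz", "us"]

groups = {p: [d for d in datasets if re.search(p, d)] for p in group_patterns}
print(groups)
# {'cz': ['actor-1-cz', 'actor-2-cz'], 'us': ['actor-1-us', 'actor-2-us']}
```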
You will be notified every time the dashboard is updated with new data.
Datasets or default datasets will be validated using the provided validation options.
The validation options specify your constraints. They are always an array of objects. This is to enable use of different schemas for different targets. See README for details.
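To illustrate the idea of an array of objects, each pairing a target filter with its own schema, here is a rough sketch. The field names ("filter", "schema") and the type-checking logic are assumptions for illustration, not the actor's documented format; see the README for the real option shape:

```python
# Assumed shape: one object per target group, each with its own schema.
validation_options = [
    {"filter": "actor-.*-cz", "schema": {"title": str, "price": (int, float)}},
    {"filter": "actor-.*-us", "schema": {"title": str, "url": str}},
]

def item_matches_schema(item, schema):
    """Check that every required field exists and has the expected type."""
    return all(isinstance(item.get(field), types) for field, types in schema.items())

item = {"title": "Widget", "price": 9.99}
print(item_matches_schema(item, validation_options[0]["schema"]))  # True
```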
Choose how often the schema checks will run. There are two basic options: updating after each monitored run finishes, or on a pre-set schedule. To update with every run, type: per run, each run, or every run. To schedule updates, use plain English sentences such as every day at 13:30, every Monday at noon, or at 8pm every 1st day of the month. For more examples, see: natural cron.
A validation report will be generated and sent via notification even if your data is correct.
Datasets or default datasets will be checked for duplicates using the provided unique key.
You can define a list of unique keys for the duplication check. Each unique key represents a dataset field that will be compared for uniqueness. Use something that is guaranteed to be unique among the items in your dataset, such as an email for people, a SKU for e-shop items, or a GUID.
The acceptable number of duplicated items in the dataset. The duplication check will pass, and the notification won't be sent, if the number of duplicated items is lower than the allowed number of duplicates.
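A sketch of the unique-key duplicate check with an allowed-duplicates threshold; the items, the key, and the counting logic are illustrative assumptions, not the actor's internal implementation:

```python
from collections import Counter

# Hypothetical dataset items (illustration only); SKU is the unique key.
items = [
    {"sku": "A-1", "title": "Widget"},
    {"sku": "A-2", "title": "Gadget"},
    {"sku": "A-1", "title": "Widget (repeat)"},
]
unique_key = "sku"
allowed_duplicates = 2

# Count how many items repeat an already-seen key value.
counts = Counter(item[unique_key] for item in items)
duplicates = sum(n - 1 for n in counts.values() if n > 1)

# Per the description above: pass when duplicates stay below the allowance.
check_passed = duplicates < allowed_duplicates
print(duplicates, check_passed)  # 1 True
```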
Choose how often the duplicate check will run. There are two basic options: updating after each monitored run finishes, or on a pre-set schedule. To update with every run, type: per run, each run, or every run. To schedule updates, use plain English sentences such as every day at 13:30, every Monday at noon, or at 8pm every 1st day of the month. For more examples, see: natural cron.
A duplicate-checking report will be generated and sent via notification even if your data is correct.
If selected, the email notification will not be sent.
Email address to override your account email address.
Subject to override the generated subject of your notification email.
Insert an application or user OAuth access token for the Slack API. Ask your workspace admin for help if needed.