Dataset Validity Checker

equidem/dataset-validity-checker
Automatically checks whether the default datasets created by runs of an actor differ too much from previously encountered ones, warning you about web scraping problems caused by, e.g., a changed website layout or other significant changes in the resulting data.

Dataset Validity Checker (DVC)

How it works

DVC goes through the default datasets of the actor specified in its input and calculates indicators that allow it to recognize when something is amiss with datasets from later runs. When it notices such a case, it sends an email to the address specified in the input and writes a note to its console output. All datasets that pass the check are then added to the history DVC uses to determine the validity of later datasets.

You can use DVC for an entire actor or only for a specific task, depending on whether you provide a task ID or an actor ID in its input. Moreover, DVC can be used simultaneously (and independently) for any number of actors/tasks; you only need to run it with different task/actor IDs.
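For illustration, a minimal JSON input might look like the following sketch. The field names taskId and warningEmail are assumptions based on the parameter descriptions in this README; check the actor's input schema in the Apify Console for the exact names. To check an entire actor instead of a task, you would supply the actor's ID in place of the task ID.

```json
{
  "taskId": "your-task-id",
  "warningEmail": "you@example.com"
}
```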

Please note that DVC doesn't check individual items in a dataset, but the dataset as a whole, so it will not catch a mistake that influences only a small portion of the items. Also, it only checks datasets from successful runs.

How to use it

Before DVC can work properly, it needs historical information about the runs of your actor/task. Therefore, the first run of DVC (and perhaps several more, if you don't have enough runs of the checked actor/task yet) will go through the existing runs and obtain the information it needs. Because of this, the first run might take a relatively long time; for ways to decrease it, see the next section. Please make sure that all the runs the first DVC run processes are valid, otherwise the accuracy of the check will be lower. For ways to exclude invalid runs you know about, see the next section.

After the first run, I recommend running DVC after each run of the checked actor/task completes (best achieved using a webhook, as sketched below). If you specify a warning email in DVC's input, it will send you an email for each dataset it considers invalid. If you don't, you can check the console logs from the runs instead; they contain the same information.
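One way to set this up is an Apify webhook on the checked actor/task that fires on successful runs and starts DVC. The sketch below shows a webhook definition as it could be created via the Apify API (POST https://api.apify.com/v2/webhooks); the actor path and token are placeholders, and in practice you would also set a payloadTemplate so the POST body matches the input DVC expects.

```json
{
  "eventTypes": ["ACTOR.RUN.SUCCEEDED"],
  "condition": { "actorId": "<ID of the checked actor>" },
  "requestUrl": "https://api.apify.com/v2/acts/equidem~dataset-validity-checker/runs?token=<YOUR_API_TOKEN>"
}
```

Alternatively, the same webhook can be configured in the Apify Console UI on the actor's or task's detail page, without calling the API directly.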

Useful tips

1. Restricting the scope

This tip will be useful (especially for the first run of DVC) if at least one of the following applies to you:

a) You already have a large number of runs (e.g. hundreds) in your actor/task and the first run would therefore take too long.

b) You know there is a mistake somewhere in the previous runs of your actor/task and don't want DVC to consider it normal.

c) You know the website you are scraping changed over time and you don't want DVC to consider the older state normal.

If any of those apply to you, you can use the parameter 'Starting At' ('startingAt' in the JSON input) to control the earliest run DVC will process. If you need even more control, you can use the parameter 'Until' ('until' in the JSON input) to define the latest run DVC will process.
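A restricted-scope input could look like the sketch below. The ISO 8601 date strings are an assumption about the accepted value format (the README does not specify it), so consult the actor's input schema before relying on it:

```json
{
  "taskId": "your-task-id",
  "startingAt": "2019-09-01T00:00:00.000Z",
  "until": "2019-10-01T00:00:00.000Z"
}
```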

2. Clearing the history

The previous tip was mainly concerned with the first run, but what if the website changes significantly without causing an error in your scraping? This could happen, for example, when an e-shop widens the selection of items it offers. To prevent false positives (valid datasets flagged as invalid by DVC) in this case, you can use the parameter 'Clear History' ('clearHistory' in the JSON input) for a single run to delete all the information about what a dataset should look like gathered before that run, allowing DVC to start anew for the particular actor/task.
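A one-off reset run might then look like this sketch (taskId is an assumed field name, as above); after this run, DVC rebuilds its history from scratch for that task:

```json
{
  "taskId": "your-task-id",
  "clearHistory": true
}
```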

3. Adjusting strictness

As with any similar algorithm, there is a tradeoff between false positives and false negatives (invalid datasets flagged as valid by DVC). If the default setting doesn't work well enough for you, you can use the parameters 'Average Multiplying Coefficient' and 'Maximal Multiplying Coefficient' ('averageMultiplyingCoefficient' and 'maximalMultiplyingCoefficient' in the JSON input) to adjust the tradeoff, or change both at once using the 'Leniency Coefficient' parameter ('leniencyCoefficient' in the JSON input).
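For example, a stricter-than-default configuration could be sketched as follows. The numeric values are placeholders, not recommendations; the README does not document the defaults or the exact semantics of the coefficients, so check the actor's input schema for both:

```json
{
  "taskId": "your-task-id",
  "averageMultiplyingCoefficient": 1.5,
  "maximalMultiplyingCoefficient": 2.0
}
```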
