DVC goes through the default datasets of the actor specified in its input and calculates indicators that allow it to recognize when something is amiss with datasets from later runs. When it notices such a case, it will send an email to the address specified in the input, as well as write a note to its console output. All datasets that pass the check are also added to the history DVC uses to determine the validity of later datasets.
You can use DVC for an entire actor or only for a specific task, depending on whether you provide a task ID or an actor ID in its input. Moreover, DVC can be used simultaneously (and independently) for any number of actors/tasks; you only need to run it with different task/actor IDs.
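As a minimal sketch, an input that checks a single actor might look like the following (the field names `actorId` and `warningEmail` are illustrative assumptions; check the actor's input schema for the exact names):

```json
{
    "actorId": "<ID of the actor to check>",
    "warningEmail": "you@example.com"
}
```

To check a specific task instead, you would provide a task ID in place of the actor ID.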
Please note that DVC doesn't check individual items in a dataset, but the dataset as a whole, so it will not catch a mistake influencing only a small portion of the items. Also, it only checks datasets from successful runs.
Before DVC can work properly, it needs historical information about the runs of your actor/task. Therefore, the first run of DVC (and perhaps several more, if you don't have enough runs of the checked actor/task yet) will go through the existing runs and gather the information it needs. Because of this, the first run might take a relatively long time; for ways to decrease it, see the next section. Please make sure that all the runs the first DVC run processes are valid, otherwise the accuracy of the check will be lower. For ways to exclude invalid runs you know about, see the next section.
After the first run, I recommend running DVC after each run of the checked actor/task completes (best achieved using a webhook). If you specify a warning email in DVC's input, it will send you an email for each dataset it considers invalid. If you don't, you can check the console logs from the runs - they contain the same information.
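As an illustrative sketch, an Apify webhook attached to the checked actor that starts DVC after each successful run could be defined roughly like this (the DVC actor ID and API token are placeholders you would fill in yourself):

```json
{
    "eventTypes": ["ACTOR.RUN.SUCCEEDED"],
    "requestUrl": "https://api.apify.com/v2/acts/<DVC-actor-ID>/runs?token=<your-API-token>"
}
```

Triggering only on `ACTOR.RUN.SUCCEEDED` matches the behavior described above, since DVC only checks datasets from successful runs.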
This tip will be useful (especially for the first run of DVC) if at least one of the following applies to you:
a) You already have a large number of runs (e.g. hundreds) in your actor/task and the first run would therefore take too long.
b) You know there is a mistake somewhere in the previous runs of your actor/task and don't want DVC to consider it normal.
c) You know the website you are scraping changed over time and you don't want DVC to consider the older state normal.
If any of those apply to you, you can use the parameter 'Starting At' (or 'startingAt' in the JSON input) to control the earliest run DVC will process. If you need even more control, you can use the parameter 'Until' (or 'until' in the JSON input) to define the latest run DVC will process.
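For example, an input restricting DVC to a window of runs might look like this sketch (the `actorId` field name and the exact value format expected by `startingAt`/`until` are assumptions; consult the actor's input schema):

```json
{
    "actorId": "<ID of the actor to check>",
    "startingAt": "<earliest run to process>",
    "until": "<latest run to process>"
}
```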
The previous tip was mainly concerned with the first run, but what if the website changes significantly without causing an error in your scraping? This could happen, for example, when an e-shop decides to widen the selection of items it offers. To prevent false positives (valid datasets flagged as invalid by DVC) in this case, you can use the parameter 'Clear History' (or 'clearHistory' in the JSON input) for a single run to delete all information gathered before that run about what a dataset should look like, allowing DVC to start anew for the particular actor/task.
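A one-off reset run could therefore look like the following sketch (again, `actorId` is an assumed field name):

```json
{
    "actorId": "<ID of the actor to check>",
    "clearHistory": true
}
```

Subsequent runs would then omit `clearHistory` (or set it to `false`), so the freshly rebuilt history is preserved.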
As with any similar algorithm, there has to be a tradeoff between false positives and false negatives (invalid datasets flagged as valid by DVC). If the default setting doesn't work well enough for you, you can use the parameters 'Average Multiplying Coefficient' and 'Maximal Multiplying Coefficient' (or 'averageMultiplyingCoefficient' and 'maximalMultiplyingCoefficient' in the JSON input) to adjust the tradeoff, or adjust them both at once using the 'Leniency Coefficient' parameter (or 'leniencyCoefficient' in the JSON input).
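Tuning the tradeoff might then look like this sketch (the numeric values are purely illustrative, not recommendations, and `actorId` is an assumed field name; higher values presumably make the check more lenient, fewer false positives at the cost of more false negatives):

```json
{
    "actorId": "<ID of the actor to check>",
    "averageMultiplyingCoefficient": 1.5,
    "maximalMultiplyingCoefficient": 2.0
}
```

Alternatively, setting only `leniencyCoefficient` scales both coefficients together instead of adjusting each one separately.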