Duplications Checker

Check your dataset for duplications. Accept only the highest quality data!

Duplications Checker is an Apify actor that helps you find duplicates in your datasets or JSON arrays.

  • Loads data from an Apify Dataset, Key Value store, or arbitrary JSON and checks each item against all others for duplicate fields.
  • The check takes seconds, up to a few minutes for larger datasets.
  • Produces a report, so you know exactly how many problems there are and which items contain them.
  • It is very useful to run this actor from a webhook. You can easily chain another actor after this one to send an email or add the report to your Google Sheets, to name just a few examples. Check Apify Store for more.

How it works

  • Loads data in batches into memory (Key Value store records and raw data are loaded all at once).
  • Each item in the batch is scanned for the provided fields. The actor keeps track of previous occurrences and counts duplicates.
  • A report is created after the whole run and saved as OUTPUT to the default Key Value store.
  • Between batches, the state of the actor is saved so it doesn't have to repeat the work after a restart (migration).
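
The batched counting described above can be sketched in a few lines of plain JavaScript. This is a simplified stand-in, not the actor's actual code: the real actor loads batches via the Apify API and persists its state between them, while here the "dataset" is an in-memory array and `minDuplications` is fixed at its default of 2.

```javascript
// Minimal sketch of the batched duplicate counting (assumption: plain
// in-memory array instead of the Apify Dataset API).
const countDuplicates = (items, field, batchSize = 1000) => {
    const occurrences = {}; // value -> indexes of items where it appeared
    for (let offset = 0; offset < items.length; offset += batchSize) {
        const batch = items.slice(offset, offset + batchSize);
        batch.forEach((item, i) => {
            const value = item[field];
            if (!occurrences[value]) occurrences[value] = [];
            occurrences[value].push(offset + i);
        });
        // The real actor saves its state here so a restart (migration)
        // can resume from the next batch instead of starting over.
    }
    // Keep only values seen at least twice (the minDuplications default).
    const report = {};
    for (const [value, originalIndexes] of Object.entries(occurrences)) {
        if (originalIndexes.length >= 2) {
            report[value] = { count: originalIndexes.length, originalIndexes };
        }
    }
    return report;
};
```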


Input

This actor expects a JSON object as input. You can also set it up in the visual UI editor on Apify. You can find examples in the Input and Example Run tabs of the actor page in Apify Store. All input fields (regardless of section) are top-level fields.

Main input fields

  • datasetId <string> ID of the dataset where the data are located. If you need to use another input type like Key Value store or raw JSON, use keyValueStoreRecord or rawData. You have to specify exactly one of datasetId, keyValueStoreRecord, and rawData.
  • checkOnlyCleanItems <boolean> If the datasetId option is provided, only clean dataset items will be loaded and used for the duplications check. Default: false
  • fields <array> List of fields in each item that will be checked for duplicates. Each field must not be nested and should contain only a simple value (string or number). For backward compatibility, it is also possible to use the field option to pass a single <string> value. You can prepare your data with preCheckFunction. Required
  • preCheckFunction <stringified function> Stringified JavaScript function that can apply an arbitrary transformation to the input data before the check. See the preCheckFunction section. Optional
  • minDuplications <number> Minimum number of occurrences for a value to be included in the report. Default: 2
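
A minimal input using these main fields could look like this (the dataset ID is a hypothetical placeholder, the other values are the documented options):

```json
{
    "datasetId": "s5NJ77qFv8b4osiGR",
    "fields": ["url"],
    "minDuplications": 2
}
```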

Show options

  • showIndexes <boolean> Indexes of the duplicate items will be shown in the OUTPUT report. Set to false if you don't need them. Default: true
  • showItems <boolean> Duplicate items will be pushed to a dataset. Set to false if you don't need them. Default: true
  • showMissing <boolean> Items where the value of the field is missing, null, or '' will be included in the report. Default: true

Dataset pagination options

  • limit <number> How many items will be checked. Default: all
  • offset <number> From which item the check will start. Use together with limit to check specific items. Default: 0
  • batchSize <number> Number of items loaded and processed in each batch. You only need to change this if your items are really huge. Default: 1000
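
For example, to check only items 1000 through 1499 of a dataset, you could combine offset and limit like this (the dataset ID is a hypothetical placeholder):

```json
{
    "datasetId": "s5NJ77qFv8b4osiGR",
    "fields": ["url"],
    "offset": 1000,
    "limit": 500
}
```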

Other data sources

  • keyValueStoreRecord <string> Store ID and record key if you want to load from a Key Value store. The format is {keyValueStoreId}+{recordKey}, e.g. s5NJ77qFv8b4osiGR+MY-KEY. You have to specify exactly one of keyValueStoreRecord, datasetId, and rawData.
  • rawData <array> Array of objects to be checked. You have to specify exactly one of rawData, keyValueStoreRecord, and datasetId.


preCheckFunction

preCheckFunction is useful for transforming the input data before the actual check. Its main purpose is to ensure that the fields you are checking are top-level (not nested) fields and that their values are simple values like numbers or strings. (The decision not to allow deep equality checks for nested structures was made for simplicity and performance reasons.)

So for example, let's say you have an item with a nested field images:

{
  "url": "https://www.bloomingdales.com/shop/product/lauren-ralph-lauren-ruffled-georgette-dress?ID=3493626&CategoryID=1005206",
  "images": [
    {
      "src": "https://images.bloomingdalesassets.com/is/image/BLM/products/9/optimized/10317399_fpx.tif",
      "cloudPath": ""
    }
  ],
  ... // more fields that you are not interested in
}

If you want to check the first image URL for duplications and keep the item url for reference, you can easily transform the whole data with a simple preCheckFunction:

(data) => data.map((item) => ({ url: item.url, imageUrl: item.images[0].src }))

Now, set fields in the input to ["imageUrl"] and everything will work nicely.
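
You can try the function locally in plain Node before pasting it into the actor input. The sample item below is a hypothetical stand-in with the same shape as the nested example above:

```javascript
// The preCheckFunction from above, runnable as-is in Node.
const preCheckFunction = (data) => data.map((item) => ({
    url: item.url,
    imageUrl: item.images[0].src,
}));

// Hypothetical sample item with the same nested shape as the example.
const sample = [{
    url: 'https://example.com/product/1',
    images: [{ src: 'https://example.com/img/1.tif', cloudPath: '' }],
}];

// Each output item is flat and contains only simple string values,
// so "imageUrl" is a valid entry for the fields input.
console.log(preCheckFunction(sample));
```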


Output

At the end of the actor run, the report is saved to the default Key Value store as the OUTPUT record. Also, if showItems is true, the duplicate items are pushed to the default dataset.

By default, the report includes all information, but you can opt out by setting any of showIndexes, showItems, or showMissing to false.

The report is an object where every field value that appeared at least twice (which means it was a duplicate) is included as a key. For each of them, the report contains count (minimum 2), originalIndexes (indexes of the items in your original dataset, or after preCheckFunction) and outputIndexes (only present when showItems is enabled). The indexes should help you navigate the duplicates in your data.

OUTPUT example

{
  "https://images.bloomingdalesassets.com/is/image/BLM/products/4/optimized/9153524_fpx.tif": {
    "count": 2,
    "originalIndexes": [
      166,
      202
    ],
    "outputIndexes": [
      0,
      1
    ]
  },
  "https://images.bloomingdalesassets.com/is/image/BLM/products/9/optimized/9832349_fpx.tif": {
    "count": 2,
    "originalIndexes": [
      1001,
      1002
    ],
    "outputIndexes": [
      2,
      3
    ]
  }
}

The items themselves are intentionally not included in the OUTPUT report to reduce its size. Instead, they are pushed to the default dataset, where you can locate them with outputIndexes if you need to connect the OUTPUT report with the dataset for deeper analysis.
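
The lookup via outputIndexes can be sketched as follows. In a real run, `report` would be loaded from the OUTPUT record and `datasetItems` from the default dataset; here both are tiny hypothetical stand-ins:

```javascript
// Hypothetical OUTPUT report and dataset items standing in for the
// real Key Value store record and dataset of a finished run.
const report = {
    'https://example.com/img/1.tif': {
        count: 2,
        originalIndexes: [166, 202],
        outputIndexes: [0, 1],
    },
};
const datasetItems = [
    { url: 'https://example.com/a', imageUrl: 'https://example.com/img/1.tif' },
    { url: 'https://example.com/b', imageUrl: 'https://example.com/img/1.tif' },
];

// Resolve the full duplicate items for one duplicated value.
const duplicatesFor = (value) =>
    report[value].outputIndexes.map((i) => datasetItems[i]);
```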

Checking more fields

The first version of the actor had an option to check more fields at once, but it produced a very complicated output and the implementation was too convoluted, so I decided to abandon the idea for simplicity's sake. If you want to check more fields, simply run the actor once for each field. Since the actor's consumption is pretty low, this is not a big deal.

More info coming soon!


If you find any problem or would like a new feature added, please create an issue in the GitHub repo.

Thanks to everybody for using the actor and giving feedback!
