Easy Data Processor: Merge, Clean, and Transform Your Data

dainty_screw/easy-data-processor-merge-clean-and-transform-your-data
Meet the Ultimate Data Processor, a human-friendly tool that simplifies your data tasks. With this Apify actor, you can merge datasets, remove duplicates, and transform data quickly and effortlessly, all in one go. Say goodbye to complex processes and hello to streamlined data management.

The ultimate dataset processing actor - merge, dedup & transform

A refined and optimized dataset processing actor for large-scale merging, deduplication, and transformation.

Why use this actor

  • Extremely fast data processing thanks to parallelized workloads (easily 20x faster than the default loading/pushing of datasets)
  • Reads from multiple datasets simultaneously, ideal for merging the output of many scraping runs
  • Actor migration proof - every step that can be persisted is persisted, so work is not repeated and no duplicate data is pushed
  • The dedup-as-loading mode allows near-constant memory processing even for huge datasets (think 10M+ items)
  • Deduplication can combine many fields and even nested objects/arrays (those are JSON.stringified for a deep equality check)
  • Can store output into key-value store records
  • Supports super fast blank runs that only count duplicates

Merging

You can provide more than one dataset. In that case, all items are merged into a single dataset or key-value store output. If you use the dedup-after-load mode, the items retain the order of the datasets provided.
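As a minimal sketch of a merge run via the Apify client, assuming input field names such as datasetIds, mode, and fields (see the INPUT table on the actor's public page for the authoritative schema):

```js
// A minimal sketch, assuming input field names (datasetIds, mode, fields);
// check the actor's INPUT table for the authoritative schema.
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const run = await client
    .actor('dainty_screw/easy-data-processor-merge-clean-and-transform-your-data')
    .call({
        datasetIds: ['<first-dataset-id>', '<second-dataset-id>'], // merged in this order
        mode: 'dedup-after-load',
        fields: ['url'], // deduplicate merged items on this field
    });

console.log(`Output dataset: ${run.defaultDatasetId}`);
```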

Deduplication

If you optionally provide deduplication fields, this actor will deduplicate the dataset items. The deduplication process checks the values of each field for equality and returns only the first unique item (the first item that has a unique value for that field).

You can provide more than one field. In that case, a combined string of those fields is checked, e.g. "name": "Adidas Shoes", "id": "12345" gets converted into "Adidas Shoes12345" for the check. Only items where all of these fields match are considered duplicates, so the more fields you add, the fewer duplicates will be found.

Fields that are objects or arrays are also deeply compared via JSON.stringify. Just be aware that doing this for very large structures might have performance implications.
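To make the field-combination behavior concrete, here is a simplified sketch of the dedup-key logic described above (illustrative only, not the actor's actual source):

```js
// Simplified sketch of the described dedup-key logic (not the actor's
// actual source). Object/array fields are JSON.stringified so they are
// compared by deep equality.
const dedupKey = (item, fields) =>
    fields
        .map((field) => {
            const value = item[field];
            return typeof value === 'object' && value !== null
                ? JSON.stringify(value)
                : String(value);
        })
        .join('');

const items = [
    { name: 'Adidas Shoes', id: '12345' },
    { name: 'Adidas Shoes', id: '12345' }, // duplicate on both fields
    { name: 'Adidas Shoes', id: '99999' }, // different id => kept
];

const seen = new Set();
const deduplicated = items.filter((item) => {
    const key = dedupKey(item, ['name', 'id']);
    if (seen.has(key)) return false; // keep only the first occurrence
    seen.add(key);
    return true;
});
```

Running this keeps the first and third items; the second is dropped as a duplicate.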

Transformation

This actor enables you to do arbitrary data transformations before and after deduplication via preDedupTransformFunction and postDedupTransformFunction.

These functions take an array of items and should return an array of items. You don't necessarily need to return the same number of items (you can filter some out or add new ones).

You can also access an object with helper variables, currently containing the Apify SDK reference.

The default transformation does nothing with the items:

```js
(items, { Apify, customInputData }) => {
    return items;
}
```
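As an illustration, a preDedupTransformFunction could normalize items before deduplication (the url field here is just an example, not something the actor requires):

```js
(items, { Apify, customInputData }) => {
    // Illustrative only: drop items without a URL and normalize the rest
    // so that trivially different duplicates match during deduplication.
    return items
        .filter((item) => !!item.url)
        .map((item) => ({
            ...item,
            url: item.url.trim().toLowerCase(),
        }));
}
```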

In the dedup-as-loading mode, you only have access to the items of the current batch. However, you can also access the datasetId and datasetOffset parameters, since each batch comes from a single dataset.

```js
(items, { Apify, datasetId, datasetOffset, customInputData }) => {
    return items;
}
```
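For example, an illustrative batch transform could tag each item with its origin, which is handy when merging many scraper runs:

```js
(items, { Apify, datasetId, datasetOffset, customInputData }) => {
    // Illustrative only: record where each item in the batch came from.
    return items.map((item, i) => ({
        ...item,
        sourceDatasetId: datasetId,
        sourceIndex: datasetOffset + i,
    }));
}
```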

Input

A detailed INPUT table with descriptions can be found on the actor's public page.

Changelog

Check the list of past updates here.
