
Merge, Dedup & Transform Datasets
The ultimate dataset processor. Extremely fast merging, deduplication & transformation, all in a single run.
Rating: 0.0 (0 reviews)
Pricing: Pay per usage
Monthly users: 73
Runs succeeded: 97%
Response time: 9.8 days
Last modified: 2 months ago
You can access Merge, Dedup & Transform Datasets programmatically from your own applications using the Apify API. To use the API, you'll need an Apify account and your API token, which you can find under Integrations settings in the Apify Console.
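For example, a minimal Python sketch (using the `requests` library) that calls the run-sync-get-dataset-items endpoint defined below might look like this. The token, dataset IDs, and field names are placeholders you would replace with your own values:

import requests

# Placeholder values - replace with your own token and dataset IDs.
APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"

# Input follows the inputSchema from the OpenAPI definition below:
# merge two datasets and dedup on the combination of "url" and "title".
run_input = {
    "datasetIds": ["<FIRST_DATASET_ID>", "<SECOND_DATASET_ID>"],
    "fields": ["url", "title"],
    "output": "unique-items",
    "mode": "dedup-after-load",
}

# Runs the Actor synchronously and returns the resulting dataset items.
response = requests.post(
    "https://api.apify.com/v2/acts/lukaskrivka~dedup-datasets/run-sync-get-dataset-items",
    params={"token": APIFY_TOKEN},
    json=run_input,
    timeout=300,
)
response.raise_for_status()

items = response.json()
print(f"Received {len(items)} deduplicated items")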
{
  "openapi": "3.0.1",
  "info": {
    "version": "0.0",
    "x-build-id": "Trl7334aMnBLuHXXv"
  },
  "servers": [
    {
      "url": "https://api.apify.com/v2"
    }
  ],
  "paths": {
    "/acts/lukaskrivka~dedup-datasets/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-lukaskrivka-dedup-datasets",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for its completion, and returns the Actor's dataset items in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/acts/lukaskrivka~dedup-datasets/runs": {
      "post": {
        "operationId": "runs-sync-lukaskrivka-dedup-datasets",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor and returns information about the initiated run in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/runsResponseSchema"
                }
              }
            }
          }
        }
      }
    },
    "/acts/lukaskrivka~dedup-datasets/run-sync": {
      "post": {
        "operationId": "run-sync-lukaskrivka-dedup-datasets",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from the Key-value store in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "inputSchema": {
        "type": "object",
        "properties": {
          "datasetIds": {
            "title": "Dataset IDs",
            "type": "array",
            "description": "Datasets that should be deduplicated and merged",
            "items": {
              "type": "string"
            }
          },
          "fields": {
            "title": "Fields for deduplication",
            "type": "array",
            "description": "Fields whose combination should be unique for the item to be considered unique. If none are provided, the Actor does not perform deduplication.",
            "items": {
              "type": "string"
            }
          },
          "output": {
            "title": "What to output",
            "enum": [
              "unique-items",
              "duplicate-items",
              "nothing"
            ],
            "type": "string",
            "description": "What will be pushed to the dataset from this Actor",
            "default": "unique-items"
          },
          "mode": {
            "title": "Mode",
            "enum": [
              "dedup-after-load",
              "dedup-as-loading"
            ],
            "type": "string",
            "description": "How the loading and deduplication process will work.",
            "default": "dedup-after-load"
          },
          "outputDatasetId": {
            "title": "Output dataset ID or name (optional)",
            "type": "string",
            "description": "Optionally, you can push into a dataset of your choice. If you provide a dataset name that doesn't exist, a new named dataset will be created."
          },
          "fieldsToLoad": {
            "title": "Limit fields to load",
            "type": "array",
            "description": "You can choose to load only specific fields. Useful to speed up loading and reduce memory needs.",
            "items": {
              "type": "string"
            }
          },
          "preDedupTransformFunction": {
            "title": "Pre-dedup transform function",
            "type": "string",
            "description": "Function to transform items before deduplication is applied. For 'dedup-after-load' mode, this is done for all items at once. For 'dedup-as-loading', it is applied to each batch separately."
          },
          "postDedupTransformFunction": {
            "title": "Post-dedup transform function",
            "type": "string",
            "description": "Function to transform items after deduplication is applied. For 'dedup-after-load' mode, this is done for all items at once. For 'dedup-as-loading', it is applied to each batch separately."
          },
          "actorOrTaskId": {
            "title": "Actor or Task ID (or name)",
            "type": "string",
            "description": "Use an Actor or Task ID (e.g. `nwua9Gu5YrADL7ZDj`) or a full name (e.g. `apify/instagram-scraper`)."
          },
          "onlyRunsNewerThan": {
            "title": "Only runs newer than",
            "type": "string",
            "description": "Use a date format of either `YYYY-MM-DD` or, with time, `YYYY-MM-DDTHH:mm:ss`."
          },
          "onlyRunsOlderThan": {
            "title": "Only runs older than",
            "type": "string",
            "description": "Use a date format of either `YYYY-MM-DD` or, with time, `YYYY-MM-DDTHH:mm:ss`."
          },
          "outputTo": {
            "title": "Where to output",
            "enum": [
              "dataset",
              "key-value-store"
            ],
            "type": "string",
            "description": "Output either to a single dataset or split the data into key-value store records by upload batch size. KV upload is much faster, but the data ends up in many files.",
            "default": "dataset"
          },
          "parallelLoads": {
            "title": "Parallel loads",
            "maximum": 100,
            "type": "integer",
            "description": "Datasets can be loaded in parallel batches to speed things up if needed.",
            "default": 10
          },
          "parallelPushes": {
            "title": "Parallel pushes",
            "minimum": 1,
            "maximum": 50,
            "type": "integer",
            "description": "Deduped data can be pushed in parallel batches to speed things up if needed. If you want the data to stay in the exact same order, set this to 1.",
            "default": 5
          },
          "uploadBatchSize": {
            "title": "Upload batch size",
            "minimum": 10,
            "maximum": 1000,
            "type": "integer",
            "description": "How many items to upload in one pushData call. Useful to avoid overloading the Apify API. Only relevant for dataset upload.",
            "default": 500
          },
          "batchSizeLoad": {
            "title": "Download batch size",
            "type": "integer",
            "description": "How many items to load in a single batch.",
            "default": 50000
          },
          "offset": {
            "title": "Offset (how many items to skip from start)",
            "type": "integer",
            "description": "By default, no items are skipped, which is the same as setting the offset to 0. For multiple datasets, the offset applies to the sum of their item counts, which is rarely useful."
          },
          "limit": {
            "title": "Limit (how many items to load)",
            "type": "integer",
            "description": "By default, the number of loaded items is not limited."
          },
          "verboseLog": {
            "title": "Verbose log",
            "type": "boolean",
            "description": "Good for smaller runs. Large runs might run out of log space.",
            "default": false
          },
          "nullAsUnique": {
            "title": "Null fields are unique",
            "type": "boolean",
            "description": "Treat items with null (or missing) fields as always unique.",
            "default": false
          },
          "datasetIdsOfFilterItems": {
            "title": "Dataset IDs for just deduping",
            "type": "array",
            "description": "Items from these datasets are used only as a dedup filter for the main datasets. These items are loaded first, then the main datasets are compared against them for uniqueness and pushed.",
            "items": {
              "type": "string"
            }
          },
          "customInputData": {
            "title": "Custom input data",
            "type": "object",
            "description": "You can pass custom data as a JSON object to be accessible in the transform functions as part of the 2nd parameter object."
          },
          "appendDatasetIds": {
            "title": "Append dataset IDs to items",
            "type": "boolean",
            "description": "Useful for transform functions. Each item will get a field `__datasetId__` with the ID of the dataset it came from.",
            "default": false
          }
        }
      },
      "runsResponseSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "id": {
                "type": "string"
              },
              "actId": {
                "type": "string"
              },
              "userId": {
                "type": "string"
              },
              "startedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "finishedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "status": {
                "type": "string",
                "example": "READY"
              },
              "meta": {
                "type": "object",
                "properties": {
                  "origin": {
                    "type": "string",
                    "example": "API"
                  },
                  "userAgent": {
                    "type": "string"
                  }
                }
              },
              "stats": {
                "type": "object",
                "properties": {
                  "inputBodyLen": {
                    "type": "integer",
                    "example": 2000
                  },
                  "rebootCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "restartCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "resurrectCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "computeUnits": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "options": {
                "type": "object",
                "properties": {
                  "build": {
                    "type": "string",
                    "example": "latest"
                  },
                  "timeoutSecs": {
                    "type": "integer",
                    "example": 300
                  },
                  "memoryMbytes": {
                    "type": "integer",
                    "example": 1024
                  },
                  "diskMbytes": {
                    "type": "integer",
                    "example": 2048
                  }
                }
              },
              "buildId": {
                "type": "string"
              },
              "defaultKeyValueStoreId": {
                "type": "string"
              },
              "defaultDatasetId": {
                "type": "string"
              },
              "defaultRequestQueueId": {
                "type": "string"
              },
              "buildNumber": {
                "type": "string",
                "example": "1.0.0"
              },
              "containerUrl": {
                "type": "string"
              },
              "usage": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "integer",
                    "example": 1
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "usageTotalUsd": {
                "type": "number",
                "example": 0.00005
              },
              "usageUsd": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "number",
                    "example": 0.00005
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
Merge, Dedup & Transform Datasets OpenAPI definition
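To illustrate more of the input schema, the following Python sketch starts an asynchronous run via the /runs endpoint and passes a pre-dedup transform function as a JavaScript string, as the schema above describes. The token, dataset ID, and the exact shape of the transform function are illustrative assumptions, not values from this page:

import requests

APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"

# Transform functions are passed as strings containing JavaScript
# (see preDedupTransformFunction in the schema above). The schema notes
# that customInputData is exposed via the 2nd parameter object; the
# exact function signature here is an assumption for illustration.
run_input = {
    "datasetIds": ["<DATASET_ID>"],
    "fields": ["url"],
    "preDedupTransformFunction": (
        "(items, { customInputData }) => items.map("
        "(item) => ({ ...item, url: item.url && item.url.toLowerCase() }))"
    ),
    "appendDatasetIds": True,
}

# Starts the run and returns immediately with run metadata matching
# runsResponseSchema (id, status, defaultDatasetId, ...).
response = requests.post(
    "https://api.apify.com/v2/acts/lukaskrivka~dedup-datasets/runs",
    params={"token": APIFY_TOKEN},
    json=run_input,
    timeout=60,
)
response.raise_for_status()

run = response.json()["data"]
print(run["id"], run["status"], run["defaultDatasetId"])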
OpenAPI is a standard for designing and describing RESTful APIs. It lets developers define an API's structure, endpoints, and data formats in a machine-readable way, which simplifies API development, integration, and documentation.
OpenAPI works well with AI agents and GPTs because it standardizes how these systems interact with APIs, enabling reliable integrations and efficient communication.
By providing machine-readable API specifications, OpenAPI allows AI models such as GPTs to understand and use varied data sources, improving accuracy. This accelerates development, reduces errors, and enables context-aware responses, making OpenAPI a core component for AI applications.
You can download the OpenAPI definitions for Merge, Dedup & Transform Datasets from the options below:
If you’d like to learn more about how OpenAPI powers GPTs, read our blog post.
Pricing
Pricing model: Pay per usage
This Actor is paid per platform usage. The Actor is free to use; you only pay for the Apify platform usage.