S3 Uploader
apify/s3-uploader

Developed and maintained by Apify

Upload data from an Apify dataset to an Amazon S3 bucket. Providing various filters and transformation options, this Actor allows precise control over data structure, formatting, and upload settings to ensure seamless integration into your data pipeline.


This integration-ready Apify Actor uploads the contents of an Apify dataset to an Amazon S3 bucket. You can run it either as an integration or as a standalone Actor to store data extracted by other Actors.

Features

  • Uploads data in various formats (JSON, CSV, XML, etc.).
  • Supports variables for dynamic S3 object keys.
  • Supports various filtering and transformation options (select, omit, unwind, flatten, offset, limit, clean only, ...).

AWS IAM User Requirement

To use this Actor, you need an AWS IAM user whose credentials allow uploading objects to the target S3 bucket (s3:PutObject). If you do not have one already, you can create a new IAM user by following the official AWS guide; a minimal example policy is sketched below.
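The following is a rough example of such a policy, not an official recommendation. The statement ID and bucket name are placeholders, and your setup may require additional permissions (for example, if the bucket enforces encryption or specific object-ownership settings):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDatasetUploads",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```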

Input Parameters

  • accessKeyId (string): Your AWS access key ID, used to authorize the upload.
  • secretAccessKey (string): Your AWS secret access key, used to authorize the upload.
  • region (string): The AWS region where the target S3 bucket is located.
  • bucket (string): The name of the target S3 bucket.
  • key (string): The object key that identifies the uploaded data in the S3 bucket; it can include an optional prefix. If an object with the same key already exists, it is overwritten with the uploaded data.
  • datasetId (string): The ID of the Apify dataset from which data is retrieved for the upload.
  • format (string): The format of the uploaded data. Options: json, jsonl, html, csv, xml, xlsx, rss. Default: json.
  • fields (array): Fields to include in the output. If not specified, all fields are included.
  • omit (array): Fields to exclude from the output.
  • unwind (array): Fields to unwind. If a field is an array, every element becomes a separate record merged with the parent object; if it is an object, it is merged with the parent object. If the field is missing, or its value is neither an array nor an object, the item is preserved as is. Multiple fields are unwound in the order you specify.
  • flatten (array): Fields to transform from nested objects into a flat structure.
  • offset (integer): Number of items to skip from the beginning of the dataset. Minimum: 0.
  • limit (integer): Maximum number of items to upload. Minimum: 1.
  • clean (boolean): If enabled, only clean dataset items and their non-hidden fields are uploaded. See the Apify documentation for details. Default: true.

A sample programmatic call using these parameters is sketched below.
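As an illustration (not part of the official documentation), the Actor can be started with these parameters from the Apify API or one of the Apify client libraries. The sketch below uses the Python client (apify-client); the token, AWS credentials, bucket, key, and dataset ID are placeholder values you would replace with your own:

```python
from apify_client import ApifyClient

# Authenticate with your Apify API token (placeholder value).
client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Input mirrors the parameters described above; values are placeholders.
run_input = {
    "accessKeyId": "<AWS_ACCESS_KEY_ID>",
    "secretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
    "region": "us-east-1",
    "bucket": "my-output-bucket",
    "key": "exports/latest.json",
    "datasetId": "<APIFY_DATASET_ID>",
    "format": "json",
    "clean": True,
}

# Start the Actor and wait for the run to finish.
run = client.actor("apify/s3-uploader").call(run_input=run_input)
print(run["status"])
```

The same input object can also be pasted into the Actor's JSON input editor in the Apify Console.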

How It Works

  1. The Actor retrieves the specified dataset from Apify, applying the transformations defined by the input parameters (format, clean, field selection, etc.).
  2. The data is uploaded to the specified S3 bucket as an object stored under the provided key (sketched below).
  3. If an object with the same key already exists, it is replaced by the new upload.
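For intuition only, these steps roughly correspond to downloading the dataset items from the Apify API and writing them to S3 in a single put. This is a simplified sketch with placeholder names, not the Actor's actual implementation:

```python
import boto3
import requests

# Step 1: fetch the dataset items from the Apify API, already transformed
# (format and clean are regular query parameters of the items endpoint).
# Private datasets additionally require your Apify API token.
dataset_id = "<APIFY_DATASET_ID>"
resp = requests.get(
    f"https://api.apify.com/v2/datasets/{dataset_id}/items",
    params={"format": "json", "clean": "true"},
)
resp.raise_for_status()

# Step 2: upload the payload to S3 under the configured key.
# put_object overwrites any existing object with the same key (step 3).
s3 = boto3.client(
    "s3",
    aws_access_key_id="<AWS_ACCESS_KEY_ID>",
    aws_secret_access_key="<AWS_SECRET_ACCESS_KEY>",
    region_name="us-east-1",
)
s3.put_object(
    Bucket="my-output-bucket",
    Key="exports/latest.json",
    Body=resp.content,
)
```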

Error Handling

If the Actor encounters an issue, it will log an error and fail. Possible issues include:

  • Invalid AWS credentials.
  • Incorrect bucket name or insufficient permissions.
  • Nonexistent Apify dataset ID.
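If you start the Actor programmatically, a failed upload surfaces as a non-successful run status, and the logged error is available in the run's log in the Apify Console. Continuing the Python client example above (client and run_input as defined there):

```python
run = client.actor("apify/s3-uploader").call(run_input=run_input)

# The run dictionary returned by call() includes the final status;
# anything other than "SUCCEEDED" indicates the upload did not complete.
if run["status"] != "SUCCEEDED":
    raise RuntimeError(f"S3 upload failed with status {run['status']}")
```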

Help & Support

The S3 Uploader is actively maintained. If you have any feedback or feature ideas, feel free to submit an issue.

Pricing

Pricing model: Pay per usage

This Actor is free to use; you pay only for the Apify platform usage its runs consume.