
# S3 Uploader
Upload data from an Apify dataset to an Amazon S3 bucket. With various filtering and transformation options, this Actor gives you precise control over data structure, formatting, and upload settings for seamless integration into your data pipeline.
This integration-ready Apify Actor uploads the content of an Apify dataset to an Amazon S3 bucket. You can use it to store data extracted by other Actors, either as an integration attached to their runs or as a standalone Actor.
## Features
- Uploads data in various formats (JSON, CSV, XML, etc.).
- Supports variables for dynamic S3 object keys.
- Supports various filtering and transformation options (fields, omit, unwind, flatten, offset, limit, clean, ...).
## AWS IAM User Requirement
To use this Actor, you will need an AWS IAM user with the necessary permissions. If you do not have one already, you can create a new IAM user by following the official AWS guide.
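The IAM user needs permission to put objects into the target bucket. As a minimal sketch, assuming the Actor only needs to write objects, a policy like the following should cover the upload itself; the bucket name `my-bucket` is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```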
## Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `accessKeyId` | string | ✅ | Your AWS access key ID used for authorization of the upload. |
| `secretAccessKey` | string | ✅ | Your AWS secret access key used for authorization of the upload. |
| `region` | string | ✅ | The AWS region where the target S3 bucket is located. |
| `bucket` | string | ✅ | The name of the target S3 bucket. |
| `key` | string | ✅ | The object key, which serves as an identifier for the uploaded data in the S3 bucket. It can include an optional prefix. If an object with the same key already exists, it will be overwritten with the uploaded data. |
| `datasetId` | string | ✅ | The Apify dataset ID from which data will be retrieved for the upload. |
| `format` | string | ❌ | The format of the uploaded data. Options: `json`, `jsonl`, `html`, `csv`, `xml`, `xlsx`, `rss`. Default: `json`. |
| `fields` | array | ❌ | Fields to include in the output. If not specified, all fields will be included. |
| `omit` | array | ❌ | Fields to exclude from the output. |
| `unwind` | array | ❌ | Fields to unwind. If the field is an array, every element becomes a separate record and is merged with the parent object. If the unwound field is an object, it is merged with the parent object. If the unwound field is missing, or its value is neither an array nor an object, it cannot be merged with the parent object and the item is preserved as-is. If you specify multiple fields, they are unwound in the order you specify. |
| `flatten` | array | ❌ | Fields to transform from nested objects into a flat structure. |
| `offset` | integer | ❌ | Number of items to skip from the beginning of the dataset. Minimum: `0`. |
| `limit` | integer | ❌ | Maximum number of items to upload. Minimum: `1`. |
| `clean` | boolean | ❌ | If enabled, only clean dataset items and their non-hidden fields will be uploaded. See the documentation for details. Default: `true`. |
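For example, a minimal input might look like this; every value below is a placeholder, not a real credential or dataset:

```json
{
  "accessKeyId": "YOUR_AWS_ACCESS_KEY_ID",
  "secretAccessKey": "YOUR_AWS_SECRET_ACCESS_KEY",
  "region": "eu-central-1",
  "bucket": "my-bucket",
  "key": "exports/latest.csv",
  "datasetId": "YOUR_DATASET_ID",
  "format": "csv",
  "omit": ["rawHtml"],
  "limit": 1000
}
```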
## How It Works
- The Actor retrieves the specified dataset from Apify and transforms it according to the provided input parameters (format, clean, etc.).
- The transformed data is uploaded to the specified S3 bucket as an object under the provided key.
- If an object with the same key already exists, it is replaced by the new upload.
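As an illustration of standalone use, here is a sketch of running the Actor programmatically with the `apify-client` package. The Actor ID `username/s3-uploader` and all input values are placeholders; use the Actor's real ID from the Apify Console:

```typescript
import { ApifyClient } from 'apify-client';

// Authenticate with your Apify API token.
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Start the Actor and wait for the run to finish.
// 'username/s3-uploader' is a placeholder Actor ID.
const run = await client.actor('username/s3-uploader').call({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region: 'eu-central-1',       // example region
    bucket: 'my-bucket',          // example bucket name
    key: 'exports/latest.json',   // an existing object with this key is overwritten
    datasetId: 'YOUR_DATASET_ID',
    format: 'json',
});

console.log(`Run finished with status: ${run.status}`);
```

When the Actor is used as an integration instead, the same input fields are configured in the Apify Console.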
## Error Handling
If the Actor encounters an issue, it will log an error and fail. Possible issues include:
- Invalid AWS credentials.
- Incorrect bucket name or permissions.
- Nonexistent Apify dataset ID.
## Help & Support
The S3 Uploader is actively maintained. If you have any feedback or feature ideas, feel free to submit an issue.