Batch File AWS CLI Upload to S3

When working with Amazon S3 (Simple Storage Service), you're probably using the S3 web console to download, copy, or upload files to S3 buckets. Using the console is perfectly fine, that's what it was designed for, to begin with.

Particularly for admins who are used to more mouse-click than keyboard commands, the web console is probably the easiest. However, admins will eventually encounter the need to perform bulk file operations with Amazon S3, like an unattended file upload. The GUI is not the best tool for that.

For such automation requirements with Amazon Web Services, including Amazon S3, the AWS CLI tool provides admins with command-line options for managing Amazon S3 buckets and objects.

In this article, you will learn how to use the AWS CLI command-line tool to upload, copy, download, and synchronize files with Amazon S3. You will also learn the basics of providing access to your S3 bucket and configuring that access profile to work with the AWS CLI tool.

Prerequisites

Since this is a how-to article, there will be examples and demonstrations in the succeeding sections. To follow along successfully, you will need to meet several requirements.

  • An AWS account. If you don't have an existing AWS subscription, you can sign up for an AWS Free Tier.
  • An AWS S3 bucket. You can use an existing bucket if you'd prefer. However, it is recommended to create an empty bucket instead. Please refer to Creating a bucket.
  • A Windows 10 computer with at least Windows PowerShell 5.1. In this article, PowerShell 7.0.2 will be used.
  • The AWS CLI version 2 tool must be installed on your computer.
  • Local folders and files that you will upload or synchronize with Amazon S3.

Preparing Your AWS S3 Access

Suppose that you already have the requirements in place. You'd think you can already go and start operating AWS CLI with your S3 bucket. I mean, wouldn't it be nice if it were that simple?

For those of you who are just starting to work with Amazon S3 or AWS in general, this section aims to help you set up access to S3 and configure an AWS CLI profile.

The full documentation for creating an IAM user in AWS can be found at the link below: Creating an IAM User in Your AWS Account.

Creating an IAM User with S3 Access Permission

When accessing AWS using the CLI, you will need to create one or more IAM users with enough access to the resources you intend to work with. In this section, you will create an IAM user with access to Amazon S3.

To create an IAM user with access to Amazon S3, you first need to log in to your AWS IAM console. Under the Access management group, click on Users. Next, click on Add user.

IAM Users Menu

Type in the name of the IAM user you are creating inside the User name* box, such as s3Admin. In the Access type* selection, put a check on Programmatic access. Then, click the Next: Permissions button.

Set IAM user details

Next, click on Attach existing policies directly. Then, search for the AmazonS3FullAccess policy name and put a check on it. When done, click on Next: Tags.

Assign IAM user permissions

Creating tags is optional on the Add tags page, and you can just skip this and click on the Next: Review button.

IAM user tags

On the Review page, you are presented with a summary of the new account being created. Click Create user.

IAM user summary

Finally, once the user is created, you must copy the Access key ID and the Secret access key values and save them for later use. Note that this is the only time that you can see these values.

IAM user key credentials

Setting Up an AWS Profile On Your Computer

Now that you've created the IAM user with the appropriate access to Amazon S3, the next step is to set up the AWS CLI profile on your computer.

This section assumes that you have already installed the AWS CLI version 2 tool as required. For the profile creation, you will need the following information:

  • The Access key ID of the IAM user.
  • The Secret access key associated with the IAM user.
  • The Default region name corresponding to the location of your AWS S3 bucket. You can check out the list of endpoints using this link. In this article, the AWS S3 bucket is located in the Asia Pacific (Sydney) region, and the corresponding endpoint is ap-southeast-2.
  • The default output format. Use JSON for this.

To create the profile, open PowerShell, type the command below, and follow the prompts.
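
            aws configure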

Enter the Access key ID, Secret access key, Default region name, and default output format. Refer to the demonstration below.

Configure an AWS CLI profile
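
For reference, the aws configure prompts look like the sample below. The values shown are placeholders only; substitute your own keys and region.

            AWS Access Key ID [None]: <your access key ID>
            AWS Secret Access Key [None]: <your secret access key>
            Default region name [None]: ap-southeast-2
            Default output format [None]: json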

Testing AWS CLI Access

After configuring the AWS CLI profile, you can confirm that the profile is working by running the command below in PowerShell.
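
            aws s3 ls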

The command above should list the Amazon S3 buckets that you have in your account. The demonstration below shows the command in action. The resulting list of available S3 buckets indicates that the profile configuration was successful.

List S3 buckets

To learn about the AWS CLI commands specific to Amazon S3, you can visit the AWS CLI Command Reference S3 page.

Managing Files in S3

With AWS CLI, typical file management operations can be performed, like uploading files to S3, downloading files from S3, deleting objects in S3, and copying S3 objects to another S3 location. It's all just a matter of knowing the right command, syntax, parameters, and options.
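
If you're unsure about a command's syntax or options, the AWS CLI also ships with built-in help pages. For example, the two commands below display the help for aws s3 and for the cp sub-command.

            aws s3 help
            aws s3 cp help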

In the following sections, the environment used consists of the following.

  • Two S3 buckets, namely atasync1 and atasync2. The screenshot below shows the existing S3 buckets in the Amazon S3 console.
List of available S3 bucket names in the Amazon S3 console
  • Local directory and files located under c:\sync.
Local Directory

Uploading Individual Files to S3

When you upload files to S3, you can upload one file at a time, or upload multiple files and folders recursively. Depending on your requirements, you may choose one over the other as you deem appropriate.

To upload a file to S3, you'll need to provide two arguments (source and destination) to the aws s3 cp command.

For example, to upload the file c:\sync\logs\log1.xml to the root of the atasync1 bucket, you can use the command below.

            aws s3 cp c:\sync\logs\log1.xml s3://atasync1/          

Note: S3 bucket names are always prefixed with s3:// when used with AWS CLI.

Run the above command in PowerShell, but first change the source and destination to fit your environment. The output should look similar to the demonstration below.

Upload file to S3

The demo above shows that the file named c:\sync\logs\log1.xml was uploaded without errors to the S3 destination s3://atasync1/.

Use the command below to list the objects at the root of the S3 bucket.
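
            aws s3 ls s3://atasync1/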

Running the command above in PowerShell would result in a similar output, as shown in the demo below. As you can see in the output below, the file log1.xml is present in the root of the S3 location.

List the uploaded file in S3

Uploading Multiple Files and Folders to S3 Recursively

The previous section showed you how to copy a single file to an S3 location. What if you need to upload multiple files from a folder and sub-folders? Surely you wouldn't want to run the same command multiple times for different filenames, right?

The aws s3 cp command has an option to process files and folders recursively, and this is the --recursive option.

As an example, the directory c:\sync contains 166 objects (files and sub-folders).

The folder containing multiple files and sub-folders

Using the --recursive option, all the contents of the c:\sync folder will be uploaded to S3 while also retaining the folder structure. To test, use the example code below, but make sure to change the source and destination as appropriate to your environment.

You'll notice from the code below that the source is c:\sync, and the destination is s3://atasync1/sync. The /sync key that follows the S3 bucket name indicates to AWS CLI to upload the files into the /sync folder in S3. If the /sync folder does not exist in S3, it will be automatically created.

            aws s3 cp c:\sync s3://atasync1/sync --recursive          

The code above will result in an output similar to the demonstration below.

Upload multiple files and folders to S3

Uploading Multiple Files and Folders to S3 Selectively

In some cases, uploading ALL types of files is not the best option. For example, you may only need to upload files with specific file extensions (e.g., *.ps1). Two other options available to the cp command are --include and --exclude.

While the command in the previous section includes all files in the recursive upload, the command below will include only the files that match the *.ps1 file extension and exclude every other file from the upload.

            aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.ps1          

The demonstration below shows how the code above works when executed.

Upload files that matched a specific file extension

In another example, if you want to include multiple different file extensions, you will need to specify the --include option multiple times.

The example command below will include only the *.csv and *.png files in the copy command.

            aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.csv --include *.png          

Running the code above in PowerShell would present you with a similar result, as shown below.

Upload files with multiple include options

Downloading Objects from S3

Based on the examples you've learned in this section, you can also perform the copy operations in reverse. Meaning, you can download objects from the S3 bucket location to the local machine.

Copying from S3 to local requires you to switch the positions of the source and the destination. The source becomes the S3 location, and the destination is the local path, like the one shown below.

            aws s3 cp s3://atasync1/sync c:\sync          

Note that the same options used when uploading files to S3 also apply when downloading objects from S3 to local. For example, download all objects using the command below with the --recursive option.

            aws s3 cp s3://atasync1/sync c:\sync --recursive          

Copying Objects Between S3 Locations

Apart from uploading and downloading files and folders, using AWS CLI, you can also copy or move files between two S3 bucket locations.

You'll notice the command below uses one S3 location as the source, and another S3 location as the destination.

            aws s3 cp s3://atasync1/Log1.xml s3://atasync2/          

The demonstration below shows the source file being copied to another S3 location using the command above.

Copy objects from one S3 location to another S3 location
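
The --recursive option applies here as well. As a sketch based on the same example buckets, the command below would copy everything under the /sync key from atasync1 to atasync2.

            aws s3 cp s3://atasync1/sync s3://atasync2/sync --recursive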

Synchronizing Files and Folders with S3

You've learned how to upload, download, and copy files in S3 using the AWS CLI commands so far. In this section, you'll learn about one more file operation command available in AWS CLI for S3, which is the sync command. The sync command only processes the updated, new, and deleted files.

There are some cases where you need to keep the contents of an S3 bucket updated and synchronized with a local directory on a server. For example, you may have a requirement to keep transaction logs on a server synchronized to S3 at an interval.

Using the command below, *.XML log files located under the c:\sync folder on the local server will be synced to the S3 location at s3://atasync1.

            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml          

The demonstration below shows that after running the command above in PowerShell, all *.XML files were uploaded to the S3 destination s3://atasync1/.

Synchronizing local files to S3

Synchronizing New and Updated Files with S3

In this next example, it is assumed that the contents of the log file Log1.xml were modified. The sync command should pick up that modification and upload the changes done to the local file to S3, as shown in the demo below.

The command to use is still the same as in the previous example, repeated below for reference.
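
            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml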

Synchronizing changes to S3

As you can see from the output above, since only the file Log1.xml was changed locally, it was also the only file synchronized to S3.

Synchronizing Deletions with S3

By default, the sync command does not process deletions. Any file deleted from the source location is not removed at the destination. Well, not unless you use the --delete option.

In this next example, the file named Log5.xml has been deleted from the source. The command to synchronize the files will be appended with the --delete option, as shown in the code below.

            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml --delete          

When you run the command above in PowerShell, the deleted file named Log5.xml should also be deleted at the destination S3 location. The sample result is shown below.

Synchronize file deletions to S3

Summary

Amazon S3 is an excellent resource for storing files in the cloud. With the use of the AWS CLI tool, the way you use Amazon S3 is further expanded, opening up the opportunity to automate your processes.

In this article, you've learned how to use the AWS CLI tool to upload, download, and synchronize files and folders between local locations and S3 buckets. You've also learned that S3 buckets' contents can be copied or moved to other S3 locations as well.

There can be many more use-case scenarios for using the AWS CLI tool to automate file management with Amazon S3. You can even try combining it with PowerShell scripting and build your own tools or modules that are reusable. It is up to you to find those opportunities and show off your skills.
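
As a starting point, below is a minimal PowerShell sketch of such a reusable tool, a thin wrapper around aws s3 sync. The function name Sync-FolderToS3 and its parameters are hypothetical, not part of the AWS CLI.

            function Sync-FolderToS3 {
                param (
                    [Parameter(Mandatory)][string]$LocalPath,   # e.g., C:\sync
                    [Parameter(Mandatory)][string]$BucketUri,   # e.g., s3://atasync1/
                    [string]$Include = '*',                     # file pattern to include
                    [switch]$Delete                             # also mirror deletions to S3
                )

                # Build the argument list for the AWS CLI, then invoke it.
                $awsArgs = @('s3', 'sync', $LocalPath, $BucketUri,
                             '--exclude', '*', '--include', $Include)
                if ($Delete) { $awsArgs += '--delete' }
                & aws @awsArgs
            }

            # Example: sync only *.xml files and mirror deletions, as in the earlier examples.
            Sync-FolderToS3 -LocalPath 'C:\sync' -BucketUri 's3://atasync1/' -Include '*.xml' -Delete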

Further Reading

  • What Is the AWS Command Line Interface?
  • What is Amazon S3?
  • How To Sync Local Files And Folders To AWS S3 With The AWS CLI


Source: https://adamtheautomator.com/upload-file-to-s3/
