Connecting to COS (S3) using s3cmd

Overview

This guide shows you how to connect to your Cloud Object Storage (S3) account using the Linux command-line tool s3cmd.

Installing and Configuring s3cmd

The s3cmd tool is available in most default repositories, so you can install it with your package manager:

# CentOS/RHEL
# This may require installing the EPEL repository first, as outlined here.
$ yum install s3cmd

# Debian/Ubuntu 
$ apt-get install s3cmd 

Once the package has been installed, download our example configuration file and update it with your Cloud Object Storage (S3) credentials:

$ wget -O $HOME/.s3cfg https://gist.githubusercontent.com/greyhoundforty/676814921b8f4367fba7604e622d10f3/raw/422abaeb70f1c17cd5308745c0e446b047c123e0/s3cfg

The four lines that need to be updated are access_key, secret_key, host_base, and host_bucket. This is the same whether you use the example file or the one generated by running s3cmd --configure.
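For reference, those four entries in $HOME/.s3cfg look like the following. The endpoint shown is illustrative only; substitute the host and keys from your own COS credentials:

```ini
# Example ~/.s3cfg values -- replace each value with your own credentials.
# The host names below are an example endpoint, not necessarily yours.
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3-api.us-geo.objectstorage.softlayer.net
host_bucket = %(bucket)s.s3-api.us-geo.objectstorage.softlayer.net
```

The %(bucket)s placeholder in host_bucket is expanded by s3cmd to support virtual-host-style bucket addressing.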

Once those lines have been updated with the COS details from the customer portal, you can test the connection by running s3cmd ls, which lists all the buckets on the account.

$ s3cmd ls 
2017-02-03 14:52  s3://backuptest
2017-02-06 15:04  s3://coldbackups
2017-02-03 21:23  s3://largebackup
2017-02-07 17:44  s3://winbackup

Common s3cmd commands

To create a bucket, use the mb command:

$ s3cmd mb s3://shinynewbucket
Bucket 's3://shinynewbucket/' created


A Note on Buckets: Bucket names must be DNS-compliant. Names must be between 3 and 63 characters long, made up of lowercase letters, numbers, and dashes, must be globally unique, and must not be formatted like an IP address. A common way to ensure uniqueness is to append a UUID or another distinctive suffix to the bucket name.
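One way to generate such a name is sketched below. It assumes uuidgen is available (it ships with util-linux on most distributions); the "backups" prefix is just an example:

```shell
# Build a DNS-compliant, likely-unique bucket name by appending
# a lowercase UUID suffix to an example prefix.
suffix=$(uuidgen | tr '[:upper:]' '[:lower:]')
bucket="backups-${suffix}"
echo "$bucket"

# Sanity-check the result: 3-63 characters, lowercase letters,
# digits, and dashes only.
len=${#bucket}
if [ "$len" -ge 3 ] && [ "$len" -le 63 ] && echo "$bucket" | grep -Eq '^[a-z][a-z0-9-]*$'; then
    echo "OK: $bucket is DNS-compliant"
else
    echo "invalid bucket name: $bucket" >&2
fi
```

You could then create the bucket with s3cmd mb "s3://$bucket".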

To list the contents of a bucket, use the ls command and append a trailing / to the bucket name:

$ s3cmd ls s3://backuptest/
                       DIR   s3://backuptest/saltmaster/

$ s3cmd ls s3://backuptest/saltmaster/
2017-02-03 21:03       273   s3://backuptest/saltmaster/duplicity-full-signatures.20170203T210310Z.sigtar.gpg
2017-02-03 21:03       248   s3://backuptest/saltmaster/duplicity-full.20170203T210310Z.manifest.gpg
2017-02-03 21:03       372   s3://backuptest/saltmaster/duplicity-full.20170203T210310Z.vol1.difftar.gpg

To upload a file, use the put command:

$ s3cmd put cors.py s3://shinynewbucket/
upload: 'cors.py' -> 's3://shinynewbucket/cors.py'  [1 of 1]
 255 of 255   100% in    0s   649.37 B/s  done

$ s3cmd ls s3://shinynewbucket/
2017-02-16 16:10       255   s3://shinynewbucket/cors.py

To retrieve a file, use the get command:

$ s3cmd get s3://backuptest/saltmaster/duplicity-full-signatures.20170203T210310Z.sigtar.gpg
download: 's3://backuptest/saltmaster/duplicity-full-signatures.20170203T210310Z.sigtar.gpg' -> './duplicity-full-signatures.20170203T210310Z.sigtar.gpg'  [1 of 1]
 273 of 273   100% in    0s     3.27 kB/s  done

To sync a local directory to Cloud Object Storage, use the sync command. Note: sync mirrors the source to the destination, and if you run it with the --delete-removed flag, files you have deleted locally will also be removed from the bucket. For this reason the sync command is not recommended as a backup option.

$ ls testingdir
file.1  file.10 file.2  file.3  file.4  file.5  file.6  file.7  file.8  file.9

$ s3cmd sync testingdir s3://shinynewbucket/
upload: 'testingdir/file.1' -> 's3://shinynewbucket/testingdir/file.1'  [1 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.10' -> 's3://shinynewbucket/testingdir/file.10'  [2 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.2' -> 's3://shinynewbucket/testingdir/file.2'  [3 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.3' -> 's3://shinynewbucket/testingdir/file.3'  [4 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.4' -> 's3://shinynewbucket/testingdir/file.4'  [5 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.5' -> 's3://shinynewbucket/testingdir/file.5'  [6 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.6' -> 's3://shinynewbucket/testingdir/file.6'  [7 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.7' -> 's3://shinynewbucket/testingdir/file.7'  [8 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.8' -> 's3://shinynewbucket/testingdir/file.8'  [9 of 10]
 0 of 0     0% in    0s     0.00 B/s  done
upload: 'testingdir/file.9' -> 's3://shinynewbucket/testingdir/file.9'  [10 of 10]
 0 of 0     0% in    0s     0.00 B/s  done

$ s3cmd ls s3://shinynewbucket/testingdir/
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.1
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.10
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.2
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.3
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.4
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.5
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.6
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.7
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.8
2017-02-16 16:15         0   s3://shinynewbucket/testingdir/file.9

To remove a specific file, use the del (or rm) command:

$ s3cmd del s3://backuptest/saltmaster/duplicity-full-signatures.20170203T210310Z.sigtar.gpg
delete: 's3://backuptest/saltmaster/duplicity-full-signatures.20170203T210310Z.sigtar.gpg'

To remove a bucket, use the rb command. Note: the bucket must be empty before the command will succeed. If the bucket is not empty, you can pass the --recursive flag to remove it along with all of its contents.

$ s3cmd rb s3://shinynewbucket/
ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty.

$ s3cmd rb s3://shinynewbucket/ --recursive
WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
delete: 's3://shinynewbucket/cors.py'
delete: 's3://shinynewbucket/testingdir/file.1'
delete: 's3://shinynewbucket/testingdir/file.10'
delete: 's3://shinynewbucket/testingdir/file.2'
delete: 's3://shinynewbucket/testingdir/file.3'
delete: 's3://shinynewbucket/testingdir/file.4'
delete: 's3://shinynewbucket/testingdir/file.5'
delete: 's3://shinynewbucket/testingdir/file.6'
delete: 's3://shinynewbucket/testingdir/file.7'
delete: 's3://shinynewbucket/testingdir/file.8'
delete: 's3://shinynewbucket/testingdir/file.9'
Bucket 's3://shinynewbucket/' removed