Use Rclone

Learn how to use Rclone with OORT Storage.

What is Rclone?

Rclone is a command-line program used to back up and sync files and directories to cloud storage platforms. Rclone is certified for use with OORT Storage. Over 40 cloud storage products are supported; see the Rclone website to learn more.

Prerequisites

Before you begin, you will need:

  * The Rclone executable, downloaded for your operating system from https://rclone.org/downloads/.
  * An OORT Storage access key and secret key.

Configuring Rclone to connect to OORT Storage

To use Rclone, open a terminal window and navigate to the directory where you saved the executable.
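
You can confirm that the binary runs with the version command:

rclone version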

  1. Run rclone config to start the configuration wizard, then select n to create a new remote.

rclone config
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
  2. Enter a name for the new remote configuration, e.g. dss-remote.

name> dss-remote
  3. A long list of supported storage types is displayed. Enter s3 and hit Enter.

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
   \ (s3)
...
Storage> s3
  4. A further list of S3 storage providers is displayed. Enter Other and hit Enter.

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Amazon Web Services (AWS) S3
   \ (AWS)
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ (Alibaba)
 3 / Ceph Object Storage
   \ (Ceph)
...
25 / Any other S3 compatible provider
   \ (Other)
provider> Other
  5. You will be asked how you want to enter your credentials. Enter false and hit Enter.

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> false
  6. Enter your access key and secret key.

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> your-access-key-id

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> your-secret-access-key
  7. When prompted for a region, press Enter to leave it empty.

Option region.
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region.
   \ ()
   / Use this only if v4 signatures don't work.
 2 | E.g. pre Jewel/v10 CEPH.
   \ (other-v2-signature)
region>
  8. Specify the endpoint for OORT Storage.

The OORT Storage S3-compatible API endpoint URLs are:

Standard: https://s3-standard.oortech.com

Archive: https://s3-archive.oortech.com

Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint> https://s3-standard.oortech.com
  9. Press Enter to skip the location constraint, as there is none.

Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
  10. Enter 1 to choose the default ACL (private).

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl> 1
  11. You will be asked whether you want to edit the advanced config. Enter n to keep the default advanced configuration.

Edit advanced config?
y) Yes
n) No (default)
y/n> n
  12. A summary of the remote configuration is displayed. Type y and hit Enter to confirm.

Configuration complete.
Options:
- type: s3
- provider: Other
- access_key_id: your-access-key-id
- secret_access_key: your-secret-access-key
- endpoint: https://s3-standard.oortech.com
- acl: private
Keep this "dss-remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
  13. Now you should see one remote configuration available. Enter q and hit Enter to quit the configuration wizard.

Current remotes:

Name                 Type
====                 ====
dss-remote           s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
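
Alternatively, the same remote can be created non-interactively with the config create command, passing the wizard answers as key=value pairs (a sketch assuming a recent Rclone version; substitute your own keys):

rclone config create dss-remote s3 \
    provider=Other \
    access_key_id=your-access-key-id \
    secret_access_key=your-secret-access-key \
    endpoint=https://s3-standard.oortech.com \
    acl=private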

Create a Bucket

Use the mkdir command to create a new bucket, e.g. mybucket.

rclone mkdir dss-remote:mybucket

List All Buckets

Use the lsf command to list all buckets.

rclone lsf dss-remote:

Delete a Bucket

Use the rmdir command to delete an empty bucket.

rclone rmdir dss-remote:mybucket

Use the purge command to delete a non-empty bucket with all its content.

rclone purge dss-remote:mybucket

Upload Objects

The copy command copies the source to the destination. It does not transfer files that are identical on source and destination, as determined by size and modification time or MD5SUM.

Use the copy command to upload an object.

rclone copy --progress ~/Photos/myphoto.png dss-remote:mybucket/photos/

The copy can also be done from another remote path to OORT DSS.

rclone copy --progress s3-remote:mybucket/photos/myphoto.png dss-remote:mybucket/photos/

Sync or Migrate Data

The sync command syncs the source to the destination, changing the destination only. It doesn't transfer files that are identical on source and destination, as determined by size and modification time or MD5SUM. The destination is updated to match the source, including deleting files if necessary.

rclone sync --progress ~/Photos/ dss-remote:mybucket/photos/

The sync can also be done from OORT DSS to the local file system.

rclone sync --progress dss-remote:mybucket/photos/ ~/Photos/

Or even between another cloud storage provider and OORT DSS.

rclone sync --progress s3-remote:mybucket/photos/ dss-remote:mybucket/photos/
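
Because sync makes the destination match the source, it may delete files at the destination. To preview the changes without applying them, add Rclone's --dry-run flag to any sync command:

rclone sync --dry-run ~/Photos/ dss-remote:mybucket/photos/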

List Objects

Use the ls command to recursively list all objects in a bucket.

rclone ls dss-remote:mybucket

Use the lsf command to non-recursively list all objects in a bucket or folder.

rclone lsf dss-remote:mybucket
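
To see the total number and size of the objects in a bucket or folder, use the size command:

rclone size dss-remote:mybucket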

Download Objects

Downloading is the same as copying, except that the source path is in the remote or a bucket, while the destination path is on your local storage.

rclone copy --progress dss-remote:mybucket/photos/myphoto.png ~/Downloads/

Use a folder as the remote source path to download all of its objects.

rclone copy --progress dss-remote:mybucket/photos/ ~/Downloads/
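
To verify that the downloaded files match the remote, the check command compares both sides by size and checksum:

rclone check dss-remote:mybucket/photos/ ~/Downloads/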

Delete Objects

Use the delete command to delete a single object or folder.

rclone delete dss-remote:mybucket/photos/myphoto.png
rclone delete dss-remote:mybucket/photos

Use the delete command to delete all objects in a folder.

rclone delete dss-remote:mybucket/photos/
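
As with sync and copy, delete accepts Rclone's --dry-run flag, and it can be combined with filter flags such as --min-size to restrict which files are removed:

rclone delete --dry-run --min-size 100M dss-remote:mybucket/photos/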
