Supported storage backends

#1

Duplicacy currently supports local file storage, SFTP, WebDav and many cloud storage providers.

Local disk
Storage URL:  /path/to/storage (on Linux or Mac OS X)
              C:\path\to\storage (on Windows)

SFTP
Storage URL:  sftp://username@server/path/to/storage (path relative to the home directory)
              sftp://username@server//path/to/storage (absolute path)

Login methods include password authentication and public key authentication. Due to a limitation of the underlying Go SSH library, the key pair for public key authentication must be generated without a passphrase. To work with a key that has a passphrase, you can set up SSH agent forwarding which is also supported by Duplicacy.
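As a sketch of the agent workaround (the server name, repository id, and paths below are made-up, and a throwaway passphrase-less key stands in for your real passphrase-protected one), you can load the key into ssh-agent and let Duplicacy authenticate through the agent:

```shell
# Sketch: load a key into ssh-agent so Duplicacy can authenticate through
# the agent. A throwaway key stands in for your real passphrase-protected
# key here; the server and paths are made-up examples.
KEYDIR="$(mktemp -d)"
eval "$(ssh-agent -s)"                                        # start an agent; sets SSH_AUTH_SOCK
ssh-keygen -t ed25519 -N "" -C duplicacy-demo -f "$KEYDIR/demo_key" -q
ssh-add "$KEYDIR/demo_key"                                    # for a real key, enter the passphrase once
# With the agent running, Duplicacy can reach it via SSH_AUTH_SOCK:
# duplicacy init my-backups sftp://user@server.example.com//backups/duplicacy
```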


Dropbox
Storage URL:  dropbox://path/to/storage

For Duplicacy to access your Dropbox storage, you must provide an access token that can be obtained in one of two ways:

  • Create your own app on the Dropbox Developer page, and then generate the access token

  • Or authorize Duplicacy to access its app folder inside your Dropbox (following this link), and Dropbox will generate the access token (which is not visible to us, as the redirect page showing the token is just a static HTML page hosted by Dropbox). The actual storage folder will be the path specified in the storage URL, relative to the Apps folder.


Amazon S3
Storage URL:  s3://amazon.com/bucket/path/to/storage (default region is us-east-1)
              s3://region@amazon.com/bucket/path/to/storage (other regions must be specified)

You’ll need to input an access key and a secret key to access your Amazon S3 storage.
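As a sketch (the bucket and key values below are placeholders), the credentials can also be supplied via the DUPLICACY_S3_ID and DUPLICACY_S3_SECRET environment variables, which Duplicacy reads for the default storage instead of prompting:

```shell
# Sketch: supply the S3 credentials via environment variables instead of
# typing them at the prompt (bucket and key values are placeholders).
export DUPLICACY_S3_ID="my-access-key"
export DUPLICACY_S3_SECRET="my-secret-key"
# duplicacy init my-backups s3://amazon.com/my-bucket/path/to/storage
```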

Minio-based S3-compatible storages are also supported via the minio and minios backends:

Storage URL:  minio://region@host/bucket/path/to/storage (without TLS)
Storage URL:  minios://region@host/bucket/path/to/storage (with TLS)

There is also a backend for S3-compatible storage providers that require V2 signing:

Storage URL:  s3c://region@host/bucket/path/to/storage

Wasabi
Storage URL: (latest on master branch)
             wasabi://region@s3.wasabisys.com/bucket/path 
             wasabi://us-east-1@s3.wasabisys.com/bucket/path (us-east-1 region)
             wasabi://us-west-1@s3.us-west-1.wasabisys.com/bucket/path (us-west-1 region)

Storage URL: (2.1.0 or older)
             s3://region@s3.wasabisys.com/bucket/path

Where region is the storage region, bucket is the name of the bucket, and path is the path to the top of the Duplicacy storage within the bucket. Note that the us-west-1 URL additionally includes the region in the host name, while the us-east-1 URL does not.
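As a sketch of how the pieces fit together (the region, bucket, and path below are made-up examples):

```shell
# Sketch: assemble a wasabi:// storage URL from its parts
# (region, bucket, and path are made-up examples).
REGION="us-west-1"
BUCKET="my-backups"
DIR="duplicacy"
STORAGE_URL="wasabi://${REGION}@s3.${REGION}.wasabisys.com/${BUCKET}/${DIR}"
echo "$STORAGE_URL"   # wasabi://us-west-1@s3.us-west-1.wasabisys.com/my-backups/duplicacy
```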

Wasabi is a relatively new cloud storage service providing an S3-compatible API. It is well-suited for storing backups, because it is much cheaper than Amazon S3, with a storage cost of $0.0049/GB/month (see note below) and no additional charges for API calls or download bandwidth.

S3 and Billing

Short Version

The s3 storage backend renames objects with a copy-and-delete, which is inexpensive on AWS but more costly on Wasabi. Use the wasabi backend so renames are handled properly.

Long Version

Wasabi’s billing model differs from Amazon’s in that any object created incurs charges for 90 days of storage, even if the object is deleted earlier than that, and then the monthly rate thereafter.

As part of the process for purging data which is no longer needed, Duplicacy renames objects. Because S3 does not support renaming objects, Duplicacy’s s3 backend does the equivalent by using S3’s copy operation to create a second object with the new name and then deleting the one with the old name. S3-style renaming with Wasabi will incur additional charges during fossilization because of the additional objects it creates. For example, if a new 1 GB file is backed up in chunks on day 1, the initial storage will incur fees of at least $0.0117 (three months at $0.0039 each). If the file goes away and all snapshots that contained it are pruned on day 50, renaming the chunks will create an additional 1 GB of objects with a newly-started 90-day clock, at a cost of another $0.0117.
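The arithmetic in that example can be checked directly (using the $0.0039/GB/month figure quoted above):

```shell
# Verify the example's arithmetic: 1 GB stored for the 90-day minimum
# at $0.0039/GB/month costs 3 * 0.0039 dollars; an S3-style rename on
# day 50 starts a second 90-day clock for the copied objects.
awk 'BEGIN { printf "initial 90 days:    $%.4f\n", 3 * 0.0039 }'
awk 'BEGIN { printf "after s3 rename:    $%.4f\n", 2 * 3 * 0.0039 }'
```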

The wasabi backend uses Wasabi’s rename operation to avoid these extra charges.

Snapshot Pruning

Wasabi’s 90-day minimum for stored data means there is no financial incentive to reduce utilization through early pruning of snapshots. Because of this, the strategy shown in the documentation for the prune command can be shortened to the following without incurring additional charges:

                                  # Keep all snapshots younger than 90 days by doing nothing
$ duplicacy prune -keep 7:90      # Keep 1 snapshot every 7 days for snapshots older than 90 days
$ duplicacy prune -keep 30:180    # Keep 1 snapshot every 30 days for snapshots older than 180 days
$ duplicacy prune -keep 0:360     # Keep no snapshots older than 360 days

DigitalOcean Spaces
Storage URL: s3://nyc3@nyc3.digitaloceanspaces.com/bucket/path/to/storage

DigitalOcean Spaces is an S3-compatible cloud storage service provided by DigitalOcean. The storage cost starts at $5 per month for 250GB, plus $0.02 for each additional GB. DigitalOcean Spaces has the lowest bandwidth cost (1TB free per account and $0.01/GB thereafter) among providers that charge bandwidth fees. There are no API charges, which further lowers the overall cost.

Here is a tutorial on how to set up Duplicacy to work with DigitalOcean Spaces: How to Manage Backups to the Cloud with Duplicacy | DigitalOcean


Google Cloud Storage
Storage URL:  gcs://bucket/path/to/storage

You must first obtain a credential file by authorizing Duplicacy to access your Google Cloud Storage account or by downloading a service account credential file.

You can also use the s3 protocol to access Google Cloud Storage. To do this, you must enable S3 interoperability in your Google Cloud Storage settings and set the storage URL to s3://storage.googleapis.com/bucket/path/to/storage.


Microsoft Azure
Storage URL:  azure://account/container

You’ll need to input the access key when prompted.


Backblaze B2
Storage URL: b2://bucketname

You’ll need to input the account ID and application key.

Backblaze’s B2 storage is one of the least expensive (at 0.5 cent per GB per month, with a download fee of 1 cent per GB, plus additional charges for API calls).

Please note that if you back up multiple repositories to the same bucket, the lifecycle rules of the bucket should be set to Keep all versions of the file, which is the default. The Keep prior versions for this number of days option will also work if the number of days is more than 7.


Google Drive
Storage URL: gcd://path/to/storage

To use Google Drive as the storage, you first need to download a token file from Google Drive for Duplicacy by authorizing Duplicacy to access your Google Drive, and then enter the path to this token file to Duplicacy when prompted.


Microsoft OneDrive
Storage URL: one://path/to/storage

To use Microsoft OneDrive as the storage, you first need to download a token file from OneDrive for Duplicacy by authorizing Duplicacy to access your OneDrive, and then enter the path to this token file to Duplicacy when prompted.


Hubic
Storage URL: hubic://path/to/storage

To use Hubic as the storage, you first need to download a token file from Hubic for Duplicacy by authorizing Duplicacy to access your Hubic drive, and then enter the path to this token file to Duplicacy when prompted.

Hubic offers the most free space (25GB) of all major cloud providers, and there is no bandwidth charge (same as Google Drive and OneDrive), so it may be worth a try.


OpenStack Swift
Storage URL: swift://user@auth_url/container/path

If the storage requires more parameters you can specify them in the query string:

swift://user@auth_url/container/path?tenant=<tenant>&domain=<domain>

The following is the list of parameters accepted by the query string:

  • domain
  • domain_id
  • user_id
  • retries
  • user_agent
  • timeout
  • connection_timeout
  • region
  • tenant
  • tenant_id
  • endpoint_type
  • tenant_domain
  • tenant_domain_id
  • trust_id

This backend is implemented using the ncw/swift library (a Go language interface to Swift / OpenStack Object Storage / Rackspace Cloud Files).


WebDav
Storage URL:  webdav://username@server/path/to/storage (path relative to the home directory)
              webdav://username@server//path/to/storage (absolute path --> mind the `//`)

#3

I’m confused about this text on the Backblaze B2 section:
“Please note that if you back up multiple repositories to the same bucket, the lifecyle rules of the bucket is recommended to be set to Keep all versions of the file which is the default one. The Keep prior versions for this number of days option will work too if the number of days is more than 7.”

TheBestPessimist wrote on a topic about wasabi: “Duplicacy does it’s own file management and therefore the storage shouldn’t.”

Is there a reason to recommend enabling file retention on B2 but not wasabi?


#4

My only reason is that I have no idea how wasabi works.

If wasabi has features similar to B2’s lifecycle rules, then I suppose they should be set in such a way that no files are deleted by wasabi and only duplicacy can delete the files.

I also didn’t make that rule about B2; someone else suggested it, and the team here just added the notes. :lab_coat:


#5

Cool, thanks for clarifying!


#6

samba is missing?


Added a samba:// storage backend that is basically a local drive backend but with caching enabled (for networked drives)


#7

Forgot to add samba, because I generally (on Windows) mount my network drives, and therefore they just look like folders to me. Plus I’m not sure how the authentication is passed to duplicacy.


#8

Hmm! Should I be using samba:// then, instead of UNC //server/share on Windows? I don’t map drive letters - UNC paths are enough for me - but how does this cache thing work?


#9

For local-disk-based storages, Duplicacy doesn’t save metadata chunks in a local cache (under .duplicacy/cache) as it does for all other storages.

However, there is no need to prepend samba:// to //server/share, since the cache option is turned on by default for UNC storage paths.


#10

Since this does not appear documented anywhere, can you please explain how to use a UNC share (even if it’s via samba, which is SMB) from a Windows client?

If you try to init a storage URL without any prefix (like samba: or s3: etc.) it will fail. I tried using samba from my Windows PC and it seems to have init’ed the storage. However, when I ran a backup, it appeared that all the files were simply copied locally. That is, a new directory was created immediately under the root of my repository with the pseudo-path of the UNC share, and it appears all the chunks, snapshots, and config went there.

Mapping a network drive can be troublesome for automated/batch processes, especially if they run under a different account. It’s far better to use the unique UNC/SMB path. (In Windows this is typically written with backslashes, not forward slashes.)

Thanks,
AJ


#11

I realize it’s bad form to reply to your own question, but after looking at the source code, I see there is special consideration given to UNC paths on Windows.

So the answer is: if you want to use a UNC path on Windows, simply specify it normally (with backslashes, not forward slashes) and with no storage prefix. Do NOT use “samba” or “flat”. (“samba” will not work; “flat” will probably work but does not allow caching.)

Ex:
duplicacy init -e PC_Users \\nasbox\backup\MyDuplicacy


#12

No, not at all. On the contrary, it is good practice to do so, as it will prevent others from wasting effort giving you an answer you already have.