Duplicacy quick-start (CLI)

Note that these instructions are for the CLI version. To get started with the GUI version, see the GUI quick-start guide.

Once you have the Duplicacy executable on your path, change to the directory that you want to back up (called the repository) and run the init command:

$ cd path/to/your/repository
$ duplicacy init mywork sftp://user@192.168.1.100/path/to/storage

This init command connects the repository to the remote storage at 192.168.1.100 via SFTP. It will initialize the remote storage if this has not been done before (creating the required Duplicacy config files and folders), but the storage directory itself must already exist (Duplicacy will not create it). It also assigns the repository id mywork to the repository. This id uniquely identifies the repository when other repositories back up to the same storage.
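
If you want the backups encrypted, init accepts the -e option, which prompts you for a storage password. This is a sketch using the same storage URL as above:

$ duplicacy init -e mywork sftp://user@192.168.1.100/path/to/storage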

You can now create backups of the repository by invoking the backup command. The first backup may take a while depending on the size of the repository and the upload bandwidth. Subsequent backups will be much faster, as only new or modified files will be uploaded. Each backup is identified by the repository id and an increasing revision number starting from 1.

$ duplicacy backup -stats
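
The backup command takes a few useful options. For example, -t attaches a tag (used by the prune examples below) and -threads uploads with multiple threads; the values here are only illustrative:

$ duplicacy backup -stats -t quick -threads 4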

The restore command rolls back the repository to a previous revision:

$ duplicacy restore -r 1
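
By default, restore does not overwrite files that already exist in the repository. The -overwrite option allows that, and patterns listed after -- restrict the restore to matching files (pattern syntax follows Duplicacy's include/exclude rules; the path below is a placeholder):

$ duplicacy restore -r 1 -overwrite -- path/to/file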

Sometimes you may not want to run the restore operation directly in the original repository, as it may overwrite files that have not been backed up, or you may want to restore on a different computer. Duplicacy is very flexible in this regard: you can create a new repository anywhere, and as long as it has the same repository id, Duplicacy will treat it as a clone of the original repository:

$ cd path/to/your/restore/dir   # this can be on the same or a different computer  
$ duplicacy init mywork sftp://user@192.168.1.100/path/to/storage
$ duplicacy restore -r 1

It is possible to back up two different repositories to the same storage. In fact, this is the recommended setup, because it takes advantage of cross-computer deduplication: identical files from different repositories are deduplicated automatically.

$ cd path/to/your/repository2   # this can be on the same or a different computer
$ duplicacy init mywork2 sftp://user@192.168.1.100/path/to/storage    # different repository id but same storage url
$ duplicacy backup
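
To see how much deduplication you are actually getting across repositories and revisions, the check command can print chunk statistics in a table:

$ duplicacy check -tabular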

Duplicacy provides a set of commands, such as list, check, diff, cat, and history, to manage backups:

$ duplicacy list            # List all backups
$ duplicacy check           # Check integrity of backups
$ duplicacy diff            # Compare two backups of the same repository, or the same file in two backups
$ duplicacy cat             # Print a file in a backup
$ duplicacy history         # Show how a file changes over time
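
For instance, to compare two revisions or inspect a single file (the revision numbers and file path here are placeholders):

$ duplicacy diff -r 1 -r 2            # Compare revisions 1 and 2
$ duplicacy cat -r 1 path/to/file     # Print the file as stored in revision 1
$ duplicacy history path/to/file      # Show the file's attributes in each revision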

The prune command removes backups by revision, tag, or retention policy:

$ duplicacy prune -r 1            # Remove the backup with revision number 1
$ duplicacy prune -t quick        # Remove all backups with the tag 'quick'
$ duplicacy prune -keep 1:7       # Keep 1 backup per day for backups older than 7 days
$ duplicacy prune -keep 7:30      # Keep 1 backup every 7 days for backups older than 30 days
$ duplicacy prune -keep 0:180     # Remove all backups older than 180 days
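
Multiple -keep options can be combined into a single retention policy; they must be ordered by the age threshold (the number after the colon) from largest to smallest. The schedule below is only an example:

$ duplicacy prune -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7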

The first time the prune command is called, it removes the specified backups but keeps all unreferenced chunks as fossils. Because it uses a two-step fossil collection algorithm to clean chunks, you will need to run it again to remove those fossils from the storage:

$ duplicacy prune           # Chunks from deleted backups will be removed if deletion criteria are met
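
Two related options are worth knowing about: -exhaustive scans the storage for all unreferenced chunks rather than only those from the backups just deleted, and -exclusive skips the two-step fossil collection and deletes chunks immediately, which is safe only when no other repository is backing up to the storage at the same time:

$ duplicacy prune -exhaustive
$ duplicacy prune -r 1 -exclusive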

To back up to multiple storages, use the add command to add a new storage. The add command is similar to the init command, except that the first argument is a storage name used to distinguish different storages:

$ duplicacy add s3 mywork s3://amazon.com/mybucket/path/to/storage
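
If you intend to use the copy command shown below, add the new storage with the -copy option instead, so that it is copy-compatible with the first storage (which init names default):

$ duplicacy add -copy default s3 mywork s3://amazon.com/mybucket/path/to/storage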

You can back up to any storage by specifying the storage name:

$ duplicacy backup -storage s3

However, if the repository changes between two backup operations, the backups created this way will differ across storages. A better approach is to use the copy command to copy specified backups from one storage to another:

$ duplicacy copy -r 1 -to s3   # Copy backup at revision 1 to the s3 storage
$ duplicacy copy -to s3        # Copy every backup to the s3 storage
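
A common routine, sketched here with the storage names used above, is to back up to the default storage and then replicate the new backup to the s3 storage:

$ duplicacy backup -stats
$ duplicacy copy -to s3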

I need a bit more information for the quick-start on Linux…

Where do I need to put the executable, and what do I need to do to get it to run on Alpine Linux?
I want to run the CLI version within the WebUI Docker version.