Guidance on initial environment setup

Now that I am forced to migrate off my current cloud backup service (thanks, CrashPlan), I’m considering deploying Duplicacy. I’ve just started working with the CLI version on Linux and would like some feedback on the initial design.

Requirements: local and cloud backup for one Linux media server and three Windows workstations; 1–2 TB of data to be protected.

High-level design:

  • Back up all four systems to an external hard drive attached to the media server (sketched below)
  • Use the copy command to upload those backups to the cloud service
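
For concreteness, here is roughly how I picture the local half (a sketch; the snapshot IDs, paths, and SFTP URL are all placeholders):

```sh
# On the media server, run inside the directory to be backed up;
# the storage is simply the path to the external drive.
duplicacy init media-server /mnt/backup

# On each workstation, run inside the directory to be backed up.
# Every machine points at the same storage but uses its own snapshot ID.
duplicacy init workstation-1 sftp://user@mediaserver/duplicacy-storage

# Run from each repository to back up to the shared local storage.
duplicacy backup -stats
```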

Is that the best (or at least a viable) approach?

Also, I anticipate the initial cloud upload will take some time (weeks), and the Linux server is normally rebooted by a cron job every seven days. Is there a way to automatically restart the backup/copy jobs if they are interrupted by a server reboot?

Thanks and Happy New Year!

David

Yes, backup/copy is the recommended way to back up to multiple storages, as this gives you identical backups on the different storages. And you don’t need to waste CPU time on your workstations performing the backup job twice.
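
For example, on the media server you would add the cloud destination as a second, copy-compatible storage and then copy to it (a sketch; the storage name `cloud` and the B2 bucket URL are placeholders for whichever cloud backend you pick):

```sh
# On the media server: add the cloud backend as a second storage.
# -copy default makes the new storage copy-compatible with the
# existing local one, so snapshots can be copied between them.
duplicacy add -copy default cloud media-server b2://my-duplicacy-bucket

# Copy all snapshots from the local storage to the cloud storage.
duplicacy copy -from default -to cloud
```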

There isn’t an automatic way to restart the backup/copy jobs other than creating another cron job to restart them after the reboot.
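
Something like this could serve as that cron job (a sketch; the file path, lock file, and storage name are assumptions, and `flock -n` simply skips the launch if a previous copy is still running):

```
# /etc/cron.d/duplicacy-restart (hypothetical path)
# Relaunch the cloud copy a few minutes after each reboot; flock -n
# skips the launch if a previous copy is still running.
@reboot root sleep 300 && cd /path/to/repository && flock -n /var/lock/duplicacy-copy.lock /usr/local/bin/duplicacy copy -from default -to cloud
```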


Thanks. Having the cloud backup restart automatically after a reboot does make sense. If I’m following how the program works, it will just pick up where the previous backup left off.

Is there any way to gauge percent completion of the backup/copy job?

If you pass the -stats option to the backup command, there is a log message reporting the progress and remaining time every time a chunk has been uploaded.

The copy command doesn’t have a similar option, but the number of uploaded chunks and the total number of chunks are always shown with every uploaded chunk.
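
So in practice, monitoring progress looks like this (same placeholder storage name as above):

```sh
# Per-chunk progress plus an estimated remaining time:
duplicacy backup -stats

# copy has no -stats flag, but each uploaded chunk is logged along
# with the uploaded/total chunk counts:
duplicacy copy -from default -to cloud
```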