Best practice for check

Hello,

I am a new Duplicacy user, looking to replace CrashPlan with it on my family computers. So far I am very happy with the program; it is amazingly fast and ingeniously designed. It has a somewhat steep learning curve, though (compared to other programs like CrashPlan or Arc), but the forum offers a lot of useful information.

I am unsure about one thing, and that is the proper use of the check command. Obviously I want to keep my backups in good shape, but the command has many options and I am not sure which ones I should run regularly.

My current setup, configured via the GUI, looks like this:

  • backups every 30 minutes
  • maintenance every 2 hours with jobs in this order:
    – check (local storage, no params)
    – prune (local storage, 1:7 7:30 30:365 0:3650)
    – copy (to remote storage)
    – check (remote storage, no params)
    – prune (remote storage, 1:7 7:30 30:365 0:3650)

Both storages are my Linux servers accessed via SFTP.
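
As far as I understand, that schedule roughly corresponds to these CLI commands (just a sketch; the storage name "remote" and the ordering of the -keep options are my guesses):

```
# Local storage: verify that referenced chunks exist, then prune old revisions
duplicacy check
duplicacy prune -a -keep 0:3650 -keep 30:365 -keep 7:30 -keep 1:7

# Copy new revisions to the remote storage, then check and prune it too
duplicacy copy -from default -to remote
duplicacy check -storage remote
duplicacy prune -storage remote -a -keep 0:3650 -keep 30:365 -keep 7:30 -keep 1:7
```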

The question is, should I also run the check command with some of the non-default options? -files does not seem designed to run regularly, since it downloads every single chunk for every snapshot. It is possible to specify a snapshot revision, but not with some relative option like "last". -chunks does the same thing, but at the chunk level rather than the file level.
Also, does check actually fix problems on its own? For example, if it finds a missing chunk, will it somehow cause the next backup run to repair the issue, or should I watch the reports?
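
For context, these are the variants I mean (the revision number is just an example):

```
duplicacy check                 # default: verifies all referenced chunks exist on the storage
duplicacy check -chunks         # additionally downloads each chunk and verifies its integrity
duplicacy check -files          # downloads chunks and verifies every file in every snapshot
duplicacy check -files -r 123   # limit the file verification to a single revision
```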

It appears to me that as long as I do not expect anything to mess with the filesystems of my SFTP storages, the plain check command is good enough to ensure the integrity of my backups, but I may be missing something.

Thank you.

If your server filesystem guarantees data integrity (Btrfs/ZFS), then a simple check (which only verifies that all referenced chunks are present) should suffice. Otherwise I would either schedule a periodic check with the -chunks or -files flag to verify the integrity of the chunk files and actual restorability, or initialize the storage with erasure coding enabled, which helps combat eventual bit rot at the expense of slightly higher storage utilization.
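
For example, something along these lines (the 5:2 data/parity ratio, snapshot id, and URL are placeholders):

```
# Deeper periodic verification
duplicacy check -chunks    # download and verify the integrity of each chunk
duplicacy check -files     # verify actual restorability of every file in every snapshot

# Or: create a storage with erasure coding (5 data shards + 2 parity shards)
duplicacy init -erasure-coding 5:2 my-backups sftp://user@server//path/to/storage
```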

Thank you for the reply. Both my storages are running on btrfs, so I think I will not use -files or -chunks for now, but I will look into the erasure coding option, because I had completely missed it (it is not listed under the init command documentation thread). According to the related thread it works with the copy command, so I can easily create a new storage and then copy the existing backups over. Thanks!
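
If I understand it correctly, something like this should do it (the storage names, snapshot id, ratio, and URL are placeholders):

```
# Add a second, copy-compatible storage with erasure coding enabled
duplicacy add -erasure-coding 5:2 -copy default ec-storage my-backups sftp://user@server//path/to/new-storage

# Copy the existing backups into it
duplicacy copy -from default -to ec-storage
```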