Aborted first backup: can it be continued? How to find unnecessary chunks?

Hi,

I have a question that I couldn’t find an answer to on this forum, so here it goes.

Using the GUI version, I started backing up two folders (initial backups) to B2: one small (about 1.7 GB) and one huge (about 170 GB).

About two-thirds of the way through the huge upload, my kid turned off the computer. When I resumed the 170 GB backup, it started all over again. However, Duplicacy shows that over 149 GB of my storage is used.

Questions:

  1. Can Duplicacy somehow “resume” the aborted backup? If not, it seems I’ll pay Backblaze for 149 GB of unusable data :frowning: plus a fee for removing that data.
  2. If resuming is not an option, how can I figure out which chunks I should delete (those from the bigger backup) and which can stay (the smaller backup)?

To be honest, I don’t know what to do now. It’s not very reassuring to have to keep my fingers crossed every time I run an initial backup, hoping that nothing happens so I won’t have to start over again (and pay Backblaze again for uploading and removing data).

Is there a good solution to the problems above? Any help would be highly appreciated.

Best regards

While resuming an initial backup may take longer, you have nothing to worry about in terms of data and upload bandwidth: it can all be re-used and resumed thanks to Duplicacy’s de-duplication.

To put it simply, when Duplicacy splits your data into chunks, then compresses and encrypts them, the vast majority of that output is ‘deterministic’, i.e. the same input data yields the same chunk content and the same chunk filename. That’s why a totally separate computer with the same data, backing up to the same storage, will de-duplicate most of those chunks (except for maybe a handful of metadata) and not have to re-upload them.
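To see why identical data produces identical chunk names, here is a deliberately simplified sketch. It is not Duplicacy’s actual algorithm (which uses a rolling hash, variable-size chunks, compression, and encryption); fixed-size chunks and SHA-256 names are assumptions made purely for illustration:

```python
# Simplified, hypothetical illustration of deterministic chunking.
# NOT Duplicacy's real algorithm: fixed-size chunks and plain SHA-256
# names stand in for its rolling-hash chunking and encrypted chunk IDs.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed 4 MiB chunks, for illustration only

def chunk_names(data: bytes) -> list[str]:
    """Split data into chunks; name each chunk by the hash of its content."""
    names = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        names.append(hashlib.sha256(chunk).hexdigest())
    return names

# Two "computers" with identical data derive identical chunk names, so
# the second backup can skip every chunk the first one already stored.
data = b"example file contents" * 1_000_000
assert chunk_names(data) == chunk_names(bytes(data))
```

Because the names depend only on the content, an aborted upload leaves behind chunks that a restarted backup will recognize and reuse.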

Just start the backup again and it’ll skip uploading chunks that already exist on the storage.

Even though it might look like it’s processing everything again, it has to chunk and hash the data locally before deciding that each chunk is already on B2. (For interrupted ‘initial’ backups, there’s also supposed to be an incomplete file that lists the chunks already uploaded, but I can’t say for certain whether that speeds up the restart as much as an incremental backup.)
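The “skip what already exists” behaviour can be sketched like this. The in-memory `storage` dict, the `upload_chunk` helper, and the naming scheme are all hypothetical stand-ins; real Duplicacy checks against the actual B2 bucket:

```python
# Hedged sketch of resume-by-skipping: before uploading a chunk, check
# whether the storage already holds a file with that content-derived name.
# The dict below is a toy stand-in for a B2 bucket.
import hashlib

storage: dict[str, bytes] = {}   # pretend this is the remote bucket

def upload_chunk(chunk: bytes) -> bool:
    """Upload a chunk unless it already exists; return True if uploaded."""
    name = hashlib.sha256(chunk).hexdigest()
    if name in storage:          # chunk survived the aborted backup
        return False             # skip it: no bandwidth or storage cost
    storage[name] = chunk
    return True

chunks = [b"a" * 100, b"b" * 100, b"a" * 100]
uploaded = [upload_chunk(c) for c in chunks]
# uploaded == [True, True, False]: the duplicate third chunk is skipped
```

The local hashing still costs CPU time, which is why a restarted initial backup looks busy even while it uploads almost nothing.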

Once your initial backups are complete, you can clean up with a Prune job using the -exhaustive flag, which scans the storage for chunks not referenced by any snapshot and removes them.
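Conceptually, an exhaustive prune is just set subtraction, which also answers the “which chunks can I delete?” question. A toy sketch, with made-up chunk names; the real work of listing the bucket and reading snapshots is what `prune -exhaustive` does for you:

```python
# Toy model of an exhaustive prune: list every chunk in storage, subtract
# the chunks referenced by any snapshot, and delete the leftovers.
# Chunk names here are invented for illustration.
all_chunks = {"c1", "c2", "c3", "c4"}   # everything found in the bucket
referenced = {"c1", "c3"}               # reachable from existing snapshots

orphans = all_chunks - referenced       # left over from the aborted backup
remaining = all_chunks - orphans        # what survives the prune

assert remaining == referenced          # only referenced chunks remain
```

So there is no need to identify orphaned chunks by hand; the prune job computes exactly this difference against the real storage.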