Yes.
I did not see any failures in the logs.
After reading the Backup function in duplicacy_backupmanager.go, I have a clearer understanding of how the process works.
Chunks from the previous revision are never (re)uploaded
The Backup function reads the list of chunk IDs referenced by the last snapshot revision into a variable named chunkCache. It populates chunkCache purely from the previous revision's metadata; it does not verify whether those chunks actually exist on the storage.
// This cache contains all chunks referenced by last snapshot. Any other chunks will lead to a call to
// UploadChunk.
chunkCache := make(map[string]bool)
Importantly for this discussion, the converse of the code comment also holds: any chunk that is listed in chunkCache is never uploaded.
In other words, Duplicacy will not, under any circumstance, attempt to upload any chunk that the previous snapshot revision already refers to.
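To make the skip logic concrete, here is a minimal sketch of that dedup check. The function and variable names other than chunkCache and UploadChunk are my own assumptions, not Duplicacy's actual identifiers; the point is only the shape of the logic: membership in the set built from the last revision's metadata suppresses the upload, with no existence check against the storage.

```go
package main

import "fmt"

// uploadNewChunks is a hypothetical sketch (not Duplicacy's real code):
// chunk IDs referenced by the last revision are loaded into a set, and
// any current chunk found in that set is skipped without verifying that
// it still exists on the storage.
func uploadNewChunks(lastRevisionChunkIDs, currentChunkIDs []string) []string {
	chunkCache := make(map[string]bool)
	for _, id := range lastRevisionChunkIDs {
		chunkCache[id] = true // populated from snapshot metadata only
	}

	var uploaded []string
	for _, id := range currentChunkIDs {
		if chunkCache[id] {
			continue // assumed present on storage; never re-verified
		}
		uploaded = append(uploaded, id) // this is where UploadChunk would be called
	}
	return uploaded
}

func main() {
	// "b" is listed in the last revision, so it is skipped even if the
	// chunk file has actually gone missing from the storage.
	fmt.Println(uploadNewChunks([]string{"a", "b"}, []string{"b", "c"}))
}
```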
File metadata is compared with the previous revision
I also see that the function checks whether each local file has an exact match (by path, date, and size) in the last snapshot revision. If it does, the chunk ID references are simply copied from the last revision; no attempt is made to (re)upload those chunks or to verify that they still exist on the storage.
However, with the -hash option, the list of files in the last revision is ignored, so all local files are treated as new, i.e. eligible for backup. Everything goes through the chunker at once, which may produce different chunk boundaries than previous revisions did (see below).
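The metadata comparison can be illustrated with a hypothetical sketch. The type and function names here are my own (Duplicacy's actual structures differ); it only shows the decision described above: a path/size/mtime match means the chunk references are copied verbatim, while -hash forces every file through the chunker.

```go
package main

import "fmt"

// fileEntry is a hypothetical stand-in for a snapshot's per-file record.
type fileEntry struct {
	Path   string
	Size   int64
	MTime  int64
	Chunks []string // chunk references carried over from the previous revision
}

// shouldRechunk reports whether a local file must be fed to the chunker.
// Without -hash, a file whose path, size, and modification time all match
// the previous revision is skipped and its chunk IDs are reused. With
// -hash (hashMode == true), the previous list is ignored entirely.
func shouldRechunk(local fileEntry, previous map[string]fileEntry, hashMode bool) bool {
	if hashMode {
		return true
	}
	prev, ok := previous[local.Path]
	if !ok {
		return true
	}
	return prev.Size != local.Size || prev.MTime != local.MTime
}

func main() {
	prev := map[string]fileEntry{
		"docs/a.txt": {Path: "docs/a.txt", Size: 100, MTime: 1700000000, Chunks: []string{"c1", "c2"}},
	}
	unchanged := fileEntry{Path: "docs/a.txt", Size: 100, MTime: 1700000000}
	fmt.Println(shouldRechunk(unchanged, prev, false)) // metadata matches: chunk IDs copied, nothing re-read
	fmt.Println(shouldRechunk(unchanged, prev, true))  // -hash: everything is rechunked
}
```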
TL;DR A new snapshot revision can inherit missing chunks from the previous revision, unless the -hash option is used.
Read on for more of what I learned about trying to recreate missing chunks.
But can missing chunks be recreated?
The same file may be represented by different chunks in different contexts
My attempt at reading duplicacy_chunkmaker.go leads me to believe that a given chunk can contain data from more than one file. Files may be fed into the chunk maker in different orders and combinations each time a backup runs, depending on the state of the file system at the time. As a result, the exact same file may be chunked differently at different times, particularly near the start and end of each file, and for small files.
Please correct me if I’m misinterpreting the chunk maker’s behavior.
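A toy content-defined chunker illustrates the effect I believe I'm seeing. This is not Duplicacy's actual algorithm (it uses a proper rolling hash with min/max chunk sizes); the sketch only demonstrates the general property that chunk boundaries, and therefore chunk hashes, depend on the whole byte stream fed to the chunker, not on individual files in isolation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// chunkIDs is a deliberately simplistic content-defined chunker: it cuts
// a boundary whenever a running sum over the bytes seen so far satisfies
// a condition, and always cuts at end of stream. Chunk IDs are truncated
// SHA-256 hashes of each chunk's bytes.
func chunkIDs(stream []byte) []string {
	var ids []string
	start := 0
	var sum uint32
	for i, b := range stream {
		sum = sum*31 + uint32(b)
		if sum%7 == 0 || i == len(stream)-1 {
			h := sha256.Sum256(stream[start : i+1])
			ids = append(ids, fmt.Sprintf("%x", h[:4]))
			start = i + 1
			sum = 0
		}
	}
	return ids
}

func main() {
	fileA := []byte("the quick brown fox jumps over the lazy dog")
	fileB := []byte("pack my box with five dozen liquor jugs")

	// The same two files, concatenated in different orders, produce
	// different chunk sets: boundaries fall in different places, so the
	// chunks spanning file edges hash differently.
	ab := chunkIDs(append(append([]byte{}, fileA...), fileB...))
	ba := chunkIDs(append(append([]byte{}, fileB...), fileA...))
	fmt.Println(ab[0] == ba[0])
}
```

This prints false: even though both streams contain exactly the same file data, the first chunk of each stream covers different bytes, so its ID differs.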
Therefore it may not be possible to get Duplicacy to recreate and reupload missing chunks, even if the source data still exists
The way that Duplicacy creates and uploads chunks depends not only on the current state of files in the repository, but also on the complex ways the filesystem has changed throughout the backup history. A given chunk may encode parts of two or more files that happened to be eligible for upload at a particular moment, and it may be practically impossible to recreate the exact state that led to that chunking.