Duplicacy copy speed up?

It looks like, at the moment, the copy command fetches a full file listing from the destination. Is there any way to cache this locally? It takes a lot of time and costs a decent amount of money with some backends.

My current backup strategy is to keep a full local backup and push it off-site once the local backup is done.

Which storage backend are you using? Some backends like Google Drive can use multiple threads to list chunks in the storage when you specify the -threads option.
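For example (the storage names `local` and `gcs` here are just placeholders for whatever you named yours):

```
duplicacy copy -from local -to gcs -threads 4
```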

I'm using Google Cloud Storage in 'Archive' mode. I currently have about 4M chunks, and a copy with -threads 4 takes about 3 minutes and costs $0.20 just to download the chunk list.
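For what it's worth, that figure lines up with the pricing as I understand it: a list call returns up to 1,000 objects, so ~4M chunks is roughly 4,000 Class A operations, and at around $0.50 per 10,000 Class A operations for Archive storage that works out to about $0.20 per copy run.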

Bump? A local cache would speed things up a lot :slight_smile:

A local cache of the storage's chunk list would invalidate cross-computer deduplication, so it is unlikely to be implemented in Duplicacy.


Would it? It would only skip copying a chunk if it had already been copied (or found at the destination) before.

It would be an issue if chunks went missing, or if you ran a prune job that deletes chunks but not the snapshot IDs, or something like that. An option to ignore or revalidate the cache would be needed, I agree.
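Just to make the idea concrete, here is a rough Go sketch of the kind of cache I have in mind. This is not how Duplicacy works today; the cache file name `destination-chunks.cache` and the plain-text one-ID-per-line format are made up for illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// loadChunkCache reads a previously saved list of destination chunk IDs
// (one per line) into a set. An empty set is returned if no cache file
// exists yet, which would force a full listing on the first run.
func loadChunkCache(path string) (map[string]bool, error) {
	cache := make(map[string]bool)
	f, err := os.Open(path)
	if err != nil {
		if os.IsNotExist(err) {
			return cache, nil
		}
		return nil, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		cache[scanner.Text()] = true
	}
	return cache, scanner.Err()
}

// chunksToCopy returns the source chunks not known to exist at the
// destination according to the cached listing.
func chunksToCopy(sourceChunks []string, cache map[string]bool) []string {
	var missing []string
	for _, id := range sourceChunks {
		if !cache[id] {
			missing = append(missing, id)
		}
	}
	return missing
}

func main() {
	cache, err := loadChunkCache("destination-chunks.cache")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cache error:", err)
		os.Exit(1)
	}

	// Placeholder chunk IDs standing in for the source storage listing.
	source := []string{"aa01example", "bb02example", "cc03example"}
	for _, id := range chunksToCopy(source, cache) {
		fmt.Println("would copy", id)
	}
}
```

An option to ignore the cache would then just skip `loadChunkCache` and fall back to a real listing of the destination, which covers the prune / missing-chunk case.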