Hello Duplicacy community, I've searched for tips on boosting backup/restore performance to Google Drive and haven't found much, so I'm starting this thread hoping to pick up some tips from seasoned users who have managed it.
I use the Duplicacy web edition on a Windows NAS to make local and cloud encrypted backups of my NAS data. My cloud of choice is Google Workspace, simply because I happen to have a lot of space available there. Even if it's often argued that it's not the best cloud destination, I'd like to make the best of it if possible.
For other, separate usage (data that doesn't need versioning, deduplication and so on) I use rclone to push data to/from this same Google Workspace, and it saturates my gigabit connection easily, so I know this isn't a bandwidth problem towards Google. I have played with the 'threads' option, but the most I can usually get out of Duplicacy is about 15 MB/s, while rclone does 100+ MB/s without a problem.
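For reference, this is roughly the CLI equivalent of the backup job I've been running (storage name and thread count are just examples, not my exact setup):

```
# -storage picks the destination, -threads sets the number of parallel upload threads
duplicacy backup -storage gdrive -threads 16 -stats
```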
I thought this might have something to do with my chunk size? Duplicacy's default average is 4 MiB as far as I know, and sending loads of small files is slower than sending fewer big ones. Would it be better if I raised it? Would it make a noticeable difference?
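If I understand correctly, the chunk size can only be set when the storage is initialized, so trying this out would mean something like the following on a test storage (the 16M value and paths are just an example I'd experiment with, not a recommendation):

```
# -c sets the average chunk size; min/max default to 1/4x and 4x of it
duplicacy init -e -c 16M test-repo gcd://Duplicacy/test
```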
Does it make any difference if the backup is encrypted or not?
Other than that, -threads is the only option I know of that influences throughput. If anyone has ideas to share, I'd be happy to hear them.
Another thing I wanted to ask about is 'copy-compatible': is there any disadvantage whatsoever to making my cloud backup copy-compatible? Once I've found the ideal settings for the cloud backup, I figure I'll just tick 'copy-compatible' and use the exact same settings for the local backup. Then it should always work interchangeably, so I can copy it to other local and cloud destinations, right? I don't see much benefit in using different encryption keys etc. for the different backup locations.
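For what it's worth, my understanding of the CLI equivalent of that checkbox is something like this (storage names and paths are placeholders):

```
# add the cloud storage as copy-compatible with the existing local one ("default")
duplicacy add -e -copy default gdrive my-repo gcd://Duplicacy/backups

# later, copy snapshots between the two storages
duplicacy copy -from default -to gdrive -threads 8
```

Please correct me if that's not how the web edition maps onto the CLI.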