Any tips to boost Google Drive performance as much as possible?

Hello Duplicacy community, I have searched for tips on boosting backup/restore performance to Google Drive & not found much, so I’m making this thread hoping to get some tips from other seasoned users who have managed it.

I use the Duplicacy web version on a Windows NAS to make local & cloud encrypted backups of my NAS data. My cloud of choice is Google Workspace, because I happen to have a lot of space available there. Even though it’s often argued that it’s not the best cloud destination, I would like to make the best of it if possible.

For other, separate usage (data that does not need versioning/deduplication and so on) I use rclone to push data to/from this Google Workspace, and I easily saturate my gigabit connection, so I know this is not a bandwidth problem towards Google. I have played with the ‘threads’ option in Duplicacy and usually top out at around 15 MB/s (while rclone does 100+ MB/s without a problem).

Could this have something to do with my chunk size? Duplicacy’s default average is 4 MiB as far as I know, & sending loads of small files is slower than sending fewer bigger ones. Would it be better if I raised this to a higher number? Would it make a difference?
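
In case it helps anyone trying the same thing: my understanding (please correct me if I’m wrong) is that the chunk size is fixed per storage when the storage is initialized, so testing a bigger chunk size means creating a new storage. In CLI terms it would look something like the sketch below; the storage name, snapshot id and path are placeholders, and the flag names are from memory, so check `duplicacy help init`.

```
# Placeholder names/paths; flags from memory, see `duplicacy help init`.
# Sizes given in bytes: 16 MiB average, with min/max at the usual 1/4x and 4x ratios.
duplicacy init -e \
    -c 16777216 -min 4194304 -max 67108864 \
    mynas gcd://Backups/duplicacy
```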

Does it make any difference if the backup is encrypted or not?

Other than that, -threads is the only option I know of that influences throughput. If anyone has ideas to share, I’d be happy to hear them.
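
For context, this is roughly what I’m running today in CLI terms (in the web UI I put the same flags into the job’s options field, which I believe amounts to the same thing); the storage name and thread count are just examples.

```
# Example only: storage name and thread count are placeholders.
duplicacy backup -storage gdrive -stats -threads 16
```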

Another thing I wanted to ask about is ‘copy-compatible’: is there any disadvantage whatsoever to making my cloud backup ‘copy-compatible’? Once I’ve found the ideal settings for my cloud backup, I figure I’ll just tick ‘copy-compatible’ & use the exact same settings for the local backup. Then it should always work interchangeably to copy it to other local & cloud destinations, right? I don’t see much benefit in using different encryption keys etc. for the different backup locations.
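
In case it matters for the answer, my understanding of the CLI equivalent is something like the sketch below (flag names from memory, see `duplicacy help add` and `duplicacy help copy`; names and paths are placeholders). -bit-identical is optional, I gather, but is supposed to make the chunk files identical across the two storages, so the storage could even be synced with tools like rclone.

```
# Placeholder names/paths; flags from memory.
# Make a second storage copy-compatible with the existing "local" storage:
duplicacy add -copy local -bit-identical gdrive mynas gcd://Backups/duplicacy

# Back up to one storage, then copy snapshots between them:
duplicacy copy -from local -to gdrive -threads 8
```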

I found a topic here where a user apparently changed the chunk size to 14 MB and then achieved 100 MB/s speeds.

I guess I have some testing to do.

This strongly depends on the type of data you are backing up. For static data it does not matter: you upload it once and that’s it. For static data that only grows in large additions (e.g. media), a huge chunk size will help with practically no downsides. For data that sees small, scattered changes (text documents, code bases, databases), there is a point where the throughput gains from large chunks are negated by the massive amount of extra data that has to be uploaded and stored because the chunks are too large. Whether that matters of course also depends on how much of that type of data you have and how much of it changes; if it’s a lot, this can matter even if the storage itself is unlimited, like Google Drive.
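
If you want to check whether this is actually hurting you on your own data, the check command can print per-revision statistics showing how much new chunk data each backup added (flag names from memory, see `duplicacy help check`; the storage name is a placeholder). If the reported new data per revision is consistently much larger than what you actually changed, the chunk size is probably too big for that data set.

```
# Per-revision chunk statistics for one storage (name is a placeholder).
# Compare the new/unique data per revision with how much you really changed.
duplicacy check -storage gdrive -stats -tabular
```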

From my limited personal experience backing up my entire iCloud folder (about 1 TB) to Storj with a 32 MB average chunk size, I did not notice any excessive space usage over the last year. I don’t have a 4 MB chunk size history of the same data to compare against, but the size on the storage looks reasonable.

I’m pretty sure it must be a power of two.

Yes, it’s possible that user was talking about the number of threads and not the chunk size; I’m not sure.

Anyway, that’s worth a shot for me to try.