Duplicacy copy uses memory proportional to backup size

Please describe what you are doing to trigger the bug:

I have a Duplicacy storage of 5.5 TiB with over 1000 revisions, and I'm trying to use duplicacy copy to copy the snapshots from one storage to another (Wasabi to Google Drive).

Please describe what you expect to happen (but doesn’t):

I expect the copy to complete successfully, allowing me to use the new repository.

Please describe what actually happens (the wrong behaviour):

Instead, memory usage rises with each "Copying snapshot X at revision Y" message, eventually eating through all 32 GiB of my RAM until the OOM killer terminates the process.
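
For illustration, here is a minimal Go sketch of the failure mode (hypothetical types, not Duplicacy's actual source): every revision's chunk hash list is loaded and then stays reachable for the rest of the run, so peak memory grows with the total number of revisions instead of being bounded by a single snapshot's list.

```go
// Sketch only: Snapshot and the loop below are illustrative stand-ins,
// not Duplicacy's real data structures.
package main

import "fmt"

type Snapshot struct {
	ID          string
	Revision    int
	ChunkHashes []string // one entry per chunk referenced by this revision
}

func main() {
	var snapshots []*Snapshot
	for r := 1; r <= 1000; r++ {
		snapshots = append(snapshots, &Snapshot{ID: "X", Revision: r})
	}
	for _, s := range snapshots {
		fmt.Printf("Copying snapshot %s at revision %d\n", s.ID, s.Revision)
		// Stand-in for downloading this revision's chunk sequence; for a
		// multi-TiB backup this list is large.
		s.ChunkHashes = make([]string, 100000)
		// ... the chunks would be copied here ...
		// Because s.ChunkHashes remains referenced through the snapshots
		// slice, none of these lists can ever be garbage-collected, and
		// memory climbs with every revision processed.
	}
}
```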

A workaround I have found is to copy in batches of revisions, for example duplicacy copy -id X -r 1-100, then -r 100-200, and so on.

This should be fixed by commit gilbertchen/duplicacy@d43fe1a ("Release the list of chunk hashes after processing each snapshot").
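
Conceptually, the change amounts to the pattern sketched below (again with hypothetical types and helpers rather than the actual Duplicacy code): drop the reference to each snapshot's chunk hash list as soon as that snapshot has been processed, so the garbage collector can reclaim it and peak memory stays bounded by one snapshot.

```go
// Sketch of the fix pattern named in the commit message; types and
// helpers are illustrative, not Duplicacy's real API.
package main

type Snapshot struct {
	Revision    int
	ChunkHashes []string
}

// loadChunkHashes stands in for downloading a snapshot's chunk sequence.
func loadChunkHashes(s *Snapshot) {
	s.ChunkHashes = make([]string, 100000)
}

// copyChunk stands in for transferring one chunk between storages.
func copyChunk(hash string) {}

func copyAll(snapshots []*Snapshot) {
	for _, s := range snapshots {
		loadChunkHashes(s)
		for _, h := range s.ChunkHashes {
			copyChunk(h)
		}
		// The fix: release the list after processing each snapshot, so
		// at most one revision's chunk hashes are held at a time.
		s.ChunkHashes = nil
	}
}

func main() {
	snapshots := make([]*Snapshot, 1000)
	for i := range snapshots {
		snapshots[i] = &Snapshot{Revision: i + 1}
	}
	copyAll(snapshots)
}
```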


Thank you! I just compiled with the patch and it seems to be fixed; memory usage is only 1.2 GiB now.

