From time to time I want to restore some small file, like a log file that was already deleted by software that only keeps the past week of logs around. In theory Duplicacy works great for that - but in practice it's always really annoying how slow the restore process is.
I first need to select the Backup ID, then wait about 20 minutes while it does "Listing revisions". After those 20 minutes I have to remember that I even triggered it, select the revision I need, and then wait another 1-3 hours while it does "Listing files in revision". Only after that can I select my 1 MB log file and restore it, which itself goes quickly.
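For reference, this is the kind of restore I mean, expressed as CLI commands - just a rough sketch, assuming the repository is already initialized against the same storage, and using a made-up revision number and file path; I'd expect the CLI to do the same "list revisions" / "list files" work as the GUI:

```
# List the revisions for this backup ID ("Listing revisions" in the GUI).
duplicacy list

# List the files in one revision - presumably the same work as the
# hours-long "Listing files in revision" step.
duplicacy list -r 1234 -files

# Restore only that one small file from the chosen revision.
# (1234 and logs/app.log are placeholders.)
duplicacy restore -r 1234 -- logs/app.log
```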
I guess the reason it's so slow is that it needs to download a lot of metadata from the storage backend, but why can't it just cache that metadata locally somehow? I wouldn't mind giving it a few hundred GB of extra local disk space if that allowed quick restores. I can see in Task Manager that Duplicacy uses only 1-10 Mbit/s of network bandwidth while it spends hours on "Listing files in revision", along with ~1 MB/s of disk I/O and about 0.1% CPU.