Point taken, but say I was storing 5TB: rather than $5/mo, it’s basically 20 cents per TB more, so $6/mo. Basically, 20% more of something really cheap is still quite cheap. It’s going to be fairly immaterial for most Duplicacy users (I say this completely unfounded and without research, but who here has 50, 100, 500 terabytes?)
Yes, there’s a one-year minimum storage duration or you pay early-deletion fees, but I was storing my backups long term on Amazon anyway, so (at least for me) it’s purely academic and not actually any different at all. Maybe someone who’s churning their backups rapidly would benefit, but then hot storage might be no more expensive for them anyway if they were really going for it. Maybe I’m a complete edge case, but I imagine most Duplicacy users have multi-year retention - you never know when a file went corrupt, or you made a mistake and didn’t notice for years, IMHO.
Ultimately I’m on Duplicacy because I’m a CrashPlan refugee and I still want to work in a similar way - basically the ability to go back almost forever. It’s saved my bacon for reals, more than once. Otherwise I’d just pay for the consumer Backblaze backup plan and have 30 days.
The main bonus point is: have you ever tried to do a restore from AWS?
The workflow I found was this:
You have to make sure you’ve disabled the AWS setting (an S3 lifecycle rule, I believe) that sets those blobs to archive tier in the first place, or you’ll be chasing your tail…
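For reference, the setting in question is an S3 lifecycle rule on the bucket. A rule along these lines (the rule ID and prefix here are made-up placeholders, not anything Duplicacy mandates) is what quietly moves chunks to DEEP_ARCHIVE, and it’s this that needs disabling before a restore:

```json
{
  "Rules": [
    {
      "ID": "archive-duplicacy-chunks",
      "Status": "Enabled",
      "Filter": { "Prefix": "chunks/" },
      "Transitions": [
        { "Days": 0, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

You can dump the live rules with `aws s3api get-bucket-lifecycle-configuration --bucket <bucket>` to see what’s actually applied, and flip `Status` to `Disabled` while you work.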
You try to restore. It errors, and gives you a chunk reference in the Duplicacy log file; you go find the blob and tell AWS to restore it. You wait up to 12 hours or so (or you pay $currency for speed), checking back on the console periodically… once it’s there, you restore the file, overwriting the original.
You try to restore again. It gives you another chunk reference… rinse and repeat, maybe hundreds of times, each separated by up to 12 hours, until Duplicacy finally actually restores. A large file may take actual months of daily effort.
Maybe there’s a way to get Duplicacy to give you the entire list of chunks it needs, which you could then feed into the S3 CLI, but I didn’t research that far.
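If you did have such a list, the batch-restore part could be sketched something like this. To be clear: the bucket name and `chunks.txt` file here are placeholder assumptions (in practice you’d be scraping chunk keys out of the Duplicacy log), and it’s written as a dry run that prints the commands rather than firing real AWS requests:

```shell
BUCKET="my-backup-bucket"   # placeholder bucket name

# Stand-in chunk list for the dry run; the real list would come from the Duplicacy log
printf 'chunks/ab/cdef0123\nchunks/cd/ef456789\n' > chunks.txt

# Dry run: print one restore request per chunk.
# Drop the leading "echo" to actually submit the requests (needs AWS credentials).
# Deep Archive only offers Standard (~12h) and Bulk (~48h) retrieval tiers.
while read -r key; do
  echo aws s3api restore-object \
    --bucket "$BUCKET" \
    --key "$key" \
    --restore-request "Days=7,GlacierJobParameters={Tier=Bulk}"
done < chunks.txt
```

Once the requests are in, `aws s3api head-object --bucket "$BUCKET" --key <chunk>` reports the restore status in its `Restore` field, which at least beats clicking around the console for hours.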
Alternatively, you restore your entire multi-terabyte archive, which will probably cost you hundreds of $currency - you just wiped out any saving you made by using GDA, when Google “just works”. I think for me to do a full DR from AWS was something like $750, and I can’t restore files granularly. Ouch.
Personally I think it’s worth the marginally increased storage costs for a platform that actually works with Duplicacy, no messing about required. It’s ultimately a small price to pay surely?