I’m evaluating a number of cloud backup solutions, and Duplicacy is winning so far. My requirements: minimal complexity, good support, storage-agnostic, local encryption.
I need to back up 1TB of data to cloud storage. I already have reliable LAN backup but want a solution for complete disaster recovery. Of this 1TB, roughly 85% is very cold (it never changes and I never access it). The remaining 15% rarely changes; maybe 5% of it changes daily or weekly. Realistically I will probably never need to access any of it unless my house burns down (and since I’m in Northern California, that’s becoming a real possibility almost every year).
So my first question: somewhere on this forum I found this chart: https://i.imgur.com/g3zXJii.png Where did it come from? I’ve worked through some S3 and Azure calculators and found that Azure may in fact be less expensive for what I’m trying to do. Can anyone comment on cost? I’m aiming to keep it under $45/year for 1TB of storage (no access).
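For reference, here’s the back-of-envelope math I’m using to sanity-check the calculators. The per-GB rates below are placeholder assumptions, not real quotes from any provider; plug in current archive-tier pricing:

```python
# Rough annual storage cost for 1 TB, ignoring API calls, egress,
# and minimum-retention charges. Rates are illustrative placeholders.
TB_IN_GB = 1024

rates_per_gb_month = {
    "example archive tier A": 0.00099,  # placeholder rate
    "example archive tier B": 0.002,    # placeholder rate
    "example standard tier":  0.005,    # placeholder rate
}

for tier, rate in rates_per_gb_month.items():
    annual = TB_IN_GB * rate * 12
    print(f"{tier}: ${annual:.2f}/year")
```

At rates in that range the annual cost lands anywhere from ~$12 to ~$61 for 1TB, which is why the choice of tier seems to matter more than the choice of provider for my target of <$45/year.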
I recently got fiber and currently have 60 Mbps up. I’m about to run some speed tests, but I’m still wondering: if I schedule this first backup to run only nightly, or throttled throughout the day, can Duplicacy deal with failures or unfinished backups?
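For context, my rough math on the initial upload (assuming the link actually sustains the full 60 Mbps, which is optimistic):

```python
# How long does the initial 1 TB upload take at 60 Mbps?
data_bits = 1e12 * 8   # 1 TB expressed in bits
link_bps = 60e6        # 60 Mbps upstream

hours_full_speed = data_bits / link_bps / 3600
print(f"At full speed: {hours_full_speed:.0f} hours")  # ~37 hours

# If I only let it run during an 8-hour nightly window:
nightly_window_h = 8
print(f"Nightly-only: {hours_full_speed / nightly_window_h:.1f} nights")  # ~4.6 nights
```

So even in the best case the first backup spans several sessions, which is why the interrupted/unfinished-backup behavior matters to me.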
And lastly, somewhere on this forum I read that backing up to a local repository first, then sending to the cloud, is better practice. I’ll try to find that explanation again, but can someone enlighten me here?
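If I understand the suggestion, the flow would be: back up to a local storage first, then replicate the snapshots to the cloud with Duplicacy’s copy command. A minimal sketch of what I imagine that looks like (the storage names `local` and `cloud` are made up, and I’m assuming both storages were set up as copy-compatible via `duplicacy add -copy`):

```python
import subprocess

# Two-step flow as I understand it: back up to the local storage,
# then copy the resulting snapshots to the cloud storage.
# "local" and "cloud" are hypothetical storage names; adjust to taste.
subprocess.run(["duplicacy", "backup", "-storage", "local"], check=True)
subprocess.run(["duplicacy", "copy", "-from", "local", "-to", "cloud"], check=True)
```

Is that the right mental model, and what is the actual benefit over backing up straight to the cloud?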
Thank you for any details you can provide.