Evaluating Best Backup Strategy

I’m evaluating numerous cloud backup solutions, and Duplicacy is winning so far. My requirements include minimal complexity, good support, storage-agnostic operation, and local encryption.

I need to back up 1TB of data to cloud storage. I already have reliable LAN backup but want a solution for complete disaster recovery. Of this 1TB, approximately 85% is very cold (it never changes and I never access it). The other 15% is a mix that rarely changes; maybe 5% changes daily or weekly. Nonetheless, I will probably never need to access any of it unless my house burns down (and since I’m in Northern California, that is becoming a real possibility almost every year).

So my first question: somewhere on this forum I found this chart: https://i.imgur.com/g3zXJii.png. Where did this come from? I’ve worked through some S3 and Azure calculators and found that Azure may in fact be less expensive for what I’m trying to do. Can anyone comment on cost? I’m aiming to keep my cost down to <$45/year for 1TB of storage (with no access).

I recently got fiber and currently have 60 Mbps up. I’m about to run some speed tests, but I’m still wondering: if I schedule this first backup to run only nightly, or throttle it through the day, can Duplicacy deal with failures or unfinished backups?

And lastly, somewhere on this forum I read that writing to a local repository before sending to the cloud is a better practice. I will try to find that explanation again, but can someone enlighten me here?

Thank you for any details you can provide.

That cloud storage comparison is a screenshot of the source data for my very old (2018) blog post, Cloud Storage Pricing | Trinkets, Odds, and Ends, which was since updated as Cloud Storage Pricing, Revisited | Trinkets, Odds, and Ends and by now has become obsolete again, but that screenshot seems to have taken on a life of its own.

It is not cost-effective to store a small amount of data, and 1TB is right at the threshold where most solutions are least optimal.

If you already have existing storage available, such as part of G Suite or Office 365, then use that. It will be effectively free.

Archival storage is generally not suitable for backups, and while it can be made to work, saving $15/year is definitely not worth the trouble.

The most popular option is Backblaze B2 at $5/TB/month, and you can get free egress through Cloudflare; but once you have more than 2TB, G Suite is the way to go.

Personally, I mostly back up local machines to a NAS appliance, which replicates the datastore (via Duplicacy copy) to Backblaze, but some machines (portables) back up to B2 directly, since I’m not always near the storage appliance. Both approaches work fine.
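For anyone wanting to replicate that two-stage setup, here is a minimal sketch driven from Python. The storage names, bucket, and repository path are hypothetical, so check the exact commands and flags against the CLI guide for your version:

```python
# Sketch of a "back up locally, then replicate to cloud" workflow using the
# Duplicacy CLI from Python. Storage names ("default", "b2"), the bucket name,
# and the repository path are hypothetical placeholders.
import subprocess

REPO = "/path/to/repository"  # hypothetical repository root

def run(*args):
    """Run a duplicacy command inside the repository and fail loudly on errors."""
    subprocess.run(["duplicacy", *args], cwd=REPO, check=True)

# One-time setup (comment out after the first run):
# run("init", "-e", "my-repo", "/mnt/nas/duplicacy-storage")               # local NAS storage
# run("add", "-e", "-copy", "default", "b2", "my-repo", "b2://my-bucket")  # copy-compatible cloud target

# Regular cycle: back up to the local storage first, then replicate the
# datastore to B2 so both storages hold compatible snapshots.
run("backup", "-stats")
run("copy", "-from", "default", "-to", "b2")
```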

My main advice would be to stay away from archival-grade storage; it can be made to work, but the result is not worth the effort.


@saspus’s response was complete.

I agree that it is not worth using cold storage: a lot of work for almost no savings.

I use B2 (1.8TB) and I’m satisfied with the price and service. The second option would be Wasabi (it is not worth it for a volume below 1TB, because this is the minimum amount charged). Amazon and Google Cloud have many “surprise costs” with API transactions.

No problem. When you restart the backup, it will see that the chunks uploaded so far are already in storage and will skip them.
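In other words, an interrupted first upload simply resumes on the next run. Here is a minimal sketch of a nightly, throttled run meant to be launched from cron or Task Scheduler; the repository path and the rate cap are assumptions, and `-limit-rate` takes KB/s, so verify the flag against your CLI version:

```python
# Sketch of a nightly, rate-limited backup run. If the previous run was
# interrupted, the next run re-checks which chunks already exist in the
# storage and only uploads the rest.
import subprocess

REPO = "/path/to/repository"  # hypothetical repository root
LIMIT_KBPS = 5000             # ~40 Mbps cap, leaving headroom on a 60 Mbps uplink

subprocess.run(
    ["duplicacy", "backup", "-stats", "-limit-rate", str(LIMIT_KBPS)],
    cwd=REPO,
    check=True,
)
```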


I’m curious about this reasoning. Which calculation did you do?

G Suite gives you unlimited storage at $12/month (and the 5-account minimum is not enforced if you only have one account).

Backblaze is $5/TB/month, so at 2TB you are in the same ballpark as G Suite in terms of cost.
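To make the arithmetic concrete, here is a quick sketch using only the prices quoted in this thread (B2 at $5/TB/month, G Suite at a flat $12/month); treat the numbers as illustrative, since pricing changes:

```python
# Back-of-the-envelope comparison using the prices quoted in this thread:
# Backblaze B2 at $5 per TB per month vs. G Suite at a flat $12/month.
def b2_annual(tb):
    return 5.0 * tb * 12      # pay per TB stored

def gsuite_annual(tb):
    return 12.0 * 12          # flat rate regardless of size

for tb in (1, 2, 5):
    print(f"{tb} TB: B2 ${b2_annual(tb):.0f}/yr vs G Suite ${gsuite_annual(tb):.0f}/yr")

# 1 TB: B2 $60/yr  vs G Suite $144/yr
# 2 TB: B2 $120/yr vs G Suite $144/yr  (roughly the crossover point)
# 5 TB: B2 $300/yr vs G Suite $144/yr
```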


Indeed. I looked again, and business accounts have the option of unlimited storage. Mine is a basic G Suite account and has a limit of 30 TB (30 GB included + extra storage purchased separately). But AFAIK, if you have fewer than 5 users you only get 1 TB, don’t you?

[Screenshot: gsuite.google.com.br storage plans, 2020-05-14]

The 5-user minimum for unlimited storage with G Suite Business is currently not enforced.


Ah, thanks, I didn’t know that it’s currently not being enforced.

Is this temporary? Could you please point me to where I can read about it? I searched and didn’t find anything…

I don’t know if anyone besides Google knows if or when Google intends to enforce it, but it seems like it’s been this way for at least a couple of years. Some people seem to (ab)use it to extreme limits (hundreds of TBs), so I imagine it will last until Google decides it’s cost-effective to do something about it.

Some people have been saying they’ll start enforcing it “any day now” for at least 2 years, but here we are. I’m guessing many of the people using the loophole will just pay for 4 more users if and when Google starts enforcing it, and it will still be orders of magnitude cheaper than storing the data with pay-as-you-go cloud object storage vendors.

I don’t think you’ll find anything official, since it’s a loophole. It seems to be popular with the DataHoarder community on reddit (example) and with the rclone community.

There’s also a thread at least partially about it on this forum here: So what about unlimited storage?

edit: I take that back, it looks like @TheBestPessimist is legit in the linked forum thread, and has 5 users in order to get unlimited :slight_smile:.


Yep, I have 6 users now (some new ones, some old ones).

My current usage is ~5TB.


And it looks like I’m the highest storage user. The closest one after me comes in at ~2.5TB.

I’m also using Google Drive File Stream instead of the normal Google Drive client or the Google Drive API, so I don’t have any issues with Google’s rate limiting (which is extremely relaxed for their own apps/usage).
