Personal backups, choices and strategies

I’d like to describe how I wanted to set up our home server and handle backups, and then some big dilemmas - it would be nice to get feedback and hear how others are approaching this.

Context: a server at home, 20% for public-facing sites, 80% for personal storage by family members (not all in the same place). The sites are simple; they just need to be up and running. Personal storage is all within Nextcloud (OSS Dropbox equivalent with calendar, address book, photo galleries, and more). Lots of photos, also for semi-professional work (i.e. big high-res “shoots”).

Our data demands are modest. Total storage right now is 1 TB, with expected growth of perhaps 50 GB/mo on average.

Now the setup I’m working with: an Intel i3 NUC, a 256 GB root SSD, and 2x 2 TB external USB3 HDs.

The system runs Ubuntu 16.04 LTS (I’m comfy with Linux), and the external HDs are set up as BTRFS with the raid1 profile, serving as the data area for Nextcloud.
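For anyone unfamiliar with the raid1 profile, this is roughly how such a two-disk BTRFS mirror gets created and mounted - a sketch only, with placeholder device names and mount point rather than my exact setup:

```python
# Illustrative provisioning of a two-disk BTRFS raid1 data area.
# WARNING: mkfs.btrfs wipes the devices - names below are placeholders.
import os, subprocess

DEVICES = ["/dev/sdb", "/dev/sdc"]      # the two external USB drives
MOUNT_POINT = "/srv/nextcloud-data"     # Nextcloud data directory

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Both data (-d) and metadata (-m) mirrored across the two drives.
run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1", *DEVICES])

# Mounting any one member device brings up the whole filesystem.
os.makedirs(MOUNT_POINT, exist_ok=True)
run(["mount", DEVICES[0], MOUNT_POINT])

# Show how space is allocated per profile, as a sanity check.
run(["btrfs", "filesystem", "df", MOUNT_POINT])
```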

And to round it off: I’ve set up a remote SFTP server on a Raspberry Pi as the off-site backup target for Duplicacy - which is working splendidly, btw.

To summarise: the data lives on 3 disks, one of which is off-site with history. There are more copies when using Nextcloud in Dropbox mode, with (some of) the data synced to each person’s own laptop, tablet, etc.

The local setup has been running for nearly a year, but the off-site backup is recent.

And yet … I’ve run into serious trouble, with days needed to get things back on track. One of the two BTRFS drives failed when I started adding a 3rd drive (I suspect a USB glitch while live-inserting the new drive). And from there it went downhill (I also suspect that BTRFS on Ubuntu 16.04 is not really ready for major failure scenarios).

The trouble with this sort of thing is not the metadata (file listings) but the data itself. It turns out that some files (about 1000 so far) have become unreadable (caught by BTRFS’s checksums), and I’m restoring from the off-site Duplicacy backup. I do not want to find out a year from now that this failure has led to bit rot somewhere - it needs to be resolved 100% (and then I’d like to get my life back, please).
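For what it’s worth, an independent checksum manifest makes this kind of verification possible regardless of the filesystem: record a hash of every restored file once, then re-check months later. A minimal sketch (the paths and manifest name are just placeholders):

```python
# Build (or re-verify) a SHA-256 manifest of a data tree, independent of
# the filesystem's own checksums. Paths below are placeholders.
import hashlib, json, os, sys

DATA_DIR = "/srv/nextcloud-data"
MANIFEST = "/var/local/data-manifest.json"

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build():
    # Walk the tree and record a digest for every file.
    manifest = {}
    for root, _, files in os.walk(DATA_DIR):
        for name in files:
            path = os.path.join(root, name)
            manifest[os.path.relpath(path, DATA_DIR)] = sha256(path)
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f, indent=1)
    print(f"recorded {len(manifest)} files")

def verify():
    # Re-hash everything and report files that vanished or changed.
    with open(MANIFEST) as f:
        manifest = json.load(f)
    bad = []
    for rel, digest in manifest.items():
        path = os.path.join(DATA_DIR, rel)
        if not os.path.isfile(path) or sha256(path) != digest:
            bad.append(rel)
    print(f"{len(bad)} missing or mismatching files")
    return bad

if __name__ == "__main__":
    build() if "--build" in sys.argv else verify()
```

Run it once with --build after the restore is finished, then re-run it (without arguments) every few months to catch any silent rot.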

This whole recovery is taking a long time (even longer because a 2nd attempt messed things up again, and I’m now pretty sure it’s a software issue in BTRFS). So I’m ready to ditch BTRFS. Its features are fantastic, but it looks like some (basic) failure scenarios are just not getting the attention they need (lots of info on the web is stale, bugs unsolved, wikis outdated - as usual in the fast-moving OSS world).

Which brings me to the dilemma: how to best set up this system for long-term peace of mind?

I’m now considering a single 2 TB EXT4 disk for Nextcloud, a 2nd disk managed locally for easy and fast redundancy, and the 3rd disk off-site (leaving it just as it is right now).

The question is how to set up that local redundancy: LVM-based RAID1? Periodic rsync in combination with local mount/unmount? Periodic rsync over the LAN to an independent little setup, e.g. another Raspberry Pi? Duplicacy to a directly-mounted 2nd drive? Duplicacy over the LAN, again to a Raspberry Pi?
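For the “periodic rsync with local mount/unmount” variant, the scheduled job would look roughly like this - wrapped in Python just for error handling; the device, mount point and source path are placeholders:

```python
# Periodic local mirror: mount the 2nd drive, rsync onto it, unmount again.
# Device, mount point and source path are placeholders.
import os, subprocess, sys

SOURCE = "/srv/nextcloud-data/"              # trailing slash: sync contents
BACKUP_DEV = "/dev/disk/by-label/backup2"    # the locally attached 2nd drive
MOUNT_POINT = "/mnt/backup2"

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

os.makedirs(MOUNT_POINT, exist_ok=True)
if run(["mount", BACKUP_DEV, MOUNT_POINT]) != 0:
    sys.exit("could not mount the backup drive")
try:
    # -a preserves permissions/timestamps, --delete mirrors deletions too
    status = run(["rsync", "-a", "--delete", SOURCE, MOUNT_POINT + "/"])
finally:
    run(["umount", MOUNT_POINT])
sys.exit(status)
```

The appeal of the mount/unmount variant is that the 2nd copy sits unmounted (and untouched) except during the sync window itself.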

I don’t want to start an endless discussion; after all, everyone has different trade-offs. But perhaps some suggestions, tips, critiques? I’m OK with this mishap - after all, it was my decision not to have our data on some cloud service. But it’d be nice if it never happens again :slight_smile:

Cheers,
-jcw

Maybe not what you wanted to hear - but for backup I would steer clear of any DIY solutions (such as, and especially, a Raspberry Pi with an external drive and the like; the Raspberry Pi was never meant to be used in production - it’s a prototyping device, and for a reason: reliability was never part of the requirements) and instead rely on commercially supported solutions backed by a company dedicated to keeping them working, as opposed to hoping and praying.

Anecdotal example:

As a CrashPlan customer slowly migrating towards being a former CrashPlan customer, my research led me to realize that the most cost-effective and reliable way to back up is to use a reliable backup tool (hi Duplicacy) on the endpoints and reliable storage as a backend. If you have less than a couple of TB of data, some online services might do OK; for larger amounts, however, it becomes cheaper to buy two Synology boxes (why Synology and not parts from Newegg + Linux + headache? Because a commercial company will keep it working while I sleep) and replicate one onto the other between geographically diverse locations.

In addition, a select set of the most irreplaceable data should also go to commercial S3-type storage.

This has been working for me for quite some time now; the entire extended family backs up to my Synology NAS over VPN via Duplicacy and/or Synology tools (Cloud Station Backup), and then my NAS replicates to another one, also over VPN, using native Synology tools.


Seems you’re doing the right thing by employing Duplicacy in your backup strategy - regardless of whether it’s hosted on a Raspberry Pi, so long as it completes backups and you regularly test restores - but, correct me if I’m wrong, you currently only have 1 backup, off-site? I would certainly try to have a 2nd backup location, and it would be more useful to you, in terms of restoration times, if that were local. On the LAN, for example.
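On the “regularly test restores” point: at minimum you can automate a `duplicacy check` (which, as I understand it, verifies that every chunk referenced by your snapshots still exists in the storage) and get told when it fails; restoring an actual sample file and comparing it against the live copy is better still. A rough sketch, assuming the Duplicacy CLI is installed and the script is started from the initialized repository directory:

```python
# Periodic "duplicacy check" with a simple success/failure notification.
# Assumes the duplicacy CLI is on the PATH and the script runs from the
# initialized repository directory; notify() is just a placeholder.
import datetime, subprocess

def notify(message):
    # Placeholder: swap in email, a chat webhook, etc.
    print(f"[{datetime.datetime.now():%Y-%m-%d %H:%M}] {message}")

result = subprocess.run(["duplicacy", "check"],
                        capture_output=True, text=True)
if result.returncode == 0:
    notify("duplicacy check: OK")
else:
    notify("duplicacy check FAILED\n" + result.stdout + result.stderr)
```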

Alternatively, or in addition, you also want to minimise the headache of recovering quickly from hardware failure, i.e. actual redundancy - without needing to restore directly from backup, which can be slow - except in the worst of cases.

If you had more than just 2 disks, I’d recommend something like SnapRAID, plus mergerFS to pool the data disks. However, that might not suit the type of data you have; it’s best for mostly static data. The reason I like SnapRAID, however, is that it’s independent of the file system and doesn’t tie you into a certain configuration - you could yank a drive out and still access files. You can easily add disks, even of different sizes.

Additionally, my feeling is that external USB enclosures aren’t terribly robust for a 24/7 server. I’ve always had various issues with USB controllers, and finding one that doesn’t crap out once in a while has been difficult (lately, ASMedia’s ASMT1153e has been working for me, so long as you have good-quality USB cables). That type of connectivity, combined with live software RAID 1, would kinda scare me TBH.

And obviously there are real performance issues with multiple attached USB drives, especially if you want to scale up.

So personally speaking, I would look at working your way towards an all-in-one NAS with directly attached storage. Be that DIY (your own build, sticking with Ubuntu, or FreeNAS + the NextCloud plugin) or off-the-shelf (Synology + NextCloud, as saspus recommends). Incidentally, my own home solution involves an old HP MicroServer running Windows Server, SnapRAID, DrivePool and 21 TB worth of WD Reds; it acts as Duplicacy storage for local PC backups, with GSuite for off-site backup. It’s not perfect, but it lets me expand and re-jiggle things.

Anyway, hope the above has given you some ideas at least.


Thanks for the input - several new aspects (both s/w and h/w) for me to ponder on, thx again.

Assuming that you’re using neither a commercial co-hosting provider nor your employer’s Internet, that means you’re dependent on someone’s home internet connection. How has your experience been with that? I see several potential problems that have so far kept me from going down that path:

  1. Those internet connections sometimes go down. With both ends of the connection being consumer ISPs, the total downtime basically doubles.
  2. Unless the person whose home you’re using to host your NAS is technologically literate, you are probably facing additional downtime due to “human error” and you may even have to travel there to fix things.
  3. Electricity costs. This is probably covered by your disclaimer about the amount of data, but I want to mention it anyway: depending on electricity prices, running a 20W NAS 24/7 for a month may well cost the same as 1TB of commercial storage.

It’s been working well enough that I have closed my CrashPlan Pro account, and dual-Synology is my official backup strategy now.

True. But this is an off-site backup, so a few hours or even days of downtime does not affect anything. In total, last year there was only a single outage, when the ISP was upgrading infrastructure in the community.

Ooooh, yes. That does happen - mostly the cat stepping on a shiny red toggle on a surge protector :). The fix was a phone call and some scotch tape away.

In California, US, electricity costs about $0.20/kWh. A fully populated Synology 918+ consumes about 40W. So, roughly $5.80/month.
So yes, if you did all that for just 1 TB of storage it would not make sense, especially considering the equipment cost.
However, when you start adjusting for:

  1. larger capacity (I have 20 TB, of which I use 12);
  2. winter months, when you would run the heater anyway, so the electricity cost per TB is effectively zero (in the sense that if not the Synology, your heater would have consumed the same amount of energy; the Synology helps heat up the room so your heater works less - conservation of energy);
  3. the other utility the NAS provides besides being a backup destination (e.g. you don’t need a huge amount of storage on your shiny iMac, since you can use the NAS for photos and iTunes libraries, or as a media server for the living-room AppleTV), etc.

– it turns out that it pays for itself pretty quickly. That’s just my experience.
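For reference, a quick sketch of the arithmetic behind that $5.80/month figure (rounding a month to roughly 730 hours):

```python
# Rough monthly running cost of a 40 W NAS at $0.20 per kWh.
power_kw = 0.040         # Synology 918+, fully populated
hours_per_month = 730    # roughly 24 * 365 / 12
price_per_kwh = 0.20     # US, CA
print(round(power_kw * hours_per_month * price_per_kwh, 2))  # -> 5.84
```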
