Best practice for single NAS + Cloud backup?

In 2016 I upgraded to a QNAP TS-251+ to run CrashPlan, a memory hog. While QNAP specs said that max memory was 8GB, Intel’s spec for the J1900 processor was 16GB max, and users reported successfully upgrading to 16GB, so that’s what I did. While the QNAP UI reported 16GB memory capacity, over the years I noticed that no more than 8GB were ever actually used. I eventually learned that the TS-251+ had a custom SoC wired for only 8GB.

Hmmm…
:thinking: :face_with_monocle:

Like you, I archive to a personal NAS, so cloud storage is just part of my 3-2-1 backup procedures. (One of my NAS units is at my parents’ place, backing up their data and giving me another offsite copy of my data in turn.)

For most end users, a cloud storage provider shutting down is likely just a minor inconvenience. It’s users with multi-terabyte accounts who have difficulty if they don’t have local copies of their data.

Years ago, a cloud storage provider I was a customer of went bankrupt, but users were given sufficient time to download their data. I moved my data to another storage provider that was eventually acquired by Motorola and merged with its own existing cloud storage service. It was great for a few years until Motorola announced it was being discontinued. Motorola gave users many months of notice and even provided a bulk download utility.

I’ve been fortunate, while others not so much. Sometimes it’s really short notice: Cloud provider Nirvanix gives customers two weeks to vacate data. Other times a company has been running on fumes for so long that an orderly shutdown isn’t possible. Then there are times when it’s not due to a lack of finances. When Megaupload abruptly shuttered in 2012, it left customers desperately scrambling to recover their data: Feds Tell Megaupload Users to Forget About Their Data.

Unfortunately there are no guarantees even with paying customers at well funded companies. Samsung is closing “Samsung Cloud” at the end of September. Last month on July 30th Amazon announced it was discontinuing its “Amazon Drive” service at the end of next year.

I really hope that Wasabi survives because having more competition and options is good for everyone. I think the big question is whether Wasabi will eventually have to raise prices, add egress fees, and/or put bandwidth limits on monthly downloads in order to be sustainable. Amazon has its AWS cash cow; Microsoft has Azure, O365, Windows, Xbox, etc.; Google has ad revenue, Play, Maps, Workspace, Waze, Waymo, Fiber, Fi, Nest, Pixel and a stack of other products to rely on, while Wasabi has just cloud storage.


Thanks for the detailed history lesson :slight_smile: I wasn’t even aware of all these services, and yeah, you’re right that there are no guarantees; all we can do is prepare for the worst (or at least the most likely) scenarios.

Frankly, I think it would be best if Wasabi could join forces with a company like Vultr or Linode so they could offer combined cloud services (both storage and compute), but maybe there’s a good reason why each company focuses on only one area (unless it’s a tech giant).

You’re welcome. :nerd:

I only built a NAS after having gone through multiple cloud migrations (some involuntarily; some due to service changes). With cloud storage being the primary ‘1’ in my “3-2-1 backup”, it’s no longer urgent if my cloud storage provider goes belly up.

Yup, I agree. Most people nowadays pay for service bundles (e.g., phone, internet, TV, streaming) even though they could easily go à la carte because of combo savings and convenience.

Wasabi’s co-founders previously founded Carbonite, which was later acquired by OpenText (a public company that sells information management software). At least on paper, OpenText + Carbonite seems like a good match: Carbonite had said it would be operating at a loss for the foreseeable future, so having a profitable parent company improves its long-term prospects.

Storage has evolved into a commodity. It’s a race to the bottom, so low prices, “unlimited” storage and/or an API are the only differentiators left (and the last of these is disappearing as more and more services become S3-compatible).

But unlimited storage is so difficult to wrap a business around – it’s like an all-you-can-eat restaurant where customers can return for breakfast, lunch, dinner and snacks between meals every day for a monthly fee that costs less than a Big Mac combo. Not long ago OneDrive offered unlimited storage for $99/yr. Microsoft eventually nixed it with one reason being that some home users had over 75TB of data.


The primary difference those extra few bucks buy you is the dual NVMe cache slots and the ability to use the expansion unit. They can come in handy depending on your use case, and many people proclaim their benefits. I got my DS920+ on sale here in the UK, so the cost difference was negligible. For me, the RAM makes all the difference.

There’s no reason not to add the RAM. I’ve seen debates about it, but the impact is both measurable and observable. Obviously, Synology won’t provide support for the additional RAM, but that’s understood. The documentation says the max is 4GB, but that’s complete BS: it’s easily upgradeable to 20GB, and the CPU takes full advantage of the extra RAM. It doesn’t cause any issues at all, except improved performance.
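
If you want to verify what the system actually sees and uses after an upgrade, a quick check over SSH works (assuming SSH access is enabled in DSM; the exact output varies a bit by DSM version):

free -m                        # total, used and free memory in MB, plus buffers/cache
grep MemTotal /proc/meminfo    # raw total as reported by the kernel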

@1wanderingexpat I think the main question is what the added benefit of (for example) 20GB over 6GB would be for the applications I will actually be running.
If more isn’t needed, I can save money AND keep support from Synology.
Doing it just to see a higher number in the system dashboard isn’t worth it.

As usual, it depends on your use case. The biggest benefit from extra memory comes if you’re using your NAS as an application server, in which case it will depend on the apps you use. Things like CrashPlan or Duplicacy can eat a lot of RAM, Plex might, and even some filesystems are really memory-hungry (e.g., ZFS, though I’m not sure you can run it on Synology).

Even if you don’t run a lot of apps, extra memory will be used for filesystem cache which might improve your file access times. But again, this will depend on your usage patterns.

Bottom line: you won’t know whether you’ll (noticeably) benefit from extra RAM unless you install it and run your actual workload. Obviously, if you notice non-trivial swap utilization, you will likely benefit from extra RAM.
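
A quick way to spot-check swap and cache usage over SSH (command availability and output format can differ between DSM versions):

free -m            # the swap row shows how much swap is in use; buffers/cache is the filesystem cache
cat /proc/swaps    # lists each swap device and how much of it is occupied
vmstat 5 3         # if available, the si/so columns show ongoing swap-in/swap-out activity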


Yep, there’s absolutely no need to rush into upgrading the RAM. It can be done after initial setup and after deciding how you’d like to use the NAS and Duplicacy.

4GB vs. 16GB DDR4 SO-DIMM...

Besides the extra money spent on an additional 12GB that might go unused, or very rarely used, there are other less obvious costs…

  • The extra RAM requires electricity whether or not it’s holding any data. The difference between a 4GB and a 16GB DDR4 SO-DIMM is roughly $3 to $15 extra per year on an electric bill, depending on where one lives in the U.S. (in other countries it can be significantly higher); see the rough arithmetic after this list. Hang onto the DS420+ for 5 years and the extra electricity costs more than the SO-DIMM itself; 10 years and it’s equal to about 1/3 the cost of the DS420+.

  • If the motherboard wasn’t designed for a 16GB SO-DIMM, the extra power required could overheat the traces and pads on the PCB (for the same reason it’s always better to use a properly sized extension cord). Best case, the motherboard runs hotter; worst case – if not a fire hazard – the extra heat can weaken the contacts between the pads and solder, causing all kinds of system instability or hardware failure (low-temp solder is commonly used when a SoC is soldered on). The extra heat will also speed up degradation of the electronic components.

  • The extra heat will also cause more expansion and contraction, increasing the odds of subtle system instability caused by poor contact between a memory module and its socket (the plastic/metal clips help keep a memory module from completely popping out, but weren’t designed to hold it like a clamp) – I’ve seen this happen on everything from $100 desktop PCs to $10,000+ rackmount servers.
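
As a rough back-of-the-envelope check of the electricity estimate above (the ~3W of extra draw for the larger module and the $0.15/kWh rate are assumptions, not measurements):

awk 'BEGIN { watts=3; rate=0.15; kwh=watts*24*365/1000; printf "%.1f kWh/yr = $%.2f/yr\n", kwh, kwh*rate }'    # prints 26.3 kWh/yr = $3.94/yr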

SSD cache...

If you decide that you need/want more disk buffering, consider using the M.2 SSD slots because you can easily reuse the SSDs later on a different NAS, laptop, desktop, or even in an external USB enclosure:

Frequently asked questions about using Synology SSD cache
Important considerations when creating SSD cache
What is the minimum recommended size for my SSD cache?
Should I pin all Btrfs metadata to an SSD cache?

zram...

Another consideration is whether Synology includes the Linux kernel’s zram module in DSM. It’s been in use for years in Android, Chrome OS and many Linux distributions (e.g., Fedora, Raspbian).

To see if zram is available and enabled, SSH onto the DS420+ and issue the following command to display the current status:

zramctl --output-all

While it’s most often used on hosts with less than 8GB of RAM, even bigger hosts can benefit.
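
If zramctl isn’t bundled with DSM, a rough fallback check (assuming a fairly standard Linux layout) is:

lsmod | grep -i zram          # is the zram module loaded?
ls /sys/block/ | grep zram    # do any zram block devices exist?
cat /proc/swaps               # is a zram device actually being used as swap?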

Apps...

Although you hadn’t mentioned interest in Plex, Docker and/or other apps besides Duplicacy and network file sharing, I ran some tests on my NAS (built from a barebones kit) to give you an idea of what’s possible.

Plex (v4.76.1) in Docker transcoding and streaming a 1080p H.264-encoded video (2.2Mbps bit rate) to a remote web browser over a WireGuard VPN link (tossed in some encryption for good measure) had a ~686MB memory footprint.
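
For anyone wanting to take a similar per-container snapshot, something like this works (assuming the apps run under Docker; the format string just trims the default output):

docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"    # one-shot memory/CPU reading per container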

(Besides Plex, my NAS also runs a web server, an rsync server, Samba, Syncthing, Duplicacy and two different VPNs on 6GB of ECC DDR RAM, typically with zero to a few hundred megabytes of swap to an SSD.)


The main benefit is performance all-around. I can see, measure, and “feel” the difference. It’s up to you; you’re not going to “lose” Synology support. They’re simply not going to support anything beyond spec. It’s the same as if you choose to use non-certified drives. They just won’t support them.


Oops, I inadvertently disclosed having a time machine. :wink:

Thanks everyone for the valuable input; to summarize a bit:

My plan would involve the following components:

  1. Desktop HDDs (and other “source” media e.g. phone storage)
  2. “Simple” cloud service (e.g. Google Drive)
  3. NAS (with 1 drive redundancy)
  4. Wasabi (dual regions: 4A Europe / 4B US)
  5. Backblaze B2

And my data would be categorized into the following tiers:

  • Critical - Personal data that must not be lost under any circumstance — would be backed up to all five components
  • Important - Various singular data that cannot realistically be reproduced or reacquired, and therefore should never be lost (e.g. data with sentimental value) — would be backed up to (2), (3) and (4A)
  • Valuable - Data that either exists elsewhere in the world and/or can realistically be reproduced or reacquired, albeit possibly at some financial cost, and therefore is better not lost (e.g. media that was purchased with $$$) — would be backed up to (3) and (4A)
  • Useful - Data that can be easily reacquired or is not meaningful enough to matter if lost (e.g. saved games, popular downloaded film & TV), and therefore can be lost — would be backed up to (3) only
  • Transient - Data that is not backed up at all

The backup strategy would be:

  • Critical: PC → Google Drive (sync), PC → NAS (backup), NAS → B2 (copy), NAS → Wasabi (copy), Wasabi (Europe) → Wasabi (US); a rough Duplicacy sketch of this flow follows the list
  • Important: PC → Google Drive (sync), PC → NAS (backup), NAS → Wasabi (copy)
  • Valuable: PC → NAS (copy), PC → Wasabi (copy)
  • Useful: PC → NAS (copy) /// alternatively, exists only on NAS
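
A rough Duplicacy sketch of the Critical flow (storage names, bucket names, regions/endpoints and paths below are placeholders, and the flags are worth double-checking against the Duplicacy docs):

# On the PC: back up the Critical repository to the NAS over SFTP
cd ~/critical
duplicacy init -e critical sftp://user@nas/duplicacy/critical    # path on the NAS that holds the storage
duplicacy backup -stats

# On the NAS: point a working directory at that same storage,
# add B2 and Wasabi as copy-compatible storages, then replicate off-site
cd /volume1/duplicacy-work
duplicacy init -e critical /volume1/duplicacy/critical           # same directory, seen locally
duplicacy add -copy default -e b2 critical b2://critical-bucket
duplicacy add -copy default -e wasabi critical s3://eu-central-1@s3.eu-central-1.wasabisys.com/critical-bucket
duplicacy copy -from default -to b2
duplicacy copy -from default -to wasabi
# Wasabi (Europe) → Wasabi (US) could be another copy-compatible storage handled the same way,
# or done with Wasabi’s own bucket replication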

This means that my critical data will actually have seven copies around the world :smiley: , on three continents, with 3 different cloud providers, and on at least 2 different devices at home.

It also means that only critical, important or valuable data is backed up through a deduplicating process that would protect it from ransomware (from what I understand?).

Regarding the RAM, I will try to operate without an upgrade and see how it goes; if necessary, I will probably upgrade to the recommended level, even if going above it might work.
Realistically, I will only be running backups from it and possibly occasional media streaming.

If I understand correctly, then thanks to Duplicacy’s de-duplication it’s not a problem if some of the categories overlap; e.g., if I make a backup for Important that also happens to contain Critical data, then the data would only exist once in the storage (assuming both are backed up to the same storage) and would simply be referenced from two different repositories?
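
One way to actually see the chunk sharing, assuming both repositories back up to the same storage (flag names are worth confirming against the CLI docs):

duplicacy check -stats -tabular    # per-revision table of chunk counts, including chunks unique to each snapshot vs. shared with others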