Best practice for single NAS + Cloud backup?

Rather than storing data on desktop PCs, I store it on the NAS and access it via network folders (see Properties -> Location on Windows). That way, it’s not critical to back up the desktop PCs, and all data is accessible from any PC on the LAN. I then back up the NAS to directly attached RAID storage using Duplicacy, and copy that to Backblaze.
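
In case it’s useful, the flow looks roughly like this with the Duplicacy CLI (the paths, snapshot ID and bucket name below are placeholders, not my actual configuration):

    cd /volume1/data                                    # the NAS share being protected
    duplicacy init nas-data /mnt/raid/duplicacy         # default storage = the direct-attached RAID
    duplicacy add -copy default b2 nas-data b2://my-backup-bucket    # copy-compatible B2 storage
    duplicacy backup                                    # snapshot into the local RAID storage
    duplicacy copy -from default -to b2                 # replicate the snapshots to Backblaze B2

Run on a schedule, the copy step keeps B2 in line with whatever snapshots exist locally.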

I do back up system images of the PCs to the RAID storage a few times a year using the Duplicacy CLI, but I don’t copy these to Backblaze because they are large and non-critical.


That’s another great point…
I’m really glad I made this thread - you all bring up stuff I didn’t fully think through.

But is this “mounted” style of usage fast enough to be indistinguishable from local storage in regular use? E.g., if I watch movies directly off the network drive, move files around, run searches, etc., how much of a performance hit should I expect (assuming the NAS hosts 16TB WD Red Pros)?

Also, going through this path leaves me with one less copy of the data, which kind of goes against the 3-2-1 principle, no?

What do you use to make these images?
And what would you consider the main purpose of this process? To be able to maintain uninterrupted operation in the event of a corrupted OS and/or a failure of the main SSD, etc.?

I’ve been running everything off the NAS for over 10 years, so I’m not sure if the performance is “indistinguishable”. My most I/O-intensive applications are Lightroom and Photoshop, which ran just fine over a 1Gbps connection. I recently upgraded to 10Gbps, and haven’t noticed a dramatic improvement. (Note that SATA III itself tops out at 6Gbps, and spinning drives sustain well below that.) Each PC has a second SSD that I thought I’d use for temp storage when working on large image files, but I rarely bother to use it anymore. Performance might be an issue for something like editing video files.

My recollection is that Windows Search doesn’t index network drives. I’m using X1 Search 8.6.2, an old version. Synology has an add-on search tool. (I have a QNAP, and have found their Qsirch app to be virtually useless.) The advantage of a NAS-based search app is that it can maintain a single copy of its index on the NAS without transferring files over the LAN.

There’s a copy on the NAS (two if you count RAID 1 mirroring), a local copy in the RAID storage (another 2 if you count RAID 5 as redundant), and an offsite copy at Backblaze.

I use the free version of Macrium Reflect, and I’ve also successfully used AOMEI Backupper Free. A system image avoids having to reinstall/reconfigure the OS and apps from scratch in the event of OS corruption or hardware failure, which would save me a couple of man-weeks of effort. With all data on the NAS, should a PC fail, one can just use another until it’s repaired/replaced.

There are some files that are useful to have resident on each PC, for example, some application data (Adobe in my case), the desktop, password safes. These I synchronize across all PCs through the NAS using FreeFileSync. Since there’s a copy of these on the NAS, they’re backed up with everything else.

I also sync various folders on our phones and tablets with the NAS, so those are backed up too.


Generally speaking, it’s best to follow a manufacturer’s recommendations, but there’s often some leeway in the specifications because the recommendations are based on the components readily available at the time and allow for variances in every component.

Synology’s DS420+ uses Intel’s Celeron J4025 which was released in late 2019. Intel states that the maximum supported RAM is 8GB (ideally as a pair of 4GB modules for dual-channel performance). However, others on this forum, Reddit, YouTube and elsewhere have reported success with higher capacity modules.

Is it advisable? No. Should you do it? Sure, the risk of damage is tiny as long as the memory module is the right type and well made. Pick a module that other DS420+ owners have confirmed to work without any issues.

Having said that, a few things to consider…

Synology says that the DS420+ consumes 28.30 watts when the internal drives are being accessed (based on a stock unit populated with four WD10EFRX, aka “1TB Western Digital Red Plus”).

A WD10EFRX pulls 3.3W during read/write activity, a Celeron J4025 has a 10W TDP at full load, and 2GB of DDR4 requires about 0.75W (rule of thumb is that every 8GB of DDR4 requires 3W of power).

The WD Red Pro has a 7200RPM spindle speed so it requires more power than the Red and Red Plus models. The 6TB Red Pro pulls 7.2W during read/writes (+3.9W compared to a WD10EFRX).

Then there are also the pair of M.2 slots for optional SSDs.

16GB of DDR4 would increase the power draw by just over 21% on the 28.3W baseline reference. It’s extra load on the mainboard, power supply and cooling fans. A mainboard manufacturer would have selected components (e.g., capacitors, resistors, voltage regulators, etc.) for the intended maximum memory capacity. Likewise, Synology would have designed the DS420+ assuming a total of 6GB of DDR4.
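
If you want to sanity-check that figure, the back-of-the-envelope math (using the ~3W per 8GB rule of thumb above) works out like this:

    echo "scale=2; 16 / 8 * 3" | bc          # extra draw of a 16GB module: 6.00 watts
    echo "scale=1; 6 * 100 / 28.3" | bc      # as a share of the 28.3W baseline: 21.2 percent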

I wouldn’t recommend upgrading the RAM immediately. It’s best to stress test the NAS for at least a few days, just in case there’s a need to call tech support and/or get an RMA. Having an unsupported amount of RAM will complicate tracking down any stability issues (Synology says it receives 50+ support tickets each year as a result of memory upgrades).

After upgrading the RAM, run a full memory test to check for a defective module, compatibility problems, and stability under extended high load and heat (Synology uses Memtest86 under the hood for its Synology Assistant add-on).


Wasabi Technologies, Inc. was founded in 2015 and started accepting customers in June 2017, so it’s been in business for just over 5 years.

As a private company, reporting to the SEC and the media isn’t required so detailed financial information isn’t readily available.

Most startup companies seek out venture capitalists for seed money in exchange for interest payments, equity and/or profits (e.g., “Shark Tank”). Wasabi decided instead to recruit angel investors and family offices (the former is an individual private investor; the latter is a private company set up to invest a family’s money). There are pros and cons to each one so it doesn’t mean Wasabi made a poor choice.

From Wasabi’s press releases that sometimes mention the results of new funding rounds, it’s been able to raise around $275 million so far (averaging ~$39.3M per year since its founding).

Is $40 million a year in investor funding plus customer revenue enough to cover the office staff, software developers, engineers, electricians, plumbers, security, etc. plus the utilities, facilities and other operating expenses in multiple locations around the world? I’m not sure. For comparison, Walmart spends over $25 million a year maintaining just its website. It could be that Walmart hasn’t been getting the most for its money (unlikely), or that Wasabi has been able to live on a shoestring budget. But based on Wasabi’s repeat funding rounds – most recent one I could find was last year in November 2021 – it’s been dependent on debt and private financing to help cover expenses (there’s very little public info on if/when any corporate bonds are nearing maturity).

Given how many cloud storage providers have come and gone over the last 20 years, the odds aren’t great for Wasabi, especially with Amazon, Google and Microsoft as the competition. The barrier to entry is pretty low with CrashPlan, Jungle Disk and many others being built on top of S3 / GCS / Azure.

But at the same time, the global volume of data keeps expanding by leaps and bounds so the “pie” also keeps getting bigger and bigger.


Sorry, was out of touch for a bit. I use a DS920+. It runs a 4-core Intel Celeron at 2GHz, and I’ve upgraded the RAM to 20GB (added a 16GB stick). It has capacity for 2 NVMe cache drives, but the reviews on them are mixed, so I’m not using them at all. It’s a great NAS and the performance is magnificent.


I see, well - that’s a good point, but I also don’t see what the big deal is for the end user.
It doesn’t seem likely that the data would just evaporate out of the blue one day;
a far more likely scenario is that they get bought out by a bigger fish.

And in any case, it seems likely that clients would have enough time to find alternatives, no?

There’s like a $60-70 difference between the 420+ and the 920+ where I live, and even though the difference is small, I just can’t bring myself to spend it, considering that this is money that could go toward a dedicated solution if I ever really needed one, rather than trying to make my NAS into a general-purpose server.
I read a review (I believe it was the “NAS Compared” channel on YouTube or something like that) that said the difference is mostly felt with horizontal scaling (e.g. multiple users doing different things, or various tasks running simultaneously - surveillance cameras, backups, media streaming, etc.).


And regarding the RAM - yeah, those are good points. I’ll probably see how it operates and then just add the 4GB stick if necessary…

In 2016 I upgraded to a QNAP TS-251+ to run CrashPlan, a memory hog. While QNAP specs said that max memory was 8GB, Intel’s spec for the J1900 processor was 16GB max, and users reported successfully upgrading to 16GB, so that’s what I did. While the QNAP UI reported 16GB memory capacity, over the years I noticed that no more than 8GB were ever actually used. I eventually learned that the TS-251+ had a custom SoC wired for only 8GB.

Hmmm…
:thinking: :face_with_monocle:

Like you, I archive to a personal NAS, so cloud storage is just part of my 3-2-1 backup procedures. (One of my NAS units is at my parents’ place, backing up their data and giving me another offsite copy of my data in turn.)

For most end users, their cloud storage provider shutting down would likely be just a minor inconvenience. It’s users with multi-terabyte accounts who have difficulty if they don’t have local copies of their data.

Years ago, a cloud storage provider I was a customer of went bankrupt, but users were given sufficient time to download their data. I moved my data to another provider that was eventually acquired by Motorola and merged with its existing cloud storage service. It was great for a few years, until Motorola announced it was being discontinued. Motorola gave users many months’ notice and even provided a bulk download utility.

I’ve been fortunate; others not so much. Sometimes it’s really short notice: Cloud provider Nirvanix gives customers two weeks to vacate data. Other times a company has been running on fumes for so long that an orderly shutdown isn’t possible. Then there are times when it’s not due to a lack of finances: when Megaupload abruptly shuttered in 2012 it left customers desperately scrambling to recover their data: Feds Tell Megaupload Users to Forget About Their Data.

Unfortunately there are no guarantees even for paying customers at well-funded companies. Samsung is closing “Samsung Cloud” at the end of September. Last month on July 30th Amazon announced it was discontinuing its “Amazon Drive” service at the end of next year.

I really hope that Wasabi survives because having more competition and options is good for everyone. I think the big question is whether Wasabi will eventually have to raise prices, add egress fees, and/or put bandwidth limits on monthly downloads in order to be sustainable. Amazon has its AWS cash cow; Microsoft has Azure, O365, Windows, Xbox, etc.; Google has ad revenue, Play, Maps, Workspace, Waze, Waymo, Fiber, Fi, Nest, Pixel and a stack of other products to rely on, while Wasabi has just cloud storage.


Thanks for the detailed history lesson :slight_smile: I wasn’t even aware of all these services, and yeah - you’re right that there are no guarantees; all we can do is prepare for the worst (or at least the most likely) scenarios.

Frankly, I think it would be best if Wasabi joined forces with a company like Vultr or Linode so they could offer combined cloud services (both storage and compute), but maybe there’s a good reason why each company focuses on only one area (unless it’s a tech giant).

You’re welcome. :nerd:

I only built a NAS after having gone through multiple cloud migrations (some involuntary; some due to service changes). With cloud storage now being just the offsite ‘1’ in my “3-2-1 backup”, it’s no longer urgent if my cloud storage provider goes belly up.

Yup, I agree. Most people nowadays pay for service bundles (e.g., phone, internet, TV, streaming) because of combo savings and convenience, even though they could easily go à la carte.

Wasabi’s co-founders previously founded Carbonite, which was later sold to OpenText (a public company that sells information management software). At least on paper, OpenText + Carbonite seems like a good match. Carbonite had said that it was going to be operating at a loss for the foreseeable future, so having a profitable parent company improves its long-term prospects.

Storage has evolved into a commodity. It’s a race to the bottom, so low prices, “unlimited” storage and/or an API are the only remaining differentiators (and the latter is disappearing as more and more services become S3-compatible).

But unlimited storage is so difficult to wrap a business around – it’s like an all-you-can-eat restaurant where customers can return for breakfast, lunch, dinner and snacks between meals every day for a monthly fee that costs less than a Big Mac combo. Not long ago OneDrive offered unlimited storage for $99/yr. Microsoft eventually nixed it with one reason being that some home users had over 75TB of data.


The primary difference those extra few bucks buy you is the dual NVMe cache slots and the ability to use the expansion unit. They can come in handy depending on your use case, and many people proclaim their benefits. I got my DS920+ on sale here in the UK so the cost difference was negligible. For me, the RAM makes all the difference.

There’s no reason not to add the RAM. I’ve seen debates about it, but the impact is both measurable and observable. Obviously, Synology would not provide you with support for your additional RAM, but that’s understood. The documentation tells you the max is 4GB, but that is complete BS. It is easily upgradeable to 20GB and the CPU takes full advantage of the extra RAM. It doesn’t cause any issues at all - just improved performance.

@1wanderingexpat I think the main question is what the added benefit of (for example) 20GB over 6GB would be for the applications I’ll actually be running.
If more isn’t needed, I can save money AND maintain support from Synology.
Doing it just to see a higher number in the system dashboard isn’t worth it.

As usual, it depends on your use case. The biggest benefit from extra memory comes when you’re using your NAS as an application server, in which case it will depend on the apps you run. Things like CrashPlan or Duplicacy can eat a lot of RAM, Plex might, and even some filesystems are really memory-hungry (e.g. ZFS - not sure if you can run it on Synology).

Even if you don’t run a lot of apps, extra memory will be used for filesystem cache, which might improve your file access times. But again, this will depend on your usage patterns.

Bottom line: you won’t know whether you’ll (noticeably) benefit from extra RAM until you install it and run your workload. Obviously, if you notice non-trivial swap utilization, you will likely benefit from more RAM.
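
A quick way to check that is to SSH into the NAS and look at current memory and swap usage (these are standard Linux commands, though the exact output format will vary with the DSM version):

    free -m            # overall RAM vs. swap usage, in megabytes
    cat /proc/swaps    # which swap devices exist and how much of each is used
    vmstat 5 5         # the si/so columns show ongoing swap-in/swap-out activity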


Yep, there’s absolutely no need to rush into upgrading the RAM. It can be done after initial setup and after deciding how you’d like to use the NAS and Duplicacy.

4GB vs. 16GB DDR4 SO-DIMM...

Besides the extra money spent on an additional 12GB that might go unused, or very rarely used, there are other less obvious costs…

  • The extra RAM requires electricity whether or not it’s holding any data. The difference between a 4GB and a 16GB DDR4 SO-DIMM works out to $3 to $15 extra per year on an electric bill depending on where one lives in the U.S. (in other countries it can be significantly higher) - see the rough math after this list. Hang onto the DS420+ for 5 years and the extra electricity costs more than the SO-DIMM itself; 10 years and it’s equal to about 1/3 the cost of the DS420+.

  • If the motherboard wasn’t designed for a 16GB SO-DIMM, the extra power required could overheat the traces and pads on the PCB (for the same reason it’s always better to use a properly sized extension cord). Best case scenario the motherboard runs hotter; worst case scenario – if not a fire hazard – the extra heat can weaken the contacts between the pads and solder, causing all kinds of system instability or hardware failure (low-temp solder is commonly used when a SoC is soldered on). Plus the extra heat will also speed up degradation of the electronic components.

  • The extra heat will also cause more expansion and contraction, increasing the odds of subtle system instability caused by poor contact between a memory module and its socket (the plastic/metal clips help keep a memory module from completely popping out, but weren’t designed to hold it like a clamp) – I’ve seen this happen on everything from $100 desktop PCs to $10,000+ rackmount servers.
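
Rough math behind the electricity estimate above, assuming the extra 12GB draws ~4.5W continuously (3W per 8GB) and U.S. residential rates somewhere between $0.10 and $0.38 per kWh:

    echo "scale=1; 4.5 * 24 * 365 / 1000" | bc   # ~39.4 kWh per year of extra consumption
    echo "scale=2; 39.4 * 0.10" | bc             # ~$3.94 per year at $0.10/kWh
    echo "scale=2; 39.4 * 0.38" | bc             # ~$14.97 per year at $0.38/kWh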

SSD cache...

If you decide that you need/want more disk buffering, consider using the M.2 SSD slots because you can easily reuse the SSDs later on a different NAS, laptop, desktop, or even in an external USB enclosure:

Frequently asked questions about using Synology SSD cache
Important considerations when creating SSD cache
What is the minimum recommended size for my SSD cache?
Should I pin all Btrfs metadata to an SSD cache?

zram...

Another consideration is whether Synology includes the Linux kernel’s zram module in DSM. It’s been in use for years in Android, Chrome OS and many Linux distributions (e.g., Fedora, Raspbian).

To see if zram is available and enabled, SSH onto the DS420+ and issue the following command to display the current status:

zramctl --output-all

While it’s most often used on hosts with less than 8GB of RAM, even bigger hosts can benefit.
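
If zramctl isn’t included in the DSM build, these two standard checks will also tell you whether zram is in play:

    lsmod | grep zram    # is the zram kernel module loaded?
    cat /proc/swaps      # active swap devices; zram shows up as /dev/zram0, /dev/zram1, ...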

Apps...

Although you hadn’t mentioned interest in Plex, Docker and/or other apps besides Duplicacy and network file sharing, I ran some tests on my NAS (built from a barebones kit) to give you an idea of what’s possible.

Plex (v4.76.1) in Docker transcoding and streaming a 1080p H.264-encoded video (2.2Mbps bit rate) to a remote web browser over a WireGuard VPN link (tossed in some encryption for good measure) had a ~686MB memory footprint.

(Besides Plex, my NAS also runs a web server, an rsync server, Samba, Syncthing, Duplicacy and two different VPNs on 6GB of ECC DDR RAM, typically with zero to a few hundred megabytes of swap to an SSD.)
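
The exact image and flags will vary, but capping and watching a container’s memory looks roughly like this (the linuxserver/plex image and the paths are just examples, not necessarily what I ran):

    docker run -d --name plex --memory=1g \
        -p 32400:32400 \
        -v /volume1/docker/plex:/config \
        -v /volume1/video:/data \
        linuxserver/plex
    docker stats --no-stream plex    # one-shot snapshot of the container's CPU and memory use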


The main benefit is performance all-around. I can see, measure, and “feel” the difference. It’s up to you; you’re not going to “lose” Synology support. They’re simply not going to support anything beyond spec. It’s the same as if you choose to use non-certified drives. They just won’t support them.


Oops, I inadvertently disclosed having a time machine. :wink:

Thanks everyone for the valuable input; to summarize a bit:

My plan would involve the following components:

  1. Desktop HDDs (and other “source” media e.g. phone storage)
  2. “Simple” cloud service (e.g. Google Drive)
  3. NAS (with 1 drive redundancy)
  4. Wasabi (dual regions: 4A = Europe, 4B = US)
  5. Backblaze B2

And my backup strategy would be categorized into the following:

  • Critical - Personal data that must not be lost under any circumstance — would be backed up to all five components
  • Important - One-of-a-kind data that cannot realistically be reproduced or reacquired - and therefore should never be lost (e.g. data with sentimental value) — would be backed up to (2), (3) and (4A)
  • Valuable - Data that either exists elsewhere in the world and/or can realistically be reproduced/reacquired, albeit at some financial cost - and therefore is better not lost (e.g. media that was purchased with $$$) — would be backed up to (3) and (4A)
  • Useful - Data that can be easily reacquired or is not meaningful enough to matter if lost (e.g. saved games, popular downloaded film & TV), and therefore can be lost — would be backed up to (3) only
  • Transient - Data that is not backed up at all

The backup strategy would be:

  • Critical: PC → Google Drive (sync), PC → NAS (backup), NAS → B2 (copy), NAS → Wasabi (copy), Wasabi (Europe) → Wasabi (US)
  • Important: PC → Google Drive (sync), PC → NAS (backup), NAS → Wasabi (copy)
  • Valuable: PC → NAS (copy), PC → Wasabi (copy)
  • Useful: PC → NAS (copy) /// alternatively, exists only on NAS
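
If I’m reading the Duplicacy docs right, the NAS → cloud legs would look something like this (snapshot IDs, paths and bucket names are placeholders, and the Wasabi storage URL in particular needs checking against the supported-backends documentation):

    cd /volume1/critical                                   # one repository per category (placeholder path)
    duplicacy init critical /volume1/backups/duplicacy     # default storage: a local folder on the NAS
    duplicacy add -copy default b2 critical b2://my-b2-bucket
    duplicacy add -copy default wasabi critical s3://eu-central-1@s3.eu-central-1.wasabisys.com/my-wasabi-bucket
    duplicacy backup                          # data is snapshotted into the local storage first
    duplicacy copy -from default -to b2       # the NAS -> B2 leg
    duplicacy copy -from default -to wasabi   # the NAS -> Wasabi (EU) leg; EU -> US would be handled separately

Each category would get its own snapshot ID, and the lower tiers would just copy to fewer storages.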

This means that my critical data will actually have seven copies around the world :smiley:, on three continents, with 3 different cloud providers, and on at least 2 different devices at home.

It also means that only critical, important or valuable data is backed up through a deduplicative process that would protect it from ransomware (from what I understand?)

Regarding the RAM, I will try to operate without an upgrade and see how it goes; if necessary, I’ll probably upgrade to the recommended maximum, even if going above it might work.
Realistically, I will only be running backups from it and possibly occasional media streaming.

If I understand correctly, then thanks to Duplicacy’s de-duplication, it’s not a problem if some of the categories overlap - e.g. if I make a backup for Important that also happens to contain Critical data - the data would only exist once in the storage (assuming both are backed up to the same storage) and simply be pointed to from two different repositories?