Issues Backing Up From Unraid to Another Unraid Via SMB

Hi everyone, I’m having some trouble backing up from one of my Unraid servers to another over Tailscale, with the destination mounted via SMB. The total source is about 5 TB. The job was about halfway through when I had to kill the Docker container because of other resource issues. When I restarted the backup it showed about 6 hours remaining (out of about 15 initially), but after 12 hours it was still showing 6+ hours, even at a consistent speed of >50 Mbps. I quit the job in the container and started it again, and now it seems to be starting over from the beginning. The folder at the destination is about 2 TB, so I’m not really sure where to go from here. Should I just delete the current destination folder and start the job over?

It does not start over, in the sense that it does not upload existing chunks again. But it does go through the trouble of generating those chunks again after an interruption. Let it finish.

Depending on the nature of your data you may want to increase average chunk size, if memory consumption is a problem.

Are you backing up to a mounted share or using smb remote?

What should I make the chunk sizes? It’s mostly RAW photos (~43 MB each), and I have it as a mounted share.

Would I change the chunk size by adding a flag?

RAW photos are incompressible, non-deduplicatable, and immutable. All the benefits of versioned, deduplicating, compressing backup tools like duplicacy are out the window: you only get the drawbacks.

You can configure your target share for immutability and rsync-copy the data instead; for ransomware protection, filesystem snapshots work well.

That said, you can specify min, max, and/or average chunk sizes during storage initialization; they are parameters to init.

The default is 4M, which may be too small. But I would not expect a drastic performance difference; most of the time is spent hashing data. I’m wondering whether a fixed-chunking approach (min chunk size = max chunk size) would produce better results here.
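For example, something along these lines (the snapshot id “photos”, the storage path, and the 32M value are placeholders; check `duplicacy init -help` on your version for the exact flag names):

```shell
# Run inside the repository (source) directory before the first backup.
# -c sets the average chunk size; setting -min and -max to the same value
# gives fixed-size chunking, as speculated above for large immutable RAW files.
duplicacy init -c 32M -min 32M -max 32M photos /mnt/remotes/backup-server/duplicacy
```

Note that, as far as I know, chunk sizes are fixed at storage initialization, so changing them means creating a new storage rather than modifying the existing one.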

It does not matter in this context, but using the SMB remote in duplicacy instead may be more stable going forward.

Interesting, I never realized that they were incompressible or non-deduplicatable. Up until now I have been using rsync for the same folder with the flags -avPh. Both of my Unraid servers are xfs, so do you have any suggestions on how to make it immutable?

I’m not sure whether xfs ACLs are flexible enough to allow writing new files but not modifying existing ones. You could always run a daily scheduled job that strips write permission from files uploaded a day ago or earlier. Not ideal, but it will work.
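A minimal sketch of such a job (the destination path is a placeholder; you would schedule this daily via the Unraid User Scripts plugin or cron):

```shell
#!/bin/sh
# Strip write permission from backup files older than one day.
# DEST is a hypothetical path; point it at your destination share.
DEST="${DEST:-/mnt/user/backups/photos}"

if [ -d "$DEST" ]; then
    # -mtime +1: last modified more than 24 hours ago
    find "$DEST" -type f -mtime +1 -exec chmod a-w {} +
fi
```

This only protects against accidental or straightforward malicious overwrites; anything running as root on the destination can still chmod the files back.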

My best advice, however, would be to move to zfs on both your Unraid servers. Then you get:

  • support for snapshots – taking periodic snapshots is cheap and protects against ransomware. You can keep a massive number of snapshots that won’t take any extra space, because your data is immutable.
  • support for checksumming and self-healing: xfs does not provide data integrity guarantees, and you really want those for photos. See: Data degradation (Wikipedia).
  • zfs send/receive, which lets you incrementally send the entire dataset (along with snapshots, attributes, etc.) to your other server in a single stream.
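The send/receive workflow looks roughly like this (pool/dataset names, snapshot labels, and the hostname are all hypothetical placeholders):

```shell
# Take a new snapshot on the source server.
zfs snapshot tank/photos@2024-06-02

# -i sends only the changes since the previous snapshot; the stream
# carries the data, snapshot, and attributes in one pass, received
# into the matching dataset on the other server.
zfs send -i tank/photos@2024-06-01 tank/photos@2024-06-02 | \
    ssh backup-server zfs receive tank/photos
```

The first run would be a full `zfs send` without `-i`; after that, each incremental stream is only as large as the data added since the last snapshot.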

(Anecdotally, I’m doing the same myself: two TrueNAS servers, and I only use two different backup solutions (including duplicacy) for non-media mutable data, like documents and other projects.)