Ah, why don’t you back up to the NAS then? Or, if it is the data on the NAS that you back up (the volumeUSB1 looks like something some NASes would create), does your NAS support snapshots? You may then want to just create a series of snapshots – they are very cheap and provide a local way-back machine. The probability that a (properly designed, cooled, and powered) RAID10 array loses data is significantly smaller than that of a single USB drive (cost-cut components, horrible thermals, atrocious power).
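For example, on a ZFS-based NAS a periodic snapshot schedule can be as simple as the sketch below (a minimal sketch only – the dataset name tank/media is made up, and most NASes expose the same thing through their own scheduler UI):

```bash
# A minimal sketch, assuming a ZFS-based NAS with a hypothetical dataset tank/media.
# Run it daily from cron or the NAS's own task scheduler.

# take a (nearly free, read-only) snapshot named after the current date
zfs snapshot tank/media@$(date +%Y-%m-%d)

# browse the local "way-back machine"
zfs list -t snapshot -r tank/media

# naive pruning example: keep only the newest 30 snapshots (GNU head)
zfs list -H -t snapshot -o name -s creation -r tank/media | head -n -30 | xargs -r -n1 zfs destroy
```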
Since these are by nature immutable, incompressible, unique files, you don’t really need version history, strong encryption, or deduplication – pretty much everything duplicacy is good at :). Instead, you can rclone your media directly to the cloud. With some providers you can scope the access keys so that they allow you to upload, but not delete or modify; that way, if your local version rots, it won’t overwrite the good cloud version (essentially keeping your media always at version 0).
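A minimal sketch of that idea with rclone (the remote name s3remote, the bucket, and the source path are hypothetical; even without a restricted key, --immutable makes rclone fail instead of overwriting files that already exist on the destination, which approximates the "version 0 forever" behaviour):

```bash
# a sketch, assuming an S3-compatible remote already configured in rclone as "s3remote"
# (remote, bucket, and source paths are hypothetical)

# upload new media only; --immutable refuses to modify anything already in the bucket
rclone copy /volumeUSB1/media s3remote:media-archive --immutable --progress

# verify the cloud copy without transferring anything
rclone check /volumeUSB1/media s3remote:media-archive --one-way
```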
LOL, same, and I also have a massive ripped-CD collection
Three comments here: 1) why do the backups need to be in sync in the first place? 2) you can start them at the same time, and then they’ll be in sync; 3) the proper way to do a backup is to create a local filesystem snapshot first and then back up that snapshot, to decouple from changes happening to the filesystem while the backup is in progress. Duplicacy does this automatically on Windows and macOS (see the -vss flag), but on all other OSes you’d need to script it yourself. In that case you can have perfectly synchronized backups by backing up the same snapshot (a sketch follows the list):
- create snapshot fs-temp
- mount it to /mnt/fs-temp
- backup /mnt/fs-temp to storj
- backup /mnt/fs-temp to usb
- unmount the fs-temp
- delete fs-temp
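A minimal sketch of those steps on Linux with LVM (the volume group vg0, the logical volume data, the mount point, and the storage names storj and usb are all assumptions; it also assumes the duplicacy repository was initialized at the root of that volume, so the .duplicacy directory is captured inside the snapshot):

```bash
#!/bin/bash
# Sketch of the snapshot-then-backup sequence described above.
# Assumes the data lives on an LVM logical volume /dev/vg0/data and that
# storages named "storj" and "usb" were already added to the repository.
set -euo pipefail

lvcreate --snapshot --name fs-temp --size 10G /dev/vg0/data   # create snapshot fs-temp
mkdir -p /mnt/fs-temp
mount -o ro /dev/vg0/fs-temp /mnt/fs-temp                     # mount it to /mnt/fs-temp

cd /mnt/fs-temp
duplicacy backup -storage storj -stats                        # backup /mnt/fs-temp to storj
duplicacy backup -storage usb -stats                          # backup /mnt/fs-temp to usb

cd /
umount /mnt/fs-temp                                           # unmount the fs-temp
lvremove -f /dev/vg0/fs-temp                                  # delete fs-temp
```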
The overhead here is that you essentially do the job twice; but duplicacy is fast, so why not? I don’t necessarily have a problem with copy, but I do have a big problem with single drives, and USB drives in particular. Furthermore, copy is also quite resource intensive: it has to decrypt and re-encrypt the data. So the difference comes down to the filesystem scan and compression. Compression is very fast, and the second filesystem scan will be near-instant as everything will be cached in RAM after the first scan. So it’s not that bad.
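For reference, the copy-based alternative being compared looks roughly like this (same hypothetical storage names as above; note that duplicacy can only copy between storages that were set up as copy-compatible):

```bash
# back up once locally, then replicate the backup to the cloud storage;
# copy has to decrypt and re-encrypt chunks, which is what makes it heavier
# (assumes the two storages were added as copy-compatible)
duplicacy backup -storage usb -stats
duplicacy copy -from usb -to storj
```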
You can, but I wouldn’t – most storage providers have non-zero egress fees. Actually, the better suited a provider is for backup, the higher its egress fees, because all optimizations are done for retention, not turnover, unlike hot storage. The best one out there is Amazon Glacier Deep Archive: about $1/TB/month to store, but with a 180-day minimum retention, and restoring data beyond the 100 GB/month threshold ranges from very expensive to exorbitant, depending on how fast you want your data back. Unfortunately, duplicacy does not support that kind of storage, but it may be a perfect location to sync your media library to (I’m talking not just CD rips, but family photos, videos, etc.) – something that does not change and that you never expect to need to restore. But if you ever do – well, then the restore cost does not matter.
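A sketch of what such a sync could look like with rclone over an S3 remote (remote, bucket, and source paths are hypothetical; --s3-storage-class DEEP_ARCHIVE writes the objects directly into the Deep Archive tier):

```bash
# a sketch, assuming an AWS S3 remote configured in rclone as "aws"
# (remote, bucket, and source paths are hypothetical)

# push the media library straight into Glacier Deep Archive; keep the
# 180-day minimum retention and the restore pricing in mind before deleting
# or pulling anything back
rclone sync /volumeUSB1/media aws:family-archive --s3-storage-class DEEP_ARCHIVE --progress
```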
Google Drive is fine; it’s one of (the only?) *drive-type services that kind of works in these scenarios. However, they started enforcing quotas recently, so the trick of getting unlimited storage for $12/month no longer works. And yes, latency is huge and performance is not so good – but for backup that should not matter. But I agree, [ab]using drive services as object storage replacements is not sustainable.
I think init has an argument for that (-storage-name or something like that)
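Something along these lines (the snapshot ID, paths, and storage URL are made up for illustration):

```bash
# a sketch; snapshot id, storage names, paths, and URLs are hypothetical
cd /path/to/repository

# initialize the repository, giving the first storage an explicit name
duplicacy init -storage-name storj my-media <storj storage URL>

# attach a second storage under its own name (here a local/USB path)
duplicacy add usb my-media /mnt/usb-backup

# then back up to each one by name
duplicacy backup -storage storj
duplicacy backup -storage usb
```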