The storage type 'wasabi' is not supported

Ah, I see.

2.1.0 appears to be the latest available at: Releases · gilbertchen/duplicacy · GitHub

Do I need to build from src in order to access the wasabi endpoint?

yup! (or just wait some time until 2.1.1 comes out)

Thanks, is there any ETA for 2.1.1?

Or is there any info on building from source (that page on GitHub appears to be blank presently)?

This is the wiki page on installation: Build Duplicacy from source

I’ve fixed the link in README.md that points to this page.

I do not yet speak ‘go’ so I’m sure I am missing something, but I’m getting errors:

With $GOROOT=/usr/local/go, $GOPATH=workspace, github.com/gilbertchen/duplicacy cloned under workspace/src, and the CLI repo cloned as a sibling to duplicacy, I then run:

cd $GOPATH/src/github.com/gilbertchen/duplicacy
go build duplicacy/duplicacy_main.go

I get errors similar to the following:

src/duplicacy_gcsstorage.go:17:2: cannot find package "cloud.google.com/go/storage"
src/duplicacy_snapshotmanager.go:24:2: cannot find package "github.com/aryann/difflib"
src/duplicacy_s3storage.go:16:2: cannot find package "github.com/aws/aws-sdk-go/aws"

… and others.

I’m guessing I somehow need to install some dependencies, but I am not clear on how to do that.

TIA!

Basically, the first code example in the wiki should be enough:

go get -u github.com/gilbertchen/duplicacy/...

Run that in a terminal and you will find the binary file created; it will also download all the dependencies as needed.

Ah, I see. That was a bit confusing to me as it just appears to hang silently with no output for quite a while. Also, I needed to have both GOROOT=/usr/local/go and GOPATH=$PWD set before running this, but it then works just as you said. Thanks!
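
For reference, the whole sequence ended up looking roughly like this (the paths are specific to my setup and will differ elsewhere):

export GOROOT=/usr/local/go                      # where Go is installed
export GOPATH=$PWD                               # run from the workspace directory
go get -u github.com/gilbertchen/duplicacy/...   # downloads dependencies and builds
# on success the duplicacy binary should land under $GOPATH/bin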

On to trying to cross-compile as shown on the page next…
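
From what I can tell, cross-compiling in Go is mostly a matter of setting GOOS/GOARCH before the build, roughly along these lines (the GOARM value here is a guess and depends on the target CPU):

cd $GOPATH/src/github.com/gilbertchen/duplicacy
# build a 32-bit ARM Linux binary on the x86 box; pick GOARM 5/6/7 to match the NAS CPU
GOOS=linux GOARCH=arm GOARM=7 go build -o duplicacy_arm duplicacy/duplicacy_main.go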

I have cross-compiled for an ARM NAS as described on the page and have been able to successfully init the wasabi-based repository. However, the backup command does not appear to be working: it hangs for a long while and then just ends with ‘Killed’.

Here is the backup command I am using:

./duplicacy_linux64k_arm -v -d -stack -background backup -stats -threads 3

The only output (other than a number of lines as it goes through the files to be backed up) is ‘Killed’.

Out of memory perhaps?

Try setting DUPLICACY_ATTRIBUTE_THRESHOLD=1 as described here
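
For example, a rough sketch reusing your command from above (as I understand it, a threshold of 1 makes duplicacy skip reading per-file attributes once the file count exceeds it, which lowers memory use):

export DUPLICACY_ATTRIBUTE_THRESHOLD=1
./duplicacy_linux64k_arm -v -d -stack -background backup -stats -threads 3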

Unfortunately, it fails in the same manner even with the env variable set.

Did you confirm the problem is with memory utilization? How much RAM does your device have and how much does duplicacy consume? How many files are being backed up?

Yes, it is definitely memory related. As the files scroll by, I can see memory utilization quickly grow via the free command. My poor little NAS only has 512MB of real memory and only about 512MB of swap space. Swap eventually also gets used and steadily declines until it is essentially all exhausted. Garbage collection/memory management keeps things going in this resource-starved state for quite an impressive amount of time (all things considered), but eventually cannot keep up and the OOM killer kicks in.

dmesg shows the following after the ‘Killed’ message has been encountered:

[3470677.666051] Out of memory: Kill process 25998 (duplicacy_linux) score 257 or sacrifice child
[3470677.666076] Killed process 25998 (duplicacy_linux) total-vm:808116kB, anon-rss:197940kB, file-rss:80kB

I am attempting to back up approx 2.75TB total.

It’s more about the number of files than the total size, but yes, 512MB is way too low.

Other than getting a beefier device, you can work around the issue by creating a few smaller repositories: a bunch of folders with first-level symlinks to your actual data folders (which duplicacy will follow), and backing them up in succession to the same storage (see the sketch below).

Annoying, but it will get the job done.
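
A rough sketch of what I mean (folder names and the storage URL are placeholders; split the symlinks across as many small repositories as needed):

# a small repository that only "sees" part of the data via first-level symlinks
mkdir -p /backup/repo1 && cd /backup/repo1
ln -s /path/to/photos photos
ln -s /path/to/documents documents
duplicacy init repo1 <same wasabi storage url>
duplicacy backup -stats
# repeat with repo2, repo3, ... for the remaining folders, all against the same storage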

I’m thinking of mounting my NAS drive via SMB to a larger Linux box and then running backup from there to get around the memory issues. Any concerns or cautions with this approach?

No issues; if anything, this would be a more streamlined approach: let the NAS do what it does best (storing and serving massive amounts of data) and let compute devices with beefy CPUs and RAM do what they excel at (processing that data).
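
A minimal sketch of that setup (the share, host, and user names are made up; adjust for your NAS):

# on the bigger Linux box: mount the NAS share over SMB/CIFS (needs cifs-utils)
sudo mkdir -p /mnt/nas-data
sudo mount -t cifs //nas/data /mnt/nas-data -o username=backupuser
# then init and back up from the mount point as usual
cd /mnt/nas-data
duplicacy init nas-data <wasabi storage url>
duplicacy backup -stats -threads 3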

Does the copy command have memory requirements similar to those of the backup command? In other words, if I were to perform a local backup from a beefier box to my tiny NAS, would it be reasonable to think that the NAS could handle the task of copying that backup to cloud storage? Or is even this going to require substantial memory to accomplish?

Just ran a test on a repo with ~700k files, ~33k chunks.

Backup pass required just short of 2GB of RAM (1GB with DUPLICACY_ATTRIBUTE_THRESHOLD set to 1).
Copy stayed level at under 450MB.

@saspus, thanks, that is enlightening! Just curious, given your experience, what do you think of this as an approach? Obviously the copy will fall behind the backup(s), I’m guessing fairly substantially, so I’d need to come up with an approach to only copy every so often or something along those lines.

Not sure if this will help you at all, but I went a slightly tangential way:

  1. Duplicacy from clients to NAS via SFTP
  2. NAS takes volume snapshots once a day
  3. NAS then replicates those snapshots to a remote NAS using native NAS tools, which are supposed to work by definition; a few recent snapshots are kept, so I have a few recent duplicacy datastore states to revert to if, e.g., the NAS dies mid-replication.

Not sure how easy it would be to adopt this approach with wasabi though…

Edit - scratch that. You can just run duplicacy copy on the NAS in a loop all the time (rough sketch below). Adding new, currently unreferenced chunks to the remote datastore via copy does not invalidate its current state, so at any point in time you have a consistent datastore in the cloud; and it does not matter how long it takes to transfer data to wasabi, or whether more chunks were added while you copy.

Unless I severely misunderstand how duplicacy works. Hopefully Gilbert can comment.
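
As a rough sketch of that loop (assuming the cloud storage was added to the NAS repository as copy-compatible, e.g. via duplicacy add -copy, under the name "wasabi"; names are placeholders):

# run on the NAS, from the repository directory
while true; do
    duplicacy copy -from default -to wasabi
    sleep 3600    # wait an hour between passes
done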

Edit 2. Now that I think about it, I think I’ll start doing the same thing. My NAS has 8GB of RAM…

Edit 3. Wow. I just decided to make a docker container for duplicacy to avoid compiling for Synology and then found this: GitHub - christophetd/duplicacy-autobackup: Painless automated backups to multiple storage providers with Docker and duplicacy. If your NAS can run Docker, this would be the way to go.

Nice find. Sadly, no Docker on my tiny NAS 🙂