The storage type 'wasabi' is not supported

The Wasabi storage page recommends using the wasabi (vs the s3 variant) endpoint, for pricing reasons:

Storage URL: wasabi://region@s3.wasabisys.com/bucket/path (latest on master branch)

However, when I attempt to init a new repository, I get the following error:

The storage type 'wasabi' is not supported

Here is the command I am attempting that produces the above error:

./duplicacy_linux_arm_2.1.0 init -e my-backups wasabi://us-east-1@s3.wasabisys.com/my-backups

What am I missing and/or doing wrong?

TIA!

Since you’re using 2.1.0, you must use the S3 backend.

From the same wiki page:

Storage URL: s3://region@s3.wasabisys.com/bucket/path (2.1.0 or older)
Storage URL: wasabi://region@s3.wasabisys.com/bucket/path (latest on master branch)
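
So with 2.1.0 your init command would be something like this (same bucket and region as yours, just with the s3 scheme):

./duplicacy_linux_arm_2.1.0 init -e my-backups s3://us-east-1@s3.wasabisys.com/my-backups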


Ah, I see.

2.1.0 appears to be the latest available at: Releases · gilbertchen/duplicacy · GitHub

Do I need to build from src in order to access the wasabi endpoint?

Yup! (Or just wait some time until 2.1.1 comes out.)

Thanks, is there any ETA for 2.1.1?

Or is there any info on building from source? (That page on GitHub appears to be blank presently.)

This is the wiki page on installation: Build Duplicacy from source

I’ve fixed the link in README.md that points to this page.


I do not yet speak Go, so I’m sure I am missing something, but I’m getting errors:

With $GOROOT=/usr/local/go and $GOPATH=workspace, github.com/gilbertchen/duplicacy cloned under workspace/src, and the CLI repo cloned as a sibling to duplicacy, I then

cd $GOPATH/src/github.com/gilbertchen/duplicacy

and enter

go build duplicacy/duplicacy_main.go

I get errors similar to the following:

src/duplicacy_gcsstorage.go:17:2: cannot find package "cloud.google.com/go/storage"
src/duplicacy_snapshotmanager.go:24:2: cannot find package "github.com/aryann/difflib"
src/duplicacy_s3storage.go:16:2: cannot find package "github.com/aws/aws-sdk-go/aws"

… and others.

I’m guessing I somehow need to fetch some dependencies, but I’m not clear on how to do this.

TIA!

Basically, the first code example in the wiki should be enough:

go get -u github.com/gilbertchen/duplicacy/...

Run that in a terminal and you will find the binary created. It will also download all the dependencies, as needed.


Ah, I see. That was a bit confusing to me, as it just appears to hang silently with no output for quite a while. Also, I needed to have both GOROOT=/usr/local/go and GOPATH=$PWD set before running this, but it then works just as you said. Thanks!
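
For anyone following along, the full sequence was roughly:

export GOROOT=/usr/local/go
export GOPATH=$PWD
go get -u github.com/gilbertchen/duplicacy/...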

On to trying to cross compile as shown on the page next…
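
(If I understand Go cross-compilation correctly, it should just be a matter of setting GOOS and GOARCH before the build, along the lines of:

GOOS=linux GOARCH=arm go build -o duplicacy_linux_arm duplicacy/duplicacy_main.go

with the output name being my own choice, not something from the wiki.)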


I have cross-compiled for an ARM NAS as described on the page and have been able to successfully init the wasabi-based repository. However, the backup command does not appear to be working: it hangs for a long while and then just ends with ‘Killed’.

Here is the backup command I am using:

./duplicacy_linux64k_arm -v -d -stack -background backup -stats -threads 3

The only output (other than a number of lines listing the files to be backed up) is ‘Killed’.

Out of memory perhaps?

Try setting DUPLICACY_ATTRIBUTE_THRESHOLD=1 as described here
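
i.e., something like:

export DUPLICACY_ATTRIBUTE_THRESHOLD=1
./duplicacy_linux64k_arm -v -d -stack -background backup -stats -threads 3

(your exact backup command, just with the variable exported first)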


Unfortunately, it fails in the same manner even with the env variable set.

Did you confirm the problem is with memory utilization? How much RAM does your device have, and how much does duplicacy consume? How many files are being backed up?

Yes, it is definitely memory related. As the files scroll by, I can see memory utilization quickly grow via the free command. My poor little NAS only has 512MB of real memory and only about 512MB of swap space. Swap eventually also gets used and steadily declines until it is essentially exhausted. Garbage collection/memory management keeps things going in this resource-starved state for quite an impressive amount of time (all things considered), but eventually cannot keep up and the OOM killer kicks in.

dmesg shows the following after the ‘Killed’ message has been encountered:

[3470677.666051] Out of memory: Kill process 25998 (duplicacy_linux) score 257 or sacrifice child
[3470677.666076] Killed process 25998 (duplicacy_linux) total-vm:808116kB, anon-rss:197940kB, file-rss:80kB

I am attempting to backup approx 2.75TB total.

It’s more about the number of files than the total size, but yes, 512MB is way too low.

Other than getting a beefier device, you can work around the issue by creating a few smaller repositories: a bunch of folders with first-level symlinks to your actual data folders (which duplicacy would follow), backed up in succession to the same storage.

Annoying, but it will get the job done.
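
For example (paths and snapshot ids below are hypothetical):

mkdir -p ~/repos/photos && cd ~/repos/photos
ln -s /mnt/data/photos photos
duplicacy init -e photos wasabi://us-east-1@s3.wasabisys.com/my-backups
duplicacy backup

Then repeat for each of the other top-level data folders, giving each its own snapshot id.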


I’m thinking of mounting my NAS drive via SMB to a larger Linux box and then running backup from there to get around the memory issues. Any concerns or cautions with this approach?
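
(For the record, I was planning something like the following, with hypothetical share and mount point names:

mount -t cifs //nas/data /mnt/nas -o username=myuser

and would then initialize the duplicacy repository at /mnt/nas.)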

No issues; if anything, this would be a more streamlined approach: let the NAS do what it does best – storing and serving massive amounts of data – and let compute devices with beefy CPUs and RAM do what they excel at – processing that data.

Does the copy command have memory requirements similar to the backup command? In other words, if I were to perform a local backup from a beefier box to my tiny NAS, would it be reasonable to think that the NAS could handle the task of copying that backup to cloud storage? Or is even this going to require substantial memory to accomplish?

Just ran a test on a repo with ~700k files, ~33k chunks.

The backup pass required just short of 2GB of RAM (1GB with DUPLICACY_ATTRIBUTE_THRESHOLD set to 1).
The copy stayed level at under 450MB.
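
So having the NAS do only the copy sounds plausible. The invocation would be along the lines of (storage names below are placeholders for whatever you register via duplicacy add; note the second storage needs to be added as copy-compatible with the -copy option):

duplicacy copy -from default -to wasabi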


@saspus, thanks, that is enlightening! Just curious, given your experience, what do you think of this as an approach? Obviously the copy will get behind the backup(s), I’m guessing fairly substantially, so I’d need to come up with an approach to only copy every so often, or something along those lines.
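
(I’m imagining something like a nightly cron entry on the copying machine, path hypothetical:

0 3 * * * cd /path/to/repo && duplicacy copy -from default -to wasabi

but I’m open to better ideas.)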