New Feature: Erasure Coding

What would you suggest as reasonable values for data and parity shards? How much extra space would you estimate it would use?

I would suggest 5:2 or 8:3 (data:parity shards). 5:2 can withstand a single bad block up to 1/5 of the chunk size, or 2 individual bad bytes (the chunk is split into 5 data shards, so a bad region of one shard length corrupts at most 2 shards, which the 2 parity shards can repair). 8:3 can withstand a single bad block up to 1/4 of the chunk size, or 3 individual bad bytes. However, please keep in mind that no erasure coding can offer 100% protection; a much more reliable method is to set up a second storage and copy backups between storages using the copy command.
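For illustration only, erasure coding is chosen when a storage is first initialized; here is a minimal sketch with 5:2 shards (the repository id "myrepo" and the storage URL are placeholder values, not from this thread):

# Initialize an encrypted storage with 5 data shards and 2 parity shards.
# "myrepo" and the bucket URL below are example values only.
duplicacy init -e -erasure-coding 5:2 myrepo b2://my-bucket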

Sorry to ask, but how do I init for the web interface edition? I want to use OneDrive and tried:
/config/duplicacy_linux_x64_2.6.10 init -e -c 64M -min 32M -max 128M -erasure-coding 8:3 -storage-name QnapDuplicacy QnapDuplicacy odb://Duplicacy/QNAP

This generates a subfolder called “.duplicacy”, but it is not recognized by the web interface. So what do I need to do to make it work with the web interface? (An example would be great.)

Never mind, I added it using the CLI (which initialized the remote storage in my case). In the web UI I then added the created storage path.
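For anyone else hitting this, a minimal sketch of that CLI-first workaround (the scratch directory is just an example; the storage name and URL are the ones from the command above and may differ for you):

# Run init from a scratch directory; this writes the storage config
# (encryption + 8:3 erasure coding) to the remote storage itself.
mkdir -p /tmp/duplicacy-init && cd /tmp/duplicacy-init
/config/duplicacy_linux_x64_2.6.10 init -e -erasure-coding 8:3 -storage-name QnapDuplicacy QnapDuplicacy odb://Duplicacy/QNAP

The same storage URL can then be added in the web UI, which should pick up the existing (erasure-coded) configuration.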

But one last question:
After creating this storage on my OneDrive, how can I check that it is working? At the moment everything seems normal.

This is exactly what I need. I want to migrate from a Hetzner Storage Box (SFTP) to Backblaze B2 using the copy command and enable erasure coding at the same time.
After running the copy command once, can I just use the “duplicacy backup” command against this new storage?
I am running this on a slow ARM device. Is this faster than running a brand-new backup of the same files to the new destination?

Yes, it should work this way. However, I’m not sure you really need erasure coding for B2. They should already use erasure coding (and replication) to store your files.
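As a rough sketch of that migration (storage names, the repository id, and URLs below are placeholders; “default” stands for the existing SFTP storage):

# Add the new B2 storage as copy-compatible with the existing storage,
# enabling erasure coding only on the new one (names/URLs are examples).
duplicacy add -e -copy default -erasure-coding 5:2 b2 myrepo b2://my-bucket

# Copy the existing revisions from the old storage to the new one.
duplicacy copy -from default -to b2

# From then on, back up directly to the new storage.
duplicacy backup -storage b2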

There is definitely some overhead. They claim the encoding can run at up to 1 GB/s, but on an ARM CPU it can be slower. Moreover, the parity shards are extra data, so if you’re using 8 data shards and 3 parity shards the space overhead will be at least 3/8 = 37.5%.
