Slow transfer to Ext SSD

I am attempting to back up my Vertical Backup files using Duplicacy to an external SSD via USB. The transfer seems to be taking forever!

The folder it's backing up is 216 GB. After around a day of running, it's only done 127.75 GB. I'm using --threads 4, but it still doesn't seem to be going at a great speed.

Any ideas/suggestions?

Do I understand correctly - you are backing up a Vertical Backup storage (which is Duplicacy) with Duplicacy again? But why? What are you trying to achieve?

What kind of SSD is it? Run CrystalDiskMark on it, see what is the actual SSD performance. Not all of them are created equal.

Is the USB interface 2.0 or 3.x? Without UASP, which usually comes with 3.0, copying lots of small files will take an absolute eon.

It's not really about the total size in GB: Vertical Backup's chunk size is 1 MB (unless you set it higher, such as 4 MB), so you're talking about over 200k ickle files.

UASP should help with that.
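To put rough numbers on it - a back-of-envelope sketch, where the ~30 MB/s sustained USB 2.0 figure is an assumption:

```shell
# 216 GB at Vertical Backup's default 1 MB average chunk size,
# pushed over USB 2.0 at an assumed ~30 MB/s sustained.
chunks=$((216 * 1024 / 1))    # approximate number of chunk files
secs=$((216 * 1024 / 30))     # raw transfer time in seconds at 30 MB/s
echo "~$chunks chunks, ~$((secs / 3600)) h raw transfer (per-file overhead adds far more)"
```

The raw bulk transfer is only a couple of hours; it's the per-file round-trip overhead across 200k+ small files that stretches it into a day or more.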

I am running Vertical Backup on ESX to do the following:

  • A Daily backup to a Synology Rack Station
  • A Weekly backup to the Synology Rack Station

From the weekly backup I'm then running Duplicacy on the Synology to push this to another SFTP server offsite. I experienced many failures using Vertical Backup to push directly to SFTP, so I'm having a little play around to see how it goes.

I just realised it's pulling from a spinning disk, so it doesn't matter much that the destination is an SSD, but it's a Samsung 840 Evo. I believe it's a USB 2 interface, but it still seems super slow?!

Could it be worth increasing the chunk size then? If so, what would be a good amount: 4 MB like you said, or perhaps 10 MB?

From the weekly backup I’m then running Duplicacy on Synology to push this to another SFTP Server offsite.

Vertical Backup is Duplicacy. What you are doing is wasting resources and bandwidth re-compressing and re-deduplicating already-processed data, as well as storing discarded chunks forever.

What you need instead is:

  1. Create a snapshot of the volume where you store your Vertical Backup storage.
  2. rsync (or otherwise upload) the Vertical Backup storage from that snapshot to your offsite storage/SFTP. Note that rsync should propagate deletions (e.g. with --delete) - you don't want to keep chunks that Vertical Backup discards forever on your remote destination.
  3. Delete the snapshot.
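The three steps above might look like this on a btrfs-based NAS - the paths, hostname, and the use of btrfs snapshots are all assumptions, and the commands are echoed as a dry run (drop the leading echo to execute for real):

```shell
# Placeholders throughout - adjust volume paths and destination to your setup.
SRC=/volume1/vb-storage                    # the Vertical Backup storage
SNAP=/volume1/@snapshots/vb-weekly         # temporary snapshot location
DEST=user@offsite.example.com:/backups/vertical/

echo btrfs subvolume snapshot -r "$SRC" "$SNAP"   # 1. read-only snapshot
echo rsync -a --delete "$SNAP/" "$DEST"           # 2. mirror, propagating chunk deletions
echo btrfs subvolume delete "$SNAP"               # 3. clean up the snapshot
```

The trailing slash on `"$SNAP/"` matters to rsync: it copies the snapshot's contents rather than the snapshot directory itself.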

it's a USB 2 interface

USB 2.0 is 480 Mbps, which translates to roughly 30-35 MB/s best case once protocol overhead is included.

You'll get much less de-duplication if you increase the chunk size, but it may be worth considering if you can't find a way to avoid the USB 2.0 bus.
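Note that the chunk size is fixed when a storage is created, so moving to 4 MB chunks means initialising a fresh storage. In plain Duplicacy that's the -c option to init; the snapshot id and path below are placeholders, echoed as a dry run:

```shell
# -c sets the *average* chunk size (min and max default to 1/4 and 4x of it).
# "vm-weekly" and the storage path are placeholders.
cmd='duplicacy init -c 4M vm-weekly /volumeUSB1/usbshare/duplicacy-storage'
echo "$cmd"
```

Vertical Backup exposes an equivalent chunk-size setting when its storage is created.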

I'm testing Vertical Backup and Duplicacy copy in a similar scenario to yours…

I've already copied around 600 GB worth of VMs to a second storage - first to a local external HDD (that took a couple of days) - and they're now synchronising (duplicacy copy) across the internet (SFTP).

Both storages are on external USB 3.0 caddies attached to a dedicated USB 3.0 card, although because one of the VMs being backed up is the SFTP server for VB, I can't yet run passthrough on the USB 3.0 card for maximum throughput. I believe it's using UASP, but I can't actually tell, even though the guest shows UASP in Device Manager. My plan is to create a Debian VM for the SFTP server and pass through the controllers.

This isn’t necessarily true.

There are numerous scenarios where you might want to use duplicacy copy to replicate a Vertical Backup storage elsewhere - e.g. if you have several clients, all with their own storage (encrypted with their own password), and you want to pool them all together, and de-duplicate them in a copy-compatible centralised off-site storage, perhaps in the cloud.

I'm currently testing this; there doesn't seem to be much overhead in the decrypting and encrypting phase (whether it's also decompressing and compressing chunks, I don't know - perhaps that's an optimisation the copy command could consider).

Rsync’ing that many files also has a fairly hefty memory overhead, and it doesn’t give you the flexibility of many-to-one backup storage. It’s not the only way to do it.

USB 2.0 also has a fairly ancient transfer protocol which wasn’t designed for lots of little files, so I’d say ~5-10MB/s tops. USB 3.0 + UASP is definitely the way to go.

There are numerous scenarios where you might want to use duplicacy copy to replicate a Vertical Backup storage elsewhere

OP did not say anything about using Vertical Backup's copy. OP was talking about backing up Vertical Backup with Duplicacy:

I am attempting to back up my Vertical Backup files using Duplicacy

There is no conceivable case in which creating a versioned backup of a versioned backup even remotely makes sense.

Copying snapshots, on the other hand, would make sense (and I would argue that Vertical Backup should be used to do that, not a conceptually different tool - which just happens to be the same program at the moment).

However this is not what you suggest next:

if you have several clients, all with their own storage (encrypted with their own password), and you want to pool them all together, and de-duplicate them in a copy-compatible centralised off-site storage, perhaps in the cloud.

For duplicacy copy to work, the encryption keys must be identical. How else do you expect it to work? This has absolutely no chance of working, and the only thing you will end up with is severely inflated storage full of redundant data and a maintenance nightmare.

I'll repeat again: there is no valid use case that requires creating a versioned backup of a versioned backup store. If you feel the need to create one, you should reconsider your backup strategy from scratch.

Completely agree…

I think what @ashley is trying to achieve is just have an off-site copy.

I would:

  • create a storage on the off-site SFTP server with the add command, using the -copy option with the Synology storage as its parameter (-copy <storage name>);
  • periodically perform a duplicacy copy command from the Synology storage to the SFTP storage.
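The two steps above, sketched with placeholder names ("default" for the existing Synology storage, "offsite" for the new one) and echoed as a dry run:

```shell
# 1. Register a copy-compatible off-site storage alongside the existing one.
add_cmd='duplicacy add -copy default offsite vm-backups sftp://user@offsite.example.com/backups'
# 2. Periodically replicate snapshots from one storage to the other.
copy_cmd='duplicacy copy -from default -to offsite'
echo "$add_cmd"
echo "$copy_cmd"
```

The -copy option at add time is what makes the two storages share chunking parameters, so copy can de-duplicate between them.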

I use this strategy and it works perfectly.


I disagree. Of course there is.

Using a tool like rsync - which doesn't understand the underlying Vertical Backup/Duplicacy storage - allows you to do only one thing: make a 1:1 mirrored copy.

Duplicacy copy, on the other hand, understands which chunks belong to which snapshots. You can copy all or a subset (revisions) of a storage. OP stated:

  • A Daily backup to a Synology Rack Station
  • A Weekly backup to the Synology Rack Station

Unless OP has two separate storages - one for the dailies and one for the weeklies (which would be a rather inefficient use of space) - Duplicacy copy absolutely is the tool to use here, not rsync.

Granted, Duplicacy's copy command doesn't yet support tags (which I proposed here, and which should be simple to implement), but you can work around that with a script.

In my use case, I want to copy 1 VM (Exchange Server) daily and the rest weekly, partly due to restricted bandwidth.
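A minimal sketch of that split, assuming a snapshot id of "exchange" for the Exchange VM (the -id option restricting copy to one snapshot id is standard; the storage names are placeholders, commands echoed as a dry run):

```shell
# Daily: copy only the Exchange VM's new revisions off-site.
daily='duplicacy copy -id exchange -from default -to offsite'
# Weekly (e.g. from a Sunday cron job): copy every snapshot id.
weekly='duplicacy copy -from default -to offsite'
echo "$daily"
echo "$weekly"
```

Copy can also be limited to specific revisions with -r, which is the hook a tag-workaround script would use.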

Yes, they must be copy-compatible. But the master passwords for each storage can be different. I’ve tried it, it works.

I agree, this is a perfectly acceptable use case for rsync in the initial seeding stage. It may be faster to do that directly than over USB 2.0.

I think it is -threads 4 that slows down the copy speed. You should use only one thread for local-disk based storages.


Agreeing here with @gchen: if you transfer data from an HDD to an SSD, use only 1 thread.
If you were transferring from an HDD to the cloud, multiple threads would be useful, but not in this case.
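Concretely, that just means being explicit about a single thread - the storage name below is a placeholder, echoed as a dry run:

```shell
# One thread means sequential I/O, which a single spinning disk handles best;
# multiple threads force it to seek back and forth between chunk files.
cmd='duplicacy backup -storage usb-ssd -threads 1'
echo "$cmd"
```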