Restore speed is slow

I have an external drive formatted with ZFS. Used storage is about 1.45 TB. I’m doing some test restores and they’re taking quite a long time.

I am trying to restore a single PDF and it’s been almost an hour. Each time I eventually decide it’s taking too long and stop it, so I don’t know how long it would actually take. I tried restoring through both the GUI and the beta web tool; I haven’t tried the CLI.

On the web tool it just says “starting” and that’s all I see. I did some test restores on other repositories, and those were a lot quicker, though their storage sizes were smaller (about 300 GB on one and 4 GB on another). Is there any way I can speed it up? I’m on the latest version of the Duplicacy GUI on macOS.

This is from the log file; I’m not sure if it’s of any use:

2018-12-03 22:53:38.526 INFO REPOSITORY_SET Repository set to /Volumes/pool
2018-12-03 22:53:38.526 INFO STORAGE_SET Storage set to /Volumes/Pool Backup/Local Backup
2018-12-03 22:53:39.058 INFO RESTORE_INPLACE Forcing in-place mode with a non-default preference path
2018-12-03 22:53:57.374 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/.Spotlight-V100: permission denied
2018-12-03 22:53:57.374 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/.Trashes: permission denied
2018-12-03 22:53:57.374 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/.fseventsd: permission denied
2018-12-03 22:58:24.011 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/data/.DocumentRevisions-V100: permission denied
2018-12-03 22:58:24.011 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/data/.Spotlight-V100: permission denied
2018-12-03 22:58:24.011 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/data/.TemporaryItems: permission denied
2018-12-03 22:58:24.011 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/data/.Trashes: permission denied
2018-12-03 22:58:24.011 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/data/.fseventsd: permission denied
2018-12-03 23:01:53.327 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/media/.DocumentRevisions-V100: permission denied
2018-12-03 23:01:53.327 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/media/.Spotlight-V100: permission denied
2018-12-03 23:01:53.327 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/media/.TemporaryItems: permission denied
2018-12-03 23:01:53.327 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/media/.Trashes: permission denied
2018-12-03 23:01:53.327 WARN LIST_FAILURE Failed to list subdirectory: open /Volumes/pool/media/.fseventsd: permission denied

From the log it looks like the directory listing was very slow. You can avoid this problem by restoring the selected file to an empty directory; this page explains how to restore to a different directory.
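For reference, a rough sketch of that workaround using the CLI (the snapshot ID “pool”, the revision number, and the file path below are placeholders; the storage path is taken from the log above):

# Point a fresh, empty repository at the same storage and snapshot ID.
mkdir /tmp/restore-test && cd /tmp/restore-test
duplicacy init pool "/Volumes/Pool Backup/Local Backup"
# Restore just the one file from the chosen revision; with an empty
# directory there is nothing for Duplicacy to list or compare against.
duplicacy restore -r 5 -stats -- path/to/document.pdf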

Apart from finding a workaround, what does this actually mean? I take it that the slowness of the directory listing is the system’s fault, not Duplicacy’s, but shouldn’t Duplicacy be able to handle such a situation? Or is this kind of slowness rare?

Maybe @nanotech can say something about what might be the cause of the slowness?

Right, I would guess /Volumes/pool/ is a networked directory. Listing a locally mounted drive should not usually take more than a minute.
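One quick sanity check, independent of Duplicacy, is to time a recursive listing of the same tree; if this alone takes many minutes, the bottleneck is the filesystem rather than Duplicacy:

# Recursively list the restore target, discarding the output and keeping the timing.
time ls -R /Volumes/pool > /dev/null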

So are you suggesting Duplicacy should not be used to back up such directories? Or perhaps it could be more patient with them?

Actually, “pool” is not networked. It’s connected to my computer by USB 3, as part of a four-drive USB enclosure. I think the slowness is due to ZFS on my Mac; it adds too much overhead. I find it has some read/write permission issues too, since it runs as root.

It was interesting when I started using it, since it seemed to offer good data integrity protection. But it never actually detected any checksum issues (which is a good thing), and I have come to the conclusion that, for me, it’s not worth the hassle. The risk of data loss due to corruption was very low, and if the ZFS software stopped working I would lose access to all my data.

I’ll just stick with backing up more (keeping a local and a cloud backup). I don’t keep the drive running 24/7 anyway, and it’s not that much data. My Mac isn’t that fast either: 8 GB of RAM with a dual-core i5 processor. I am in the process of converting the drives back to HFS+, and I’ll try backing up and restoring again to see how it goes.
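(In case anyone wants to do the same, the reformat is something like the following. It is destructive, and the volume name and disk identifier are just examples, so check the output of diskutil list first.)

# Identify the right device first -- eraseDisk wipes it completely.
diskutil list
# Reformat as journaled HFS+; "PoolBackup" and disk3 are placeholders.
diskutil eraseDisk JHFS+ PoolBackup disk3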

Edit: I re-ran Duplicacy with the reformatted drives and the restore was much faster. All is well.


Oh no. These three things should never appear together in the same sentence: USB 3, four-drive enclosure, and ZFS. By using ZFS in that configuration you are reaping all the drawbacks of ZFS and none of the benefits.

On topic, though: 8 GB is way too little RAM for ZFS to be even remotely useful, assuming you have large drives. Furthermore, given that this is a USB 3 enclosure, I’ll bet you have a lot of interrupt activity hogging core 0. What kind of device is this? Hardware RAID? Or does it present each drive separately to the system, with you running RAID-Z on top? (Please, please, let this not be the case.)

What Mac is this?

Try running the restore again and, in Activity Monitor, sort processes by CPU utilization. If the top process is kernel_task, you have pretty much that: all the time is being spent in the USB interrupt handler. (USB RAID enclosures should not exist, in my humble opinion.)
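The same check works from the terminal, if that’s easier:

# Sort processes by CPU; if kernel_task dominates while the restore runs,
# the time is going to interrupt handling rather than to Duplicacy itself.
top -o cpu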

Next, select Duplicacy in the list and run Sample Process or a spindump. See which stack Duplicacy spends most of its time in; I’ll guess it would be waiting on filesystem events. You can share your spindump.txt file if you’d like more input from the community.
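The command-line equivalent, assuming the process is named Duplicacy (the 30-second duration and output path are arbitrary):

# Sample Duplicacy's call stacks for 30 seconds and write them to a file
# you can attach here.
sample Duplicacy 30 -file ~/duplicacy-sample.txt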

However, we are trying to address the wrong problem here: I would strongly urge you to rethink your storage arrangement from scratch.

Perhaps sell all of that and get a Synology NAS with Btrfs support, or some Thunderbolt-based DAS (in the latter case, use a supported filesystem such as APFS or JHFS+). However, with an i5 and 8 GB of RAM I doubt you would benefit from a DAS, so a NAS would be the most appropriate solution.


I appreciate your concern. I have already switched back to using HFS+ for my files.