Check fails with multiple errors ("can't be found" and "doesn't seem to be encrypted")

When trying to run a check with -chunks, I’m getting more than 100 warnings: either the chunk can’t be found, message authentication failed, or the storage doesn’t seem to be encrypted.

This is my first time running -chunks in more than three years of running backups. A normal check without the -chunks setting will pass successfully.

Any ideas where to begin troubleshooting this?

Setup:
Unraid with @saspus Duplicacy-Web docker container
Destination is Storj using the native integration
WebUI 1.7.2
CLI 3.1.0

Example log:

Running check command from /cache/localhost/all
Options: [-log check -storage tower -threads 30 -persist -chunks -a -tabular]
..................
2023-06-04 15:10:52.980 INFO VERIFY_PROGRESS Verified chunk 1c0ad2370945a5162b564355166335b83a418b45fc70e5e884e3232ab879454c (11040/1353170), 68.54MB/s 1 day 03:25:22 0.8%
2023-06-04 15:10:53.019 WARN DOWNLOAD_CHUNK Chunk 0fa3f946a8d7600a5102cf92679abb25eefa0c8c5a6aeef2925c912fcf43afff can't be found
2023-06-04 15:10:53.019 INFO VERIFY_PROGRESS Verified chunk 0fa3f946a8d7600a5102cf92679abb25eefa0c8c5a6aeef2925c912fcf43afff (11041/1353170), 68.54MB/s 1 day 03:25:18 0.8%
2023-06-04 15:10:53.033 WARN DOWNLOAD_CHUNK Chunk 3640d6e8e4378a600ec297de419850d63c91b322221584fd911bcfeb1e9b876b can't be found
2023-06-04 15:10:53.033 INFO VERIFY_PROGRESS Verified chunk 3640d6e8e4378a600ec297de419850d63c91b322221584fd911bcfeb1e9b876b (11042/1353170), 68.54MB/s 1 day 03:25:11 0.8%
2023-06-04 15:10:53.061 INFO VERIFY_PROGRESS Verified chunk efa543aff0da409183c2cabd47ce968e0c1ff4fdd373fa159eed6ef0be58810f (11043/1353170), 68.54MB/s 1 day 03:25:05 0.8%
2023-06-04 15:10:53.114 INFO VERIFY_PROGRESS Verified chunk 00d1619709dff5384f7a28e85847b60b00f4c59b0e71dd4059f1f2810df7c5aa (11044/1353170), 68.55MB/s 1 day 03:25:02 0.8%
2023-06-04 15:10:53.154 INFO VERIFY_PROGRESS Verified chunk d8796d576958c32ea41d65fdfb35ee7e0b054f68b433cab1b2adb5af14d1e15d (11045/1353170), 68.55MB/s 1 day 03:24:58 0.8%
2023-06-04 15:10:53.157 INFO VERIFY_PROGRESS Verified chunk da5a9e67d1e3956ac21b52bc05e825be370d14f6d1b6b4ca03391254f4974062 (11046/1353170), 68.55MB/s 1 day 03:24:50 0.8%
2023-06-04 15:10:53.161 INFO VERIFY_PROGRESS Verified chunk 4e5191f84962d5200b8d32e03327adf573441d94f22cd04d62f2e3b197ce6348 (11047/1353170), 68.56MB/s 1 day 03:24:41 0.8%
2023-06-04 15:10:53.304 INFO VERIFY_PROGRESS Verified chunk 92f334b3af357c804bb5de108a1c166ba388c7600accb9ff791db2cdfbb38fd1 (11048/1353170), 68.55MB/s 1 day 03:24:49 0.8%
2023-06-04 15:10:53.345 INFO VERIFY_PROGRESS Verified chunk 5632d2118f89bc6089bb86fb2070abdaba013f63415839f9cf26bbc7b8142209 (11049/1353170), 68.55MB/s 1 day 03:24:45 0.8%
2023-06-04 15:10:53.452 INFO VERIFY_PROGRESS Verified chunk c5aac193747296f92831fd11ce12029c89658de3992c8229f6c4e4d113841a35 (11050/1353170), 68.55MB/s 1 day 03:24:49 0.8%
2023-06-04 15:10:53.558 INFO VERIFY_PROGRESS Verified chunk a37012c673d800fb18591e5c50dd73dcbdb885ed7806582d8e7a2c00e58eb2eb (11051/1353170), 68.55MB/s 1 day 03:24:53 0.8%
2023-06-04 15:10:53.563 INFO VERIFY_PROGRESS Verified chunk eee1e76df5c02a34eb12847cdb2dc12b00a89e9fa302d0539ba0374922a27851 (11052/1353170), 68.55MB/s 1 day 03:24:45 0.8%
2023-06-04 15:10:53.657 INFO VERIFY_PROGRESS Verified chunk 067ccd46bfc1c595e2e8bd38517cc60d7f5131b24ca0ea0955dc787fb7a5e841 (11053/1353170), 68.56MB/s 1 day 03:24:47 0.8%
2023-06-04 15:10:53.677 INFO VERIFY_PROGRESS Verified chunk 2a59a09b652e0b295bfc10a2986d93bbabf5d372ccf35268633bee659697eb29 (11054/1353170), 68.56MB/s 1 day 03:24:41 0.8%
2023-06-04 15:10:53.709 INFO VERIFY_PROGRESS Verified chunk ecc5e477cb0bfb8d52e9fda0e9ee64de4984b874190c69007875b37d3365d46d (11055/1353170), 68.57MB/s 1 day 03:24:36 0.8%
2023-06-04 15:10:54.071 INFO VERIFY_PROGRESS Verified chunk 8f2b34202bdcdd252ce36c985e2a25e57c20931b3f3c72f7fd36bf16661d0869 (11056/1353170), 68.54MB/s 1 day 03:25:11 0.8%
2023-06-04 15:10:54.144 INFO VERIFY_PROGRESS Verified chunk 2b0df587931f6d09715d1c98e17edecba67e583cb94e66181a275a5e6100c4b9 (11057/1353170), 68.54MB/s 1 day 03:25:10 0.8%
2023-06-04 15:10:54.184 INFO VERIFY_PROGRESS Verified chunk 06ce747bc5c0169d5226f8d6b867075551f472fbb8867fe9a01e77bfd1bd356b (11058/1353170), 68.54MB/s 1 day 03:25:06 0.8%
2023-06-04 15:10:54.441 INFO VERIFY_PROGRESS Verified chunk f28dcfd8f3ee5b562d7ef07278647b63a6c4b1c0bfdb3712d04924d462896e61 (11059/1353170), 68.52MB/s 1 day 03:25:28 0.8%
2023-06-04 15:10:54.575 INFO VERIFY_PROGRESS Verified chunk 996d2cbe575bc3381ec652d151367d49ac2a0591b34e772477f953eb69a6eaee (11060/1353170), 68.52MB/s 1 day 03:25:36 0.8%
2023-06-04 15:10:54.606 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 7585e95f18f43e73b4e21c0966f0bde309d7e5051387b0aeccdc452c18123458: cipher: message authentication failed; retrying
2023-06-04 15:10:54.618 INFO VERIFY_PROGRESS Verified chunk 27b07f1f99e66b8b0b9497a1132128f61bdec1a9147f5491608144cd3030d479 (11061/1353170), 68.52MB/s 1 day 03:25:32 0.8%
2023-06-04 15:10:54.641 WARN DOWNLOAD_RETRY Failed to decrypt the chunk c8b1574b3cb1bba0fb1019daf4e5e3424d727066c12431a9ddcde8ff4f78a310: The storage doesn't seem to be encrypted; retrying
2023-06-04 15:10:54.688 INFO VERIFY_PROGRESS Verified chunk 5af0fc9969b41d0705b8c91f671962ac3649a7d04091fe5171751ee6b538bdab (11062/1353170), 68.52MB/s 1 day 03:25:31 0.8%
2023-06-04 15:10:54.743 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 87acb3d7bba17577af98af8aae238bdaba578ee1cef2fad6eba7b1da93323072: cipher: message authentication failed; retrying
2023-06-04 15:10:54.816 INFO VERIFY_PROGRESS Verified chunk 816bcfedaecf81153c2eb03fc321caca9593ec9e4702bda18fef27f93f2fba1c (11063/1353170), 68.52MB/s 1 day 03:25:38 0.8%
2023-06-04 15:10:54.859 INFO VERIFY_PROGRESS Verified chunk 147cf08a119be8284bc417ee967f38a2995f94351fbe449fd4880b4fd44039b7 (11064/1353170), 68.52MB/s 1 day 03:25:34 0.8%
2023-06-04 15:10:54.929 WARN DOWNLOAD_RETRY Failed to decrypt the chunk 15823519a06dc900f748d65845754a573e55f642e555a9ca2ac9670e02a1e833: The storage doesn't seem to be encrypted; retrying
exit status 2
........

As an initial check, I tried searching for a missing chunk such as chunks/0f/a3f946a8d7600a5102cf92679abb25eefa0c8c5a6aeef2925c912fcf43afff or chunks/75/85e95f18f43e73b4e21c0966f0bde309d7e5051387b0aeccdc452c18123458 and turned up nothing.

Well, it started downloading it and probably failed; if the chunk was missing it would display a different error, and your check without -chunks would have failed too. (How did you check for the presence of that chunk on the storage? But never mind that, see below.)

So, it looks like it advances quite a bit, but then transfers start failing. If you are using the native integration with Storj, check your router and network. Storj downloads data from a massive number of peers, which means a lot of concurrent connections, up to 50 per file. On top of that you have another 30 duplicacy threads, so that’s up to 1500 concurrent connections. I’m sure your router is having a seizure and dropping connections in the most disgraceful way possible, resulting in partial downloads and duplicacy complaining that it can’t decrypt the partial files.

I would suggest starting by reducing the number of threads to 1; Storj is already highly parallel. This will likely fix your issue. I would also reboot the router, because it has probably run out of RAM by now and is struggling to stay alive :slight_smile:
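
From the CLI that would just be your current check command with the thread count dropped, roughly as below (in the web UI it’s the same flags in the check job’s options field):

duplicacy check -storage tower -chunks -persist -a -tabular -threads 1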

I had it at 1 thread and was getting only <2mb/s, with an ETA of 27 days to check the chunks. Maybe 8 threads is a good compromise?

And to your point of finding the file: I’m new to Storj, so this might be silly, but I was using the web browser, going to the folder and pressing Ctrl+F for the chunk. (For example, for the failing chunk 7585e95f18f43e73b4e21c0966f0bde309d7e5051387b0aeccdc452c18123458 I navigated to /bucket/chunks/75/ and searched for 85e95f18f43e73b4e21c0966f0bde309d7e5051387b0aeccdc452c18123458.)

With -threads 8 I get similar behavior: it works for a while, then starts breaking down. A new error I haven’t seen before is:

error retrieving piece 02: ecclient: piecestore: rpc: tcp connector failed: rpc: context deadline exceeded

I guess I’ll have to drop it to 1 thread, and maybe it will take a month to verify them all. I’ve never done this before, and I just switched storage destinations, so I feel like it’s important before I delete the old destination.

In the web UI they show only 1000 objects max. You can try using other tools, like Cyberduck, or their command-line client, uplink.
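
For example, something along these lines with uplink should tell you whether a specific chunk object exists (the bucket and path are placeholders, adjust them to your layout):

uplink ls sj://&lt;bucket&gt;/&lt;path&gt;/chunks/75/ | grep 85e95f18f43e73b4e21c0966f0bde309d7e5051387b0aeccdc452c18123458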

This is way too slow.

This is an error returned by the uplink library, and it does not look healthy; something is wrong with connectivity. Did you restart your router before testing?

To confirm whether this is your network or something else, you can try configuring Storj via their S3 gateway (or, later, your own if you prefer). All the connectivity overhead will then be handled by the S3 gateway, and you’ll be getting back whole, finished files. If that works, the problem is your network equipment, and you can simply continue using the S3 gateway; in fact, it will be even faster for uploads, which is what you are going to be doing most of the time.
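
If you go that route, the storage URL is just a regular S3 one pointed at their hosted gateway, roughly like this (the region is nominal, the bucket and folder are placeholders, and the access key/secret are the S3 credentials you generate in the Storj console):

s3://us-east-1@gateway.storjshare.io/&lt;bucket&gt;/&lt;folder&gt;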

Definitely, keep your old backup at least for a month after everything is working perfectly on the new destination.

Yep, this is definitely your router:

This is almost certain given the load on your router (too many connections and/or too much throughput, which clogs up your ISP line).

What is your router model? Does it support SQM? You can try enabling that, or replacing the router with one that does. Otherwise, using the S3 gateway is the only solution.

Unifi UDM Pro :woozy_face:

I’ll reboot it and enable SQM in the WAN settings, and if there’s no change, I’ll recreate my storage with S3 instead of native.


Whoah indeed! I have the same one and don’t see any issues. But I do have SQM on; it still helps on my 800/20Mbps channel. In fact, SQM was the reason I ended up with Ubiquiti gear many years ago; they were one of the few commercial vendors that had it out of the box. :slight_smile: I tried fq_codel on OpenWRT, got blown away, and ordered the gateway drug, a USG3, that same day :slight_smile:

So let’s hope it’s your modem/upstream hardware that is the culprit here and that SQM will help.

Btw, Storj uploads roughly 2.5x more data than the payload, so if your upstream is limited, using the S3 gateway may be beneficial anyway.
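
(If I remember Storj’s Reed–Solomon parameters correctly, each segment is encoded so that any 29 of the 80 uploaded pieces can reconstruct it, so the traffic leaving your uplink is roughly 80/29 ≈ 2.76x the payload, which is where that 2.5x-and-up figure comes from.)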

You can use their gateway, or you can run your own somewhere in the cloud; e.g. Oracle provides free instances with 10TB of monthly traffic included. But why bother, if Storj already maintains their own gateways and does not charge for them?

Reboot the modem too, if you use one; I’d suspect it’s the modem that got overwhelmed, rather than the UDM Pro.

Ok a few updates now.

Still seeing ~3mb/s with one thread on Storj native.

  • I have symmetrical 1Gbps fiber
  • Updated the UDM Pro to 3.0.20, which of course required a reboot
  • Enabled Smart Queues (SQM), set to 800Mbps down/up
  • There are no other routers in the chain (I use wpa_supplicant on my UDM for a direct connection to the ONT)

Edit: Now connecting to the S3 gateway, I’m seeing 4.5mb/s with SQM enabled.

This is super weird. I assume that since you run Unraid it’s more or less decent hardware and not CPU-limited.

My downloads from storj saturate the connection, but I don’t use duplicacy with it. I’ll try to reproduce your results on my network later tonight (PST).

A few things I’m suspecting, assuming no network issues with your Unraid box itself:

  • maybe duplicacy is linked against an old version of the uplink library and something changed there
  • maybe latency is disproportionately affecting throughput due to the small default chunk size. With Storj you want to be under, but close to, 64MB for best performance and least overhead (see the sketch after this list). But that shouldn’t reduce performance 100x.
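
Since chunk size is fixed when a storage is initialized, going bigger would mean re-creating the storage and re-uploading. A minimal sketch of what that could look like (the snapshot ID and storage URL are placeholders; the flags are duplicacy init’s standard chunk-size options):

duplicacy init -e -chunk-size 32M -max-chunk-size 64M -min-chunk-size 16M &lt;snapshot-id&gt; &lt;storage-url&gt;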

4 megabits per second or megabytes per second? Either way, this too is ridiculously, obscenely low.

I’ll try to reproduce on my setup, I’m curious.

Which Storj satellite are you using? US1?

In the meantime, can you try using a recent uplink utility directly to download the duplicacy chunks folder and check the speed?

It has stabilized at 4.69MB/s within the duplicacy container on Storj S3:

2023-06-05 11:50:22.837 DEBUG CHUNK_DOWNLOAD Chunk 61742668323e9a82630bd3588c64f715dc83609494be1480d85d3f9b9ef34518 has been downloaded
2023-06-05 11:50:22.837 INFO VERIFY_PROGRESS Verified chunk 61742668323e9a82630bd3588c64f715dc83609494be1480d85d3f9b9ef34518 (6475/1328636), 4.69MB/s 16 days 04:41:26 0.5%

Now here’s what I get with the Storj uplink CLI on my Unraid server. It bounces up as high as 95MB/s, but it seems to STALL after 15-20 chunks. It doesn’t freeze; it just kind of stops. Both of these log snippets are actually the whole log: when it got to the last line you see here, it just stalled out and did nothing else.

root@tower:~# uplink cp -r --progress sj://tower/backup/chunks/00 /temp/download
downloading 5557 files...
/temp/download/00/4ae4ab9ee0bc06d7e33bcdf04fce8a0e753b27be16781cb5a6537966ce2e78     (1 of 5557) 3.99 MB / 3.99 MB100.00% 91.20 Mi…
/temp/download/00/ec7d924041ae2157a5bdaa8e1adf5acfa4cc6914aab57f5ddc3564244ccfc9     (2 of 5557) 3.16 MB / 3.16 MB100.00% 11.21 Mi…
/temp/download/00/d84e71669b0ca50b2431e1629a0d0125506a835e95e40c44a0cd46284922fe     (3 of 5557) 8.33 MB / 8.33 MB100.00% 83.89 Mi…
/temp/download/00/63a5c604b4a6e58ea248cff4496ac4c08b05c07001d85a3b1b98159e04ba3e     (4 of 5557) 5.56 MB / 5.56 MB100.00% 46.82 Mi…
/temp/download/00/e5843b11e77de48180c46ef71929400e4aa4d4e025d9fd1151073942dabffa     (5 of 5557) 2.13 MB / 2.13 MB100.00% 0.00 b/s
/temp/download/00/e7825d3e70b90edd62686b406f3498cbfd2ea392676fd5a613dc028e026133     (6 of 5557) 9.79 MB / 9.79 MB100.00% 79.01 Mi…
/temp/download/00/63a5e82f4bb55959b7ea79586bb9cf044995a95d7c87f5132492aedc3c9cc1     (7 of 5557) 9.87 MB / 9.87 MB100.00% 95.01 Mi…
/temp/download/00/1f91e083517a63890e5d4a77d498a594341f47dbeee0310bd5cfdadc43adbe     (8 of 5557) 3.43 MB / 3.43 MB100.00% 86.24 Mi…
/temp/download/00/655767a58187385ff7ef5a33a59ea0ecbc32586c0f086ce7ec862807722138     (9 of 5557) 2.99 MB / 2.99 MB100.00% 0.00 b/s
/temp/download/00/2e874ac9c64c67d7b2c08179b7c41ba7e9e74dac6387a9d1316a516e881100     (10 of 5557) 2.16 MB / 2.16 MB100.00% 90.11 M…
/temp/download/00/3b854bb070583ef105d382335de52b32f79dfaa3c7762e75ba47e0993b6c44     (11 of 5557) 2.01 MB / 2.01 MB100.00% 70.26 M…
/temp/download/00/c652353ca8825547490870a248a25ecb532cb7ae5edfc662cd8692c55ce696     (12 of 5557) 16.84 MB / 16.84 MB100.00% 86.43…
/temp/download/00/cb9f88efd6518dc85fea57bd14ef61541edfbcb30366a15cc4421029f15928     (13 of 5557) 1.84 MB / 1.84 MB100.00% 0.00 b/s
/temp/download/00/19508e7e35920a2b1044988cf6385dd3bbae9eb7929b5857b5e5ae91cd9a4f     (14 of 5557) 2.25 MB / 2.25 MB100.00% 93.71 M…
/temp/download/00/8a16503b4fa7b805d4c0e672dfabac8da3e215a6d0540b2d47271760f54fcc     (15 of 5557) 6.08 MB / 6.08 MB100.00% 91.95 M…
/temp/download/00/bc75795c90f57423c5a0477c48ebb69080686220c61870f83faa13de8a2393     (16 of 5557) 1.81 MB / 1.81 MB100.00% 51.56 M…
/temp/download/00/7d408d7c6fc64e9d7c7ccd97c76f8fb125aa8eb8b40fdd8400f90c375f1e1e     (17 of 5557) 2.09 MB / 2.09 MB100.00% 0.00 b/s
/temp/download/00/9fa6198c373a054d0619d8215a0390dee4fd9ea29e5df28aad7ac1386b5cd2     (18 of 5557) 5.56 MB / 5.56 MB100.00% 91.54 M…

On my Windows PC it’s pretty interesting too. Check out the fluctuation of speeds here. Then it stalls out after about 30 chunks and does nothing…

C:\Users\me\Downloads\uplink_windows_amd64>uplink cp --progress -r sj://citadel/backup/chunks/00 ~/Downloads/temp
downloading 5557 files...
~/Downloads/temp/00/ec7d924041ae2157a5bdaa8e1adf5acfa4cc6914aab57f5ddc3564244ccfc9     (2 of 5557) 3.16 MB / 3.16 MB [=====] 100.00% 0.00 b/s
~/Downloads/temp/00/d84e71669b0ca50b2431e1629a0d0125506a835e95e40c44a0cd46284922fe     (3 of 5557) 8.33 MB / 8.33 MB [=] 100.00% 130.21 MiB/s
~/Downloads/temp/00/63a5c604b4a6e58ea248cff4496ac4c08b05c07001d85a3b1b98159e04ba3e     (4 of 5557) 5.56 MB / 5.56 MB [=] 100.00% 137.58 GiB/s
~/Downloads/temp/00/e5843b11e77de48180c46ef71929400e4aa4d4e025d9fd1151073942dabffa     (5 of 5557) 2.13 MB / 2.13 MB [=====] 100.00% 0.00 b/s
~/Downloads/temp/00/e7825d3e70b90edd62686b406f3498cbfd2ea392676fd5a613dc028e026133     (6 of 5557) 9.79 MB / 9.79 MB [=] 100.00% 175.79 GiB/s
~/Downloads/temp/00/63a5e82f4bb55959b7ea79586bb9cf044995a95d7c87f5132492aedc3c9cc1     (7 of 5557) 9.87 MB / 9.87 MB [=] 100.00% 405.75 MiB/s
~/Downloads/temp/00/1f91e083517a63890e5d4a77d498a594341f47dbeee0310bd5cfdadc43adbe     (8 of 5557) 3.43 MB / 3.43 MB [=====] 100.00% 0.00 b/s
~/Downloads/temp/00/655767a58187385ff7ef5a33a59ea0ecbc32586c0f086ce7ec862807722138     (9 of 5557) 2.99 MB / 2.99 MB [=====] 100.00% 0.00 b/s
~/Downloads/temp/00/2e874ac9c64c67d7b2c08179b7c41ba7e9e74dac6387a9d1316a516e881100     (10 of 5557) 2.16 MB / 2.16 MB [] 100.00% 249.04 MiB/s
~/Downloads/temp/00/3b854bb070583ef105d382335de52b32f79dfaa3c7762e75ba47e0993b6c44     (11 of 5557) 2.01 MB / 2.01 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/c652353ca8825547490870a248a25ecb532cb7ae5edfc662cd8692c55ce696     (12 of 5557) 16.84 MB / 16.84 MB  100.00% 222.24 MiB/s
~/Downloads/temp/00/cb9f88efd6518dc85fea57bd14ef61541edfbcb30366a15cc4421029f15928     (13 of 5557) 1.84 MB / 1.84 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/19508e7e35920a2b1044988cf6385dd3bbae9eb7929b5857b5e5ae91cd9a4f     (14 of 5557) 2.25 MB / 2.25 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/8a16503b4fa7b805d4c0e672dfabac8da3e215a6d0540b2d47271760f54fcc     (15 of 5557) 6.08 MB / 6.08 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/bc75795c90f57423c5a0477c48ebb69080686220c61870f83faa13de8a2393     (16 of 5557) 1.81 MB / 1.81 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/7d408d7c6fc64e9d7c7ccd97c76f8fb125aa8eb8b40fdd8400f90c375f1e1e     (17 of 5557) 2.09 MB / 2.09 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/9fa6198c373a054d0619d8215a0390dee4fd9ea29e5df28aad7ac1386b5cd2     (18 of 5557) 5.56 MB / 5.56 MB [=] 100.00% 25.45 MiB/s
~/Downloads/temp/00/122317b633c882c688b92de152236c40cddc79aed0797293803b9837ed1afd     (19 of 5557) 4.81 MB / 4.81 MB [=] 100.00% 56.75 GiB/s
~/Downloads/temp/00/71b2d1ea2b7328cfa6644e60e90f3015f69ceadee92bbff5069d834b381d5c     (20 of 5557) 1.24 MB / 1.24 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/103e8227416c62584f58e01be2b44b4ade7affecbaf87fd478f3862f694dee     (21 of 5557) 1.06 MB / 1.06 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/4cd3453c3345e18290ce18ded009a32d12da6e0142259dfc39d67ef73c0f77     (22 of 5557) 2.46 MB / 2.46 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/b5992eca6deb4d3306f1387f741a89c5553c4079475a75a4dfa96c7e0c4bad     (23 of 5557) 9.00 MB / 9.00 MB [] 100.00% 371.54 GiB/s
~/Downloads/temp/00/0fbcf5af0bc52a3e78a6b5b53a203888a6b0459690576351811cad3a832255     (24 of 5557) 1.76 MB / 1.76 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/b976ff72f7078723af963baef9726c91c1d6fdf381ee5bde8d54a0a145736d     (25 of 5557) 4.80 MB / 4.80 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/0946a1ce0a49074d4c2a6eaeda1430800de6f2916c6f7907abe6b69d2fdf7e     (26 of 5557) 6.66 MB / 6.66 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/6b162c78527cd187b7333b8dcdb7298bfbaf664c8d26c199f5dc64950a7bc6     (27 of 5557) 1.33 MB / 1.33 MB [=] 100.00% 86.83 MiB/s
~/Downloads/temp/00/e043a950d64b9728aea06dacce02a5300f5ae8c2d5c2f9b325cded61405568     (28 of 5557) 3.65 MB / 3.65 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/e82958b2e96d058f192e909b6ced2a60d6774f2c1bffdd818f8496ab776886     (29 of 5557) 4.23 MB / 4.23 MB [====] 100.00% 0.00 b/s
~/Downloads/temp/00/1804955e2dd00912af5e63a4e7f520b659d60f06fe0322931f469874dc99a7     (30 of 5557) 3.55 MB / 3.55 MB [====] 100.00% 0.00 b/s

Sorry got derailed with other things yesterday.

Strange. I haven’t tested the massive download yet, but I’ve tried running duplicacy’s benchmark command on two storages (native and S3, same bucket) with two different chunk sizes, 4MB and 64MB.

This is what I got:

Chunk size    Native     S3
4 MB          2.24M/s    14.21M/s
64 MB         9.71M/s    14.07M/s
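
A command along these lines reproduces this kind of benchmark (a sketch only: the -chunk-size value is in MB, and the storage names are placeholders for the two storages I added):

duplicacy benchmark -storage storj-native -chunk-size 4 -chunk-count 64
duplicacy benchmark -storage storj-native -chunk-size 64 -chunk-count 4
duplicacy benchmark -storage storj-s3 -chunk-size 4 -chunk-count 64
duplicacy benchmark -storage storj-s3 -chunk-size 64 -chunk-count 4
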
Log from benchmarks
Storage set to storj://12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777/duplicacy/duplicacy
Generating 256.00M byte random data in memory
Writing random data to local disk
Wrote 256.00M bytes in 0.05s: 5240.22M/s
Reading the random data from local disk
Read 256.00M bytes in 0.02s: 10510.50M/s
Split 256.00M bytes into 58 chunks without compression/encryption in 16.88s: 15.17M/s
Split 256.00M bytes into 58 chunks with compression but without encryption in 17.29s: 14.81M/s
Split 256.00M bytes into 58 chunks with compression and encryption in 17.48s: 14.64M/s
Generating 64 chunks
Uploaded 256.00M bytes in 479.37s: 547K/s
Downloaded 256.00M bytes in 114.08s: 2.24M/s
Deleted 64 temporary files from the storage
Storage set to storj://12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777/duplicacy/duplicacy
Generating 256.00M byte random data in memory
Writing random data to local disk
Wrote 256.00M bytes in 0.06s: 4375.03M/s
Reading the random data from local disk
Read 256.00M bytes in 0.02s: 10367.81M/s
Split 256.00M bytes into 2 chunks without compression/encryption in 16.91s: 15.14M/s
Split 256.00M bytes into 2 chunks with compression but without encryption in 17.64s: 14.51M/s
Split 256.00M bytes into 2 chunks with compression and encryption in 17.65s: 14.51M/s
Generating 4 chunks
Uploaded 256.00M bytes in 523.65s: 501K/s
Downloaded 256.00M bytes in 26.37s: 9.71M/s
Deleted 4 temporary files from the storage
Storage set to s3://us-east-1@gateway.storjshare.io/duplicacy/duplicacy-s3
Generating 256.00M byte random data in memory
Writing random data to local disk
Wrote 256.00M bytes in 0.07s: 3673.91M/s
Reading the random data from local disk
Read 256.00M bytes in 0.02s: 10261.98M/s
Split 256.00M bytes into 58 chunks without compression/encryption in 17.27s: 14.83M/s
Split 256.00M bytes into 58 chunks with compression but without encryption in 17.75s: 14.42M/s
Split 256.00M bytes into 58 chunks with compression and encryption in 18.02s: 14.21M/s
Generating 64 chunks
Uploaded 256.00M bytes in 265.33s: 988K/s
Downloaded 256.00M bytes in 104.72s: 2.44M/s
Deleted 64 temporary files from the storage
Storage set to s3://us-east-1@gateway.storjshare.io/duplicacy/duplicacy-s3
Generating 256.00M byte random data in memory
Writing random data to local disk
Wrote 256.00M bytes in 0.13s: 1999.07M/s
Reading the random data from local disk
Read 256.00M bytes in 0.04s: 6269.59M/s
Split 256.00M bytes into 2 chunks without compression/encryption in 17.47s: 14.65M/s
Split 256.00M bytes into 2 chunks with compression but without encryption in 17.87s: 14.33M/s
Split 256.00M bytes into 2 chunks with compression and encryption in 18.19s: 14.07M/s
Generating 4 chunks
Uploaded 256.00M bytes in 199.60s: 1.28M/s
Downloaded 256.00M bytes in 20.38s: 12.56M/s
Deleted 4 temporary files from the storage

All of those are horrific numbers. I’ll try plain downloads and see what happens. Something must be terribly wrong there.

According to this, multiple threads are the way to go, since duplicacy downloads a lot of small files, so segment parallelism does not apply. So I’ll try to replicate that and see if I hit the same connectivity issues with -chunks as you did.

I’m now uploading the larger backup to run the check for longer (it succeeded with 500 chunks), and the modem is not happy.

Maybe the solution is indeed to use 64M chunks and the S3 gateway, since consumer-grade network equipment (my modem and your fiber box) does not seem to be able to handle that many connections.

I’ll wait for my upload to complete, then run the check and report back.

Well this has turned into an interesting investigation! Curious to hear what you find.

Your logs show failures after about 10k chunks; I only managed to get to 2525 chunks, and I ran the check ten times in a row (deleting the verified chunks every time) with 40 threads each time, and did not reproduce the issue.

During each run with the native integration, duplicacy’s memory usage climbed from 2.1 to 3.5GB of RAM. Using the S3 endpoint, memory usage climbed from 1.5GB to 2.3GB.

I’m wondering if there is some sort of leak that affects stability under heavy memory load, so that when you actually go through 10k pieces sequentially, something, perhaps in the storj uplink module, doesn’t handle a failed RAM allocation.

How much RAM do you have on your Unraid server?

Against the S3 endpoint I got about 30MB/s download. With the native integration the download maxed out at 45MB/s, while also maxing out 8 cores on my server (it’s an old one, a Xeon(R) CPU E3-1270 V2). I guess Storj decryption is quite expensive, and it was in fact the CPU that was limiting the throughput.

So, at this point I have 2525 pieces from about 15GB of backed-up data. I can add 100GB more, back that up, and run the check again to see if something horrible happens.

And what about a test using a cloud VM, to take the modem/connection out of the middle?

That’s what using the S3 gateway effectively accomplishes: the gateway runs on a cloud instance and takes care of splitting, encryption, erasure coding, and uploading to random nodes, 80 per segment.

But it’s nevertheless a good idea: if I reproduce it on my network, I will run the check from my Oracle instance too, for completeness.

It’s a Xeon E3-1245 v6 with 32GB. Based on your specs, I think I should be able to match your speeds! Also, I have no VMs running; it’s all containers. No reserved cores or anything.

Right now I’m still sputtering along at 2.97MB/s with 24 days remaining on my check :stuck_out_tongue: