Backup failing for Google Cloud Storage

I redid the 2.0.0 release. It should now accept a storage path after the bucket in the GCS storage URL.
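For reference, that means a storage URL of the form gcs://bucket/path/to/storage should now be accepted, rather than only gcs://bucket.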

Seems to be working. I gave it 8 threads and over 5 TB of data, and I'm letting it run to completion this time.

This warning message is a bit odd, but it didn't prevent the backup from running. It almost looks like it tried to scan the .duplicacy directory.

2017-05-16 08:14:53.593 INFO VSS_CREATE Creating a shadow copy for E:
2017-05-16 08:15:03.493 INFO VSS_DONE Shadow copy {15673AD1-7260-48E6-A415-8B782261620C} created
2017-05-16 08:15:03.494 INFO BACKUP_INDEXING Indexing E:
2017-05-16 08:15:03.542 INFO SNAPSHOT_FILTER Loaded 8 include/exclude pattern(s)
2017-05-16 08:19:39.194 INFO LIST_ATTRIBUTES Discarding file attributes
2017-05-16 08:20:53.327 WARN LIST_FAILURE Failed to list subdirectory: open E:/.duplicacy\shadow\/System Volume Information: Access is denied.

Earlier versions of Go have trouble accessing the UNC paths to shadow volumes (such as \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy16), so a symbolic link .duplicacy\shadow has to be created that points to the newly created shadow volume. When it reads E:\/.duplicacy\shadow\/System Volume Information, it is really reading the shadow copy of E:\System Volume Information.
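A minimal sketch of that workaround in Go (the paths here are illustrative, not the actual implementation):

    package main

    import (
        "os"
        "path/filepath"
    )

    func main() {
        repository := `E:\`                                           // repository root
        device := `\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy16\` // UNC path of the new shadow volume
        link := filepath.Join(repository, ".duplicacy", "shadow")

        // Older Go releases cannot open the UNC device path directly, so the
        // backup walks the directory tree through this symbolic link instead.
        os.Remove(link) // drop a stale link left over from a previous run, if any
        if err := os.Symlink(device, link); err != nil {
            panic(err)
        }
    }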

2017-05-16 18:57:31.352 INFO UPLOAD_PROGRESS Uploaded chunk 301210 size 4778347, 35.97MB/s 1 day 07:37:14 25.1%
2017-05-16 18:57:31.687 INFO UPLOAD_PROGRESS Uploaded chunk 301211 size 2300343, 35.97MB/s 1 day 07:37:14 25.1%
2017-05-16 18:57:32.906 INFO UPLOAD_PROGRESS Uploaded chunk 301212 size 6354316, 35.97MB/s 1 day 07:37:16 25.1%
2017-05-16 18:57:33.345 INFO UPLOAD_PROGRESS Uploaded chunk 301213 size 4656704, 35.97MB/s 1 day 07:37:19 25.1%
2017-05-16 18:57:33.707 INFO UPLOAD_PROGRESS Uploaded chunk 301214 size 2680479, 35.97MB/s 1 day 07:37:19 25.1%
2017-05-16 18:57:34.101 INFO UPLOAD_PROGRESS Uploaded chunk 301215 size 6142834, 35.97MB/s 1 day 07:37:21 25.1%
2017-05-16 18:57:34.979 INFO UPLOAD_PROGRESS Uploaded chunk 301216 size 11652732, 35.97MB/s 1 day 07:37:20 25.1%
2017-05-16 18:57:35.811 INFO UPLOAD_PROGRESS Uploaded chunk 301217 size 10109932, 35.97MB/s 1 day 07:37:22 25.1%
2017-05-16 18:57:36.263 INFO UPLOAD_PROGRESS Uploaded chunk 301218 size 4170093, 35.97MB/s 1 day 07:37:24 25.1%
2017-05-16 18:57:37.031 INFO UPLOAD_PROGRESS Uploaded chunk 301219 size 11407268, 35.97MB/s 1 day 07:37:26 25.1%
2017-05-16 18:57:37.451 INFO UPLOAD_PROGRESS Uploaded chunk 301220 size 2712485, 35.97MB/s 1 day 07:37:26 25.1%
2017-05-16 18:57:37.934 INFO UPLOAD_PROGRESS Uploaded chunk 301221 size 4371216, 35.97MB/s 1 day 07:37:25 25.1%
2017-05-16 18:57:38.252 INFO UPLOAD_PROGRESS Uploaded chunk 301222 size 1677083, 35.97MB/s 1 day 07:37:28 25.1%
2017-05-16 18:57:39.348 INFO UPLOAD_PROGRESS Uploaded chunk 301223 size 16777216, 35.97MB/s 1 day 07:37:29 25.1%
2017-05-16 18:57:39.751 INFO UPLOAD_PROGRESS Uploaded chunk 301224 size 4283336, 35.97MB/s 1 day 07:37:29 25.1%
2017-05-16 18:57:40.228 INFO UPLOAD_PROGRESS Uploaded chunk 301225 size 7206354, 35.97MB/s 1 day 07:37:31 25.1%
2017-05-16 18:57:40.579 INFO UPLOAD_PROGRESS Uploaded chunk 301226 size 2676535, 35.97MB/s 1 day 07:37:31 25.1%
2017-05-16 18:57:40.933 INFO UPLOAD_PROGRESS Uploaded chunk 301227 size 2012126, 35.97MB/s 1 day 07:37:31 25.1%
2017-05-16 18:57:41.398 INFO UPLOAD_PROGRESS Uploaded chunk 301228 size 2848430, 35.96MB/s 1 day 07:37:33 25.1%
2017-05-16 18:57:41.729 INFO UPLOAD_PROGRESS Uploaded chunk 301229 size 1334207, 35.96MB/s 1 day 07:37:33 25.1%
2017-05-16 18:57:42.003 INFO UPLOAD_PROGRESS Uploaded chunk 301230 size 1925156, 35.96MB/s 1 day 07:37:36 25.1%
2017-05-16 18:57:43.156 INFO UPLOAD_PROGRESS Uploaded chunk 301231 size 11814879, 35.96MB/s 1 day 07:37:38 25.1%
2017-05-16 18:57:43.719 INFO UPLOAD_PROGRESS Uploaded chunk 301232 size 3451290, 35.96MB/s 1 day 07:37:37 25.1%
2017-05-16 18:57:44.477 INFO UPLOAD_PROGRESS Uploaded chunk 301233 size 9205176, 35.96MB/s 1 day 07:37:39 25.1%
2017-05-16 18:57:44.832 INFO UPLOAD_PROGRESS Uploaded chunk 301234 size 1699071, 35.96MB/s 1 day 07:37:39 25.1%
2017-05-16 18:57:45.168 INFO UPLOAD_PROGRESS Uploaded chunk 301235 size 1564499, 35.96MB/s 1 day 07:37:42 25.1%
2017-05-16 18:57:45.964 INFO UPLOAD_PROGRESS Uploaded chunk 301236 size 9931223, 35.96MB/s 1 day 07:37:41 25.1%
2017-05-16 18:57:46.275 INFO UPLOAD_PROGRESS Uploaded chunk 301237 size 1266517, 35.96MB/s 1 day 07:37:44 25.1%
2017-05-16 18:57:47.102 INFO UPLOAD_PROGRESS Uploaded chunk 301238 size 14956466, 35.96MB/s 1 day 07:37:45 25.1%
2017-05-16 18:57:47.524 INFO UPLOAD_PROGRESS Uploaded chunk 301239 size 5600455, 35.96MB/s 1 day 07:37:44 25.1%
2017-05-16 18:57:48.651 INFO UPLOAD_PROGRESS Uploaded chunk 301240 size 16777216, 35.96MB/s 1 day 07:37:46 25.1%
2017-05-16 18:57:49.250 INFO UPLOAD_PROGRESS Uploaded chunk 301241 size 8099832, 35.96MB/s 1 day 07:37:48 25.1%
2017-05-16 18:57:49.556 INFO UPLOAD_PROGRESS Uploaded chunk 301242 size 1375306, 35.96MB/s 1 day 07:37:48 25.1%
2017-05-16 18:57:49.895 INFO UPLOAD_PROGRESS Uploaded chunk 301243 size 2291442, 35.96MB/s 1 day 07:37:47 25.1%
2017-05-16 18:57:50.226 INFO UPLOAD_PROGRESS Uploaded chunk 301244 size 2074003, 35.96MB/s 1 day 07:37:50 25.1%
2017-05-16 18:57:50.699 INFO UPLOAD_PROGRESS Uploaded chunk 301245 size 6310147, 35.96MB/s 1 day 07:37:49 25.1%
2017-05-16 18:57:51.128 INFO UPLOAD_PROGRESS Uploaded chunk 301246 size 6102071, 35.96MB/s 1 day 07:37:52 25.1%
2017-05-16 18:57:51.545 INFO UPLOAD_PROGRESS Uploaded chunk 301247 size 3112765, 35.96MB/s 1 day 07:37:52 25.1%
2017-05-16 18:57:51.860 INFO UPLOAD_PROGRESS Uploaded chunk 301248 size 2296881, 35.96MB/s 1 day 07:37:51 25.1%
2017-05-16 18:57:52.269 INFO UPLOAD_PROGRESS Uploaded chunk 301249 size 2276301, 35.96MB/s 1 day 07:37:54 25.1%
2017-05-16 18:57:52.693 INFO UPLOAD_PROGRESS Uploaded chunk 301250 size 5147593, 35.96MB/s 1 day 07:37:53 25.1%
2017-05-16 18:57:53.049 INFO UPLOAD_PROGRESS Uploaded chunk 301251 size 3391892, 35.96MB/s 1 day 07:37:56 25.1%
2017-05-16 18:57:53.919 INFO UPLOAD_PROGRESS Uploaded chunk 301252 size 14985564, 35.96MB/s 1 day 07:37:54 25.1%
2017-05-16 18:57:54.371 INFO UPLOAD_PROGRESS Uploaded chunk 301253 size 3663569, 35.96MB/s 1 day 07:37:57 25.1%
2017-05-16 18:57:54.660 INFO UPLOAD_PROGRESS Uploaded chunk 301254 size 1231399, 35.96MB/s 1 day 07:37:57 25.1%
2017-05-16 18:57:55.354 INFO UPLOAD_PROGRESS Uploaded chunk 301255 size 8671454, 35.96MB/s 1 day 07:37:59 25.1%
2017-05-16 18:57:55.700 INFO UPLOAD_PROGRESS Uploaded chunk 301256 size 2874809, 35.96MB/s 1 day 07:37:59 25.1%
2017-05-16 18:57:56.028 INFO UPLOAD_PROGRESS Uploaded chunk 301257 size 1678656, 35.95MB/s 1 day 07:38:02 25.1%
2017-05-16 18:57:56.620 INFO UPLOAD_PROGRESS Uploaded chunk 301258 size 6741827, 35.95MB/s 1 day 07:38:01 25.1%
2017-05-16 18:57:56.917 INFO UPLOAD_PROGRESS Uploaded chunk 301259 size 2216930, 35.95MB/s 1 day 07:38:01 25.1%
2017-05-16 18:57:57.072 ERROR UPLOAD_CHUNK Failed to upload the chunk 3c8bc56bb9bbd558583dc50ff822647beb27107e1e261310ee08d1faec545ea3: googleapi: Error 503: Backend Error, backendError

It ran for about 10 hours before hitting this error (see the log above). The first uploaded chunk came about 5 hours before the error; everything before that point was skipped.

2017-05-16 13:58:40.095 INFO UPLOAD_PROGRESS Skipped chunk 268774 size 6890693, 59.92MB/s 19:43:18 22.2%
2017-05-16 13:58:40.180 INFO UPLOAD_PROGRESS Skipped chunk 268775 size 4136196, 59.92MB/s 19:43:18 22.2%
2017-05-16 13:58:40.231 INFO UPLOAD_PROGRESS Skipped chunk 268776 size 4459409, 59.92MB/s 19:43:18 22.2%
2017-05-16 13:58:40.432 INFO UPLOAD_PROGRESS Skipped chunk 268777 size 11993642, 59.92MB/s 19:43:17 22.2%
2017-05-16 13:58:40.582 INFO UPLOAD_PROGRESS Skipped chunk 268778 size 9719979, 59.92MB/s 19:43:16 22.2%
2017-05-16 13:58:40.679 INFO UPLOAD_PROGRESS Skipped chunk 268779 size 6204391, 59.92MB/s 19:43:16 22.2%
2017-05-16 13:58:40.731 INFO UPLOAD_PROGRESS Skipped chunk 268780 size 3143608, 59.92MB/s 19:43:16 22.2%
2017-05-16 13:58:40.805 INFO UPLOAD_PROGRESS Skipped chunk 268781 size 5303558, 59.92MB/s 19:43:15 22.2%
2017-05-16 13:58:40.849 INFO UPLOAD_PROGRESS Skipped chunk 268782 size 2129622, 59.92MB/s 19:43:15 22.2%
2017-05-16 13:58:40.891 INFO UPLOAD_PROGRESS Skipped chunk 268783 size 3417456, 59.92MB/s 19:43:15 22.2%
2017-05-16 13:58:40.950 INFO UPLOAD_PROGRESS Skipped chunk 268784 size 7167851, 59.92MB/s 19:43:14 22.2%
2017-05-16 13:58:41.026 INFO UPLOAD_PROGRESS Skipped chunk 268785 size 5628600, 59.92MB/s 19:43:17 22.2%
2017-05-16 13:58:41.152 INFO UPLOAD_PROGRESS Skipped chunk 268786 size 9803939, 59.92MB/s 19:43:17 22.2%
2017-05-16 13:58:41.197 INFO UPLOAD_PROGRESS Skipped chunk 268787 size 1859756, 59.92MB/s 19:43:17 22.2%
2017-05-16 13:58:41.247 INFO UPLOAD_PROGRESS Skipped chunk 268788 size 4054580, 59.92MB/s 19:43:16 22.2%
2017-05-16 13:58:41.356 INFO UPLOAD_PROGRESS Skipped chunk 268789 size 10461919, 59.92MB/s 19:43:16 22.2%
2017-05-16 13:58:41.610 INFO UPLOAD_PROGRESS Skipped chunk 268790 size 9315676, 59.92MB/s 19:43:15 22.2%
2017-05-16 13:58:41.703 INFO UPLOAD_PROGRESS Skipped chunk 268791 size 7178719, 59.92MB/s 19:43:14 22.2%
2017-05-16 13:58:41.750 INFO UPLOAD_PROGRESS Skipped chunk 268792 size 2199389, 59.92MB/s 19:43:14 22.2%
2017-05-16 13:58:41.803 INFO UPLOAD_PROGRESS Skipped chunk 268793 size 2164017, 59.92MB/s 19:43:14 22.2%
2017-05-16 13:58:41.847 INFO UPLOAD_PROGRESS Skipped chunk 268794 size 5326768, 59.92MB/s 19:43:14 22.2%
2017-05-16 13:58:42.405 INFO UPLOAD_PROGRESS Uploaded chunk 268795 size 3133691, 59.92MB/s 19:43:17 22.2%
2017-05-16 13:58:42.980 INFO UPLOAD_PROGRESS Uploaded chunk 268796 size 6482551, 59.92MB/s 19:43:16 22.2%
2017-05-16 13:58:43.388 INFO UPLOAD_PROGRESS Uploaded chunk 268797 size 2773082, 59.92MB/s 19:43:20 22.2%
2017-05-16 13:58:43.751 INFO UPLOAD_PROGRESS Uploaded chunk 268798 size 1462498, 59.92MB/s 19:43:20 22.2%
2017-05-16 13:58:44.112 INFO UPLOAD_PROGRESS Uploaded chunk 268799 size 1810903, 59.92MB/s 19:43:23 22.2%
2017-05-16 13:58:44.627 INFO UPLOAD_PROGRESS Uploaded chunk 268800 size 3987189, 59.92MB/s 19:43:23 22.2%
2017-05-16 13:58:45.239 INFO UPLOAD_PROGRESS Uploaded chunk 268801 size 5770960, 59.91MB/s 19:43:26 22.2%
2017-05-16 13:58:45.729 INFO UPLOAD_PROGRESS Uploaded chunk 268802 size 3485746, 59.91MB/s 19:43:26 22.2%

Although the official GCS client library claims that a retry mechanism is in place for all methods, after some code digging I came to the conclusion that at least the upload method doesn't actually retry, because of the way the code is written.

I'll add retry logic to the outer layer of the GCS storage backend. It should be done by the end of tomorrow.
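The shape of it will be roughly this: a minimal sketch with a placeholder uploadChunk and made-up limits, not the actual backend code:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // uploadChunk stands in for the real GCS upload call.
    var uploadChunk func(chunkID string, data []byte) error

    // uploadWithRetry retries transient failures such as the 503 backendError
    // above, with exponential backoff plus jitter, up to maxTries attempts.
    func uploadWithRetry(chunkID string, data []byte, maxTries int) error {
        backoff := time.Second
        var err error
        for i := 0; i < maxTries; i++ {
            if err = uploadChunk(chunkID, data); err == nil {
                return nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("chunk %s failed (%v); retrying in %v\n", chunkID, err, sleep)
            time.Sleep(sleep)
            backoff *= 2 // double the base delay after each failure
        }
        return err
    }

    func main() {
        tries := 0
        uploadChunk = func(id string, data []byte) error { // fails twice, then succeeds
            tries++
            if tries < 3 {
                return fmt.Errorf("googleapi: Error 503: Backend Error")
            }
            return nil
        }
        _ = uploadWithRetry("example-chunk", nil, 5)
    }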

I thought it was weird when I got that error, because one of the first paragraphs of their documentation says that all of their requests implement an infinite retry. Thanks for looking into it.

On another note… unfortunately, it's hard to get a grasp of how fast this application is and how long it has left, since I haven't gotten it to complete on my larger data set yet.

I did a basic verify and it thinks I have 200 GB uploaded, while GCP reports 1.21 TiB. I'm guessing that's because each time it fails and restarts, it removes the incomplete snapshot and starts over, skipping existing chunks with the pre-upload check. This throws off the time estimate quite a bit. It also makes a retry on such a large dataset quite slow: I have to wait through a lot of skipped chunks before it starts uploading again.

From what I can tell from my smaller backup, subsequent backups won't spend that much time checking every single chunk; in fact, it doesn't seem to do that at all. Can you confirm this behavior? I'd hate to get all of this uploaded only to find out that each backup takes a couple of days.

Assuming that backups after the initial one are faster, would it be possible to support resuming partial backups, whether they were interrupted by a failure, an OS restart, or something else? I suppose the chunks that were still in progress would have to be re-checked/uploaded in case they didn't finish, but it would make this type of situation much faster.

Lastly, with this much data, should I consider changing the chunk size? This may be subject to opinion and situation. I have a mix of many small files as well as some extra-large files and every size in between (probably up to ~100 GB for a single file). I don't mind starting the upload over if it would be better; I was thinking about doing that anyway to get an idea of how fast the backup is.

If the previous backup finished successfully, Duplicacy will just check the timestamp and size of each file to see whether it changed since the last backup. It will then run the chunking algorithm on the changed files only and upload the chunks created from them. This is the default behavior unless the -hash option is specified.
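As a toy illustration of that check (simplified types, not the actual snapshot format):

    package main

    import (
        "fmt"
        "time"
    )

    // FileEntry is a simplified record from the last completed snapshot.
    type FileEntry struct {
        Size    int64
        ModTime time.Time
    }

    // changedSince reports whether a file must be re-chunked: it is new,
    // or its size or timestamp differs from the previous snapshot.
    func changedSince(previous map[string]FileEntry, path string, size int64, modTime time.Time) bool {
        old, ok := previous[path]
        if !ok {
            return true
        }
        return old.Size != size || !old.ModTime.Equal(modTime)
    }

    func main() {
        prev := map[string]FileEntry{"a.txt": {10, time.Unix(100, 0)}}
        fmt.Println(changedSince(prev, "a.txt", 10, time.Unix(100, 0))) // false: skip re-chunking
        fmt.Println(changedSince(prev, "a.txt", 12, time.Unix(200, 0))) // true: re-chunk and upload
    }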

However, if all previous backups failed for some reason, Duplicacy has to run the chunking algorithm on every file. This is unfortunate: as you can see from your tests, a considerable amount of time is spent just recreating all the chunks, only to find out they had already been uploaded. The solution I have in mind is to save the incomplete snapshot locally, recording which files have been scanned and uploaded. Then, when you specify a -resume option, it will load this incomplete snapshot and work only on files that are missing from it or different from what it recorded.
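A -resume would then start by loading that local file; this sketch invents the file name and format purely for illustration:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // loadIncomplete reads the locally saved incomplete snapshot, if one
    // exists, and returns the set of files already scanned and uploaded.
    func loadIncomplete(path string) map[string]bool {
        done := map[string]bool{}
        data, err := os.ReadFile(path)
        if err != nil {
            return done // no incomplete snapshot: nothing to skip
        }
        var files []string
        if json.Unmarshal(data, &files) == nil {
            for _, f := range files {
                done[f] = true
            }
        }
        return done
    }

    func main() {
        done := loadIncomplete(".duplicacy/incomplete") // hypothetical location
        fmt.Println(len(done), "files can be skipped without re-chunking")
    }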

I uploaded a new 2.0.0 build that implements the retry logic for uploading chunks. Please give it a try.

It's running now. I'm guessing it will be about 5 hours again before it starts uploading.

One thing I noticed that is not really a big deal, but it may be worth noting in the documentation/guide what the limitations are (if any):
I upped the thread count from 8 to 12 to try to get past the skipped-chunks phase faster, and it actually slowed down significantly: I went from 10-20 chunks per second to 1-10. Looking at Task Manager, it seemed that only 1 core was being used at a time with 12 threads specified, but when I changed it back to 8, several were in use again.
12 is completely unnecessary and I don’t think I want to use that much processing power anyway, but I thought I’d let you know what I observed.

Did you have any recommendation on chunk size? Should I leave it at the default? TBH, it's a bit hard to conceptualize how variable-size chunking even works in the deduplication process.

Maybe I should have just let it keep running at 12; 8 eventually did the same thing (slowed down and started using only about 20% of 1 core). It's probably just processing a larger file or something when that happens, and only 1 thread can access the file at a time, so the others sort of wait. So maybe there is no problem there.

The -threads option only sets the number of uploading threads. There is only one chunking thread that splits files into chunks and then dispatches them to the uploading threads. Therefore, I doubt 8 or 12 uploading threads would make a big difference when all chunks have been uploaded before.
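The pipeline is essentially single-producer/multi-consumer. A rough sketch with invented names (the actual code is more involved):

    package main

    import (
        "fmt"
        "sync"
    )

    type Chunk struct {
        ID   string
        Data []byte
    }

    func run(threads int, produce func(chan<- Chunk), upload func(Chunk)) {
        chunks := make(chan Chunk, threads)
        var wg sync.WaitGroup

        // The uploading workers, whose count is set by -threads.
        for i := 0; i < threads; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for c := range chunks {
                    upload(c)
                }
            }()
        }

        // The single chunking thread. When every chunk turns out to be a
        // skip, this producer is the bottleneck and extra workers sit idle.
        produce(chunks)
        close(chunks)
        wg.Wait()
    }

    func main() {
        run(8,
            func(out chan<- Chunk) { // producer: splits files into chunks
                for i := 0; i < 4; i++ {
                    out <- Chunk{ID: fmt.Sprint(i)}
                }
            },
            func(c Chunk) { fmt.Println("uploaded chunk", c.ID) })
    }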

The number one reason to use a large chunk size is to reduce the overhead of uploading chunks. This is because Duplicacy performs 2 API calls for each new chunk: one to check its existence and one to actually upload it. So the larger the chunk, the fewer API calls are needed per gigabyte, and the larger the share of time spent actually uploading data rather than on per-chunk overhead.
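As a back-of-the-envelope illustration (assumed numbers, not measurements): 5 TB at an average chunk size of 4 MB is about 1.25 million chunks; at 2 API calls per chunk and, say, 100 ms per round trip, that is roughly 70 hours of pure request latency if performed serially. That per-chunk overhead is what larger chunks, and the uploading threads, amortize.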

In your case you can use multiple threads to take advantage of your fast uplink, so I don't think you need to increase the chunk size. On the other hand, if you have many files smaller than the default chunk size that change frequently, you may consider decreasing the chunk size instead. Duplicacy may pack multiple small files into a single chunk (think of it as packing all files, large or small, into one huge tar file and then running the variable-size chunking algorithm on that tar file), so with large chunks these small files will not see the benefit of deduplication between changes.
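To give some intuition for the variable-size part: chunk boundaries are decided by the content itself via a rolling checksum, so inserting bytes early in the "tar stream" only changes the chunks it touches rather than shifting every chunk after it. A toy version of the idea (Duplicacy's real algorithm and parameters differ):

    package main

    import "fmt"

    // boundaries cuts the data wherever a rolling sum of the last `window`
    // bytes matches a bit pattern; content, not position, picks the cuts.
    func boundaries(data []byte, window int, mask uint32) []int {
        var cuts []int
        var sum uint32
        for i, b := range data {
            sum += uint32(b)
            if i >= window {
                sum -= uint32(data[i-window]) // slide the window forward
            }
            if sum&mask == mask { // boundary condition met
                cuts = append(cuts, i+1)
            }
        }
        return cuts
    }

    func main() {
        data := []byte("some sample content, long enough for the toy checksum to find a few cut points somewhere")
        fmt.Println(boundaries(data, 16, 0x1F)) // a smaller mask yields smaller average chunks
    }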

That makes sense. Thanks for the explanation. Without consistent data sizes and usage patterns, I guess I'm kind of in limbo. I'll just stick with the default for now. If I'm able to achieve a good amount of deduplication, then it'll be worth it. If not, I may switch to a larger chunk size and re-upload after some testing, while I still have some GCP trial credit. I may create 64 GB of data in one sitting, and the faster that gets backed up the better. We'll see.

It seems to be working now. It has been running since Friday and still has a day and a half left (it's Sunday now), with 8 threads on a gigabit connection. Not the fastest backup I've tried, but quite possibly the fastest I've tried at this level of deduplication. We'll see when it finishes.

2017-05-21 16:44:00.530 INFO UPLOAD_PROGRESS Skipped chunk 714171 size 16777216, 17.23MB/s 1 day 12:48:47 58.2%

Another thought occurs to me, and perhaps you can provide some insight. Since we know the program is working now, how much of a speed increase do you think we would see if I turned off the detailed logging?

Even though I am using multiple threads, there has to be some point in the application where a context switch takes place to write to the log file (edit: to the screen, which I am piping to a log file). There are a number of techniques that could be used to speed this up, but from my own experience, something like that would be implemented further along in the incremental development process.
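For instance, one of the cheaper techniques I'm thinking of is buffering the output so each log line is a memory copy rather than a separate write syscall; a sketch of the idea, not a claim about how Duplicacy currently logs:

    package main

    import (
        "bufio"
        "os"
    )

    func main() {
        // Buffer stdout; lines accumulate in memory and flush in large writes.
        w := bufio.NewWriterSize(os.Stdout, 1<<16)
        defer w.Flush()
        for i := 0; i < 1000; i++ {
            w.WriteString("INFO UPLOAD_PROGRESS ...\n") // stand-in for a real log line
        }
    }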

Do you have any thoughts or experience on how this would affect backup speeds for the current iteration?

PS
Thank you for your excellent level of support, and for putting up with my constant curiosity about the inner workings and my abrasive opinions and questions. Obviously my interest in the program you have built goes beyond getting my data backed up. I can't wait to get my hands on the source code.

2017-05-22 10:43:54.064 INFO UPLOAD_PROGRESS Uploaded chunk 846122 size 2431489, 15.30MB/s 1 day 05:58:09 69.8%
2017-05-22 10:43:54.555 INFO UPLOAD_PROGRESS Uploaded chunk 846123 size 4517584, 15.30MB/s 1 day 05:58:09 69.8%
2017-05-22 10:43:55.305 INFO UPLOAD_PROGRESS Uploaded chunk 846124 size 5912045, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:55.892 INFO UPLOAD_PROGRESS Uploaded chunk 846125 size 2124090, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:56.322 INFO UPLOAD_PROGRESS Uploaded chunk 846126 size 1430229, 15.30MB/s 1 day 05:58:09 69.8%
2017-05-22 10:43:56.764 INFO UPLOAD_PROGRESS Uploaded chunk 846127 size 5855485, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:57.201 INFO UPLOAD_PROGRESS Uploaded chunk 846128 size 1945520, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:57.678 INFO UPLOAD_PROGRESS Uploaded chunk 846129 size 4581043, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:58.094 INFO UPLOAD_PROGRESS Uploaded chunk 846130 size 5970921, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:58.406 INFO UPLOAD_PROGRESS Uploaded chunk 846131 size 2172488, 15.30MB/s 1 day 05:58:08 69.8%
2017-05-22 10:43:58.806 INFO UPLOAD_PROGRESS Uploaded chunk 846132 size 2962683, 15.30MB/s 1 day 05:58:07 69.8%
2017-05-22 10:43:59.287 INFO UPLOAD_PROGRESS Uploaded chunk 846133 size 4371412, 15.30MB/s 1 day 05:58:07 69.8%
2017-05-22 10:43:59.678 INFO UPLOAD_PROGRESS Uploaded chunk 846134 size 4819426, 15.30MB/s 1 day 05:58:07 69.8%
2017-05-22 10:44:00.157 INFO UPLOAD_PROGRESS Uploaded chunk 846135 size 5437442, 15.30MB/s 1 day 05:58:07 69.8%
2017-05-22 10:44:01.048 INFO UPLOAD_PROGRESS Uploaded chunk 846136 size 13229446, 15.30MB/s 1 day 05:58:06 69.8%
2017-05-22 10:44:01.449 INFO UPLOAD_PROGRESS Uploaded chunk 846137 size 4054628, 15.30MB/s 1 day 05:58:06 69.8%
2017-05-22 10:44:01.900 INFO UPLOAD_PROGRESS Uploaded chunk 846138 size 1072854, 15.30MB/s 1 day 05:58:06 69.8%
2017-05-22 10:44:02.522 INFO UPLOAD_PROGRESS Uploaded chunk 846139 size 7562051, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:02.994 INFO UPLOAD_PROGRESS Uploaded chunk 846140 size 2883132, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:03.398 INFO UPLOAD_PROGRESS Uploaded chunk 846141 size 3689545, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:03.753 INFO UPLOAD_PROGRESS Uploaded chunk 846142 size 1576206, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:04.233 INFO UPLOAD_PROGRESS Uploaded chunk 846143 size 3350373, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:04.584 INFO UPLOAD_PROGRESS Uploaded chunk 846144 size 2011109, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:04.928 INFO UPLOAD_PROGRESS Uploaded chunk 846145 size 2492654, 15.30MB/s 1 day 05:58:05 69.8%
2017-05-22 10:44:05.260 INFO UPLOAD_PROGRESS Uploaded chunk 846146 size 1675592, 15.30MB/s 1 day 05:58:05 69.8%

I don't think detailed logging would slow down the backup to a noticeable degree, although I have no data to back up this claim.

The upload speed is a bit slow for a 1 Gbps uplink. I'll add performance profiling to my to-do list.

The source code is now available from the GitHub repository.

Oh, how exciting. I'll most likely have to wait until this weekend to look at it.

FACEPALM But it’s so close…

2017-05-23 11:59:28.582 INFO UPLOAD_PROGRESS Uploaded chunk 1112591 size 10407000, 14.82MB/s 07:56:32 92.2%
2017-05-23 11:59:29.279 INFO UPLOAD_PROGRESS Uploaded chunk 1112592 size 5680803, 14.82MB/s 07:56:32 92.2%
2017-05-23 11:59:29.579 INFO UPLOAD_PROGRESS Uploaded chunk 1112593 size 3775219, 14.82MB/s 07:56:32 92.2%
2017-05-23 11:59:29.907 INFO UPLOAD_PROGRESS Uploaded chunk 1112594 size 3586406, 14.82MB/s 07:56:32 92.2%
2017-05-23 11:59:30.399 INFO UPLOAD_PROGRESS Uploaded chunk 1112595 size 10377805, 14.82MB/s 07:56:31 92.2%
2017-05-23 11:59:30.529 ERROR CHUNK_MAKER Failed to read 0 bytes: read E:\/.duplicacy\shadow\\/Shares/Access/VirtualMachines/MINTMINE/MINTMINE-Snapshot1.vmem: The device is not ready.
2017-05-23 11:59:30.669 INFO UPLOAD_PROGRESS Uploaded chunk 1112596 size 1966714, 14.82MB/s 07:56:31 92.2%
2017-05-23 12:01:01.273 INFO VSS_DELETE The shadow copy has been successfully deleted

There is probably no need to back up the .vmem file. I'm not sure whether this was a VSS issue (i.e., the VM hypervisor didn't implement the VSS writer correctly) or whether that kind of file simply isn't readable even with VSS enabled. In any case, you can add an exclude pattern -*.vmem to .duplicacy/filters and try again.
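That is, the filters file would keep your existing patterns and gain one new line (one pattern per line; a leading - excludes, a leading + includes):

    -*.vmem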

I suppose this will work as a temporary solution. None of my other backup solutions have had a problem with this though. I don’t know how they handle it.