"Empty", bad revisions cause very long backup times - after network failure followed by recovery

Today, instead of the usual 15 minutes, my normal backup took 8 or 9 hours.
I investigated and found that the network share being backed up had been inaccessible for a couple of days; then the NAS was restarted and came back online. While the share was offline, the logs show this:

2019-04-15 02:16:27.401 INFO REPOSITORY_SET Repository set to S:/all-photos
2019-04-15 02:16:27.402 INFO STORAGE_SET Storage set to one://z-duplicacy-backups
2019-04-15 02:16:30.145 INFO BACKUP_START Last backup at revision 22 found
2019-04-15 02:16:30.145 INFO BACKUP_INDEXING Indexing S:\all-photos
2019-04-15 02:18:30.364 WARN LIST_FAILURE Failed to list subdirectory: open S:\all-photos: The specified network name is no longer available.
2019-04-15 02:18:30.720 WARN SKIP_DIRECTORY Subdirectory  cannot be listed
2019-04-15 02:18:30.738 INFO BACKUP_END Backup for S:\all-photos at revision 23 completed
2019-04-15 02:18:30.738 INFO BACKUP_STATS Files: 0 total, 0 bytes; 0 new, 0 bytes
2019-04-15 02:18:30.738 INFO BACKUP_STATS File chunks: 0 total, 0 bytes; 0 new, 0 bytes, 0 bytes uploaded
2019-04-15 02:18:30.738 INFO BACKUP_STATS Metadata chunks: 3 total, 8 bytes; 0 new, 0 bytes, 0 bytes uploaded
2019-04-15 02:18:30.738 INFO BACKUP_STATS All chunks: 3 total, 8 bytes; 0 new, 0 bytes, 0 bytes uploaded
2019-04-15 02:18:30.738 INFO BACKUP_STATS Total running time: 00:02:01
2019-04-15 02:18:30.738 WARN BACKUP_SKIPPED 1 directory was not included due to access errors

Why was revision 23 even created in this case? Those “bad” revisions (22, 23, 24) are useless anyway, and they forced a full rescan for revision 25. Log below.

               snap | rev |                          | files |    bytes | chunks |    bytes |  uniq |    bytes |   new |    bytes |
 SY-home-all-photos |   1 | @ 2019-03-24 02:02 -hash |  5114 |   3,221M |    612 |   3,134M |     3 |     511K |   612 |   3,134M |
 SY-home-all-photos |   8 | @ 2019-03-31 02:02       | 32158 |  71,717M |  14579 |  71,281M |     3 |   3,371K | 13970 |  68,147M |
 SY-home-all-photos |  15 | @ 2019-04-07 02:02       | 49108 | 220,990M |  44953 | 220,356M |   160 |   1,058M | 30377 | 149,078M |
 SY-home-all-photos |  22 | @ 2019-04-14 02:15       |       |          |      2 |      588 |     0 |        0 |     2 |      588 |
 SY-home-all-photos |  23 | @ 2019-04-15 02:16       |       |          |      2 |      588 |     0 |        0 |     0 |        0 |
 SY-home-all-photos |  24 | @ 2019-04-16 02:16       |       |          |      2 |      588 |     0 |        0 |     0 |        0 |
 SY-home-all-photos |  25 | @ 2019-04-17 08:43       | 49109 | 220,958M |  44937 | 220,262M |   184 |   1,162M |   192 |   1,204M |
 SY-home-all-photos | all |                          |       |          |  45153 | 221,565M | 29080 | 143,870M |       |          |
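
To be clear about why the rescan happens: as far as I understand, an incremental backup skips files whose size and timestamp match the previous revision’s file list, so the empty list from revision 24 matched nothing and every file had to be re-read and re-chunked (though, judging by the low “new” chunk count for revision 25, almost nothing was re-uploaded). A rough sketch of that comparison, not Duplicacy’s actual code:

    package main

    import "fmt"

    type fileEntry struct {
        path  string
        size  int64
        mtime int64
    }

    // filesToRead returns the files that must be re-read and re-chunked:
    // anything not matching the last revision's file list by size/mtime.
    func filesToRead(current []fileEntry, last map[string]fileEntry) []fileEntry {
        var out []fileEntry
        for _, f := range current {
            prev, ok := last[f.path]
            if !ok || prev.size != f.size || prev.mtime != f.mtime {
                out = append(out, f) // new or changed -> re-read
            }
        }
        return out
    }

    func main() {
        current := []fileEntry{{"a.jpg", 100, 1}, {"b.jpg", 200, 2}}
        empty := map[string]fileEntry{} // revision 24 listed zero files
        fmt.Println(len(filesToRead(current, empty)), "files must be re-read")
    }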

I believe that instead of a WARN on “SKIP_DIRECTORY Subdirectory cannot be listed”, this should be a full stop and a failure?

It does indeed seem that if a skipped directory isn’t actually a subdirectory but in fact the root directory of a repository, then Duplicacy should probably abort with an error instead of just warning!

TBH, it might also make sense to have some kind of configurable threshold (say, 100 files/folders by default) where the backup aborts if more than that number of entries is skipped. Something like the sketch below.
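
A rough illustration of both ideas; every name here (the skipped counter, the maxSkipped limit, the option) is made up for the example, none of it is real Duplicacy code:

    package main

    import (
        "fmt"
        "log"
    )

    // Hypothetical default; imagine it exposed as a backup option.
    const defaultMaxSkipped = 100

    type indexer struct {
        root       string
        skipped    int
        maxSkipped int
    }

    // recordSkip is what SKIP_DIRECTORY could do instead of only warning.
    func (x *indexer) recordSkip(dir string) error {
        if dir == "" || dir == x.root {
            // The repository root itself is unreadable: nothing sensible
            // can be backed up, so fail instead of writing an empty revision.
            return fmt.Errorf("repository root %q cannot be listed; aborting backup", x.root)
        }
        x.skipped++
        log.Printf("skipping %s: cannot be listed", dir)
        if x.skipped > x.maxSkipped {
            return fmt.Errorf("%d entries skipped due to access errors (limit %d); aborting backup",
                x.skipped, x.maxSkipped)
        }
        return nil
    }

    func main() {
        x := &indexer{root: `S:\all-photos`, maxSkipped: defaultMaxSkipped}
        // In the log above the skipped "subdirectory" name is empty, i.e. the root.
        if err := x.recordSkip(""); err != nil {
            log.Fatal(err) // backup fails instead of completing with 0 files
        }
    }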


Is this the same issue as this? I found that, as well as the skipped folder, any other folders/mounts present in the same folder as the failed mount are also skipped. When the failed mount was in the root folder, everything else got skipped too and you got an empty backup, with the following backup needing to re-chunk everything from scratch - in my case taking a couple of days to complete.

My temporary workaround until it’s fixed is to first run a script to make sure all network mounts are active. Since doing that first, I’ve now gone several months without another empty backup.
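
The check itself is trivial. Here is a minimal standalone sketch of the idea (the path is just the example from the log above; adjust to your setup). It treats an unreadable or empty repository root as “not mounted” and refuses to start the backup:

    package main

    import (
        "log"
        "os"
    )

    // mountIsUsable reports whether path can be listed and is non-empty.
    // An empty directory often means the share/mount is not attached.
    func mountIsUsable(path string) bool {
        entries, err := os.ReadDir(path)
        if err != nil {
            log.Printf("cannot list %s: %v", path, err)
            return false
        }
        return len(entries) > 0
    }

    func main() {
        const repoRoot = `S:\all-photos` // example path from the log above
        if !mountIsUsable(repoRoot) {
            log.Fatalf("mount check failed for %s; skipping backup run", repoRoot)
        }
        log.Printf("%s looks mounted; safe to run duplicacy backup", repoRoot)
        // e.g. exec.Command("duplicacy", "backup").Run()
    }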