Restart backup after network error

I’ve only just started with duplicacy, so apologies if this is well known (I did search the docs and the forum).

I’m seeding a new backup which will take months. It’s running in a Docker container on TrueNAS Scale (i.e. Debian based). A few hours after starting it, it failed with a network error caused by a power outage. Is there a way to have it automatically retry after (network) errors?

This is the log:

Running backup command from /cache/localhost/0 to back up /PhatShare
Options: [-log backup -storage e2-phat-bucket -threads 4 -stats]
2023-03-26 19:59:40.212 INFO REPOSITORY_SET Repository set to /PhatShare
2023-03-26 19:59:40.246 INFO STORAGE_SET Storage set to s3://e2@r1f4.la.idrivee2-16.com/phat-backup
2023-03-26 19:59:42.340 INFO BACKUP_START Last backup at revision 3 found
2023-03-26 19:59:42.344 INFO BACKUP_INDEXING Indexing /PhatShare
2023-03-26 19:59:42.344 INFO SNAPSHOT_FILTER Parsing filter file /cache/localhost/0/.duplicacy/filters
2023-03-26 19:59:42.344 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2023-03-27 01:07:52.681 ERROR UPLOAD_CHUNK Failed to upload the chunk 19777e9247780adb95b4b0dfc387330fa3956f799a837789e2444080f85e78c1: RequestError: send request failed
caused by: Put "https://phat-backup.r1f4.la.idrivee2-16.com/chunks/19/777e9247780adb95b4b0dfc387330fa3956f799a837789e2444080f85e78c1": write tcp 172.16.0.16:42914->72.26.105.8:443: use of closed network connection
Failed to upload the chunk 19777e9247780adb95b4b0dfc387330fa3956f799a837789e2444080f85e78c1: RequestError: send request failed

As far as I understand, backups pick up from where they left off if you run them again. So if you’re using the CLI, you can set it to run every minute with a cron job (just make sure to use a lock so only one instance of duplicacy runs at a time) and it should keep picking back up; see the sketch below.
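
Here’s a rough idea of what that crontab entry could look like, assuming the duplicacy CLI is on the PATH and reusing the repository path and storage name from your log (adjust both, plus the lock and log file paths, to your setup). flock keeps a new run from starting while the previous one is still going:

# crontab entry: attempt a backup every minute, skip if one is already running
* * * * * cd /cache/localhost/0 && flock -n /tmp/duplicacy-backup.lock duplicacy -log backup -storage e2-phat-bucket -threads 4 -stats >> /var/log/duplicacy-backup.log 2>&1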

Thanks for the suggestion, maybe that’s what I’ll do. It’s an inelegant solution to an obvious problem, though. It’s also a pain to implement in my case because I am running duplicacy in a Docker container on a NAS appliance and I access it through the Web UI.

Setting up very frequent backups in the Web UI is very simple. I’m aware this solution is not exactly elegant, but it should be easy.

This might work during the seeding process. Once the backup is seeded, I only want it to run once a week because the file system scan takes about 30-60 minutes. I think I have the option to get an email notification when it finishes.
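
If I can get a shell inside the container, maybe a simple retry loop would cover the seeding phase instead of a schedule: just re-run the backup until it exits cleanly. This is only a sketch; the repository path and storage name are copied from the log above, and I’m assuming the duplicacy CLI is on the PATH inside the container.

#!/bin/sh
# Retry the backup until it completes successfully (for the initial seeding).
cd /cache/localhost/0 || exit 1
until duplicacy -log backup -storage e2-phat-bucket -threads 4 -stats; do
    echo "Backup failed, retrying in 60 seconds..."
    sleep 60
done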

But to be honest, this seems like a fairly serious deficiency; network errors are not uncommon (especially where I live).

This does not work. Duplicacy fails so frequently that it spends most of its time resyncing. Unless there is another solution, this software isn’t ready for production use and I won’t be trusting it with my data.