Struggling with stability during backup operation

I’ve been building a home OMV6-based NAS, have set up 4 different B2 buckets, and am using Duplicacy in Docker. 3 of the 4 have completed their initial uploads without any issues, but the 4th and largest, for my music media, won’t.

The ~540GB backup should take about 35 hours, but after ~21 hours it stops making progress, seemingly because Duplicacy has crashed. I’m using saspus/duplicacy-web:mini (1.6.3).

I’ve tried purging the container, cache, and bucket and restarting the backup several times, to no avail. Every time it reaches approximately 325-326GB the web UI stops updating status. The web log continues to show /get_backup_status entries, but nothing appears to be updating. The backup log ends with a routine entry and nothing unusual.

At the point it stops:
If I catch it soon after it happens, my dashboard shows iowait consuming a whole core’s worth of time, and at that point I can still stop the Docker container. On a few occasions when I’ve caught it a bit later, the container has crashed and I can’t stop or kill it. Worse, I’ve seen in my logs that the entire system locks up and reboots. I’ve set the Duplicacy container not to restart automatically, so when I come back I can see that the system rebooted. If I restart the backup, it finishes the initial indexing and then crashes again, seemingly at the point where it would normally resume uploading.

I’ve tried adding the -debug switch to see if there’s anything amiss, but I’m not seeing anything unusual.

Thanks in advance for any assistance, tips, or recommendations!

How much RAM does your NAS have? Could be that it’s running out of memory and crashing. You should be able to shell into the container and check the running processes…
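For example, something along these lines - the container name duplicacy-web is just a placeholder, so substitute whatever yours is called:

```
# From the host: live CPU and memory usage for the container
docker stats duplicacy-web

# Overall memory and swap usage on the host
free -h

# Shell into the container and watch its processes
docker exec -it duplicacy-web sh
top
```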

Obviously it’d be wise to get to the bottom of what’s happening, but if resource usage is the problem, you might be able to mitigate it by completing the initial backup with a subset of files - either through filtering (see the sketch below), or by moving files temporarily out of the way.

Once the initial backup completes, add the rest back in and try again. Incremental backups tend not to use as many resources.
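If you go the filtering route, the per-backup include/exclude patterns should do it. A minimal sketch, assuming there are a couple of large subdirectories you can defer until later - the directory names are made up, so check the Duplicacy filter documentation for the exact pattern semantics:

```
-Lossless/
-Concert Recordings/
```

With only exclude patterns listed, everything not excluded should still get backed up; once the initial run completes, remove the patterns and run the backup again to pick up the rest.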

16GB RAM. On prior versions of Duplicacy I noticed that it used to fill up my swap file during indexing, so I now use an oversized (16GB) swap. I haven’t taken notice of memory usage lately, but I’ll have a look now.

Good thought - I’ll create a backup job that hits only half the files tonight and see if I can keep going from there.

Are you sure you’re not running out of available disk space, especially on tmp/cache partitions? The symptoms you’re describing are fairly consistent with this hypothesis. RAM/disk hardware failure is also an option. Your backup size is unlikely to present issues with 16GB RAM (unless perhaps you have a bazillion small files?), and your system shouldn’t completely lock up and reboot when running out of memory; you’d normally see random processes getting killed first. You may end up with a semi-failing system, but it shouldn’t reboot/lock up unless something else is in play.
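A few quick checks that would confirm or rule those out - the paths and device name below are examples, so point them at wherever your Duplicacy cache and data actually live:

```
# Free space on root, tmp, and the Duplicacy cache location
df -h / /tmp /path/to/duplicacy/cache

# Kernel messages around the time of the hang: look for OOM kills,
# I/O errors, or "blocked for more than 120 seconds" hung-task warnings
dmesg -T | tail -n 100

# SMART health of the disk holding the data
smartctl -a /dev/sda
```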

Update:

By systematically including/excluding directories in my backup job, I was able to slowly narrow the behavior down to three specific directories (out of about 1,200…).

I also found that the crashes weren’t caused solely by the backup operation. When any file operation was carried out on these specific files, including at the command line (cp, etc.), the system would crash/hang. With that finding, we can close this out.

Unmounting the file system and running an e2fsck didn’t resolve the symptoms either.
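For reference, the check was roughly the following (the mount point and device are placeholders):

```
# Unmount the data filesystem, then force a full check even if it's marked clean
umount /srv/music
e2fsck -f /dev/sdb1
```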

I have a “staging” area and a “live” area, and 2 of the 3 directories were fixed by deleting the live files and shifting the staging copies over. For the 3rd, even the copies from my cloud backup would still crash my Linux system when I tried to move/copy/back them up. Rather nerve-wracking to have such basic file operations lock up a system…

Thanks for all the great suggestions!