Incomplete snapshot errors - what happened?

Can anyone tell me what/why this happened?
[Duplicacy-web running in docker container on Synology DS920+]

2022-05-17 09:14:57.810 INFO INCOMPLETE_LOAD Incomplete snapshot loaded from /cache/localhost/0/.duplicacy/incomplete
2022-05-17 09:14:57.811 INFO BACKUP_LIST Listing all chunks
2022-05-17 09:15:07.677 INFO FILE_SKIP Skipped 11172 files from previous incomplete backup
2022-05-17 09:15:07.697 INFO BACKUP_THREADS Use 10 uploading threads
2022-05-17 11:31:06.544 INFO INCOMPLETE_SAVE Incomplete snapshot saved to /cache/localhost/0/.duplicacy/incomplete
Duplicacy was aborted

Duplicacy ran out of ram? How much free ram do you have there?

8GB total. Interesting, resource monitor history shows steady use at 20% of 8GB ram until about 12 minutes before this happened, then it drops to 10% memory use, presumably because of this termination.

I had been running two backups in parallel (different sources, different b2 target buckets) at the time of the abort. Ran just one afterward. I am still ingesting the first full backup so they’re running 24/7. I’ll stick with one backup at a time and see if it happens again.

Also running a cache nvme ssd on the Syno which eats a bit of ram. Thinking about taking that out. Not sure it does much more than eat ram…

Not a lot, but not nothing either (see below)

Does not look like RAM exhaustion at all. Do you have any setting somewhere that limits how long duplicacy is allowed to run? IIRC there was something like a max duration option or something along those lines, unless I’m misremembering.

Just to be sure – look in /var/log/messages for any events related to duplicacy: if the system killed the process there will be evidence.
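Something along these lines should do it (run as root; exact paths and message text vary by DSM version, so treat this as a sketch):

    # look for mentions of duplicacy or the OOM killer in the system log
    grep -i duplicacy /var/log/messages
    grep -iE "out of memory|oom-killer" /var/log/messages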

Run them in parallel, so the issue reproduces. There is nothing inherently wrong with running them in parallel, but depending on how many files you have in the backup set, memory may become a problem. Maybe the resource monitor is lying; I would not trust Synology on that one.

Definitely remove the SSD cache. Before an SSD cache becomes useful – first max out the RAM; the DS918+ works with at least 32GB. And after you max out the RAM, still don’t use SSD cache in those devices (you won’t need to either: the in-RAM cache would be enough, and it will be large enough to not get evicted completely when apps ask for more RAM, thereby not murdering the responsiveness of your NAS).

Forget those m.2 ports exist. You can read more on the Synology sub on reddit. (TL;DR – it’s an ill-conceived, broken feature that affects data integrity in a negative way. You can start here: https://www.reddit.com/r/synology/comments/j10i3q/ssd_cache_1_on_synologynas1_has_degraded/. The 918+ was the guinea pig project where this feature was being tested out, as businesses would not tolerate that crap; as a result of that test drive, Synology released their own branded SSD stick to address the stability issues that are inherently unavoidable in the way this feature was implemented.)

Might I suggest using the new cli build with the RAM optimization?

I’ve been using it on an ancient 8-year-old NAS with 8 GB RAM.
I run 2 VMs and a backup job without problems.
Forget about the cache, and I would also advise against the Syno and QNAP docker implementations.

You may suggest it, but, alas, I do like GUIs. I’m comfortable in the terminal, but it’s not my preference.

I tried putting 16GB in the NAS and it would not boot. I used the same RAM that virtually everyone on reddit has said worked for them, and it would not boot for me. I may try another DIMM and see if my luck changes.

I know all about the various opinions on the SSDs, but I had a new one lying around and thought I’d see if it actually improved anything - it didn’t, so I’ve already unmounted it and will remove it tonight.

Cache is forgotten as of tonight. Up until it didn’t, it was working like a champ. Still is as I speak… (fingers crossed). I had hoped Duplicacy might be a step above Hyper Backup and would release me from my reliance on a Synology app to back up my NAS data. If it can’t run in docker successfully, then I’m not sure I’m interested in using it. I suppose I could try running it on my desktop and back up mapped NAS shares, but I’m not keen to move the backup action off the NAS.

Oh, don’t worry.

Once the RAM optimized version is actually released it will be automatically downloaded and used by the GUI.
So just hold on a little longer. RAM usage will go down dramatically.

Duplicacy is awesome and is the way to go.
Never trust proprietary solutions for backups.

That’s too categorical and dramatic a statement. There are plenty of high-quality closed-source backup tools. Development, distribution, and source management models have nothing to do with software quality or trustworthiness.

Also, according to your logic, you should not use Duplicacy Web: it’s a closed-source opaque piece of software that downloads and runs binaries from the internet on your computer, no less. Do you trust it?

A few potential culprits:

  • It’s really hard to push memory into the slot. People often don’t push it hard enough, until it clicks.
  • On the first boot after a configuration change, the unit runs a memory test. It can take a while. You can interrupt it by resetting the unit, but it’s best to wait for it to finish.
  • Not all RAM configurations and ranks are supported. Look at nascompares for working configurations (for example, single-rank RAM universally does not work).

I’ve been seating CPUs and memory DIMMs for decades. No doubt it was properly seated the first time. Yep, left it in for “a good long while” – tried several times with no joy. I have since seen confirmation reports that this DIMM did not work for all DS920+ units. Not sure why. After re-reviewing several threads and websites, I’ve ordered a different 16GB DIMM that will arrive today. Fingers crossed.


Yes, maybe I was a bit too dramatic. :sweat_smile:

I just meant that, all else being equal, it is preferable to use an open-source solution, or at least an open format.
It’s not a trust issue, it’s a longevity issue. Try to restore a 12-year-old backup.

The beauty of the web GUI is precisely that it is just a nice interface on top of the CLI.
The CLI download can be disabled.

I use the web-gui mainly to support duplicacy.
And because I think that at some point it could be the basis of a centralized backup solution with many clients using gRPC. That would add a ton of value.


New RAM arrived and installed; looking at 20GB now. NVMe SSD cache removed.

The backup failed with the same error this afternoon. Not happy about this. Don’t like vague errors and ungraceful aborts in the middle of a backup. Is there a magical prune I can run to clean up my blocks & whatever?

Did you look in /var/log/messages and duplicacy_web.log? If the duplicacy CLI engine is being killed you won’t see anything in its own log, obviously. But there will be something in one of the aforementioned files, depending on who the killer is.
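If the web GUI runs in docker, its own log lives inside the container; something like this, assuming the container is named duplicacy-web and uses that image’s usual log location (adjust both to your setup):

    # on the NAS: check whether the kernel's OOM killer fired
    dmesg | grep -iE "oom|killed process"
    # inside the container: tail the web GUI log (path depends on your volume mapping)
    docker exec duplicacy-web tail -n 100 /logs/duplicacy_web.log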

Anecdote time. I have a roughly quarter-century-old piece of rather peculiar equipment, which is irreplaceable in a way but still useful. It can be configured only with the vendor’s closed-source software, which only works under a specific version of Windows 95. The last time I needed to use it was about 5 years ago. Of course, Windows 95 won’t even boot on modern hardware, but it does just fine in a VM. So that’s what I do: I keep a Windows 95 VM around. This is to illustrate that 25 years is nothing for discontinued closed-source software. This is especially true for backup and similar file-manipulation tools.

While many backup tools offer a free open-source restore utility, this too is not strictly needed, and was never a deciding factor for me. If I had to migrate from a closed-source tool to another one today, I would just write a script to restore / change time / back up, rinse and repeat for every revision. Run it somewhere and go about my day.
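Purely as a sketch of the idea – the revision numbers and the second tool are placeholders, not a recipe:

    # for each old revision: restore it with duplicacy, then re-ingest it with the new tool
    for rev in 1 2 3; do                          # placeholder revision numbers
        duplicacy restore -r "$rev" -overwrite    # restore that revision into the repository directory
        new-backup-tool backup .                  # placeholder for whichever tool you migrate to
    done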

Agreed on the web UI. It has a long way to go before being usable; I too bought a few licenses, and yet don’t use it.

Okay, well this is embarrassing…

2022/05/20 10:38:14 Stopped schedule daily due to max run time exceeded

For some reason, it timed out at a strange time. The schedule starts at 06:00 and the max run time of 23:45 should have stopped the backup at 05:45. I only set it to stop so my check & prune schedule could run every morning at 05:50, since I thought the backup needed to be down when that happened. Does setting the max run time to 00:00 disable it???

Awesome! That’s exactly what I meant by

I should have been more insistent on getting an answer :slight_smile:

Duplicacy is fully concurrent. No need to stop anything for anything else. There is one corner case, but it won’t apply to your scenario (IIRC it’s a backup running longer than 7 days combined with a concurrent exhaustive prune).
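To put it concretely (the commands below are just illustrative examples, not something from your setup):

    # a normal prune only fossilizes chunks first, so it is safe to run next to a backup
    duplicacy prune -keep 0:180 -keep 7:30 -keep 1:7
    # the corner case: -exhaustive also collects unreferenced chunks, so keep it away
    # from a backup job that runs for more than about 7 days
    duplicacy prune -exhaustive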

That I’m not sure about. Try it.

Hmmm, I was going on advice from somewhere… good to know. BUT, does that mean I can run prunes concurrently with the backup that is being pruned??

Tried it, and it seems to be the case. Been running consistently overnight…

Yes, you can, outside of the corner case mentioned.

The snapshot file is uploaded at the very end, so until the backup is finished it does not exist.

Prune also does not delete stuff immediately; it fossilizes chunks first. If a chunk happens to be needed later, it will be recovered.

This is also why, if you run check, you should run it with the -fossils flag. That should have been the default, but historically isn’t.
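For example (a minimal invocation; add -all, -stats, etc. as you normally would):

    # tell check to also look for chunks among fossils instead of reporting them missing
    duplicacy check -fossils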

More here: duplicacy/duplicacy_paper.pdf at master · gilbertchen/duplicacy · GitHub