Prune fails...what next?

What is the target storage?

On the Schedules tab, in the Status column, click the text that says it failed; the log will open in a new browser tab. See what’s there. If inconclusive, add the -d global flag to prune, run it again, and check the new, more verbose log.
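If you were running the CLI directly, that would look something like this (the -keep flags here are just a placeholder retention policy; in the web UI, I believe -d goes into the global options field for the schedule):

duplicacy -d -log prune -keep 0:360 -keep 7:30 -keep 1:7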

This is an expected false positive: since the prune did not complete, the snapshots that were to be pruned are now incomplete. You can suppress that false positive somewhat by adding the -fossils flag to check.
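In CLI terms that would be something like:

duplicacy check -fossils

With -fossils, if a chunk can’t be found, check also searches for its fossil before declaring it missing.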

Thanks for your reply!
So, there are a million lines with something like
2021-11-12 11:03:14.187 WARN CHUNK_FOSSILIZE Chunk 33de0d22034318fa6e0efe2c04e7e7d0cca77c0asdoaishda8712116823jkahssgdb is already a fossil
Not sure what fossilizing chunks means, though. Then it ends with:
2021-11-12 11:04:36.655 ERROR CHUNK_DELETE Failed to fossilize the chunk 3ef0dce1da6f073f1ecfd0ea19ebb2c96d6asdhjasa961ca4a: Move https://eu-central-1@s3.eu-central-1.wasabisys.com/duplicacy-backup/chunks/3e/3ef0dce1da6f073f1ecfd0ea19ebb2c96d6asdhjasa961ca4a: net/http: TLS handshake timeout
Failed to fossilize the chunk 3ef0dce1da6f073f1ecfd0ea19ebb2c96d6asdhjasa961ca4a: Move https://eu-central-1@s3.eu-central-1.wasabisys.com/duplicacy-backup/chunks/3e/3ef0dce1da6f073f1ecfd0ea19ebb2c96d6asdhjasa961ca4a: net/http: TLS handshake timeout

What can I do next? It never timed out with Duplicati or other solutions so far; I don’t think Wasabi has any reliability issues.

Thanks!

It’s a Wasabi issue: a TLS handshake timeout.

Another possibility (I’m not sure exactly how the Wasabi API works, so I’m speculating here): if temporary tokens are involved, maybe those expire because it’s such a long process?

Have you noticed whether it always tends to fail around the same amount of time after the prune starts?

If that is the case, then it needs to be fixed in Duplicacy.

Edit: does it fail immediately on the first attempt to move a file on the server, or does it manage to do a few and then fail?

Did you configure the storage as an S3 endpoint or as Wasabi?

If you mean in Duplicacy, I believe it’s set as Wasabi. The backup address in Duplicacy looks like wasabi://eu-central-1@s3.eu-central-1.wasabisys.com/duplicacy-backup

Regarding timing, I think you might be onto something. Both times it took really long. From the log I can see it starting at 2021-11-12 02:21:05.847 and timing out at 2021-11-12 11:04:36.655. I’m going to run it again to see how long it takes to fail this time; I’ll post back in a few hours.

Ok… so this attempt fails with an EOF??
It also runs for a very long time: it starts at 2021-11-14 07:04:53.469 and crashes by 2021-11-15 08:59:59.502 (roughly 26 hours this time, versus under 9 hours before, so not the same elapsed time, at least).

2021-11-15 08:59:59.502 ERROR CHUNK_DELETE Failed to fossilize the chunk asiduahsdiuasdh8291388hagsd87t1233e12928kashdkjah: Move https://eu-central-1@s3.eu-central-1.wasabisys.com/duplicacy-backup/chunks/44/asiduahsdiuasdh8291388hagsd87t1233e12928kashdkjah: EOF
Failed to fossilize the chunk asiduahsdiuasdh8291388hagsd87t1233e12928kashdkjah: Move https://eu-central-1@s3.eu-central-1.wasabisys.com/duplicacy-backup/chunks/44/asiduahsdiuasdh8291388hagsd87t1233e12928kashdkjah: EOF

There was a somewhat relevant thread with similar EOF errors and TLS timeouts popping up with Wasabi. Unfortunately, there was no resolution there.

A similar issue happens with rclone; that thread, however, has an explanation of the root cause and offers a (bad) workaround.

I would switch to another provider; I personally had a similarly terrible experience with Wasabi in the past, and apparently nothing has changed.

Thanks… This next run failed with:

2021-11-16 05:58:16.143 ERROR CHUNK_FIND Chunk a8eb7c817864f3b61d4c941dcea7e0a223eeed273416ddd54ac0591ff does not exist in the storage
Chunk a8eb7c817864f3b61d4c941dcea7e0a223eeed273416ddd54ac0591ff does not exist in the storage

Any idea why there might be missing chunks? Could any of the failed prunes have caused this?
It keeps saying lots of chunks are already fossilized… what does that mean?

Yes, definitely.

You can delete the orphaned snapshots manually (from the “snapshots” subfolder) on OneDrive, then clear the local Duplicacy cache and run prune -exhaustive to clean up orphaned chunks.
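A rough sketch of the CLI side, assuming a repository at a hypothetical /path/to/repository:

cd /path/to/repository
rm -rf .duplicacy/cache       # clear the local cache
duplicacy prune -exhaustive   # remove chunks not referenced by any snapshot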

You can also recover fossilized chunks with check -fossils -resurrect, but since those snapshots were going to be pruned anyway, it would be extra work for nothing.

So, Wasabi’s response…

The other 26 are 400 Request Timeout responses. A 400 returned from us is the result of the client end opening a TCP connection on the S3 end, but no subsequent request successfully coming in for 60k ms (60 seconds) so we close out the connection due to timeout.

Wasabi says I might try using a smaller chunk size, ideally ~10 MB. Is this possible? It would involve redoing my backups. I was hoping to be able to prune them out, but prune keeps failing repeatedly. Is there any other solution? I don’t want to change providers at this stage; Wasabi has been working reliably for me for years with Duplicati and other applications. It’s Duplicacy I’m having issues with.

I don’t think this is the root cause. This sounds like fallout from some other issue happening before it (speculating here):

  1. Something happens, and Wasabi fails the request.
  2. Duplicacy cannot proceed, but does not clean up properly, leaving some connections open.
  3. Wasabi closes the connections.

Duplicacy can probably do a better job cleaning up after failed requests, but ultimately this would not fix much – the problem has already occurred and wasn’t handled.

What needs to be addressed first is why the very first failure occurs and why Duplicacy does not retry.

Can you post the full log somewhere? You can anonymize filenames, but leave timestamps and other messages.

I don’t see how chunk size should matter either.

I’d rather not post my backup logs online at all, and anonymizing each individual file hash seems like a PITA. Is it possible to send it privately? I can’t find a button to message you privately.

Also, where can I get the logs for each backup attempt? Clicking on “Failed” only lets me see the last log. How can I see previous attempts?

I use Wasabi with chunk sizes varying from 1 to ~4 MB (as the storage configuration; obviously the actual chunk sizes vary more), but I don’t think this is the cause.
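If I remember right, those sizes are fixed when the storage is first initialized, something like the following (flag names from memory, and the snapshot id is made up; my understanding is that the average chunk size must be a power of two):

duplicacy init -chunk-size 2M -min-chunk-size 1M -max-chunk-size 4M mybackup wasabi://eu-central-1@s3.eu-central-1.wasabisys.com/duplicacy-backup

So changing the chunk size would indeed mean creating a new storage and re-uploading.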

And about prune: since I don’t prune at all, I’m sorry, but I can’t help with this…

I really have to find some time to create my prune scripts, but I am always stalling… :see_no_evil:

Backups are working fine, and space is not my concern at the moment. The only problem is a very small loss of performance when running checks and other commands.

Thanks… all these concerns started as I was testing a second backup method. I wouldn’t mind temporarily duplicating my Wasabi storage while testing this new system, but I noticed an issue on the actual bill: by now the storage is 5x the original size of my previous backup, since this one has been running daily like my other backup without pruning old versions. Hence my increasing need for a working prune.

and anonymizing each individual file hash seems like a PITA

I thought more along the lines of doing it automatically with sed,
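e.g. something like this, assuming the chunk IDs are the usual 64-character hex strings (prune-log.txt is just whatever you saved the log as; file names would need their own pattern):

sed -E 's/[0-9a-f]{64}/CHUNK_ID/g' prune-log.txt > prune-log-anonymized.txt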

When you click on my username, do you see a Message button there?


On the first page, on the history graph. You can also look up the location of the log files in the Settings.

You can add the -exclusive option to the prune command to skip the fossilization step. Just make sure that no backups are running during the prune.
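For example (the -keep flags are just a placeholder):

duplicacy prune -exclusive -keep 0:360

With -exclusive, prune assumes nothing else is accessing the storage and deletes chunks directly instead of renaming them to fossils first.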

If prune with -exclusive still complains about missing chunks, run a check to find out which revisions have missing chunks, then delete those revisions manually in OneDrive. Once the check passes, prune should not fail anymore (unless the OneDrive server issue remains, but that is less likely with -exclusive). After that you can add -exhaustive to prune to clean up unreferenced chunks.
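A sketch of that sequence (assuming no backups run in the meantime):

duplicacy check -a                       # reports which revisions reference missing chunks
# delete the affected revision files under snapshots/<snapshot-id>/ in the storage
duplicacy check -a                       # should now pass
duplicacy prune -exclusive -exhaustive   # then collect the unreferenced chunks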


Nope. I guess since I registered recently and this is my first post, I don’t have permission to send private messages yet?

Hmmm, the Settings tab says my logs should be in /.duplicacy-web/logs, but there’s only a duplicacy_web.log file with a few entries from May 2021… which is about when I was testing the install, I think.
EDIT: Scratch that, it’s under a different user. Since the full path is not specified, this can be confusing. I believe the full path should be shown in the UI instead of just ~/.duplicacy-web/logs.
But still, there are only about 12 lines from 9 days ago. Nothing referencing the current or recent backups.

Sorry, do you mean the MS solution… as storage? I’m using Wasabi; not sure if this changes anything.

I just promoted you to the next level; check if the button now appears.

What OS is this? If it’s Windows and you installed it as a service, the config folder will be under C:\ProgramData.

There are some logs under the cache folder, if I remember correctly, where the CLI engine keeps its config. Something like ~\.duplicacy_web\cache\localhost\<id>\.duplicacy\logs.

Probably a mix-up with the other thread. You can delete the bad snapshots manually in the Wasabi web interface and then clean up the repo with prune -exhaustive. But first let’s find out why it fails.
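For reference, each revision lives in the storage as a file named after its revision number, under paths like this (layout from memory; double-check in the Wasabi console before deleting anything):

snapshots/<snapshot-id>/<revision-number>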

Sorry for the delay… the past few days have been busy.

Ubuntu 20.04

I can’t find any cache folder in .duplicacy_web. Only bin, logs, repositories, and stats. And logs contains only duplicacy_web.log, 861 bytes in total.

Basically, it keeps repeating the following a few times:

2021/11/10 03:29:37 Created a new configuration.
2021/11/10 03:29:38 Failed to request a license: This computer is not eligible for the 30-day trial
2021/11/10 03:29:38 Temporary directory set to /root/.duplicacy-web/repositories
2021/11/10 03:29:38 Duplicacy Web Edition 1.5.0 (BAFF49)

That’s the cache folder.

find /root/.duplicacy-web/repositories -name "*.log" — nothing there?

The standard log location, however, will be listed in the Settings tab and/or in settings.json.