Storage suddenly not initialized - backup job same day as Daylight Savings Time

So yesterday morning I woke up to find a couple hundred job-status emails from Duplicacy telling me my backup job had failed because the storage was not initialized. These all came through in exactly one hour - from 1am to 2am. I have 2 Unraid servers, each running Duplicacy, and I back up between them regularly. I’ve changed nothing on my end and both had been up and running just fine for months.

Wild guess, but I suspect it has something to do with my job kicking off at 2am on the day Daylight Savings Time started, when the clocks here “moved forward” an hour. That said, it is weird, because the change should’ve happened at 2am and moved the clocks to 3am - I just find it more than coincidental that it happened on that day. I stagger my days between the two servers, so “Server B” wasn’t actually due until tomorrow night, but I just manually ran the maintenance and backup jobs to “Server A” just fine. Unfortunately, “Server A” is giving me the storage not initialized error and failing, though the connection is fine and, again, nothing has changed.

I’ve changed the backup day from Sunday and moved the start time to 3am, so DST is a non-factor going forward in any event. Can someone tell me whether there is a way to fix my storage, since the keys and everything should still be valid, or do I have to delete the storage and start over?

There was a known bug where jobs scheduled between 2am and 3am would run repeatedly on the first day of DST; it is fixed in 1.5.3: Duplicacy Web Edition 1.6.0 beta builds
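For context on why that particular hour is fragile: on the spring-forward date, the local times 02:00-02:59 simply don’t exist, so any scheduler doing next-run arithmetic on wall-clock times has to handle that gap explicitly. Here is a minimal Python illustration of the gap itself (a generic timezone demo only, not Duplicacy’s scheduler code):

```python
# Generic illustration of the spring-forward gap using Python's zoneinfo;
# this is not Duplicacy's scheduler, just the timezone behavior that trips
# up wall-clock scheduling.
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")

# 2022-03-13 was a US spring-forward date: 02:00 local jumps straight to 03:00.
before = datetime(2022, 3, 13, 1, 59, tzinfo=tz)   # still PST (UTC-8)
after  = datetime(2022, 3, 13, 3, 1, tzinfo=tz)    # already PDT (UTC-7)
gap    = datetime(2022, 3, 13, 2, 30, tzinfo=tz)   # a wall-clock time that never happens

print(after - before)                        # 0:02:00 - only two real minutes elapse
print(before.utcoffset(), after.utcoffset()) # the offset shifts by one hour across the gap
# The nonexistent 02:30 is silently normalized to one side of the gap instead of
# raising an error, so "run the 02:30 job exactly once today" needs explicit handling.
print(gap, gap.utcoffset())
```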

I don’t know what caused the “storage not initialized” error. Did the log have a more detailed error message?

Nothing in the alert email proper - I’ll have to dig deeper into the logs. Bummer, this is an Unraid app and it looks like it is still on 1.5.0 even though it has the :latest tag. Guess I’ll have to look at pulling it directly from Docker Hub.

Yeah, so I don’t see anything. It should be working, and it isn’t. Am I out of luck, or is there a way to fix this without going through the rigmarole of recreating the storage and jobs?

Did you see any errors in the main log? You can open the main log by clicking the log icon in the rightmost column on the Unraid Docker page.

Crap. I typed up a reply to you days ago and apparently never submitted it. Anyway, the log isn’t telling me anything. I just tried it manually once more, and this is still all the log shows:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-env-file-init: executing...
[cont-init.d] 00-env-file-init: exited 0.
[cont-init.d] 00-start-container: executing...

----------------------------------------------------------------------
ENVIRONMENT
----------------------------------------------------------------------
PUID=99
PGID=100

2
TZ=America/Los_Angeles
----------------------------------------------------------------------

Executing usermod...
Applying permissions to /config
Applying permissions to /cache
Applying permissions to /logs
[cont-init.d] 00-start-container: exited 0.
[cont-init.d] 01-config-app: executing...
[cont-init.d] 01-config-app: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Log directory set to /logs
Duplicacy Web Edition 1.5.0 (BAFF49)
Starting the web server at http://[::]:3875

Welp. Turns out I’m just an idiot. I tweaked the mount point name when I rebuilt my Unraid server but didn’t adjust it in the container template, so the container was just seeing an empty directory. Fixed it and it appears to be working again. Too tired to check now, but I guess that rebuild just happened to occur right before the DST change, so I didn’t notice it.
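For anyone hitting the same thing, a quick way to sanity-check the host side before blaming Duplicacy is to confirm the directory the container template maps is actually a non-empty, initialized storage. A rough sketch (the path below is a made-up example, and it assumes a storage initialized with Duplicacy’s default layout, which keeps a config file at the storage root):

```python
# Quick host-side sanity check: does the path the container template maps look
# like an initialized Duplicacy storage, or just an empty directory left over
# from a rename? The path below is a hypothetical example.
from pathlib import Path

storage = Path("/mnt/user/backups/duplicacy")   # replace with the template's host path

if not storage.is_dir():
    print(f"{storage} does not exist - the template points at the wrong share")
elif not any(storage.iterdir()):
    print(f"{storage} is empty - the container would see a brand-new directory")
elif not (storage / "config").is_file():
    print(f"{storage} has files but no 'config' - probably not a Duplicacy storage root")
else:
    print(f"{storage} looks like an initialized Duplicacy storage")
```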
