Input/Output error backing up pg_wal folder

I’ve found other posts mentioning I/O errors but the solution did not seem applicable to my setup.

I’m trying to back up my Unraid appdata folder, but I’m encountering an I/O error that I don’t understand:

2024-07-24 06:06:20.191 ERROR CHUNK_MAKER Failed to read 0 bytes: read /backuproot/mnt/user/appdata/paperless/pgdata/pg_wal/000000010000000000000023: input/output error
2024-07-24 06:06:29.674 INFO INCOMPLETE_SAVE Incomplete snapshot saved to /cache/localhost/0/.duplicacy/cache/external-disk/incomplete_snapshot
Failed to read 0 bytes: read /backuproot/mnt/user/appdata/paperless/pgdata/pg_wal/000000010000000000000023: input/output error

I’ve made sure that I can manually copy this file from the specified location to /tmp and it works as expected. The Duplicacy container is running in privileged mode, so I don’t think it should be a permissions issue. The backup target is an external hard drive mounted to /mnt/external-backup/. I also have a backup pointing to a Google Drive directory and get the exact same error. Please let me know if you need more details!

Does this error happen every time you run the backup? Can you temporarily skip this file by adding an exclude pattern?
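For reference, an exclude entry in the repository’s filters file might look like this. This is a sketch: the path is relative to the repository root (`/backuproot` in the log above) and should be adjusted to your layout.

```
# .duplicacy/filters -- skip the PostgreSQL WAL directory
# A trailing / matches the directory itself, which also excludes its contents.
-mnt/user/appdata/paperless/pgdata/pg_wal/
```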

Also check the syslogs to see if there are any errors there.

It’s happened every time I try to back up appdata, no matter the destination. I just attempted the backup excluding the pg_wal directory and it worked fine, but I want to make sure I’m backing up all of my Paperless data, as it contains important documents. I don’t see any errors in Duplicacy’s logs or in the syslogs. Do you have any suggestions on how I can fix the pg_wal directory for backing up?

The files there are likely locked in exclusive mode, and the filesystem misreports that as an I/O error.

“WAL” stands for write-ahead logging. In this case it’s PostgreSQL’s transaction journal (the `pgdata/pg_wal` path indicates Postgres, not SQLite), used to recover the database in case the application crashes mid-write. You likely don’t need to back up that folder, and in fact, unless you capture it atomically, you should not: restoring WAL files that are inconsistent with the rest of pgdata will almost certainly end in database corruption.
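As an aside, the standard way to get a consistent copy of a live PostgreSQL database is a logical dump rather than copying the pgdata files. A minimal sketch, assuming the Paperless database runs in a container named `paperless-db` with user and database both called `paperless` (all three names are assumptions, adjust to your setup):

```
# pg_dump produces a consistent snapshot even while the server is running.
# "paperless-db", "-U paperless", and "-d paperless" are placeholder names.
docker exec paperless-db pg_dump -U paperless -d paperless \
  > /mnt/user/appdata/paperless-db-dump.sql
```

The resulting .sql file is safe for Duplicacy to pick up and can be restored later with psql.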

If you still want to back them up, and the filesystem underneath is ZFS, you could back up filesystem snapshots instead of the live data. This ensures atomicity and avoids the access issue too. You can read about pre- and post-backup scripts here: Pre Command and Post Command Scripts · gilbertchen/duplicacy Wiki · GitHub.
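A minimal sketch of such a pre-backup script, assuming a ZFS dataset named `tank/appdata` (the dataset name and snapshot label are placeholders; the script location follows the wiki page linked above):

```
#!/bin/sh
# .duplicacy/scripts/pre-backup -- take a fresh snapshot before each run.
# "tank/appdata" is a placeholder dataset name; adjust to your pool layout.
zfs destroy tank/appdata@duplicacy 2>/dev/null  # drop the previous snapshot, if any
zfs snapshot tank/appdata@duplicacy             # atomic point-in-time snapshot
```

A matching post-backup script could destroy the snapshot once the run finishes.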

Or you can make the snapshot mountpoint visible and back that up with Duplicacy, after configuring periodic snapshots with a limited TTL.
(It’s trivial to do on TrueNAS; I’m sure Unraid has implemented it in some unnecessarily cumbersome way.)
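Concretely, a sketch with placeholder paths: once periodic snapshots are enabled, their contents are visible under the hidden `.zfs/snapshot` directory, and you can point Duplicacy at that frozen view instead of the live data.

```
# "auto-2024-07-24" is a placeholder snapshot name;
# list yours with: zfs list -t snapshot
cd /mnt/appdata/.zfs/snapshot/auto-2024-07-24
duplicacy init appdata /mnt/external-backup/duplicacy   # one-time repository setup
duplicacy backup                                        # backs up the snapshot view
```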

Otherwise, if you have ext4 or another ancient filesystem that doesn’t support atomic snapshots, add that folder to Duplicacy’s exclusion list.

Very good information, thank you! I’ve added it to my exclusions. I’ll make sure to do a restore test soon to ensure the backup is solid.