Help with job failures

I’ve been struggling to get things running smoothly and keep running into different problems. I’m hoping to get things sorted, but I’m not sure where to go from here.

I am running Duplicacy on a QNAP using the web interface. I have three jobs backing up to a single storage on an old Drobo, reached over the same network via SMB.

Currently one job has data to back up, but it fails with the following message:
Failed to upload the chunk 3ae0e17a20fc3816e1b7c6e8cfa88e455035b7148590a65f8d5bd3f730a6a964: stat Duplicacy Backup\chunks\3a: connection error: EOF

Most of the topics I found regarding EOF were related to connection issues, but the connection is fine as far as I can tell: I can read from and write to the SMB share without any other problems.

I also have a Check job that fails with this error:
WARN DOWNLOAD_CHUNK Chunk ce8300416dd55738d219755219ead39aa6228ea24cc89b8bdd6f7ee6411a3179 can't be found
ERROR SNAPSHOT_CHUNK Failed to load chunks for snapshot rb-prod-qnap-audio at revision 26: invalid character 'c' after top-level value

I haven’t been able to find any info on this error.

My Prune job also fails, complaining about the same chunk as the Check job:
ERROR DOWNLOAD_CHUNK Chunk ce8300416dd55738d219755219ead39aa6228ea24cc89b8bdd6f7ee6411a3179 can't be found

I have tried clearing the cache, but that has not helped with these issues. Any help you can give would be appreciated.

Does Duplicacy connect to the Drobo via SMB, or does the QNAP mount the share locally, with Duplicacy backing up to a locally mounted folder?

That looks like corruption. The "invalid character 'c' after top-level value" part is a JSON parsing error, which suggests the snapshot metadata on the storage is damaged, not just a chunk gone missing.

I configured the SMB connection to the Drobo directly from the Duplicacy interface when setting up the storage. Do you see any advantage to mounting it on the QNAP and pointing Duplicacy to the “local” drive?

What’s the best way to deal with corrupted chunks?

Great, this is the best approach.

No, but plenty of disadvantages.

Once you’ve verified filesystem consistency on the Drobo, you can try to recreate the bad chunks (a rough CLI sketch follows the list):

  1. Delete the corrupted chunks from the storage.
  2. Create another, temporary snapshot ID.
  3. Back up all your data again under that snapshot ID. The expectation is that the backup will produce the same chunks: the vast majority will already be on the storage, and those that aren’t will be uploaded. The hope is that the chunks you deleted are among the ones re-uploaded.
  4. Delete the temporary snapshot.
  5. Run prune -exhaustive to clean up the orphaned chunks left by the temporary snapshot.
  6. Run check on your original snapshot ID again.
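
If it helps to see it end to end, here is a rough CLI sketch of the same steps (the web UI drives the same engine underneath). The repository path, the temp-id snapshot ID, and the storage URL are all placeholders; copy the real storage URL from your existing configuration.

```
# Step 1: delete the corrupted chunk files on the storage. A chunk's path is
# derived from its ID (first two hex characters become a subfolder), e.g.:
#   Duplicacy Backup/chunks/ce/8300416dd55738d219755219ead39aa6228ea24cc89b8bdd6f7ee6411a3179

# Steps 2-3: initialize a temporary snapshot ID against the same storage,
# then back everything up again under it
cd /path/to/repository                                       # placeholder path
duplicacy init temp-id "smb://user@drobo/Duplicacy Backup"   # placeholder URL
duplicacy backup -stats

# Step 4: delete the temporary snapshot by removing its folder on the storage:
#   Duplicacy Backup/snapshots/temp-id

# Step 5: clean up chunks no longer referenced by any snapshot
duplicacy prune -exhaustive

# Step 6: re-check the original snapshot ID
duplicacy check -id rb-prod-qnap-audio
```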

If some chunks are still missing after that, you would want to delete the affected revisions (again, a sketch follows the list):

  1. Run check with the -persist option so it keeps going after the first error.
  2. Collect the list of affected revisions.
  3. Delete them from the storage (under the snapshots/snapshotID folder).
  4. Run prune -exhaustive to clear out the orphaned chunks.
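
Roughly, on the CLI; the snapshot ID and revision number below are just the ones from your log, used as an example:

```
# Step 1: check all snapshots, continuing past missing-chunk errors
duplicacy check -all -persist

# Steps 2-3: note each revision the check flags as broken, then delete its
# revision file on the storage, e.g. revision 26 of rb-prod-qnap-audio:
#   Duplicacy Backup/snapshots/rb-prod-qnap-audio/26

# Step 4: clear out the chunks orphaned by the deleted revisions
duplicacy prune -exhaustive

# Finally, confirm everything passes
duplicacy check -all
```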

Thanks for this info. I’ll give it a shot and see how it goes!