4 posts were split to a new topic: Missing chunks (ZFS volume)
@gchen: hasn’t this been fixed when you added support for multiple nested levels for the
From what I remember (and I have a bad memory), duplicacy now searches several nesting levels by default (even if the file which contains the nested levels doesn’t exist).
@gchen is there a need for the repo to be re-created? (I just want to delete the “re” if it’s ok.)
As per the response in FATAL DOWNLOAD_CHUNK Chunk (w/ Wasabi), it should be pointed out that if someone deletes snapshots, they should also delete .duplicacy/cache to make sure everything keeps working.
I’ve updated the guide. Thanks for pointing it out.
Can this please be automatic? I.e. if I run
duplicacy prune -r 1000-1003 I’d expect duplicacy to manage the cache accordingly and keep it up to date. Or simply nuke it for me.
I knew about this and yet wasted a few minutes today with this issue again… I would not expect users to go and read the documentation; they will panic and create a new support topic…
In the current implementation, the prune command does delete the copy from the cache when deleting a snapshot file, but only in the cache under the current repository. It can’t do it for other repositories on the same computer or on a different one.
Oh, you are absolutely right. Thinking about it now, that’s exactly what happened. Maybe Duplicacy should annotate the storage with which client last performed a prune, and clients could distrust their cache if it wasn’t them?
I think the solution is to compare the timestamp of the cached copy with that of the file in the storage. However, due to an oversight in the design, the backend API doesn’t return the modification times when listing files in the storage (although most storages should support it).
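The timestamp comparison described above could be sketched roughly as follows. This is only an illustration, not Duplicacy’s actual code: `cached_copy_is_fresh` and its parameters are hypothetical names, and the fallback behavior when the backend can’t report modification times is exactly the stale-cache situation discussed in this thread.

```python
import os

def cached_copy_is_fresh(cache_path, storage_mtime):
    """Decide whether a cached snapshot file can still be trusted.

    cache_path    -- path of the copy under .duplicacy/cache (hypothetical layout)
    storage_mtime -- modification time (Unix seconds) reported when listing
                     files in the storage, or None if the backend API cannot
                     return it (the design oversight mentioned above).
    """
    if storage_mtime is None:
        # No mtime from the backend: all we can do is trust the cache,
        # which is what allows a prune on another machine to go unnoticed.
        return os.path.exists(cache_path)
    if not os.path.exists(cache_path):
        return False
    # If the file in the storage is newer than our cached copy, another
    # client may have rewritten it during a prune: re-download it.
    return os.path.getmtime(cache_path) >= storage_mtime
```

The interesting design point is the `None` branch: without mtimes from the listing API, there is no cheap way to detect a remote prune, which matches the behavior reported in this thread.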
Where do I find that file in a duplicacy-web install (on Linux)?
The preferences file is auto-generated by the web GUI, so it is not recommended to modify it. If you want to change the repository id (which is called a backup id in the web GUI), just create a new backup with a new backup id.
Is there a way to duplicate and modify an existing backup? In order to follow the above instructions I obviously also need to use the same filters…
You can edit ~/.duplicacy-web/duplicacy.json directly – find the backup in repositories and then change the id.
Changing that doesn’t change the backup ID in the UI. Will it still work?
Forgot to mention that you’ll need to restart the web GUI for the changes in duplicacy.json to take effect. Better yet, edit duplicacy.json while the web GUI is not running; otherwise your changes may be overwritten.
Can I restart the web-ui while a backup job is running?
Here is my reply from the other thread earlier today:
The CLI can be terminated at any time and it shouldn’t leave any half-uploaded files on the cloud storage server, provided the server behaves properly: the content length is always set, and the server should never store an incomplete chunk file shorter than the content length. OneDrive for Business is an exception, but we’ve fixed that in the latest CLI release by using a different upload API.
For non-cloud storages like SFTP and local disk, the CLI uploads to a temporary file first and then renames the temporary file once the upload completes. Aborting shouldn’t leave any partial uploads.
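The temp-file-then-rename pattern described above can be sketched like this. It is not Duplicacy’s actual code; `save_chunk_atomically` and the `.tmp` suffix are illustrative choices.

```python
import os

def save_chunk_atomically(dest_path, data):
    """Write a chunk so that an aborted transfer never leaves a partial
    file under the final name: write to a temporary name first, then
    rename once the write completes."""
    tmp_path = dest_path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the bytes reach the disk
    # os.replace is atomic on POSIX filesystems: readers see either no
    # file or the complete chunk, never a partial write.
    os.replace(tmp_path, dest_path)
```

This is why killing the CLI mid-upload on SFTP or local storage is safe: the worst case is a leftover temporary file, never a truncated chunk under its real name.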
So are you saying that restarting the web-ui will stop the CLI, but it doesn’t matter?
BTW: you can quote text from other topics/threads. That will create links between those topics.
So I just waited for the backup to finish, then edited duplicacy.json and restarted the web-ui. The new backup ID showed up in the UI, and the backup went through without problems. But I don’t think it worked as intended, because it uploaded tons of files that were supposed to be excluded (and which were excluded before renaming the ID). Might it be that renaming the repo in the .json file results in duplicacy displaying filters in the web-ui but not actually applying them?
There is also a case with B2 where the chunk exists, but there are multiple versions of it and the latest one is zero size. I did not find instructions on what to do in this case.
The solution seems to be to log in to B2, locate the “missing” chunks and delete the zero-sized versions. I have no idea why those were created in the first place, but I suspect an interrupted backup. I am just not sure whether this method guarantees that the chunk content is still valid…
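Finding the affected chunks can be automated once you have a version listing. The sketch below assumes a hypothetical listing of `(file_name, upload_timestamp, size)` tuples (a stand-in for whatever your B2 listing tool returns) and reports the names whose latest version is zero bytes:

```python
def zero_size_latest_versions(file_versions):
    """Given B2-style version records (name, upload_timestamp, size),
    return the file names whose *latest* version is zero bytes -- the
    chunks that appear 'missing' as described above."""
    latest = {}
    for name, ts, size in file_versions:
        # Keep only the most recent version of each file name.
        if name not in latest or ts > latest[name][0]:
            latest[name] = (ts, size)
    return sorted(name for name, (ts, size) in latest.items() if size == 0)
```

After deleting the zero-size versions so the older, full-size ones become current, running a check that actually downloads and verifies chunks should answer the validity question, rather than trusting that the surviving version is intact.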