For this to work in Duplicacy, there are basically three issues to deal with:
#1
Any Duplicacy config files that get changed during every backup. Since a locked file can't be overwritten, this could be fixed by writing a new copy of the config file every time and deleting the old copies after their locks expire.
For example, instead of using config.data (or whatever), you would use
config.data.00001
then
config.data.00002
It should always copy the highest-numbered file to a new one, and the old ones can be deleted as their locks expire. (There's a sketch of this below.)
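Here's a minimal sketch of that rotation, using a plain directory as a stand-in for the real storage backend. The function names (`latestConfig`, `writeNextConfig`) and the `config.data.NNNNN` naming are just illustrations of the idea, not anything Duplicacy actually implements:

```go
// Rotating-config sketch: find the highest-numbered copy of the config
// file, then write the updated contents under the next number. Nothing
// is ever overwritten; each copy is written once and then locked.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
)

// latestConfig returns the path and numeric suffix of the newest copy.
func latestConfig(dir string) (string, int, error) {
	matches, err := filepath.Glob(filepath.Join(dir, "config.data.*"))
	if err != nil {
		return "", 0, err
	}
	if len(matches) == 0 {
		return "", 0, fmt.Errorf("no config copies found")
	}
	sort.Strings(matches) // zero-padded suffixes sort lexicographically
	latest := matches[len(matches)-1]
	var n int
	fmt.Sscanf(strings.TrimPrefix(filepath.Base(latest), "config.data."), "%d", &n)
	return latest, n, nil
}

// writeNextConfig writes a brand-new copy under the next number.
func writeNextConfig(dir string, contents []byte) error {
	_, n, err := latestConfig(dir)
	if err != nil {
		n = 0 // no copies yet; start at 00001
	}
	next := filepath.Join(dir, fmt.Sprintf("config.data.%05d", n+1))
	return os.WriteFile(next, contents, 0644)
}

func main() {
	if err := writeNextConfig(".", []byte("updated config")); err != nil {
		fmt.Println(err)
	}
}
```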
#2
Pruning / deleting files from the backup. Just don't set any pruning for less than however long the lock period is. (Or don't worry about this issue at all and silently ignore lock errors when deleting: every prune will retry those files, and once the lock expires the delete will succeed.)
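The "ignore lock errors" variant is really just a change in control flow. In this sketch, the `Storage` interface, `ErrFileLocked` sentinel, and `pruneChunks` are all hypothetical stand-ins for whatever backend abstraction Duplicacy uses; the point is only that a locked chunk is skipped rather than treated as a failure:

```go
// Prune sketch: a delete rejected because of a retention lock is logged
// and skipped. A later prune will retry it after the lock has expired.
package main

import (
	"errors"
	"log"
)

// ErrFileLocked is a hypothetical sentinel the backend would return when
// a delete is rejected because the object is still under a retention lock.
var ErrFileLocked = errors.New("file is locked")

type Storage interface {
	Delete(path string) error
}

func pruneChunks(storage Storage, unreferenced []string) {
	for _, path := range unreferenced {
		err := storage.Delete(path)
		switch {
		case err == nil:
			log.Printf("deleted %s", path)
		case errors.Is(err, ErrFileLocked):
			// Still locked: leave it for a future prune to retry.
			log.Printf("skipping %s (still locked)", path)
		default:
			log.Printf("failed to delete %s: %v", path, err)
		}
	}
}

// fakeStorage maps path -> "is it locked?", just to make this runnable.
type fakeStorage map[string]bool

func (f fakeStorage) Delete(path string) error {
	if f[path] {
		return ErrFileLocked
	}
	return nil
}

func main() {
	s := fakeStorage{"chunks/aa": true, "chunks/bb": false}
	pruneChunks(s, []string{"chunks/aa", "chunks/bb"})
}
```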
#3
Keep old files locked somehow. Since Duplicacy reuses existing chunks forever, you don't know how long to lock them for. So, after every prune, it would need to relock any files that are not locked (but ONLY files that are not locked). This would be tricky: some files' locks would expire some time before the next prune relocked them, and they would be vulnerable during that window. There would instead need to be something to indicate that a file should be allowed to have its lock expire; otherwise the lock would be continually renewed.
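Something like the following could express that relock pass. Everything here is hypothetical (`FileInfo`, `PendingDelete`, `ListFiles`, `LockUntil`, `relockAfterPrune`); B2's real per-object retention calls would sit behind the interface. The `PendingDelete` flag is the "allowed to expire" marker mentioned above, and locking for twice the prune interval is an arbitrary safety margin to shrink the unlocked window if a prune runs late:

```go
// Relock sketch: after a prune, every surviving file whose lock has
// already expired (and which isn't flagged for deletion) gets a fresh
// lock covering the gap until the next expected prune, plus slack.
package main

import (
	"log"
	"time"
)

type FileInfo struct {
	Path          string
	LockedUntil   time.Time // zero or in the past means unlocked
	PendingDelete bool      // marked by prune; let its lock lapse
}

type Storage interface {
	ListFiles() ([]FileInfo, error)
	LockUntil(path string, t time.Time) error
}

func relockAfterPrune(s Storage, pruneInterval time.Duration) error {
	files, err := s.ListFiles()
	if err != nil {
		return err
	}
	until := time.Now().Add(2 * pruneInterval)
	for _, f := range files {
		if f.PendingDelete || f.LockedUntil.After(time.Now()) {
			continue // leave doomed files alone; don't touch live locks
		}
		if err := s.LockUntil(f.Path, until); err != nil {
			log.Printf("relock %s failed: %v", f.Path, err)
		}
	}
	return nil
}

// fakeStore just logs what would be relocked, to make this runnable.
type fakeStore struct{ files []FileInfo }

func (f *fakeStore) ListFiles() ([]FileInfo, error) { return f.files, nil }
func (f *fakeStore) LockUntil(path string, t time.Time) error {
	log.Printf("relocking %s until %s", path, t.Format(time.RFC3339))
	return nil
}

func main() {
	s := &fakeStore{files: []FileInfo{
		{Path: "chunks/aa"},                     // unlocked -> relocked
		{Path: "chunks/bb", PendingDelete: true}, // doomed -> left alone
	}}
	_ = relockAfterPrune(s, 24*time.Hour)
}
```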
Interestingly, B2 also has "Legal Holds" now in addition to the new locks. I'm not sure how legal holds work within B2, and they don't seem to be documented anywhere yet, but they could potentially serve as a different kind of locking mechanism.