Prune Fails: GCD_RETRY: The user does not have sufficient permissions for this file

Bingo!

% ./gcd_delete ./gcd-token-duplicacy-arrogant-full-access.json 'test1.txt'
2021/07/18 21:15:15 test1.txt: 1_GpYzHvj608U7OqKg0wUzuJjb1DAi491
2021/07/18 21:15:16 test1.txt has been successfully deleted
% ./gcd_delete ./gcd-token-duplicacy-saspus-shared-access.json 'test2.txt'
2021/07/18 21:15:34 test2.txt: 1l8ACiJHfnuMowm-4g58wDF2munddPkmk
2021/07/18 21:15:34 Failed to delete test2.txt: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions

Now, both test1 and test2 were created by user x@arrogantrabbit.com, and the shared folder is shared with user x@saspus.com as a Content Manager, a role that is supposed to include delete permissions, according to the screenshot.

Why can’t we delete the file then?
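For reference, here is a minimal sketch of what a delete helper like gcd_delete presumably does, using the official Go client (google.golang.org/api/drive/v3). It assumes a *drive.Service has already been built from one of the token files (a sketch of that follows further down); the lookup-by-name query and function name are mine, not duplicacy's code.

// Part of a hypothetical gcd helper package; assumes srv was built elsewhere.
package gcd

import (
	"context"
	"fmt"
	"log"

	drive "google.golang.org/api/drive/v3"
)

func deleteByName(ctx context.Context, srv *drive.Service, name string) error {
	// Look the file up by name. SupportsAllDrives/IncludeItemsFromAllDrives are
	// required for items that live on a shared drive.
	list, err := srv.Files.List().
		Q(fmt.Sprintf("name = '%s' and trashed = false", name)).
		SupportsAllDrives(true).
		IncludeItemsFromAllDrives(true).
		Fields("files(id, name)").
		Do()
	if err != nil {
		return err
	}
	if len(list.Files) == 0 {
		return fmt.Errorf("%s: not found", name)
	}
	id := list.Files[0].Id
	log.Printf("%s: %s", name, id)

	// files.delete removes the file permanently (it does not go to the trash);
	// presumably this is the call that returns the 403 above.
	return srv.Files.Delete(id).SupportsAllDrives(true).Do()
}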

Edit. As a matter of testing, I switched the sharing to Manager (as opposed to “Content Manager”) and deletion succeeded:

% ./gcd_delete ./gcd-token-duplicacy-saspus-shared-access.json 'test2.txt'
2021/07/18 21:27:29 test2.txt: 1l8ACiJHfnuMowm-4g58wDF2munddPkmk
2021/07/18 21:27:29 test2.txt has been successfully deleted

Then I uploaded a file again, changed the sharing mode to Content Manager again, and it failed again:

% ./gcd_delete ./gcd-token-duplicacy-saspus-shared-access.json 'test2.txt'
2021/07/18 21:28:39 test2.txt: 1Zu6PBhDtlOEFVzo39o8ZvQ59_pNIWp_H
2021/07/18 21:28:40 Failed to delete test2.txt: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions

So, is this a Google bug? Content Manager is supposed to be able to delete files. Or is it a duplicacy/test app/Go Google API module issue, perhaps requesting overly broad permissions to perform the delete? (Logging in to drive.google.com with the x@saspus.com account allows deleting the file in both cases, so the Google permissions themselves seem to work correctly; it has to be some programmatic issue on the app or library side.)
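On the "requested permissions" question, here is a sketch of how the Drive service might be built from one of those JSON files, with the OAuth scope spelled out explicitly. The file layout (a client secret plus a separately cached token) is an assumption; duplicacy's actual gcd token format may differ, and this is not its code.

package gcd

import (
	"context"
	"encoding/json"
	"os"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	drive "google.golang.org/api/drive/v3"
	"google.golang.org/api/option"
)

func newDriveService(ctx context.Context, secretPath, tokenPath string) (*drive.Service, error) {
	secret, err := os.ReadFile(secretPath)
	if err != nil {
		return nil, err
	}
	// drive.DriveScope is the full "https://www.googleapis.com/auth/drive" scope.
	// A narrower scope such as drive.DriveFileScope only covers files the app
	// itself created, which is one way an app can hit permission errors that the
	// web UI does not.
	config, err := google.ConfigFromJSON(secret, drive.DriveScope)
	if err != nil {
		return nil, err
	}
	tok := &oauth2.Token{}
	f, err := os.Open(tokenPath)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	if err := json.NewDecoder(f).Decode(tok); err != nil {
		return nil, err
	}
	return drive.NewService(ctx, option.WithHTTPClient(config.Client(ctx, tok)))
}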

Edit. And lastly, perhaps when pruning duplicacy should delete snapshots first and chunks last; otherwise, my datastore is now in a bit of a bad shape: the chunks were deleted but the snapshots were not, and check now fails because of those ghost snapshots. This could also happen if the prune is interrupted. The idea being that it’s better to leave extra chunks behind than to end up with a visible snapshot referencing missing chunks.

I’ve run into this issue a few times before, where I came to realise the reason for oodles of missing chunks was the prune job not finishing properly for one reason or another.

I can’t quite remember exactly, but I think this caused a cascading issue where the next prune tried to remove yet more snapshots, but aborted because it couldn’t find the chunks it expected to delete. And then even more chunks were removed from storage.

Perhaps Duplicacy should rename the snapshot file to e.g. 1.del first, delete chunks, then delete the marked snapshots? (I realise it doesn’t help to have *.del files left around, since they’re technically unusable, but they’d be an indication that something went wrong and that the storage may need a cleanup via -exhaustive.)


Did you mean that you can assign the user account as a Manager instead of a Content Manager of the shared drive to fix the issue?

Was there a difference in creating test1.txt and test2.txt that caused them to behave differently when deleted?

Snapshots are deleted in the fossil collection step, while the chunks are deleted in the fossil deletion step. If some snapshots are deleted and others are not (due to interruption or failures), then all chunks are still there, but in the form of fossils.

If snapshots were deleted first, you could end up with lots of unreferenced chunks, which can only be fixed by a prune with -exhaustive.
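A simplified sketch of the two-step scheme described above, assuming a plain file-system storage; the .fsl suffix and function names are illustrative only, not Duplicacy's actual implementation, and the real prune has additional checks before fossils are permanently removed.

package prune

import (
	"os"
	"path/filepath"
)

// Step 1: fossil collection. Unreferenced chunks are renamed to fossils and the
// expired snapshots are removed. If this is interrupted, the chunks still exist
// as fossils, so no remaining snapshot points at data that is gone.
func collectFossils(storage string, unreferencedChunks, expiredSnapshots []string) error {
	for _, c := range unreferencedChunks {
		p := filepath.Join(storage, "chunks", c)
		if err := os.Rename(p, p+".fsl"); err != nil {
			return err
		}
	}
	for _, s := range expiredSnapshots {
		if err := os.Remove(filepath.Join(storage, "snapshots", s)); err != nil {
			return err
		}
	}
	return nil
}

// Step 2: fossil deletion, run later, once the fossils are confirmed to still be
// unreferenced; only now is the data permanently gone.
func deleteFossils(storage string, fossils []string) error {
	for _, f := range fossils {
		if err := os.Remove(filepath.Join(storage, "chunks", f+".fsl")); err != nil {
			return err
		}
	}
	return nil
}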

Correct. Both the test app and duplicacy prune succeed.

No difference; both were uploaded from a local disk via the browser under the main account. The only difference was the json token file provided: deleting test1 used the connection to the main account, and deleting test2 used the connection to the second account. There is no other distinction; I could have used the same file.

The main takeaway here is as you said above: when sharing as Manager, deletion via the app works, but when sharing as Content Manager, it does not. Deletion via the Google web app works in both cases, so the issue must be somewhere in the app/framework/public API.

I understand that; it’s just that I feel leftover unreferenced chunks are arguably preferable to a scary message that check failed.

Not a big deal really.

This doesn’t apply to local file system (and SSH?) storage backends, am I correct? On this tangent, would it be a good idea to ‘fossilise’ snapshot files when deleting chunks, to make sure an interrupted or failed prune doesn’t leave bad snapshots around?

So… shall we leave it at that, with a workaround of making the user a Manager, or shall we maybe bubble the issue further up the library stack?