Prune Fails: GCD_RETRY: The user does not have sufficient permissions for this file

Duplicacy fails during a prune of GCD storage, like so:

Running prune command from /Library/Caches/Duplicacy/localhost/all
Options: [-log -d prune -storage Rabbit -keep 0:1800 -keep 7:30 -keep 1:7 -a]
2021-07-04 15:46:03.433 INFO STORAGE_SET Storage set to gcd://Backups@Duplicacy
2021-07-04 15:46:03.433 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_RABBIT_GCD_TOKEN

... [loading a bunch of stuff from snapshot cache]...
... [reporting a bunch of chunks as being already fossils]... 

2021-07-04 16:01:28.671 WARN CHUNK_FOSSILIZE Chunk 748091b4b48852fd2130131f4cbc7616971c173780cfeb612961811288fdc2e1 is already a fossil
2021-07-04 16:01:29.266 WARN CHUNK_FOSSILIZE Chunk 6baa13baea489bcee11cf4eb17e7baeff3869a5c6301cb2884875d7ceb51dfe0 is already a fossil
2021-07-04 16:01:29.871 WARN CHUNK_FOSSILIZE Chunk 352c17bc27839816b362b3513f54348a671233eeb1e464834204ee6d8b883340 is already a fossil
2021-07-04 16:01:30.017 INFO FOSSIL_COLLECT Fossil collection 2 saved
2021-07-04 16:01:30.824 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 2.77 seconds (backoff: 2, attempts: 1)
2021-07-04 16:01:34.110 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 6.23 seconds (backoff: 4, attempts: 2)
2021-07-04 16:01:40.825 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 6.46 seconds (backoff: 8, attempts: 3)

Clearing the cache does not help.

What can I look for to triage it further?

In case it's relevant: the repository is located in a shared folder in Google Drive; that folder is shared with another Google account with "Content Manager" permissions, and that other account is used to set up Duplicacy.

Does the log show which file that is? Can you check its permissions to see if it can be deleted by the account that runs the prune job?

No, that's the entire tail of the log. There is no mention of which file it encountered a failure with.

Attempt one: prune-20210704-160929.log

2021-07-04 16:29:29.398 WARN CHUNK_FOSSILIZE Chunk be5d223b4a0266f4b218edecf345657a1401e649c464522c2f94899b6e22841c is already a fossil
2021-07-04 16:29:29.970 WARN CHUNK_FOSSILIZE Chunk 6576d4f49125159558d4b691cd4b7682056d7a99f8634b83745dbef70bc6da63 is already a fossil
2021-07-04 16:29:30.540 WARN CHUNK_FOSSILIZE Chunk f16ab189465900475364a9c3f74ee5c432195aa974117bd63088a8cfd71c670c is already a fossil
2021-07-04 16:29:30.640 INFO FOSSIL_COLLECT Fossil collection 1 saved
2021-07-04 16:29:31.400 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 0.32 seconds (backoff: 2, attempts: 1)
2021-07-04 16:29:32.190 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 7.64 seconds (backoff: 4, attempts: 2)
2021-07-04 16:29:40.290 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 13.07 seconds (backoff: 8, attempts: 3)
2021-07-04 16:29:53.861 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 23.69 seconds (backoff: 16, attempts: 4)
2021-07-04 16:30:18.054 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 54.28 seconds (backoff: 32, attempts: 5)
2021-07-04 16:31:12.793 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 110.84 seconds (backoff: 64, attempts: 6)
2021-07-04 16:33:04.483 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 122.48 seconds (backoff: 64, attempts: 7)

Attempt two: prune-20210704-163533.log

2021-07-04 16:49:24.660 WARN CHUNK_FOSSILIZE Chunk 420c301e7ec96021b8ab8f802664fabbdda943aace9875b296ece08d28d5b3d6 is already a fossil
2021-07-04 16:49:25.167 WARN CHUNK_FOSSILIZE Chunk 5c362b02f36e4d7d14ea26c0e0c2165a7f304eaf7ca4706cc921ade92fa6583c is already a fossil
2021-07-04 16:49:25.706 WARN CHUNK_FOSSILIZE Chunk fa9a5d7cb3fca8ef77a2612fb5a96e95652b2c3ce0d9ec84baf01386147f4886 is already a fossil
2021-07-04 16:49:25.978 INFO FOSSIL_COLLECT Fossil collection 2 saved
2021-07-04 16:49:26.766 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 1.37 seconds (backoff: 2, attempts: 1)
2021-07-04 16:49:28.565 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 4.72 seconds (backoff: 4, attempts: 2)
2021-07-04 16:49:33.848 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 0.26 seconds (backoff: 8, attempts: 3)
2021-07-04 16:49:34.688 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 16.79 seconds (backoff: 16, attempts: 4)
2021-07-04 16:49:51.915 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 48.22 seconds (backoff: 32, attempts: 5)

I've logged in to drive.google.com with the user Duplicacy is using and created and deleted a new folder there successfully, so Google permissions seem OK.

How can I figure out which file it complains about? Is it the last "already a fossil" chunk logged? The first?

Not sure if relevant: at the beginning there is the message

2021-07-04 16:09:32.689 DEBUG STORAGE_NESTING Chunk read levels: [1], write level: 1

while the directory structure is chunks/XX/XXXXX..XX, i.e. it is not single level.

Edit.

Prune logs under /Library/Caches/Duplicacy/localhost/all/.duplicacy/logs/ contain only

Marked fossil 0ec236fb81269dadefc02144cf6f54326ed87ace42d7542762608a7ad06a5dce
Marked fossil d34577c35e5abdf31d8350b816cd9e05e527fa68f9e261b3e2f18e77ef1f1eef
Marked fossil 1880fb12e95efb3e63ecb8e22f1691a2412b8eba9ad5eb48c7c97b4d3a2f5829
Marked fossil be5d223b4a0266f4b218edecf345657a1401e649c464522c2f94899b6e22841c
Marked fossil 6576d4f49125159558d4b691cd4b7682056d7a99f8634b83745dbef70bc6da63
Marked fossil f16ab189465900475364a9c3f74ee5c432195aa974117bd63088a8cfd71c670c
Fossil collection 1 saved

and

Marked fossil 5b1be728795a1d1683e1f708bec73ceafe8d0dfec4d21d025260d35ef27fd298
Marked fossil 15476679e68311bf78ca5eea4ede94e0102ef3b1e54847347e743dc4064e325e
Marked fossil 420c301e7ec96021b8ab8f802664fabbdda943aace9875b296ece08d28d5b3d6
Marked fossil 5c362b02f36e4d7d14ea26c0e0c2165a7f304eaf7ca4706cc921ade92fa6583c
Marked fossil fa9a5d7cb3fca8ef77a2612fb5a96e95652b2c3ce0d9ec84baf01386147f4886
Fossil collection 2 saved

ending with "Fossil collection 1 saved" and "Fossil collection 2 saved" respectively, but no other messages.

Maybe it fails at something it is trying to do after dealing with fossils?

I believe it is one of the revision files that couldn't be removed. You can run the prune job with the same options but add -dry-run to find the first revision to be deleted, which should be the one that has the permission issue. Or check the fossil collection file 1 or 2 under .duplicacy/cache/fossils and look for the deleted_revisions list. The first revision that is in this list and still appears in the storage is the problematic revision.
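
For reference, here is a minimal sketch (not part of Duplicacy; the field names are only assumed from the fossil collection excerpt quoted further down) of a small Go program that prints the deleted_revisions list from such a file:

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

// fossilCollection mirrors the fields visible in the fossil collection JSON:
// end_time, last_revisions, deleted_revisions and fossils.
type fossilCollection struct {
	EndTime          int64            `json:"end_time"`
	LastRevisions    map[string]int   `json:"last_revisions"`
	DeletedRevisions map[string][]int `json:"deleted_revisions"`
	Fossils          []string         `json:"fossils"`
}

func main() {
	// Usage: ./dump_fossils <path to fossil collection file>
	data, err := ioutil.ReadFile(os.Args[1])
	if err != nil {
		log.Fatalf("Unable to read the fossil collection file: %v", err)
	}
	var collection fossilCollection
	if err := json.Unmarshal(data, &collection); err != nil {
		log.Fatalf("Failed to parse the fossil collection file: %v", err)
	}
	for snapshot, revisions := range collection.DeletedRevisions {
		fmt.Printf("%s: deleted revisions %v\n", snapshot, revisions)
	}
}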

Ok, the fossil collection under /Library/Caches/Duplicacy/localhost/all/.duplicacy/cache/Rabbit/fossils contains:

{"end_time":1625442565,
"last_revisions":{"obsidian-users":71},"deleted_revisions":{"obsidian-users":[4,5,6,7,8,9,10,11,12,13,14,15,16]},"fossils":
["chunks/68/679dc1dd72346bdc220bea371e2bf67d0043eabece29aaea1e9a35b2e80a84.fsl",
"chunks/26/1d0baaafeccb054d1b050f77156ed67ab489a31381ea4075d77bd436d0fdc6.fsl",

Using Cyberduck I see the same chunk, not under the chunks folder but under the fossil folders.

And I have successfully deleted it manually in Cyberduck, so the permissions are not an issue.

Just for reference, there is no corresponding file (68/679dc1dd72346bdc220bea371e2bf67d0043eabece29aaea1e9a35b2e80a84) in the chunks folder.

BTW, now that I have deleted that chunk manually, I get errors:

2021-07-08 14:35:56.604 ERROR CHUNK_FIND Chunk 68679dc1dd72346bdc220bea371e2bf67d0043eabece29aaea1e9a35b2e80a84 does not exist in the storage
Chunk 68679dc1dd72346bdc220bea371e2bf67d0043eabece29aaea1e9a35b2e80a84 does not exist in the storage

Probably that was a bad idea… I'll start a new backup to the same storage and hope it will be re-created.

I've put the file back and re-ran the prune. This time I got different symptoms:

2021-07-08 16:12:28.422 WARN CHUNK_FOSSILIZE Chunk aebdce4d147077aa8b28654c672d308ab0c4ff30249b5930c826632afcd8d6a9 is already a fossil
2021-07-08 16:12:29.068 WARN CHUNK_FOSSILIZE Chunk ea6d086d9072be2a7f5cc6c78a21001b23bb96a2a5af243d4aae903fe1115fcc is already a fossil
2021-07-08 16:12:59.835 DEBUG GCD_RETRY [0] Patch https://www.googleapis.com/drive/v3/files/1HgbehPSotYVhAGG3BBeOVoTuHSkvvs5D?addParents=1UJ9sylpllXJvay8j5NGHjjWgBxd14_38&alt=json&prettyPrint=false&removeParents=1vNsrIw5y_ay-OQ-mICmK5bF-R0k5QfT9&supportsAllDrives=true: Post https://duplicacy.com/gcd_refresh: dial tcp: lookup duplicacy.com: no such host; retrying after 2.02 seconds (backoff: 2, attempts: 1)
2021-07-08 16:13:32.051 DEBUG GCD_RETRY [0] Patch https://www.googleapis.com/drive/v3/files/1HgbehPSotYVhAGG3BBeOVoTuHSkvvs5D?addParents=1UJ9sylpllXJvay8j5NGHjjWgBxd14_38&alt=json&prettyPrint=false&removeParents=1vNsrIw5y_ay-OQ-mICmK5bF-R0k5QfT9&supportsAllDrives=true: Post https://duplicacy.com/gcd_refresh: dial tcp: lookup duplicacy.com: no such host; retrying after 7.59 seconds (backoff: 4, attempts: 2)
2021-07-08 16:14:09.822 DEBUG GCD_RETRY [0] Patch https://www.googleapis.com/drive/v3/files/1HgbehPSotYVhAGG3BBeOVoTuHSkvvs5D?addParents=1UJ9sylpllXJvay8j5NGHjjWgBxd14_38&alt=json&prettyPrint=false&removeParents=1vNsrIw5y_ay-OQ-mICmK5bF-R0k5QfT9&supportsAllDrives=true: Post https://duplicacy.com/gcd_refresh: dial tcp: lookup duplicacy.com: no such host; retrying after 12.37 seconds (backoff: 8, attempts: 3)

So the error now is lookup duplicacy.com: no such host. I'll now enable logging on the DNS server and retry.

As a side note:

There was no need for that, I asked Google to restore the state of the shared folder.

Ok, I ran it again and got the same original error:

2021-07-08 21:38:33.743 TRACE CHUNK_FOSSILIZE The chunk 5708800574be091362a8f7f54155cadf3c646fa7ab3092fd7697815137ad5e3e has been marked as a fossil
2021-07-08 21:38:34.321 WARN CHUNK_FOSSILIZE Chunk c8be6e7355aea991931d5228464dd445386af742a6d90b1c652c91b0d7dea96a is already a fossil
2021-07-08 21:38:34.462 INFO FOSSIL_COLLECT Fossil collection 1 saved
2021-07-08 21:38:35.290 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 2.32 seconds (backoff: 2, attempts: 1)
2021-07-08 21:38:38.229 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 2.07 seconds (backoff: 4, attempts: 2)
2021-07-08 21:38:40.790 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 14.87 seconds (backoff: 8, attempts: 3)
2021-07-08 21:38:56.265 DEBUG GCD_RETRY [0] The user does not have sufficient permissions for this file.; retrying after 16.40 seconds (backoff: 16, attempts: 4)

I'm going to run with -dry-run and see what it tries to do at this point.

Edit. It has been a few hours and it has progressed past the point where it failed before.

2021-07-09 00:13:22.645 WARN CHUNK_FOSSILIZE Chunk c8be6e7355aea991931d5228464dd445386af742a6d90b1c652c91b0d7dea96a is already a fossil
2021-07-09 00:13:23.261 WARN CHUNK_FOSSILIZE Chunk b62d72d11bafc5a5a67bb4857c18358558e452c6486c71bf057449b3d2b17d18 is already a fossil
2021-07-09 00:13:24.867 WARN CHUNK_FOSSILIZE Chunk 9d9801ab2a63580597e496baf227b072a6770c4b6c4abfd9ad7e7e508975bf5b is already a fossil
2021-07-09 00:13:26.292 WARN CHUNK_FOSSILIZE Chunk 968bc7ec83b2e255e081559e134aba77c07cf0e50102b0c3639ffa06d57df6f0 is already a fossil
...

Could it be intermittent issues at Google? Or it could be that now there is more stuff to fossilize as it's a new day. Watching it progress…

After some more experimenting (see the full logs here if interested: https://1drv.ms/u/s!ArUmnWK8FkGnkvhVY_zPz802KgFqOg?e=wV6tMw) it looks like it was failing at SNAPSHOT_DELETE.

I created another storage which points directly to the Google account that created the shared folder, as opposed to the account the folder was shared with, and the prune completed successfully.

2021-07-09 19:04:58.585 WARN CHUNK_FOSSILIZE Chunk 6ca17f466d2fa26e4e3b53c938adac0a38a6b60fe981833fa673b2170e8c21dc is already a fossil
2021-07-09 19:04:59.273 WARN CHUNK_FOSSILIZE Chunk 7844ba28a9aa46a1642c030b95c58259c37c82641aa6bdd6565083d226107d7c is already a fossil
2021-07-09 19:04:59.944 WARN CHUNK_FOSSILIZE Chunk e23dae6f5358746cabf3fc93aa8e9245b0f0f060b1affe1c8c912c2dd0aff2db is already a fossil
2021-07-09 19:05:00.617 WARN CHUNK_FOSSILIZE Chunk 273efbfe18c7b1eee64735417405be70646ad96adea7413f3b8222546b6c559d is already a fossil
2021-07-09 19:05:00.734 INFO FOSSIL_COLLECT Fossil collection 1 saved
2021-07-09 19:05:01.934 INFO SNAPSHOT_DELETE The snapshot obsidian-users at revision 4 has been removed
2021-07-09 19:05:03.067 INFO SNAPSHOT_DELETE The snapshot obsidian-users at revision 5 has been removed
2021-07-09 19:05:04.323 INFO SNAPSHOT_DELETE The snapshot obsidian-users at revision 6 has been removed
2021-07-09 19:05:05.617 INFO SNAPSHOT_DELETE The snapshot obsidian-users at revision 7 has been removed
2021-07-09 19:05:06.869 INFO SNAPSHOT_DELETE The snapshot obsidian-users at revision 8 has been removed
2021-07-09 19:05:08.077 INFO SNAPSHOT_DELETE The snapshot obsidian-users at revision 9 has been removed
...

Any idea why that could be the case? I confirmed that the snapshot files can be deleted by Cyberduck using the other Google account; maybe there is some issue with Duplicacy doing things differently when deleting them via the shared account?

For now, as a workaround, I can just use the connection to the main account to do the prune, but it would be great to get to the bottom of this.

Oh, and as a separate discussion: I think retrying on genuine permission issues makes no sense; instead, Duplicacy should skip the problematic file and continue.
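
To illustrate the idea, here is a hypothetical sketch (not Duplicacy's actual code) that treats a Drive API 403 insufficientFilePermissions error as non-retryable, so a caller could skip the file instead of backing off repeatedly:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/api/googleapi"
)

// isRetryable reports whether an error from the Drive API is worth retrying.
// A genuine 403 insufficientFilePermissions will not resolve itself, so it is
// treated as non-retryable; everything else (rate limits, transient 5xx) is retried.
func isRetryable(err error) bool {
	var apiErr *googleapi.Error
	if errors.As(err, &apiErr) && apiErr.Code == 403 {
		for _, item := range apiErr.Errors {
			if item.Reason == "insufficientFilePermissions" {
				return false
			}
		}
	}
	return true
}

func main() {
	err := &googleapi.Error{
		Code:   403,
		Errors: []googleapi.ErrorItem{{Reason: "insufficientFilePermissions"}},
	}
	fmt.Println(isRetryable(err)) // prints "false"
}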

This looks like the exact bug reported in Prune failure on Google team drives.

The code to remove a file in Google Drive is pretty straightforward and I don't know what could go wrong. If this happens again I can send you a small Go program that only deletes a file using the standard Google Drive API (which is what Duplicacy does) to see if it reports any error.

Indeed it does. Not sure why I did not find that topic; the issue wasn't resolved there either, anyway.

For me it is 100% reproducible when using the other account, the one the shared drive is shared with.

I have a weekly prune schedule on Sunday night, so here is the log from today (in its entirety; it's smaller because I pruned using the main account yesterday and there are fewer revisions to delete):

Running prune command from /Library/Caches/Duplicacy/localhost/all
Options: [-log prune -storage Rabbit -keep 0:1800 -keep 7:30 -keep 1:7 -a]
2021-07-11 00:20:41.928 INFO STORAGE_SET Storage set to gcd://Backups@Duplicacy
2021-07-11 00:20:45.106 INFO RETENTION_POLICY Keep no snapshots older than 1800 days
2021-07-11 00:20:45.106 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 30 day(s)
2021-07-11 00:20:45.106 INFO RETENTION_POLICY Keep 1 snapshot every 1 day(s) if older than 7 day(s)
2021-07-11 00:21:11.915 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 65
2021-07-11 00:21:12.598 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 66
2021-07-11 00:21:13.246 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 67
2021-07-11 00:21:13.883 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 69
2021-07-11 00:21:14.491 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 70
2021-07-11 00:21:15.097 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 71
2021-07-11 00:21:15.712 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 72
2021-07-11 00:21:16.317 INFO SNAPSHOT_DELETE Deleting snapshot obsidian-users at revision 73
2021-07-11 01:07:25.190 INFO FOSSIL_COLLECT Fossil collection 1 saved
2021-07-11 01:21:05.834 INFO GCD_RETRY [0] Maximum number of retries reached (backoff: 64, attempts: 15)
2021-07-11 01:21:05.834 ERROR SNAPSHOT_DELETE Failed to delete the snapshot obsidian-users at revision 65: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
Failed to delete the snapshot obsidian-users at revision 65: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions

Let's try the small proof-of-concept app.

Here is a program that can delete a given file. You'll need to compile it first and then provide a token file as well as the file to delete:

go build gcd_delete.go
./gcd_delete <token file> <file to delete>
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"strings"

	"golang.org/x/net/context"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	"google.golang.org/api/drive/v3"

	"google.golang.org/api/option"
)

type GCDConfig struct {
	ClientID     string          `json:"client_id"`
	ClientSecret string          `json:"client_secret"`
	Endpoint     oauth2.Endpoint `json:"end_point"`
	Token        oauth2.Token    `json:"token"`
}

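// getInfo looks up a file named `name` inside the folder whose id is `parent` and
// returns whether it exists along with its file id. An empty parent searches the
// root of My Drive plus items shared with the account.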
func getInfo(service *drive.Service, parent string, name string) (exist bool, fileID string) {
	query := "name = '" + name + "' and '" + parent + "' in parents and trashed != true"
	if parent == "" {
		query = "name = '" + name + "' and ('root' in parents or sharedWithMe) and trashed != true"
	}
	fileList, err := service.Files.List().Q(query).Fields("files(name, mimeType, id, size)").Do()
	if err != nil {
		fmt.Printf("Fail to check %s: %v\n", name, err)
		return false, ""
	}

	if len(fileList.Files) == 0 {
		return false, ""
	}

	if len(fileList.Files) > 1 {
		fmt.Printf("More than one file named '%s' exists\n", name)
		for _, file := range fileList.Files {
			fmt.Printf("%s %s %s %d\n", file.Name, file.MimeType, file.Id, file.Size)
		}
	}

	return true, fileList.Files[0].Id
}

func main() {
	ctx := context.Background()

	description, err := ioutil.ReadFile(os.Args[1])
	if err != nil {
		log.Fatalf("Unable to read the token file: %v", err)
		return
	}

	var object map[string]interface{}

	err = json.Unmarshal(description, &object)
	if err != nil {
		log.Fatalf("Failed to parse the token file: %v", err)
		return
	}

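	// Determine whether the token file is a service account key ("type": "service_account")
	// or a regular OAuth2 token file (client id, client secret and token).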
	isServiceAccount := false
	if value, ok := object["type"]; ok {
		if authType, ok := value.(string); ok && authType == "service_account" {
			isServiceAccount = true
		}
	}

	var tokenSource oauth2.TokenSource

	scope := drive.DriveScope

	if isServiceAccount {
		if newScope, ok := object["scope"]; ok {
			scope = newScope.(string)
		}
		config, err := google.JWTConfigFromJSON(description, scope)
		if err != nil {
			log.Fatalf("Failed to create the oauth config: %v", err)
			return
		}
		if subject, ok := object["subject"]; ok {
			config.Subject = subject.(string)
		}
		tokenSource = config.TokenSource(ctx)
	} else {
		gcdConfig := &GCDConfig{}
		if err := json.Unmarshal(description, gcdConfig); err != nil {
			log.Fatalf("Failed to create the oauth config: %v", err)
			return
		}

		config := oauth2.Config{
			ClientID:     gcdConfig.ClientID,
			ClientSecret: gcdConfig.ClientSecret,
			Endpoint:     gcdConfig.Endpoint,
		}
		tokenSource = config.TokenSource(ctx, &gcdConfig.Token)
	}

	service, err := drive.NewService(ctx, option.WithTokenSource(tokenSource))
	if err != nil {
		log.Fatalf("Unable to create drive Client %v", err)
		return
	}

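	// Walk the path components one by one, resolving each name to a file id inside
	// the previously found parent; the final id is the file to delete.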
	filePath := os.Args[2]
	names := strings.Split(filePath, "/")
	parentID := ""
	for _, name := range names {
		exist, fileID := getInfo(service, parentID, name)
		if !exist {
			log.Fatalf("%s doesn't exist", filePath)
			return
		}
		parentID = fileID
	}

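	// parentID now holds the id of the last path component; delete that file.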
	log.Printf("%s: %s", filePath, parentID)
	err = service.Files.Delete(parentID).SupportsAllDrives(true).Fields("id").Do()
	if err != nil {
		log.Fatalf("Failed to delete %s: %v", filePath, err)
	} else {
		log.Printf("%s has been successfully deleted", filePath)
	}

}

How would I specify the path to a file that resides on a shared drive?

e.g. Shared drives/Backups/test1.txt?

I have seemingly tried all combinations:

% ./gcd_delete ./gcd-token-full-access.json "Shared drives/Backups/test1.txt"
2021/07/14 22:33:49 Shared drives/Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "/Shared drives/Backups/test1.txt"
2021/07/14 22:33:56 /Shared drives/Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "Backups/test1.txt"
2021/07/14 22:34:04 Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "/Shared drives/Backups/test1.txt"
2021/07/14 22:34:21 /Shared drives/Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "/Backups/test1.txt"
2021/07/14 22:34:37 /Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "Backups/test1.txt"
2021/07/14 22:36:08 Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "test1.txt"
2021/07/14 22:38:35 test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "//Backups/test1.txt"
2021/07/14 22:39:30 //Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "/Backups/test1.txt"
2021/07/14 22:39:35 /Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "/Shared Drives/Backups/test1.txt"
2021/07/14 22:39:43 /Shared Drives/Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "Backups/test1.txt"
2021/07/14 22:41:35 Backups/test1.txt doesn't exist
% ./gcd_delete ./gcd-token-full-access.json "/Backups/test1.txt"
2021/07/14 22:41:39 /Backups/test1.txt doesn't exist

Out of curiosity, couldn't Rclone help with deleting this stubborn file?

It seems Duplicacy cannot delete the snapshot files, and only when the storage is accessed via the shared account; Cyberduck and Transmit have no issues deleting those files. I haven't tried rclone; I will try it using the same Duplicacy token, that's a good idea.

If you change this line:

		query = "name = '" + name + "' and ('root' in parents or sharedWithMe) and trashed != true"

to:

		query = "name = '" + name + "' and ('drive_id' in parents or sharedWithMe) and trashed != true"

It should be able to find files in the shared drive. The id of the shared drive is the last component of the URL when you open the shared drive in a browser.

So, the last component of the URL for my shared drive is 0AEl_WsV3iBEbUk9PVA.

I replace 'root' with '0AEl_WsV3iBEbUk9PVA' and then try to delete the file test1.txt, which resides directly there:

% vim gcd_delete.go
% ag 'parents or sharedWithMe'
gcd_delete.go
29:		query = "name = '" + name + "' and ('0AEl_WsV3iBEbUk9PVA' in parents or sharedWithMe) and trashed != true"
% GO111MODULE="off" go build gcd_delete.go
% ./gcd_delete ./gcd-token-full-access.json 'test1.txt'
2021/07/16 16:12:18 test1.txt doesn't exist

I'm probably doing something silly here…

My bad. The drive id isn't specified that way. You'll need to add a few API calls such as this:

	query := "name = '" + name + "' and '" + parent + "' in parents and trashed != true"
	if parent == "" {
		query = "name = '" + name + "' and ('root' in parents or sharedWithMe) and trashed != true"
	}
    query = query.DriveId('0AEl_WsV3iBEbUk9PVA').IncludeItemsFromAllDrives(true).Corpora("drive").SupportsAllDrives(true)

Not sure what you mean here. DriveId is not a method on a string, of course, and calling it on Q did not work either…

Sorry, I just copied that line from Duplicacy's source code, not realizing that the variables are different.

This has been tested to work:

        query := "name = '" + name + "' and '" + parent + "' in parents and trashed != true"
        if parent == "" {
                query = "name = '" + name + "' and ('0AEl_WsV3iBEbUk9PVA' in parents or sharedWithMe) and trashed != true"
        }
        fileList, err := service.Files.List().Q(query).Fields("files(name, mimeType, id, size)").DriveId("0AEl_WsV3iBEbUk9PVA").IncludeItemsFromAllDrives(true).Corpora("drive").SupportsAllDrives(true).Do()

The last fix to make it work is to change 'root' in parents to 'drive_id' in parents.
