Google Drive filling up

Hi. I'm fairly new to Duplicacy, so sorry if these questions are basic. I'm running the Duplicacy GUI on unRAID and back up my appdata folder weekly to Google Drive. I recently noticed my Google Drive is filling up and I think it is due to these backups.

Two questions

  1. If I delete the old backups in duplicacy, will they be removed from Google drive?
  2. Is there a way to make it so that only one or two backups are kept and old backups are automatically deleted?

Thanks

What is that appdata folder? What's in there? Maybe there is a lot of transient temporary data that you don't really want to back up? You can exclude that using filters.
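For reference, filters are just a list of patterns (the .duplicacy/filters file for the CLI; the Web UI has an include/exclude field per backup). A rough sketch, assuming cache and temp directories are what you'd want to drop; the exact patterns depend on your layout:

```
# "-" excludes a pattern, "+" includes one; "e:" excludes by regular expression.
# Paths are matched relative to the repository root.
e:(^|/)cache/
e:(^|/)tmp/
e:\.log$
```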

Yes, see the prune command with the -keep flags.
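Roughly, -keep n:m means "keep one snapshot every n days for snapshots older than m days", and n = 0 deletes everything past that age. A sketch (the retention numbers are just an example, not a recommendation):

```
# Delete snapshots older than 30 days; keep one per week for those older than 7 days
duplicacy prune -keep 0:30 -keep 7:7
```

In the Web UI these flags go in the options field of a prune job in your schedule. With weekly backups, something like -keep 0:15 would leave you with roughly the last two snapshots.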

Appdata is the folder where my docker containers store their data. I don't think there's much in there that's unnecessary to back up. The idea is that if my server dies, I can immediately restore all the docker containers from the backup.

I’ll have a look at the prune command. Thanks

Got it, makes sense. I confused it with Windows's APPDATA folder, which does contain heaps of transient data.

Did you actually confirm that it's the duplicacy datastore that's taking the space? Unless your containers create a lot of transient data (maybe not all of that data needs to be backed up?), the backups should not grow much, since each one is incremental.

Not sure how to check. The drive went from about 50GB to 100GB in the few weeks since I started using Duplicacy. When I go to the Google Drive manage-storage page, I can't see the backups folder, and there don't appear to be any other big files. Do you know how to see the size of the backups folder? It doesn't seem to show in Google Drive. I downloaded the client onto my Mac, and when I go to the backups folder to measure its size it just hangs on 'calculating size'.

How did you set up Google Drive? If you set it up with the defaults, it would have created a folder under your "My Drive" folder. Otherwise, if you used a service account or other special workarounds, it would be elsewhere.

Oh, so you do see the backup folder in My Drive? I misunderstood your previous statement then.

It may take a long while: duplicacy creates a lot of small files, and enumerating them can take hours.
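If the goal is just to see how much space the backups take, it may be quicker to ask duplicacy itself rather than Google Drive; a sketch, assuming you can run a check job (in the Web UI you can add the flag to a check job's options):

```
# Prints per-revision and total sizes as recorded in the storage,
# without enumerating the files on Google Drive
duplicacy check -tabular
```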

I've waited about 3 hours and it hasn't come up with a size.
Furthermore, I tried to add a prune job to my schedule, but it keeps failing with:

ERROR CHUNK_FIND Chunk ** does not exist in the storage
Chunk ** does not exist in the storage

I tried adding a check job as well, and there is a series of WARN SNAPSHOT_VALIDATE errors. I tried deleting the cache folder in my appdata, and this hasn't solved the issue. Any idea what I can do to fix this?
Thanks

Actually, now that I've deleted the cache folder, I get an initialization failure when running any job :frowning:

Post the full log, please.

Are you using CLI or WebUI?

I would delete the storage destination and create it again with the same name. Maybe some tokens expired, or there is some other authentication issue.

I wasn't sure how to post the full log, but I have made a pastebin of the duplicacy_web.log file.

I'm using the Duplicacy Web UI.

I see a few issues here:

2023/07/10 18:35:26 Failed to list the directory '': googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:243147021227'.

This is a limit you will occasionally hit because all credentials are issued from the same Google project. You can work around it by using your own Google project (you can search the forum for instructions), but this does not seem to be the issue at hand now.

The recent issue is this:

2023/08/06 17:17:37 Failed to save the preferences file: open /cache/localhost/all/.duplicacy/preferences.8b763a1b: stale NFS file handle

Duplicacy-web cannot save a preference file for the CLI to run on.

Are you using a docker container? What folder is mounted into /cache? There seems to be some NFS issue, I would reboot your unRAID and see if this fixes the problem.

Have a look at this: unraid "stale NFS file handle" at DuckDuckGo

It is an unRAID template docker container, and /mnt/user/appdata/duplicacy/cache is mounted into /cache. I'll try rebooting and see if it helps.
Also, I haven't been able to find out how to create a new Google project from the forums. Do I need a special Google account, or can I do it with a personal one? Thanks

Actually, I found in a thread about stale NFS errors on unRAID that you need to add noserverino to the mount flags. How can I do this in Duplicacy?

Which container?

How is the volume mounted? Maybe you can use bind mounts and specify options there? (I'm not familiar with unRAID and don't know what is and isn't available through the UI.)
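For illustration, with plain docker (outside whatever unRAID's UI generates) it might look like this; the container name, NFS address, and option string are placeholders, not values I know:

```
# See how the container's volumes are actually mounted
docker inspect -f '{{ json .Mounts }}' duplicacy

# A named volume created with the local driver accepts explicit mount options via -o
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/mnt/user/appdata/duplicacy/cache \
  duplicacy-cache

# ...then mount that volume into the container
docker run -v duplicacy-cache:/cache ...
```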

You should be able to create it with a personal account, but I believe in this case you would also need to provide a web app for token renewals.

There is a better alternative if you use Google Workspace: a service account, which does not require token renewals.

Here is how: Duplicacy backup to Google Drive with Service Account | Trinkets, Odds, and Ends

As a temporary workaround, you can omit mounting the /cache folder into the container altogether. The temporary duplicacy data and stats will then be created inside the container and will be lost if the container is replaced. Depending on which container you use and how often it gets updated, this may be an acceptable tradeoff.
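A minimal sketch of what that would look like with plain docker; the image name and the source-folder mapping are assumptions (3875 is duplicacy-web's usual default port), and unRAID's template editor should offer the equivalent:

```
# No -v ...:/cache mapping: duplicacy's cache stays in the container's
# writable layer and is lost when the container is recreated
docker run -d --name duplicacy-web \
  -v /mnt/user/appdata:/backuproot:ro \
  -p 3875:3875 \
  <duplicacy-web-image>
```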

I use hotio’s container and am not sure how to check how the volume is mounted.

Thanks for linking the Google workspace walkthrough, I’ll give it a go tonight. As for the stale NFS handle error, I’ll try rebooting the server tonight and see if it fixes the issue. Thanks for your help

I just checked and NFS is disabled on unraid by default. Does it need to be enabled for duplicacy to work?

Likely not.

You might get a better outcome searching the unRAID forums for the error message along with "container", as this is not specific to duplicacy; it is specific to unRAID and whatever container engine they are using, which may be relying on NFS for volume mounts.

Maybe they'll suggest how to work around it, e.g. by replacing volume mounts with bind mounts, where you can explicitly specify parameters.


Also, I attempted to follow this walkthrough, but it seems you can't do it without paying for Google Workspace. I don't have Workspace, so I don't think this will work. Perhaps I should just use OneDrive to avoid these hassles.

Don't worry about it. That error only happened a few months ago in your log. I don't think it's frequent enough to justify worrying about.

With OneDrive you will have significantly more hassles: they throttle more aggressively.

There seems to be a new error in the log too:
Failed to get the value from the keyring: keyring/dbus: Error connecting to dbus session, not registering SecretService provider: exec: “dbus-launch”: executable file not found in $PATH

duplicacy_web(1).zip (47.0 KB)