Duplicacy upload to google drive very slow after recent token change

Hi, I’ve been running duplicacy on docker to back up my unraid server to Google Drive for the past few years, and the schedules I’ve set up have functioned well until about 6 weeks ago, when the backups stopped. I read on the duplicacy forums that this may be due to a change with Google Drive tokens, so I replaced the token and set up the storage in duplicacy again. Now when I go to run the backup schedule, it runs at <1mb/s no matter how many threads I assign. Actually, it starts at around 12mb/s and then drops to <1mb/s.

Can anyone help me to increase the backup speed?
Thanks

Post your complete backup command.

Do you see any errors/warnings in the log?

I’m running duplicacy through the web GUI, so I’m not using a CLI command, but here are the contents of the most recent backup log:

Running backup command from /cache/localhost/2 to back up /backups
Options: [-log backup -storage goog -threads 4 -stats]
2024-12-29 02:03:20.374 INFO REPOSITORY_SET Repository set to /backups
2024-12-29 02:03:20.374 INFO STORAGE_SET Storage set to gcd://Backups
2024-12-29 02:03:28.151 INFO BACKUP_KEY RSA encryption is enabled
2024-12-29 02:03:28.957 INFO BACKUP_START No previous backup found
2024-12-29 02:03:28.966 INFO INCOMPLETE_LOAD Previous incomplete backup contains 15722 files and 134 chunks
2024-12-29 02:03:28.966 INFO BACKUP_LIST Listing all chunks
2024-12-29 02:04:03.506 INFO BACKUP_INDEXING Indexing /backups
2024-12-29 02:04:03.506 INFO SNAPSHOT_FILTER Parsing filter file /cache/localhost/2/.duplicacy/filters
2024-12-29 02:04:03.506 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)

The most recent check log shows this (not sure if it’s normal to show empty values on the last line):

Running check command from /cache/localhost/all
Options: [-log check -storage goog -a -tabular]
2024-12-29 02:00:01.357 INFO STORAGE_SET Storage set to gcd://Backups
2024-12-29 02:00:09.956 INFO SNAPSHOT_CHECK Listing all chunks
2024-12-29 02:03:11.726 INFO SNAPSHOT_CHECK 1 snapshots and 0 revisions
2024-12-29 02:03:11.726 INFO SNAPSHOT_CHECK Total chunk size is 10,024M in 2112 chunks
2024-12-29 02:03:11.726 INFO SNAPSHOT_CHECK 
  snap | rev |  | files | bytes | chunks | bytes | uniq | bytes | new | bytes |
     3 | all |  |       |       |      0 |     0 |    0 |     0 |     |       |

As for the duplicacy web log, I’ve included a pastebin of the most recent months below. The only error I can see is the occasional "Failed to get the value from the keyring: keyring/dbus: Error connecting to dbus session, not registering SecretService provider: exec: “dbus-launch”: executable file not found in $PATH"

Check that log, around the time the slowdown occurs, for any retries/throttling. Try reducing the number of threads to 2 or 1.
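If it helps, a quick way to scan a saved log for anything that looks like a retry or rate limit (the path is only a placeholder; point it at whichever log file the web UI wrote):

```
# Case-insensitive search for common retry/throttling hints in a duplicacy log.
# The exact wording varies between backends, so treat this as a rough filter.
grep -inE 'retry|backoff|429|rate limit' /path/to/backup.log
```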

The keyring/dbus warning is harmless and not related.

Btw ` marks up inline code. You wanted ```, which is a pre-formatted block. I’ve adjusted the formatting in your post. See more here: Formatting posts using markdown, BBCode, and HTML - Using Discourse - Discourse Meta

Thanks for the tips regarding my formatting, I’ll keep that in mind for the future. As for the backup log, I can’t see any obvious throttling errors, but it does say ‘no previous backup found’ when I should have more than a year of backups already present. I think this all stems from having to set up the Google Drive token again, since I had to set up both the storage and backup options again. When I set up the backup again, I included new backup IDs. Should I have just reused the old one?

EDIT: I tried stopping the backup and setting up a new backup with the old backup ID, but it still says no previous backup found.

If you created a new token, it won’t let duplicacy see old backups — because of the new scope, it can only see data it created. You would want to create a new storage in a new folder with the new token and use duplicacy copy to copy the data from the old location.
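For reference, if you were doing this from the CLI rather than the web UI, the rough shape of it would be something like this (storage names, the backup ID, and the gcd:// paths are all placeholders):

```
# Run from the repository folder (where the .duplicacy folder lives).
cd /path/to/repository

# Add the new-token storage as a copy-compatible destination of the existing one,
# then copy the existing revisions into it.
duplicacy add -copy default new-gcd my-backup-id gcd://DuplicacyNew
duplicacy copy -from default -to new-gcd
```

In the web UI the equivalent is adding a second storage and scheduling a copy job, as described below.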

I vaguely remember there was a thread recently on this issue. I’ll try to find it. This one: Duplicacy doesn’t recognize backups after migrating to a new Google Drive account – Need help! - #4 by saspus

But this does not look like your use case — you didn’t copy the dataset, so the app should continue seeing it. Is the path on Google Drive the same as before? Is the config file still there?

The path to Google Drive was initially the same as before (gcd://Backups), but I’ve since cancelled that backup and set up a new storage with a new path (gcd://Duplicacy), then manually copied the contents from the old Google Drive folder to the new one. When I did this and went to set up the new backup to this location, it detected the previous backup and asked me to input a new name and password. I’ve done this and it started at 90mb/s, but it has now slowed to 3mb/s again and still says no previous backups found.

Perhaps it’s because I copied the data manually. How do you do a duplicacy copy through the web GUI?

The same way you create any other operation: create a schedule to do it, but unclick every day of the week, and then click to run it once.

OK thanks. I think I worked out what happened, though. I have a schedule that checks, prunes and backs up every week, and I hadn’t paid attention for the last couple of months when the backups started failing due to the Google token issue. I think the prune command has deleted all my previous backups because they were too old, so now I have to start fresh. It’s slated to take 5 days at the current speed (2.5mb/s); is that normal?

Prune would not have deleted all revisions, regardless of settings (unless you used the -exclusive flag). If the token were bad, prune and check would not have worked in the first place. I thought you had already found the culprit to be copying the data manually.
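For context, the destructive behaviour comes from adding -exclusive; a normal retention setup looks something like this (the numbers are purely illustrative, not your actual schedule):

```
# -keep <n:m> keeps one revision every n days for revisions older than m days;
# n = 0 means delete revisions older than m days. Values are illustrative only.
duplicacy prune -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7
```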

Two and a half megabits per second is extremely slow even for Google Drive.

Check if you are CPU bound.

Sorry for all the questions but I’m not particularly well versed with unraid or duplicacy. How can I check if I’m CPU bound? Is that by trying different numbers of threads?

Actually I just checked on the unraid dashboard and my CPU is only under very light (<10%) load so I don’t think I’m CPU bound. Any other steps I can take to troubleshoot? Thanks

I don’t know how unraid shows CPU utilization — Unix style or Windows style. I.e. if you have 16 CPU cores and all are busy, will it show 100% or 1600%?

Depending on that, 10% can mean a fully saturated core: for example, if you back up in one thread and have 16 CPU cores.
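One way to see per-core load directly, assuming mpstat (sysstat) or at least top is available on the host:

```
# Per-core utilization, refreshed every second: one core pinned near 100%
# while the rest sit idle means the backup is CPU bound on a single thread.
mpstat -P ALL 1

# Alternatively, run top and press "1" to toggle the per-core view.
top
```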

I would also check disk queue sizes/disk busy/disk IO. Maybe you are disk IO limited?
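And for disk IO, something along these lines (again assuming sysstat/iostat is installed):

```
# Extended per-device stats every 2 seconds: watch %util and await on the
# source disks; sustained %util near 100% points at a disk bottleneck.
iostat -x 2
```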

Lastly, I would run the duplicacy benchmark command against the remote. (I don’t remember if you can do it in the web UI — if not, use ssh to run the duplicacy CLI from a new, empty repository connected to the same storage.) This should show the performance of various operations as seen by duplicacy.

It’s a fairly new CPU and I don’t have other processes running, so I don’t think that’s where the bottleneck is. As for disk reads/writes, they’re within the normal range too.

I tried to run a duplicacy benchmark command from within the docker container, but it says that the duplicacy command doesn’t exist. I might just wait out the 7 days this backup will take and hopefully future backups will be much quicker.

The duplicacy CLI is located under /config/bin in one of the containers floating around. If you created your own container, then you would know where it is.

You would need to cd to the cache folder (to where the .duplicacy folder is located) and run it from there. You can use find to locate those paths:

find / -type d -name '.duplicacy'
find / -type f -name 'duplicacy*'

But running it in the container is cumbersome. Instead, run it on the unraid host in a temp folder (rough sketch of the commands below):

  • SSH to unraid.
  • cd to /tmp.
  • Download the duplicacy CLI there with wget or curl.
  • Unzip it.
  • Set the execute flag (chmod +x …).
  • Initialize a new empty repository there.
  • Run benchmark.

You then don’t have to clean anything up — /tmp folder is emptied on reboot. Or you could delete .duplicacy folder.
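A rough sketch of those steps; the release version/URL is a placeholder (use whatever the current CLI build is on the duplicacy GitHub releases page), and gcd://Backups stands in for your storage URL:

```
# On the unraid host, over SSH.
cd /tmp

# Download the CLI and make it executable (recent releases are plain binaries,
# so there may be nothing to unzip).
wget https://github.com/gilbertchen/duplicacy/releases/download/v3.2.3/duplicacy_linux_x64_3.2.3
chmod +x duplicacy_linux_x64_3.2.3

# Initialize a throwaway repository against the same storage (-e because the
# storage is encrypted); init prompts for the Google Drive token file and password.
mkdir bench && cd bench
../duplicacy_linux_x64_3.2.3 init -e bench-test gcd://Backups

# Measure chunking/upload/download speed as duplicacy sees it.
../duplicacy_linux_x64_3.2.3 benchmark
```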

What is that “normal range”?

To be clear, there are several issues here:

  • your duplicacy not being able to access its datastore on Google Drive because it was copied. You need to fix this first.
  • as a result, it’s making a full backup, touching every file, and I suspect it is limited by your disk subsystem performance (especially if you use various unraid features). This would need to be addressed next. But once you fix the first one, the second one will be hidden, so I would start troubleshooting the second one while it’s still visible.