Trying to delete 1 repository out of 2 in the same target location

Hi,

One year ago (!!) I asked this question: 2 repositories in same target location: I have to juggle with config files to avoid "Chunk xxxxxxxxx can't be found"

I didn’t go through those steps because making a duplicate of my backups would have used way too much disk space.
Now I would just like to delete all the files related to the backup that was created separately.

To sum it up again:

  • I have 3 backups in the same Google Drive folder.
  • Backup1 and Backup2 share the same config file.
  • Backup3 was created separately.
  • I want to delete everything related to Backup3.

Using the relevant Backup3 config file, I try to prune the snapshots (prune -id Backup3), but I get an error message. Apparently, when I run this command, Duplicacy also tries to access Backup2's repository. What would be the right way to do what I want?

Thanks a lot!

To delete everything related to Backup3, first remove the folder under snapshots in Google Drive whose name is the repository id that Backup3 uses.

Then run this command:

duplicacy prune -exclusive -exhaustive

Make sure no backups are running when you run this prune command.
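
For example, assuming the storage root folder in Google Drive is named duplicacy-storage and the repository id is literally Backup3 (both names are hypothetical, substitute your own), the whole sequence would look roughly like this:

# In the Google Drive web interface, delete the folder:
#   duplicacy-storage/snapshots/Backup3
# Then, from any local repository initialized against this storage:
cd /path/to/local/repo
duplicacy prune -exclusive -exhaustive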

Thank you for your support, I really appreciate it!
I executed this command and for now nothing seems to be happening, but I guess Duplicacy is doing its thing, scanning thousands of chunks.
I will keep you updated on whether it succeeds.

:d: takes a long while when running -exhaustive, since it has to go and check each and every chunk in the storage. There may be billions if your backups are large enough or if you have a long history. Plus, Google Drive is extremely slow in itself, so this will for sure take a while.

For a 1.5 TB repository on Google Drive, a prune took me more than 21 hours.

Well well… I have 9 TB of data, so I guess I won’t prune anything…

Try running the prune like this:

duplicacy -d -log prune -all -threads 30 -keep 0:360 -keep 30:180 -keep 7:30 -keep 1:7 -exclusive -exhaustive

This runs multithreaded and shows much more logging.

Why the -keep options?

Thanks!

Ah, the command was copied from another thread, so it contained -keep. Since you’re pruning, I thought you might still want to prune the other repos as well, and those are the keep policies that I use.
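
For anyone finding this later: in Duplicacy, -keep n:m means "keep one snapshot every n days for snapshots older than m days", and n = 0 means "delete all snapshots older than m days". The options must be listed from oldest to newest, so the command above translates to:

-keep 0:360   delete all snapshots older than 360 days
-keep 30:180  keep 1 snapshot every 30 days for snapshots older than 180 days
-keep 7:30    keep 1 snapshot every 7 days for snapshots older than 30 days
-keep 1:7     keep 1 snapshot per day for snapshots older than 7 days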

OK, so you think 30 threads could really speed up the process? I’ll give it a try with:

duplicacy -d -log prune -all -threads 30 -exclusive -exhaustive

I went with 50 threads. The very positive thing is that all chunks were eventually listed within a couple of hours and the pruning has started. Of course I’m getting a ton of "API quota exceeded" errors, but also hundreds of chunks deleted every minute. I have around 700 GB of data to delete and it will probably take many hours, but at least it’s working.

On Google Drive I think that’s a bad decision. Google applies more rate limiting than what is shown on their website. I think anything more than 30 threads will lead to a temporary ban after a while, even on my business account. By temporary I mean that you will receive only "429 rate limit exceeded" for every request, even those sent every 30 seconds, unless you take at least a 30-60 minute break.
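
If you want to stay under that limit, a more conservative run of the same command (the thread count here is just an illustration, not a tested value) would be something like:

duplicacy -d -log prune -all -threads 20 -exclusive -exhaustive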

I’ve been there and have raised quite a few complaints about how :d: could work around this.

1 Like

It went well even with 50 threads! The job completed in 10 hours, and almost 1 TB of data was deleted.
Thanks a lot :slight_smile:
