Prune fails on GCD with "The limit for this folder's number of children (files and folders) has been exceeded."

I just noticed that prunes on a backup have started to fail:

$ duplicacy prune -exclusive -exhaustive -keep 0:360 -keep 30:90 -keep 7:14 -keep 1:3
Storage set to gcd://Backups/…
Keep no snapshots older than 360 days
Keep 1 snapshot every 30 day(s) if older than 90 day(s)
Keep 1 snapshot every 7 day(s) if older than 14 day(s)
Keep 1 snapshot every 1 day(s) if older than 3 day(s)
Fossil collection 46 found
Ignore snapshot … whose last revision was created 7 days ago
Fossils from collection 46 is eligible for deletion
Snapshot … revision 17735 was created after collection 46
[0] Maximum number of retries reached (backoff: 64, attempts: 15)
Failed to resurrect chunk a9…: googleapi: Error 403: The limit for this folder's number of children (files and folders) has been exceeded., numChildrenInNonRootLimitExceeded

Any idea how to get out of this state, or if any tweaks could be made to prevent it?

There is a limit of 500,000 files per folder on Google Drive. If you’re hitting it, either your backup is too large or your chunks folder is flat (which happens if you started using Duplicacy before the default nesting level of 1 was introduced).

You can fix it by creating a file named nesting in Google Drive (next to the config file) with the following content:

{
    "read-levels": [1, 2],
    "write-level": 2
}

But if the current nesting level is 0 (meaning all chunks sit directly under the chunks folder, not in subfolders), then the nesting file should be:

{
    "read-levels": [0, 1],
    "write-level": 1
}
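To make the levels concrete: a nesting level of n means the first n pairs of hex characters of a chunk ID become subdirectories. Here is an illustrative sketch (not Duplicacy’s actual code) of how read-levels and write-level map a chunk ID to candidate storage paths:

```python
# Illustrative model of Duplicacy chunk nesting: each level peels two hex
# characters off the front of the chunk ID and turns them into a subdirectory.

def chunk_path(chunk_id: str, level: int) -> str:
    dirs = [chunk_id[2 * i : 2 * i + 2] for i in range(level)]
    return "/".join(["chunks", *dirs, chunk_id[2 * level :]])

def read_paths(chunk_id: str, read_levels: list[int]) -> list[str]:
    # "read-levels" lists every layout to try when looking a chunk up;
    # "write-level" is just chunk_path(chunk_id, write_level) for new chunks.
    return [chunk_path(chunk_id, lvl) for lvl in read_levels]
```

So with `"read-levels": [0, 1]` a chunk whose ID starts with a95e… would be looked up both at chunks/a95e… (flat) and chunks/a9/5e… (nested one level), while `"write-level": 1` sends all new chunks to the nested layout.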

Thanks. It looks like my chunks folder was completely flat (level 0). I made the file, but I’m still getting the same error. Do I need to do something to migrate/free up space to unstick things?

You can try to move that chunk file manually. Find it under fossils, remove the a9 prefix from the file name but keep the rest, then move it into a subdirectory named a9 under chunks.

Run a check afterwards to make sure the chunk can be correctly located. Note that you need to run a recent CLI version for the nesting file to take effect.

Hopefully you won’t have many such chunks.

So, I’ve made progress! Thanks for that idea. GCD wouldn’t allow me to create any new files or folders inside chunks, even after moving some out, so I ended up writing a script to migrate all of my chunks into a new directory and then swapped the directories.
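For anyone in the same spot, the nesting logic of such a migration script might look roughly like this. This is a hypothetical local-filesystem sketch, not the actual script from the thread: it moves every flat chunk file chunks/&lt;id&gt; to chunks/&lt;first two chars&gt;/&lt;rest&gt;. On Google Drive you would do the equivalent through the Drive API or a tool like rclone, and, as noted above, build the nested tree in a fresh folder before swapping it in, since the full chunks folder refused new children.

```python
# Hypothetical sketch: nest a flat chunks directory one level deep,
# i.e. chunks/a95e1234 -> chunks/a9/5e1234. Local filesystem only.
import os
import shutil

def nest_chunks(chunks_dir: str) -> int:
    moved = 0
    for name in os.listdir(chunks_dir):
        src = os.path.join(chunks_dir, name)
        # Skip already-nested subdirectories and anything too short to split.
        if not os.path.isfile(src) or len(name) < 3:
            continue
        subdir = os.path.join(chunks_dir, name[:2])
        os.makedirs(subdir, exist_ok=True)
        shutil.move(src, os.path.join(subdir, name[2:]))
        moved += 1
    return moved
```

After a migration like this, a duplicacy check should confirm every chunk is still locatable via the read-levels in the nesting file.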

I ran duplicacy check and got a batch of errors referring to chunks that earlier prunes had tried to remove. Deleting my local cache helped, and I then ran a successful check, backup, and prune.

Everything was looking okay, but suddenly backups and prunes are both failing with:

Fetching chunk 625e…
Chunk 625e… can't be found

The chunk does not exist in the cache, but I can see it in the GCD storage, under chunks/62/5e…. Any idea why duplicacy can’t find it? Running with -d doesn’t produce any more output.

Do you have the nesting file in the storage? If you run duplicacy -d list, it should show this line:

Chunk read levels: [1 2], write level: 2

Interesting, so, when I run duplicacy -d list, I see this:

Chunk read levels: [0], write level: 0

However, here’s a screenshot from Google Drive:

Here’s the content of the nesting file, as downloaded from Drive:

{
    "read-levels": [0, 1],
    "write-level": 1
}

Any idea what I may have done wrong?

Are you running a recent CLI version? I remember a bug in the handling of the nesting file that was fixed a long time ago.

I am! Currently on 2.7.1.

I believe the issue may be that this storage was created before nesting existed, so fixed-nesting is missing from the config. Since the config is encrypted, I’m not sure how to inspect it directly; duplicacy -d info gave some information, but nothing like a raw dump of the unencrypted config file.

Unless it’s possible to modify the config directly, I wonder if I could make a new config with -copy and then copy it into the existing storage to enable nesting. Since I have a working script to nest un-nested chunks, the plan would be: run it on the chunks created since I un-stuck things, copy the new config file into place, remove the nesting file (since 1 seems to be the default nesting level), and run a check again.

Does that sound reasonable/possible? I already tried experimentally moving a new config into place (after making sure no backups were running or scheduled to run!), and duplicacy -d list correctly prints:

Chunk read levels: [0 1], write level: 1

I believe you can create a new config file by running the add command with both the -copy and -bit-identical options. Then copy the new config file over the old one (don’t forget to save a copy first). This new config file should have fixed-nesting set, but everything else will be the same.

Thanks for all your help. I think this is resolved: those options did, in fact, create a config that seems to be compatible with the old storage, and I ended up using my script to migrate everything to a one-level folder structure and delete the nesting file. Backups and prunes are both working now.

I have some missing chunks, but based on prune logs I think they’re unrelated, so I’m going through the steps in the wiki to clean them up.
