Cache usage details

Duplicacy maintains a local cache under the .duplicacy/cache folder in the repository. Only snapshot (metadata) chunks may be stored in this local cache; file chunks are never cached.

At the end of a backup operation, Duplicacy cleans up the local cache so that only the chunks composing the snapshot file from the last backup remain; all other chunks are removed. However, if the prune command has been run before (which leaves a .duplicacy/collection folder in the repository), the backup command won't perform any cache cleanup and instead defers it to prune.
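A quick way to tell which behavior to expect is to look for the collection folder. This is a minimal sketch; the repository path is a hypothetical placeholder you should adjust:

```shell
# Assumed repository location -- replace with your own.
REPO="${REPO:-$HOME/my-repo}"

if [ -d "$REPO/.duplicacy/collection" ]; then
  # A previous prune left this folder behind, so the next backup
  # defers cache cleanup to the prune command.
  echo "collection folder present: backup defers cache cleanup to prune"
else
  echo "no collection folder: backup will clean the cache itself"
fi
```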

At the end of a prune operation, Duplicacy removes all chunks from the local cache except those composing the snapshot file from the last backup (the same chunks the backup command would keep), plus the chunks that record which chunks are referenced by all backups from all repositories connected to the same storage URL.

Other commands, such as list and check, do not clean up the local cache at all, so the cache may keep growing if many of these commands run consecutively. However, once a backup or prune command is invoked, the local cache should shrink back to its normal size.
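If you want to keep an eye on this growth, you can check the cache size directly. A small sketch, assuming the default cache layout and a hypothetical repository path:

```shell
# Assumed repository location -- replace with your own.
REPO="${REPO:-$HOME/my-repo}"
CACHE="$REPO/.duplicacy/cache"

# Report the cache size if the cache exists; after many list/check
# runs this number can grow until the next backup or prune trims it.
if [ -d "$CACHE" ]; then
  du -sh "$CACHE"
else
  echo "no cache at $CACHE"
fi
```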

Cache locations in the Web GUI

For the Web GUI, each backup has a separate repository directory at ~/.duplicacy-web/repositories/localhost/n (where n is the index of each backup), and the cache is still located under each repository (.duplicacy/cache). Other operations (check, prune, and copy) share a single repository at ~/.duplicacy-web/repositories/localhost/all.
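To see how much space the Web GUI caches are using in total, you can walk all of the per-backup repositories plus the shared one. This sketch assumes the default ~/.duplicacy-web location (it will differ if you changed the temporary directory in Settings):

```shell
# Report the size of each Web GUI repository cache, including the
# shared "all" repository used by check, prune, and copy.
for dir in "$HOME"/.duplicacy-web/repositories/localhost/*/.duplicacy/cache; do
  if [ -d "$dir" ]; then
    du -sh "$dir"
  fi
done
```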

Note that if you change the temporary directory in the Settings page then all repositories will be moved to the new temporary directory.

Cache folder is extremely big! :scream:

Please read Cache folder is extremely big! 😱.


Duplicacy CLI 2.2.3
OpenSUSE 15.1

I’m doing a CLI restore to a new location, and the cache directory stays at a consistent 40 MB while the restore is running.

Restore command details mentions that a restore with -overwrite will benefit from this cache and go much faster if interrupted, but in my experience that’s not the case. That makes sense, considering the cache directory is only 40 MB and the restore is multiple GBs.

Any clarification about why this may be?

I believe the purpose of the -overwrite switch, in addition to actually overwriting existing files, is that it reuses existing chunks within those files to rebuild each file in place; i.e., it doesn’t have to download a chunk from storage if the same data already exists at the same location in that file.
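So resuming an interrupted restore would look like re-running the same command over the partially restored files. An illustrative sketch, with a hypothetical target path and revision number; the reuse comes from -overwrite matching chunks already on disk, not from .duplicacy/cache:

```shell
# Guard so this is a no-op where the duplicacy CLI isn't installed.
if command -v duplicacy >/dev/null 2>&1; then
  # Hypothetical restore target; must be an initialized repository.
  cd /restore/target &&
  # Re-running with -overwrite rebuilds files in place, skipping
  # downloads for chunks that already match at the same offsets.
  duplicacy restore -r 42 -overwrite -stats
else
  echo "duplicacy CLI not found; example not run"
fi
```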

As far as the cache is concerned, it’s irrelevant…

The cache is for metadata.


What’s the impact of the cache disappearing? Currently I have config/, cache/, and log/ directories on an SSD-backed BTRFS RAID10 pool along with all my other container apps, but I also have a single SSD for scratch space (transcoding, etc.). If I moved the cache directory to the scratch space and that SSD died, what would the impact be on Duplicacy when I replace the disk and restart the container without any cache data?

The cache doesn’t affect the correctness of any operation. You can remove the cache at any time; the only cost is that Duplicacy will take more time to re-download some metadata chunks.
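In other words, clearing it is safe whenever you need the space back. A minimal sketch, assuming a hypothetical repository path; the next backup, check, or restore simply re-fetches the metadata chunks it needs:

```shell
# Assumed repository location -- replace with your own.
REPO="${REPO:-$HOME/my-repo}"

# Deleting the cache is safe: it holds only metadata chunks that
# Duplicacy can re-download from the storage on demand.
rm -rf "$REPO/.duplicacy/cache"
```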