Securely Cleaning Up After Duplicacy

I’m getting ready to decommission a leased, remote system that’s been running Duplicacy for a few years. Since I don’t own the hardware or have physical access to it, the best I can do to get rid of anything sensitive is to scrub the relevant files with SRM. Because SRM takes a lot of time to do its job, I had a poke through Duplicacy’s by-products to identify the files it should be applied to. Here’s what I found that’s sensitive:

  • Preferences file: Contains login credentials for storage (which will be revoked) and the passphrase for the repository.
  • Cache of fossils: References to chunks that are going to disappear. Contains references to the names of the repository’s other storage IDs.
  • Cache of snapshots: Contains directories named for all of the storage IDs, but the files themselves just list chunks.

The cache of chunks is just files full of hashed values, so those can simply be removed conventionally.
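For what it’s worth, here’s a rough Python sketch of how I’m thinking of scripting the cleanup. It assumes the usual `.duplicacy` layout (a `preferences` file plus per-storage `cache/<storage-id>/chunks`, `snapshots`, and `fossils` directories); the repository path and the exact cache subpaths are placeholders, so adjust them to whatever is actually on disk:

```python
import shutil
import subprocess
from pathlib import Path

# Placeholder repository root; the .duplicacy layout below is assumed,
# so adjust the paths to match what is actually on disk.
repo = Path("/path/to/repository/.duplicacy")

# Files assumed to hold secrets or identifying metadata: scrub with srm.
sensitive = [repo / "preferences"]
sensitive += list(repo.glob("cache/*/fossils/**/*"))
sensitive += list(repo.glob("cache/*/snapshots/**/*"))

for path in sensitive:
    if path.is_file():
        # srm overwrites before unlinking, which is slow,
        # so it is only applied where it is actually needed.
        subprocess.run(["srm", str(path)], check=True)

# The chunk cache is only hashed data, so a conventional delete is enough.
for chunk_dir in repo.glob("cache/*/chunks"):
    shutil.rmtree(chunk_dir, ignore_errors=True)

# Remove whatever non-sensitive scaffolding remains.
shutil.rmtree(repo, ignore_errors=True)
```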

Am I missing anything else?

That should be enough, but you can also just remove the entire .duplicacy directory, where everything is stored.

Thanks, Gilbert. On this system, SRM takes about a minute per megabyte scrubbed, so I’m trying to trim out as much as I can to get this project done sooner. :slight_smile: