I’m working with Amazon S3 and having a lot of difficulty getting it to work with Deep Archive. It was working fine until I missed a few days of backups; now, to restart, Duplicacy is trying to check metadata chunks, but those first need to be restored out of Deep Archive.
It would be amazing if we could store those chunks in a separate folder, say “/metadata” instead of “/chunks”, so we could set lifecycle rules that move only regular chunks to Deep Archive and leave metadata out, avoiding this problem. Deep Archive is nearly a quarter of the price of the next-cheapest storage class… this could improve the usability of Duplicacy with AWS in a HUGE way and reduce cost for a lot of users.
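To make the idea concrete, here is roughly what the lifecycle rule could look like once metadata lives outside the chunks folder. This is just a sketch using boto3; the bucket name, the proposed “metadata/” prefix, and the 30-day transition delay are all my assumptions, not anything Duplicacy does today:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical setup: data chunks under chunks/, metadata under a
# (proposed) metadata/ prefix. The rule below transitions ONLY the
# chunks/ prefix to Deep Archive; metadata/ stays instantly readable.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-duplicacy-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "chunks-to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "chunks/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```

Because the rule is scoped by prefix, a restarted backup could read metadata immediately instead of waiting hours for a Deep Archive restore.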
I’m even willing to offer a bounty to fund development. I’d write the code myself, but I’m just not experienced enough. Right now I’m stuck restoring a huge number of chunks out of Deep Archive just to sort out which ones belong to metadata so I can restart my backups. This feature would remove all that trouble.
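For context, the workaround I’m stuck with looks roughly like this: request a restore for every archived object under the chunks prefix, because nothing in the key tells metadata chunks apart from data chunks. Again just a sketch; the bucket name is a placeholder and the Bulk tier / 7-day window are choices I made because they’re the cheapest:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-duplicacy-bucket"  # placeholder bucket name

# Brute-force workaround: kick off a Bulk restore for every object
# under chunks/ that is currently in Deep Archive.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix="chunks/"):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass") != "DEEP_ARCHIVE":
            continue  # already in an instantly readable tier
        try:
            s3.restore_object(
                Bucket=BUCKET,
                Key=obj["Key"],
                RestoreRequest={
                    "Days": 7,  # how long the restored copy stays readable
                    "GlacierJobParameters": {"Tier": "Bulk"},  # cheapest tier
                },
            )
        except ClientError as err:
            # RestoreAlreadyInProgress just means a request is pending.
            if err.response["Error"]["Code"] != "RestoreAlreadyInProgress":
                raise
```

With a separate metadata folder, none of this would be necessary: only actual data chunks would ever sit in Deep Archive.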