So I’ve been experimenting heavily with putting Duplicacy backups on Amazon S3 and transitioning them to S3 Glacier Deep Archive, which is extremely affordable.
But I ran into an interesting problem today. I wanted to list the files in a revision of one of my backups from the web GUI, but the listing failed. Looking at the web log, it’s because Duplicacy is trying to download a chunk that’s in Deep Archive, so I need to restore that chunk to standard access in S3… which I’m doing…
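For anyone curious, the one-off restore I’m doing looks roughly like this via boto3. This is just a sketch: the bucket name and chunk key are placeholders for my actual storage, and the tier/days values are what I happen to be using:

```python
import boto3

s3 = boto3.client("s3")

# One-off restore of a single chunk out of Deep Archive.
# Bucket and key are placeholders for my actual storage.
s3.restore_object(
    Bucket="my-duplicacy-bucket",
    Key="chunks/4a/1b2c3d...",  # the chunk the web log complained about
    RestoreRequest={
        "Days": 3,  # how long S3 keeps the restored copy around
        "GlacierJobParameters": {"Tier": "Standard"},  # ~12h for Deep Archive
    },
)

# Poll the restore status; the "Restore" field reads
# ongoing-request="false" once the copy is ready.
resp = s3.head_object(Bucket="my-duplicacy-bucket", Key="chunks/4a/1b2c3d...")
print(resp.get("Restore", "no restore in progress"))
```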
My fear is that more chunks will need to be restored before I can see the file list, and each restore takes about 12 hours… so I can’t do this one chunk at a time.
The main, super-important question here is… is there a way to get a list of which chunks contain the file list for a given revision, so I can bulk-restore just those?
Then I guess the next question is whether it’s possible to list the chunks needed to restore a particular file, or a whole revision… that way I can script the S3 retrieval process (see the sketch below) and restore only the chunks I need, avoiding much larger costs from Amazon… and hopefully only have to run one or two batches of retrieval requests?
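To make the scripting part concrete, here’s the kind of bulk-restore loop I have in mind, assuming I can get that chunk list out of Duplicacy somehow. The bucket, the chunk IDs, and the `chunks/xx/…` key layout are all assumptions on my part (the nesting is worth double-checking against your actual bucket):

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-duplicacy-bucket"  # placeholder
# The list I'm hoping Duplicacy can give me -- these IDs are made up.
CHUNK_IDS = ["1b2c3d...", "9f0e8d..."]

s3 = boto3.client("s3")

for chunk_id in CHUNK_IDS:
    # Assuming chunks are nested one level deep by the first two hex
    # digits, e.g. chunks/1b/2c3d... -- verify against your bucket.
    key = f"chunks/{chunk_id[:2]}/{chunk_id[2:]}"
    try:
        s3.restore_object(
            Bucket=BUCKET,
            Key=key,
            RestoreRequest={
                "Days": 3,
                # "Bulk" is the cheapest Deep Archive tier (~48h);
                # "Standard" is ~12h but costs more per GB retrieved.
                "GlacierJobParameters": {"Tier": "Bulk"},
            },
        )
        print(f"restore requested: {key}")
    except ClientError as e:
        if e.response["Error"]["Code"] == "RestoreAlreadyInProgress":
            print(f"already restoring: {key}")
        else:
            raise
```

The missing piece is still the chunk list itself, which is exactly what I’m asking about above.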