The recent Backblaze B2 corrupted-shard problem (Corrupted chunks) has made me rethink how data integrity is actually maintained.
S3 Standard-IA stores objects across three availability zones and costs $0.0125/GB per month, compared to B2's $0.005/GB, which no longer sounds so expensive. (Let's skip the discussion of egress and API costs for now.)
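To make that concrete (my own back-of-the-envelope math, using just the per-GB rates above): storing 1 TB for a month comes to roughly 1024 GB × $0.0125 ≈ $12.80 on Standard-IA versus 1024 GB × $0.005 ≈ $5.12 on B2.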
I know this has been discussed before (S3 Storage Class · Issue #238 · gilbertchen/duplicacy · GitHub), but I sincerely hope Duplicacy could officially support the S3 Standard-IA storage class. Glacier and Glacier Deep Archive are too cumbersome to work with, but Standard-IA should work with Duplicacy just like Standard storage, if I'm not mistaken.
Or at least, an official statement on using lifecycle rules would be perfect. Is setting a lifecycle rule to transition everything under the "chunks/" folder to Standard-IA after 30 days (the minimum transition age AWS allows) good to go? I'd be happy to pay the Standard price for the first month if the lifecycle rule works well with Duplicacy. A sketch of the rule I have in mind is below.
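For concreteness, here is a minimal sketch of such a rule using boto3 (the bucket name is a placeholder, and I'm assuming the usual "chunks/" prefix of a Duplicacy storage):

```python
import boto3

s3 = boto3.client("s3")

# Transition every object under chunks/ to Standard-IA 30 days after creation.
# 30 days is the minimum transition age AWS allows for Standard-IA.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-duplicacy-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "duplicacy-chunks-to-standard-ia",
                "Filter": {"Prefix": "chunks/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```

The idea is to leave snapshot metadata alone and only transition the chunk data, so that frequently read small files stay in Standard.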
Best Regards,
Tony