I am considering using Storj. I read the discussion here ( Should I change the default minimum, average and maximum chunk size (includes existing chunk analysis / Storj)? - #15 by saspus ) about trying to keep chunk sizes close to 64 MB. My backup set is mostly larger files (videos), along with a smattering of smaller document files that see occasional changes.
How would Duplicacy handle chunk sizing with a 64 MB target, 64 MB max, and 16 MB min? Would this let the large videos maximize the chunk sizes while the smaller collections of documents clump together more efficiently (especially if they may be edited)? Maybe I'm overthinking it? Thanks!
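For concreteness, here's roughly the init command I was picturing, using the -chunk-size / -min-chunk-size / -max-chunk-size options of the init command (the snapshot ID and storage URL are just placeholders, and I believe chunk sizes have to be powers of 2, which 16M and 64M both are):

```
# Sketch only -- snapshot ID and storage URL are placeholders
duplicacy init \
    -c 64M \
    -min 16M \
    -max 64M \
    my-backups <storage url>   # the actual Storj bucket URL would go here
```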
The default 4 MB chunks would cost $2.20 per TB, vs. 13.75 cents per TB with 64 MB chunks.
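If I'm reading those numbers right, they line up with Storj's per-segment fee (roughly $0.0000088 per segment per month, which is an assumption on my part), since each chunk smaller than 64 MB ends up as its own segment:

```
# Back-of-the-envelope check (the fee value is my assumption)
for chunk_mb in 4 64; do
    awk -v c="$chunk_mb" -v fee=0.0000088 'BEGIN {
        segments = 1000000 / c    # chunks (= Storj segments) needed to hold 1 TB
        printf "%2d MB chunks: %6d segments -> $%.4f per TB per month\n", c, segments, segments * fee
    }'
done
# 4 MB chunks: 250000 segments -> $2.2000 per TB per month
# 64 MB chunks: 15625 segments -> $0.1375 per TB per month
```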