Hi!
I’ve been using the Duplicacy CLI for a while, but didn’t give much thought to proper storage design. Reading through the forum now, I understand that for a generic personal use case (mostly a mix of documents and photos) deduplication is more efficient with a smaller-than-default chunk size.
I have my primary backup storage on a local NAS box and copy it to a Wasabi bucket. Both storages are encrypted, use variable chunk size with the default average (4 MB), and hold a number of repositories. The current size of the storage folder is ~400 GB.
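For context, the existing storages were set up roughly like this (reconstructed from memory; the storage names, repository name, and URLs below are placeholders, not my real ones):

```
# primary storage on the NAS, encrypted, default chunk parameters (4 MB average)
duplicacy init -e -storage-name original_storage_name my_repository_name original_storage_url_on_nas

# copy-compatible Wasabi storage added to the same repository
duplicacy add -e -copy original_storage_name original_cloud_storage_name my_repository_name original_wasabi_storage_url
```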
Can you please help me with these questions:
- Does it make sense to migrate to a new storage where I set a 1 MB average chunk size?
- Is there a way to assess deduplication effectiveness for my particular data set, so I could compare the 4 MB and 1 MB chunk sizes? (The check sketch at the end of this post shows what I can measure right now.)
- If I do the migration, I plan to use something like this to init my new storages:
```
duplicacy add -e -c 1M -copy "original_storage_name" "new_local_storage_name" "my_repository_name" "new_local_storage_url"
duplicacy add -e -c 1M -copy "original_storage_name" "new_cloud_storage_name" "my_repository_name" "new_wasabi_storage_url"
```
And then run a copy operation for all my repositories, first to the local storage, then to the cloud (roughly as sketched below).
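To be concrete, I expect the copy step for each repository to look something like this (a sketch only; the storage names match the add commands above, and I'd run it from inside each repository's directory):

```
# seed the new local storage from the existing one
duplicacy copy -from original_storage_name -to new_local_storage_name

# then populate the new cloud storage (also made copy-compatible with the original)
duplicacy copy -from original_storage_name -to new_cloud_storage_name
```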
Is that the optimal way to do this, or do I need to change my approach?
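Regarding the second question: the only numbers I currently know how to get come from checking the existing 4 MB storage, which, as far as I understand, only reports deduplication statistics for the chunks already stored there rather than simulating a different chunk size:

```
# deduplication/usage statistics for the current storage, per revision and in total
duplicacy check -tabular

# or for a specific storage by name
duplicacy check -storage original_storage_name -tabular
```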