This is the issue tracker for Duplicacy. Feel free to report any bug, request a feature, or anything else related to Duplicacy.
To create a new issue or add a comment to an existing issue, you must sign up for an account with a valid email address.
Hi,
I’m evaluating cloud backup to B2. However, a smallish dataset is generating a really high rate of Class C transactions (b2_list_file_names) that will cost a lot more than the eventual storage!
Is there any setting or registry option where I can reduce the frequency of list operations? I see some other cloud backup GUIs have a polling timer that can be set to reduce this.
Regards,
Chris
For each chunk to be uploaded, Duplicacy needs to check whether the same chunk already exists in the storage by calling b2_list_file_names. The advantage of doing this is that, if you have multiple computers sharing the same set of files, only one copy needs to be saved in B2. In fact, Duplicacy is the only tool on the B2 integration page that can take advantage of such cross-computer deduplication.
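To illustrate the mechanism, here is a minimal sketch, not Duplicacy's actual code; the B2Lister interface and its ListFileNames method are hypothetical stand-ins for a client issuing the b2_list_file_names API call, which returns file names in alphabetical order starting at startFileName:

```go
package sketch

// B2Lister is a hypothetical stand-in for a client that issues the
// b2_list_file_names API call. The real call returns file names in
// alphabetical order, starting at startFileName.
type B2Lister interface {
	ListFileNames(startFileName string, maxFileCount int) ([]string, error)
}

// chunkExists asks for a single file name starting at the chunk's path.
// Because listing starts at startFileName, the chunk exists exactly when
// the first name returned matches it — one Class C transaction per chunk.
func chunkExists(b B2Lister, chunkPath string) (bool, error) {
	names, err := b.ListFileNames(chunkPath, 1)
	if err != nil {
		return false, err
	}
	return len(names) > 0 && names[0] == chunkPath, nil
}
```

If any computer has already uploaded a chunk with the same name, the lookup finds it and the upload is skipped, which is what makes the cross-computer deduplication work.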
I also want to assure you that the extra cost of calling b2_list_file_names is bounded and much lower than you might think. The default average chunk size is 4 MB, so the $0.004 that B2 charges for 1,000 Class C calls covers lookups for about 4 GB of data. That works out to about $0.001 per GB, only 20% of what you would pay to store 1 GB for a month. And it is a one-time cost. Moreover, if another computer happens to have the same files, you end up paying $0.001 per GB once, instead of $0.005 per GB every month.
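To make the arithmetic concrete, here is a small sketch assuming B2's published prices of $0.004 per 1,000 Class C transactions and $0.005 per GB-month of storage:

```go
package main

import "fmt"

func main() {
	const (
		chunkSizeMB       = 4.0   // Duplicacy's default average chunk size
		classCPer1000     = 0.004 // B2 price per 1,000 Class C transactions (USD)
		storagePerGBMonth = 0.005 // B2 storage price (USD per GB-month)
	)

	// One b2_list_file_names call per chunk, so 1,000 calls cover
	// 1,000 chunks, i.e. roughly 4 GB of data.
	gbPer1000Calls := 1000 * chunkSizeMB / 1000
	lookupPerGB := classCPer1000 / gbPer1000Calls

	fmt.Printf("one-time lookup cost: $%.4f per GB\n", lookupPerGB)
	fmt.Printf("as a share of one month's storage: %.0f%%\n",
		100*lookupPerGB/storagePerGBMonth)
}
```

Running this prints a one-time lookup cost of $0.0010 per GB, i.e. 20% of a single month's storage bill for the same data.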