Best practices when choosing B2 buckets

I’m experimenting with B2 and so far I’m quite happy.

I don’t really know why, but I’ve ended up creating one bucket per <machine>-<directory> pair. So, for example, if I’m backing up directory /home/foo of machine bar, I create a B2 bucket named bar-home-foo.
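For example, initializing the backup of that directory looks roughly like this (the snapshot ID and bucket name are just my own convention):

```
# Back up /home/foo of machine "bar" into its own bucket
cd /home/foo
duplicacy init bar-home-foo b2://bar-home-foo
duplicacy backup
```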

After a while I noticed that I’m creating lots of buckets, but on the other hand I feel more in control of what goes where. If in the future I want to delete a bucket, it will affect only a specific directory.

I’m wondering how Duplicacy is used in production environments when sending data to B2, and whether there are any tips in this regard.

You can do that, but bucket names are shared across all B2 users, so you may later find that a bucket name you want has already been registered by someone else. It is better to add a prefix to your bucket names that is likely to be used only by you.
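For example, with `jdoe` standing in for whatever unique prefix you pick (purely an illustration):

```
# A prefixed bucket name is much less likely to collide with other B2 users'
duplicacy init bar-home-foo b2://jdoe-bar-home-foo
```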

Currently Duplicacy doesn’t support using a directory inside a bucket as the storage, rather than the entire bucket. Do you think adding this feature would help in your case?

Would love to be able to specify a B2 bucket subfolder as the storage location. As a potential enterprise customer, it would be nice not to have to use up one of the maximum 100 B2 buckets per account just to create a separate storage location. I believe you support subfolders for Amazon S3 buckets.
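To illustrate what I mean (my understanding of the S3 storage URL format may be off, and the B2 line is purely hypothetical syntax):

```
# S3: a directory inside the bucket is supported
duplicacy init mywork s3://us-east-1@amazon.com/mybucket/path/to/storage

# B2: a hypothetical equivalent, not supported today
duplicacy init mywork b2://mybucket/path/to/storage
```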

I just started checking out Duplicacy and was confused at first when I found that specifying a subfolder is accepted, but appears to be silently ignored by the Duplicacy client. I created a forum account just to reply to this thread 🙂


Can you submit a feature request on GitHub for that?

Feature request added here: https://github.com/gilbertchen/duplicacy/issues/487

Thanks!

I would also like to see this feature, thank you.

+1

I use hubiC, Jottacloud, OneDrive (Business), OpenDrive and B2, and B2 is the only one where I cannot use sub-folders. I will neither blame Duplicacy nor B2, but I guess it would be great to have this feature.

I hope it will be available in the next release. I will then have to figure out how to move the data already in B2. I have not seen any way to copy or move data between buckets within the B2 web page 🙁

You can do this easily; there are two options (maybe more):

- use Rclone to copy the files directly between the two buckets, or
- use Duplicacy itself to copy the backup into a new storage.

I think using Rclone will be easier in this case.
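A rough sketch of both, with placeholder remote, storage, and bucket names:

```
# Option 1: Rclone, assuming you have configured a B2 remote named "b2"
rclone copy b2:old-bucket b2:new-bucket

# Option 2: Duplicacy's own copy command between two storages
duplicacy add -copy default b2new mywork b2://new-bucket
duplicacy copy -from default -to b2new
```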

My guess is as good as yours, but I don’t see much of a chance that this will happen, given that this feature request hasn’t even been marked as #planned yet. Or did @gchen just forget to do so?

I can agree with this approach; however, some storage providers charge you when you download data (Backblaze, for instance), so copying data from one place to another by downloading it with Rclone or Duplicacy itself, even within the same storage provider, will incur costs. For example, at an illustrative egress rate of $0.01/GB, moving 1 TB this way would cost around $10 in download fees alone.

In any case, I really appreciate your response and help. As @Christoph mentioned above, this feature is not even #planned (sorry, I understood from @ben-ptl's comment that it was #requested, but that does not mean it is #planned 😄). I can survive with different buckets.

Keep up the good work. You have my 5-year license (even though I only use the Linux CLI), and you will have my next 5 too. I encourage all users to buy a license for this fine piece of software.
