Deduplication and storage bucket best practices?

Currently using Backblaze B2 as cloud storage with Duplicacy via the Web UI. When I set up the storage initially, I didn't really understand how deduplication worked, so I ended up creating a separate bucket for each job. For example:

  1. <servername>-pics - this bucket would catch all photos/videos backed up to the server from clients running Immich
  2. <servername>-docs - this would catch all important documents backed up to the server from various clients or direct upload

There are more, but this gives the idea of how I have it set up on the B2 side, with a different backup job in Duplicacy for each. Now that I understand deduplication a little better, I'm thinking I should have a single bucket where everything goes, so that dedupe can occur across all the data and make more efficient (and cheaper) use of the storage. Is that a correct assumption? Can deduplication occur across folders if they're backed up to the same bucket? For example, on the server I have /mnt/user/backups, which contains important docs, server configs, and desktop/laptop configs, as well as /mnt/user/Pictures, which holds the Immich library. That library is a mess because it started as a Picasa library, then iPhoto, then Google Photos, was exported via Google Takeout and imported into Immich, and now also receives auto-uploaded pics/videos from my clients.
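To make the single-bucket idea concrete, here's roughly what I'm picturing, written as CLI commands since that's easier to show than Web UI screenshots. The bucket name is a placeholder (standing in for a real `<servername>-backups` bucket), and I'm assuming Duplicacy's usual model of one storage shared by multiple snapshot IDs:

```shell
# Hypothetical single bucket "servername-backups" (name is a placeholder).
# One storage, multiple snapshot IDs: chunks are shared storage-wide,
# so identical data under docs and pics is only uploaded/stored once.

# Initialize the docs repository against the shared bucket:
cd /mnt/user/backups
duplicacy init server-docs b2://servername-backups

# Initialize the pics repository against the SAME bucket,
# just with a different snapshot ID:
cd /mnt/user/Pictures
duplicacy init server-pics b2://servername-backups

# Each job still backs up independently (run from its own directory):
duplicacy backup
```

The separate snapshot IDs keep the jobs, revisions, and restores distinct, while the shared storage is what lets deduplication apply across both folders. (These commands need the Duplicacy CLI and B2 credentials configured, so treat this as a sketch, not something I've run.)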

Just trying to wrap my head around best practices when setting this up, so I can give Duplicacy the best scenario for its cool feature set, and I've managed to confuse myself.