Recreating backup job for future scheduling on new computer

Hello again everyone. Here’s my problem and I hope someone can give me a pointer in the right direction.

I recently wanted to back up the entire contents of an external HD to my B2 bucket, where all my Duplicacy snapshots live and where all my storage profiles point. Part of this backup would be simple deduplication, but a large percentage of it involved uploading fresh files.

Because Duplicacy tends to badly slow down my laptop’s general functionality while running a backup (we’re talking roughly 1.4TB, which takes a few days), I ran it on a second laptop I keep on standby, under a new backup ID and a new storage (with the same name as the main storage on my original laptop), pointed at the same single B2 bucket where everything else already is.

The backup completed successfully, deduplicating at first and then slowing down upload speed as it uploaded fresh files.

Now I want to reconnect the external HD to my original laptop and recreate the same backup job from the standby laptop there, so I can schedule daily incremental backups and revisions.

How do I do this? Do I need to recreate everything from scratch and let Duplicacy “upload” everything again while deduplicating, or is there some way of recreating the backup so it just runs revisions for changed files right off the bat?

Also:

Will the deduplication process consume bandwidth? (The upload counter does show a high MB/s figure while a backup is deduplicating.)

One other quick thing. I noticed that all my storage profiles under “Restore” in Duplicacy list the same backup IDs, despite me having pointed some of those backup IDs at different storage profiles (though all in the same B2 bucket). Is this normal?

My bucket has encryption enabled, by the way.

Thanks!

Soz I didn’t get around to replying to your previous post - been a bit busy IRL. :slight_smile:

Basically, if you move the source data to a new machine on a one-time basis and keep the same backup ID, you’ll be fine. There’s nothing intrinsically linking the data to the ID - it’s mainly used as a reference to the previous snapshot so incremental backups can run more efficiently. One set of data = one backup ID.

So long as you select the exact same data from the correct root with the same filters, the next backup will run fine. (Even if you don’t get it exactly right, you get the opportunity to fix it next time - i.e. you won’t lose data unless you start pruning away.) Either way, it shouldn’t re-upload all the chunks again - just a handful representing any slight differences in metadata (maybe).
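If it helps to see it spelled out, here’s a rough sketch of the CLI equivalent of what the GUI does when you point a backup at the existing storage with the existing ID. The names here are placeholders - use the backup ID you created on the standby laptop and your real bucket name:

```
# Run this on the original laptop, in the folder that is the backup root
# for the external HD. "external-hd" and "my-bucket" are placeholders.
cd /path/to/external-hd-root

# -e because the storage is encrypted; you'll be prompted for the storage
# password (and your B2 credentials) the first time.
duplicacy init -e external-hd b2://my-bucket

# Subsequent backups only upload chunks that don't already exist in the
# bucket, so this should mostly just record a new revision rather than
# re-uploading the 1.4TB.
duplicacy backup -stats
```

(Again, just a sketch - in the GUI the equivalent is adding a backup that uses your existing storage and the existing backup ID; Duplicacy then picks up the previous snapshot and carries on incrementally.)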

I don’t quite know what you mean by this - are you saying a backup ID is listed twice under the same storage? I’ve seen this and it’s most likely a visual bug with the pull-down menu, but I can’t remember if it was fixed in a later version of the GUI or not. The latest is v1.8.3 if you haven’t tried updating yet…