Advice on Moving Systems

Been using Duplicacy for a year now and it has been great, though my hardware is about to be replaced and all of the files will be manually migrated to a new system with a different directory structure. I am preparing now for that move and would like it to be as seamless as possible, so I have a few questions. Hoping for some general advice:

License migration
Haven’t dug into this yet, but I think if the hostname changes (which it will in my case) I just need to re-register the existing license with the new hostname. It sounds like once I do the transfer to the new system, I will no longer be able to run backups on the old system. (This kind of ties into the topic below.)

Data migration
I am currently using Wasabi for storage and plan to continue, but I am curious how much of the data stored in the old backups will be useful to the new system. For example, while the files being backed up are the same, they will be in different directories on a different system. Also, I am not sure whether the new system will “pick up and recognize” what is already in Wasabi vs. what is local.

It seems to me that I might want to begin a brand new backup set and then remove the old data at some point. This will orphan the old set, which will have to be removed manually. It will also force a full upload of the new backup set, since everything is starting fresh.

The alternative is to have the new system continue to use the old backup set (somehow it would need to be smart enough to deal with that). Ideally, this means it “picks up” the existing old set and somehow recognizes that the directories changed but the files are the same.

I appreciate any thoughts on best direction to take.

Cheers.

It seems to me that you can continue the backups normally, just by changing the snapshot IDs to represent your new folder structure on the new computer. Chunks that have already been sent to Wasabi will be used for deduplication and you will only have new snapshot IDs.

It all depends on how the repositories -> storages (Wasabi buckets) were organized on your old computer and how they will be organized on the new one.
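To make that concrete in CLI terms (the Web UI drives the same engine underneath), it would look roughly like the sketch below. The repository path is made up, and bid-2 / main-nas-backups are just placeholder names taken from this thread, so adjust to taste:

  # on the new system, from the new top-level folder you want backed up
  cd /path/to/new/repository
  duplicacy init -storage-name main-nas-backups bid-2 wasabi://us-east-2@s3.us-east-2.wasabisys.com/12345-duplicacy/nas-backups
  duplicacy backup -stats -threads 8

Since the storage already exists, init just attaches the new repository to it; the first backup then re-chunks everything locally and only uploads what the bucket doesn’t already have.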

Well, that seems encouraging. I run one backup, one check and one prune daily using one backup id (called bid-1).

Are you saying that on the new system I should create bid-2, point it to the same Wasabi location (i.e. wasabi://us-east-2@s3.us-east-2.wasabisys.com/12345-duplicacy/nas-backups) and just run things normally? I’m not sure what you mean by snapshot ID (the backup ID is what I have assigned).

Somehow bid-2 will know about the files already in Wasabi from bid-1, but will the pruning policy still cover both the new and the old set (i.e. expire both as needed)? Or am I missing something?

Here are some log details on the set now running, if that helps…

Backup
Options: [-log prune -storage main-nas-backups -keep 0:1800 -keep 90:730 -keep 30:365 -keep 7:180 -keep 3:90 -threads 8 -a]
2021-05-05 00:20:49.024 INFO STORAGE_SET Storage set to wasabi://us-east-2@s3.us-east-2.wasabisys.com/12345-duplicacy/nas-backups

Check
2021-05-05 00:04:20.793 INFO SNAPSHOT_CHECK Listing all chunks
2021-05-05 00:06:00.519 INFO SNAPSHOT_CHECK 1 snapshots and 145 revisions
2021-05-05 00:06:00.534 INFO SNAPSHOT_CHECK Total chunk size is 1677G in 371673 chunks
2021-05-05 00:06:00.558 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 1 exist
2021-05-05 00:06:03.578 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 35 exist
2021-05-05 00:06:05.774 INFO SNAPSHOT_CHECK All chunks referenced by snapshot bid-1 at revision 71 exist

Prune
2021-05-05 00:20:49.873 INFO RETENTION_POLICY Keep no snapshots older than 1800 days
2021-05-05 00:20:49.873 INFO RETENTION_POLICY Keep 1 snapshot every 90 day(s) if older than 730 day(s)
2021-05-05 00:20:49.873 INFO RETENTION_POLICY Keep 1 snapshot every 30 day(s) if older than 365 day(s)
2021-05-05 00:20:49.873 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 180 day(s)
2021-05-05 00:20:49.873 INFO RETENTION_POLICY Keep 1 snapshot every 3 day(s) if older than 90 day(s)
2021-05-05 00:20:56.747 INFO FOSSIL_COLLECT Fossil collection 2 found
2021-05-05 00:20:56.747 INFO FOSSIL_DELETABLE Fossils from collection 2 is eligible for deletion
2021-05-05 00:20:56.748 INFO PRUNE_NEWSNAPSHOT Snapshot bid-1 revision 481 was created after collection 2
2021-05-05 00:20:58.505 INFO CHUNK_DELETE The chunk xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx has been permanently removed
2021-05-05 00:20:58.549 INFO CHUNK_DELETE The chunk xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx has been permanently removed

Exactly. To create bid-2 you would use the add command with the -copy option, which makes the new storage copy-compatible with the old one.
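For reference, the CLI shape of that is roughly the line below, where second-wasabi and the second bucket path are invented for the example (as discussed a couple of posts down, this step only applies if a genuinely new, copy-compatible storage is being created):

  duplicacy add -copy main-nas-backups second-wasabi bid-2 wasabi://us-east-2@s3.us-east-2.wasabisys.com/12345-duplicacy/nas-backups-copy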

When you run your first bid-2 backup, you will notice that Duplicacy does a full scan of your repository and splits the files into chunks for upload. The vast majority of these chunks will already be in Wasabi and will not need to be uploaded / stored again. This is deduplication at work.

There is just one detail: since you mentioned that the folder structure has changed, the chunks around the “cut points” (speaking loosely) at file boundaries may turn out different, and those new chunks will be uploaded to Wasabi.

The proportion of reused chunks (the ones that will not need to be uploaded) to new chunks that do get uploaded will depend on how “radical” the change in folder structure was.
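If you want to gauge that proportion before committing, one option (hypothetical invocation, reusing the thread count from your logs) is to do a dry run of the first bid-2 backup and look at the reported chunk statistics before running the real thing:

  # dry run: scans and chunks everything but uploads nothing
  duplicacy backup -dry-run -stats
  # then the real first backup
  duplicacy backup -stats -threads 8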


P.S.:

I think it’s just a difference in nomenclature between the CLI and the Web version:
CLI - snapshot-id
Web - backup ID

This is only necessary if he’s creating a new copy-compatible storage, though from what I understand he’s just migrating the source repository - so he just needs to initialize against the existing storage URL and use a different backup ID.

Right, he’s changing the repository, not the storage. I reasoned in reverse. My mistake. :flushed: