Second backup to local disk too big and took a long time

I’m trying out Duplicacy to possibly replace my current backup method.
I'm primarily backing up to B2 but decided to do a local backup as well, since I had a spare external drive.
The first backup to both B2 and the local disk seemed to go fine.
I decided to do a second backup to the local disk just to see how long it would take compared to the first. After it ran for a LOT longer than I was expecting, I left it to run overnight. I came in the next morning to find the external drive full and the backup stalled out.

Here’s a bit about my setup.
Running on an Unraid box, Dell R720XD with 24 cores (48 threads) and 128 GB of RAM.
Duplicacy Web 1.6.3 running in Docker.
The backup storage is an external 2.5" 2 TB HDD over USB 3.
First backup was 1.5 TB total and used an exclusion list in the UI.
I expected the second backup to check things over, see that pretty much nothing had changed, and not take up much more room, if any, but it ran the disk out of space.

Settings for the storage:
Password protected
5:2 Erasure coding
Copy compatible with the B2 backup
210,529 chunks

I kept the same config as I used for the B2 backup, but I realize the 5:2 erasure coding probably isn't necessary. And yeah, I get that an old 2.5" external USB disk is not a great backup target; I'm not planning on relying on it, this was mainly a test.
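If I understand the erasure coding right, 5:2 means two parity shards are written for every five data shards, so each chunk grows by roughly (5+2)/5 = 1.4x; 1.5 TB of chunk data would come to something like 2.1 TB before compression and deduplication, which doesn't leave a lot of headroom on a 2 TB disk.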

Questions:

  1. Is this expected due to the way Duplicacy backs things up?
  2. Should the incremental backup have taken many hours?
  3. Did it not pick up the exclusions list maybe?
  4. Thoughts? Recommendations? Is this a bug or expected?

I’m a little scared to fire off an incremental to B2 as now I’m not sure what to expect.

The second backup should be really fast if nothing has been changed. Can you post the log from the backup job here?

Sure, it’s pretty short.

```
Options: [-log backup -storage ExternalBackups -threads 1 -stats]
2023-01-05 17:10:37.869 INFO REPOSITORY_SET Repository set to /backup_data
2023-01-05 17:10:37.870 INFO STORAGE_SET Storage set to /backup_external
2023-01-05 17:10:37.891 INFO BACKUP_ERASURECODING Erasure coding is enabled with 5 data shards and 2 parity shards
2023-01-05 17:10:37.912 INFO BACKUP_START Last backup at revision 1 found
2023-01-05 17:10:39.725 INFO BACKUP_INDEXING Indexing /backup_data
2023-01-05 17:10:39.726 INFO SNAPSHOT_FILTER Parsing filter file /cache/localhost/1/.duplicacy/filters
2023-01-05 17:10:39.726 INFO SNAPSHOT_FILTER Loaded 15 include/exclude pattern(s)
2023-01-06 07:59:22.876 ERROR UPLOAD_CHUNK Failed to upload the chunk 59dca0e84fc6d76d9ae4473fbf8515ac80b69dba64ebc02e4920d3791f096cf5: write /backup_external/chunks/59/dca0e84fc6d76d9ae4473fbf8515ac80b69dba64ebc02e4920d3791f096cf5.mzsfajfv.tmp: no space left on device
Failed to upload the chunk 59dca0e84fc6d76d9ae4473fbf8515ac80b69dba64ebc02e4920d3791f096cf5: write /backup_external/chunks/59/dca0e84fc6d76d9ae4473fbf8515ac80b69dba64ebc02e4920d3791f096cf5.mzsfajfv.tmp: no space left on device
```

Add -d as a global option to the backup job and run it again; this enables debug-level logging. At the very least, the new log will show whether files are being excluded/included correctly.
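For reference, with -d added as a global option the underlying CLI invocation would look something like this (global options go before the backup command; the storage name is taken from your log):

```
duplicacy -d -log backup -storage ExternalBackups -threads 1 -stats
```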

The disk is still full. How would I know what chunks to delete?

You can run the backup job with -d even if the disk is full, just to see whether the include/exclude patterns are applied correctly, that is, whether any files you don't want to back up are in fact being included.

If there are no issues with the include/exclude patterns, then the next step is to figure out why the second backup fills up the disk. You can run a prune job with -exclusive -exhaustive to clean up the chunks uploaded by the failed second backup, then run the backup with -d, which will tell you why it needs to upload more chunks.
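Roughly, the prune job would run something like this (same storage name as your backup job; -exclusive assumes nothing else is accessing that storage while it runs):

```
duplicacy prune -storage ExternalBackups -exhaustive -exclusive
```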

The debug flag helped me figure it out.
The backup source directory included the new Unraid share that was the backup destination, so it was trying to back up its own backup, which is not a good thing.
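I'll add that share to the exclusion list so this can't happen again. If I'm reading the filters syntax right, and assuming the share shows up as, say, external_backups/ at the top of the source tree (the real name will differ), a line like this in the filters file should keep it out:

```
-external_backups/
```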