Setup:
Backups to local storage every 8 hours.
Once a day, copy from local → remote storage, followed by a check.
Once a day, prune of the local storage.
Once a day, prune of the remote storage with -keep 90:1000 -keep 30:180 -keep 7:90 -keep 1:30 -a -threads 10 (roughly equivalent CLI commands sketched below).
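For clarity, the schedules above boil down to something like the following CLI commands (a rough sketch only; the jobs actually run via duplicacy-web, <storage-name> stands in for the remote storage, WD-8TB is the local storage, and the local prune's retention options aren't shown here):

# every 8 hours: back up to the local storage
duplicacy backup -storage WD-8TB

# once a day: copy local -> remote, then check the remote
duplicacy copy -from WD-8TB -to <storage-name> -threads 8
duplicacy check -storage <storage-name> -a

# once a day: prune local (options not shown), then prune remote
duplicacy prune -storage <storage-name> -keep 90:1000 -keep 30:180 -keep 7:90 -keep 1:30 -a -threads 10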
For some reason, I’m in a weird cycle where chunks are pruned (correctly), but then the next copy from local → remote reuploads them.
I have no idea how to stop this, or even diagnose it - any pointers?
[Added]
So I’ve been testing this out for the last few hours, and yup, there is just an endless cycle of revisions and chunks being pruned and then recopied.
A sample of the prune log:
Running prune command from /Users/<username>/.duplicacy-web/repositories/localhost/all
Options: [-log prune -storage <storage-name> -keep 90:1000 -keep 30:180 -keep 7:90 -keep 1:30 -a -threads 10]
2025-07-27 13:28:30.624 INFO STORAGE_SET Storage set to <storage-location>
2025-07-27 13:28:30.912 INFO RETENTION_POLICY Keep 1 snapshot every 90 day(s) if older than 1000 day(s)
2025-07-27 13:28:30.912 INFO RETENTION_POLICY Keep 1 snapshot every 30 day(s) if older than 180 day(s)
2025-07-27 13:28:30.912 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 90 day(s)
2025-07-27 13:28:30.912 INFO RETENTION_POLICY Keep 1 snapshot every 1 day(s) if older than 30 day(s)
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 2 should have been deleted already
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 3 should have been deleted already
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 4 should have been deleted already
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 5 should have been deleted already
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 6 should have been deleted already
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 7 should have been deleted already
2025-07-27 13:28:44.745 INFO FOSSIL_GHOSTSNAPSHOT Snapshot <backup id> revision 8 should have been deleted already
...
...
2025-07-27 13:28:44.747 INFO FOSSIL_IGNORE The fossil collection file fossils/1 has been ignored due to ghost snapshots
2025-07-27 13:28:44.747 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 2
2025-07-27 13:28:44.815 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 3
2025-07-27 13:28:44.859 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 4
2025-07-27 13:28:44.900 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 5
2025-07-27 13:28:44.940 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 6
2025-07-27 13:28:44.983 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 7
2025-07-27 13:28:45.026 INFO SNAPSHOT_DELETE Deleting snapshot <backup id> at revision 8
...
...
2025-07-27 13:35:50.249 INFO FOSSIL_COLLECT Fossil collection 2 saved
2025-07-27 13:35:50.292 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 2 has been removed
2025-07-27 13:35:50.338 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 3 has been removed
2025-07-27 13:35:50.384 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 4 has been removed
2025-07-27 13:35:50.432 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 5 has been removed
2025-07-27 13:35:50.485 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 6 has been removed
2025-07-27 13:35:50.529 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 7 has been removed
2025-07-27 13:35:50.575 INFO SNAPSHOT_DELETE The snapshot <backup id> at revision 8 has been removed
After this prune, the relevant revisions have indeed been deleted from the snapshots folder in the remote storage.
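(I checked the snapshots folder on the remote directly, but listing the snapshots from the CLI should show the same thing; something along these lines, assuming the same storage name as in the prune log:)

duplicacy list -storage <storage-name> -a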
Then, running a copy:
Running copy command from /Users/<username>/.duplicacy-web/repositories/localhost/all
Options: [-log copy -from WD-8TB -to <storagename> -threads 8]
2025-07-27 13:41:41.827 INFO STORAGE_SET Source storage set to <fromstorage>
2025-07-27 13:41:42.270 INFO STORAGE_SET Destination storage set to <storagelocation>
2025-07-27 13:41:44.188 INFO SNAPSHOT_EXIST Snapshot <backup id> at revision 1 already exists at the destination storage
2025-07-27 13:41:45.177 INFO SNAPSHOT_EXIST Snapshot <backup id> at revision 22 already exists at the destination storage
2025-07-27 13:41:46.025 INFO SNAPSHOT_EXIST Snapshot <backup id> at revision 40 already exists at the destination storage
2025-07-27 13:41:46.899 INFO SNAPSHOT_EXIST Snapshot <backup id> at revision 58 already exists at the destination storage
2025-07-27 13:41:47.780 INFO SNAPSHOT_EXIST Snapshot <backup id> at revision 76 already exists at the destination storage
...
...
2025-07-27 13:42:31.879 INFO SNAPSHOT_COPY Chunks to copy: 17436, to skip: 44194, total: 61630
2025-07-27 13:42:32.465 INFO COPY_PROGRESS Copied chunk 4490a12f171243f5ba98f630001321054ef92fd19ad16d38e311e95f14bc2911 (1/17436) 659KB/s 02:50:22 0.0%
2025-07-27 13:42:32.721 INFO COPY_PROGRESS Copied chunk da1451fa16229ed0851b83e798634f6a4e354ec9ae75f31dc61ca1a87d1935da (2/17436) 2.31MB/s 02:02:16 0.0%
2025-07-27 13:42:32.839 INFO COPY_PROGRESS Copied chunk 5cce3ab7dddb3fdd0daa44555e39dd84835f192d81d33a7d0a17a472b053f6b1 (3/17436) 2.45MB/s 01:33:00 0.0%
2025-07-27 13:42:32.910 INFO COPY_PROGRESS Copied chunk 3266d0defda5d5d05bb240a43cc884eccfdc1de2f8a785071ebb40a9c9979864 (4/17436) 5.63MB/s 01:14:51 0.0%
2025-07-27 13:42:33.040 INFO COPY_PROGRESS Copied chunk 880fdb7e6083b241905c5ed00356b712b4574257feff18fedfdb9f4e87715892 (6/17436) 5.90MB/s 00:56:13 0.0%
2025-07-27 13:42:33.056 INFO COPY_PROGRESS Copied chunk 06c731f3301957451e8bb243d5dd528c94ae27a384f59d2d5aad19d41e9d71ef (7/17436) 7.22MB/s 00:48:51 0.0%
2025-07-27 13:42:33.300 INFO COPY_PROGRESS Copied chunk 2a9f9571d3b0e66a6db82847560f1b6935b1ea53bec91336ec12b638f39479a7 (5/17436) 13.05MB/s 01:22:32 0.0%
2025-07-27 13:42:33.493 INFO COPY_PROGRESS Copied chunk ecf6f344ab22b19cae9c485b3bf90dfeddc3fe16227a059e821daf3649db33f8 (10/17436) 11.72MB/s 00:46:52 0.1%
2025-07-27 13:42:33.547 INFO COPY_PROGRESS Copied chunk 492afad8bd08466fecf1b0339533fe5cff76fcca5af593934bce60237c49c063 (13/17436) 11.62MB/s 00:37:15 0.1%
2025-07-27 13:42:33.652 INFO COPY_PROGRESS Copied chunk 43b8e04008cec0ba88e18c3706bbc217a44944df25813f954675b37e86b1ba0d (11/17436) 15.16MB/s 00:46:48 0.1%
After this copy completes, the revisions that had been correctly deleted by the prune are now back in the snapshots folder.
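In case it helps with diagnosis: as far as I can tell, comparing the revision lists on both sides between the prune and the copy is the way to see what the copy thinks is missing at the destination. Something like this (storage names as in the logs above; I'm assuming duplicacy list with -a is the right call for this):

duplicacy list -storage WD-8TB -a          # revisions still held on the local storage
duplicacy list -storage <storage-name> -a  # revisions on the remote after the prune

My understanding is that a copy can also be restricted to specific revisions with -r (e.g. duplicacy copy -from WD-8TB -to <storagename> -r <revision> -threads 8) rather than copying every revision missing at the destination, but that doesn't explain why the pruned ones keep coming back.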
I’d be ok with somehow nuking this particular ID in the remote storage (which also contains other IDs) and starting again, but I don’t want to completely re-initialise the remote storage, as it has years of history now from older computers.