Copy command results in different chunk counts?

I am doing a local backup, then a copy command to Backblaze B2 and OneDrive Business.

My file size and chunk count to OneDrive are the same, but to Backblaze it’s 2 files off. Weird…

Is there a way to track down why this is and correct it? I would think the copy command would check for all existing chunks first and only copy what’s needed.

A simple way to make this comparison would be to use rclone’s check command.
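For example, something like this should work, where the remote names and paths are just placeholders for your own rclone remotes:

rclone check b2remote:my-bucket/chunks onedriveremote:Duplicacy/chunks

rclone check compares the files on both sides and reports anything missing or differing, which would point you at the extra or missing chunk files.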

Is there a reason, though, that the counts are different and not being auto-corrected by Duplicacy?

I’ve run exhaustive prunes on all storages, so anything “extra” sitting on any of them should be removed.
Then the copy command should copy anything that doesn’t exist from local to Backblaze.

I’m not following how there would ever be a discrepancy.

My guess is that this is caused by the b2 storage having a different set of revisions due to the different prune times. You can compare the tabulated stats table in the check logs to see if this is the case.

So the prune happens at slightly different times, but so does the OneDrive prune.
Here is my daily schedule. I’ve also run an exhaustive prune recently, but the Backblaze count is still off by 2.

What exactly am I looking for in the tabulated stats of the check logs? I found the section at the bottom of the logs, but could you provide a little more guidance on exactly what I would be looking for to identify an issue?

One possible explanation…

Try listing the snapshots on all storages to see if they have the same revision numbers. Ideally, they need to be exactly the same sets of revisions, with the same prune retention periods, run on the same day.
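For example, a quick way to compare, where the storage names are placeholders for whatever you named them in your preferences:

duplicacy list -storage Local -all
duplicacy list -storage Backblaze -all
duplicacy list -storage OneDrive -all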

If your revision numbers are not in sync between storages, you could be in a cycle of copying and pruning the same revisions within the gaps of the retention period.

I just checked the tabulated stats in the check log, and all the revisions are the same across both OneDrive and BackBlaze. Same revision numbers on all of them.

Also, I run prune on the same schedule for all storages. The only time it wouldn’t finish is if OneDrive or B2 goes down, but it would pick back up the next night.

My full schedule that runs daily is posted above.

Backup to local.
Prune local first.
Copy to B2 and OneDrive.
Prune B2 and OneDrive.
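As a rough sketch, that sequence corresponds to something like the following commands (storage names are placeholders; the retention flags are the ones from later in this thread):

duplicacy backup -storage Local
duplicacy prune -storage Local -keep 0:30 -keep 7:14 -all
duplicacy copy -from Local -to Backblaze
duplicacy copy -from Local -to OneDrive
duplicacy prune -storage Backblaze -keep 0:30 -keep 7:14 -all
duplicacy prune -storage OneDrive -keep 0:30 -keep 7:14 -all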

You can compare the per-revision chunk numbers under the chunks column in the tabulated stats to find where the differences are.
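If you want to generate those stats manually rather than from the scheduled job, the check command has a -tabular option, e.g. (the storage name is a placeholder):

duplicacy check -storage Backblaze -all -tabular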

So that’s interesting. I just looked, and all the numbers in the “chunks” column match up exactly between OneDrive and B2 in the logs.

Same under the “unique” column as well.

The only place the chunk count differs is at the very start of the check logs. Everything in the tabulated section appears identical.

B2 shows:
2021-03-16 05:10:50.442 INFO SNAPSHOT_CHECK 9 snapshots and 122 revisions
2021-03-16 05:10:50.470 INFO SNAPSHOT_CHECK Total chunk size is 501,745M in 111872 chunks

OneDrive and local show:
2021-03-16 05:18:34.172 INFO SNAPSHOT_CHECK 9 snapshots and 122 revisions
2021-03-16 05:18:34.194 INFO SNAPSHOT_CHECK Total chunk size is 501,751M in 111874 chunks

The tabulated data is in a different order at the end, but the numbers all match up there. I copy/pasted the “all” line for each backup section to verify it matches in the other log as well, and found an exact match for each one.

I forgot about this – the number in the SNAPSHOT_CHECK message is the number returned by the listing function and may include orphaned chunks or temporary chunks that were not deleted successfully. You can run prune -exclusive -exhaustive (while no backups are running) to clean up these chunks.
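For example, once per storage (the storage name is a placeholder), while no backup or copy job is active:

duplicacy prune -storage Backblaze -exclusive -exhaustive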

So I ran an exhaustive prune on each storage.

-keep 0:30 -keep 7:14 -all -exhaustive

Then I ran a re-check on all storages as well, but they’re still off.


Here are the local and B2 exhaustive prune logs as well.

LOCAL

Running prune command from /cache/localhost/all
Options: [-log prune -storage Local -keep 0:30 -keep 7:14 -all -exhaustive]
2021-03-17 07:10:44.136 INFO STORAGE_SET Storage set to /backups/Duplicacy
2021-03-17 07:10:44.225 INFO RETENTION_POLICY Keep no snapshots older than 30 days
2021-03-17 07:10:44.225 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 14 day(s)
2021-03-17 07:10:48.361 INFO FOSSIL_COLLECT Fossil collection 5 found
2021-03-17 07:10:48.361 INFO FOSSIL_POSTPONE Fossils from collection 5 can't be deleted because deletion criteria aren't met
2021-03-17 07:12:03.213 INFO FOSSIL_COLLECT Fossil collection 6 saved

BACKBLAZE

Running prune command from /cache/localhost/all
Options: [-log prune -storage Backblaze -keep 0:30 -keep 7:14 -all -exhaustive]
2021-03-17 07:12:03.759 INFO STORAGE_SET Storage set to b2://xxxxxxxxxx
2021-03-17 07:12:04.290 INFO BACKBLAZE_URL download URL is: https://f002.backblazeb2.com
2021-03-17 07:12:05.555 INFO RETENTION_POLICY Keep no snapshots older than 30 days
2021-03-17 07:12:05.555 INFO RETENTION_POLICY Keep 1 snapshot every 7 day(s) if older than 14 day(s)
2021-03-17 07:12:17.856 INFO FOSSIL_COLLECT Fossil collection 3 found
2021-03-17 07:12:17.856 INFO FOSSIL_POSTPONE Fossils from collection 3 can't be deleted because deletion criteria aren't met
2021-03-17 07:12:53.340 INFO FOSSIL_COLLECT Fossil collection 4 saved

Just bumping this back up - still can’t figure out what’s going on here.

-exhaustive alone doesn’t delete orphaned or temporary chunks. It only puts them in the fossil collection. Adding -exclusive should delete them immediately (again, only when no backups are running). If you don’t want to use the -exclusive option, wait a few days until the fossil collection can be safely deleted.
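To spell out the difference (storage name again a placeholder):

# Only fossilizes orphaned/temporary chunks; safe while backups run:
duplicacy prune -storage Backblaze -exhaustive

# Deletes them immediately; only when nothing else is touching the storage:
duplicacy prune -storage Backblaze -exhaustive -exclusive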

THANK YOU! The -exclusive option fixed it; the culprit was the temp files.
Weird issue with the way OneDrive saved temp files: Duplicacy didn’t know to delete them, even though I ran -exhaustive -exclusive. Maybe that’s a bug?

LOCAL

2021-03-19 10:02:51.096 INFO CHUNK_DELETE Deleted file chunks/cb/b1e57d6948a45f1c314a152c2eba4888b5943e2461811b99783c7b230e77f8.vhwjdvgw.tmp from the storage
2021-03-19 10:02:51.351 INFO CHUNK_DELETE Deleted file chunks/82/37faebdce1c91ebc42ac01aa9853243a56e299644530f2d982674e1dc96229.jwgetphb.tmp from the storage

OneDrive said this though…

2021-03-19 10:12:16.913 WARN CHUNK_UNKNOWN_FILE File 2c/~tmpA4_ba123e932cb68824d2221c2ccfb453c352a80dcbf72b455515c815392ad12e is not a chunk
2021-03-19 10:12:17.531 WARN CHUNK_UNKNOWN_FILE File d3/~tmpE5_4d776550acc43eeb1ad81fc45fc7635641c98907ad56e800093614143b4850 is not a chunk

For OneDrive, I had to manually go in and delete those two files, but now, after re-checking, all storages sync up with exactly 112,607 chunks 🙂
