Understand default storage usage

This could be a really dumb question, but I think I may not be grasping one of the basics of Duplicacy. I’m using the macOS CLI 2.0.9, backing up to OneDrive.

My backup model had me create two storage locations under a single OneDrive account at its top level:

backup.pictures
backup.video

I initialised two repositories, one for each storage location of the same name:

cd /folder/pictures
~/apps/duplicacy_osx_x64_2.0.9 init -e pictures one://backup.pictures

And

cd /folder/video

~/apps/duplicacy_osx_x64_2.0.9 init -e video one://backup.video

I backed up using the following command:

cd /folder/pictures

~/apps/duplicacy_osx_x64_2.0.9 backup -threads 2 -stats -limit-rate 700

one://backup.pictures is 170GB with 35,521 chunks

I then repeated this for /folder/video (which is in progress and yet to complete).

I expected that these would be treated as completely separate storage locations, but after a successful first backup, a check of the chunks in pictures, e.g.

cd /folder/pictures

~/apps/duplicacy_osx_x64_2.0.9 check -r 1

generates thousands of errors:

Chunk d9a5378b0fc7ac397be4433da315709a4e6b841ed1b0727075acbcbddc06dc15 referenced by snapshot pictures at revision 1 does not exist

I can search in the OneDrive web UI and find any of the chunks it reports missing.

On re-reading the guide, I think I may have misunderstood one of the basics:

"The initialized storage will then become the default storage for other commands if the -storage option is not specified for those commands. This default storage actually has a name, default. "

So in my case both storage locations will be called default? I had assumed that the storage locations one://backup.pictures and one://backup.video would be compartmentalized and independent.

It’s OK if I’m not using it correctly; I’ll just back up both /folder/video and /folder/pictures to one://backup.pictures only, but I wanted to make sure I understood correctly.

Many thanks, Guy.

Your usage is correct. It is fine for both storage locations to be named default, since they belong to different repositories; the storage name only becomes significant when you have multiple storage locations within the same repository.
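To illustrate (a hypothetical sketch, not Duplicacy’s actual data structures): each repository keeps its own mapping of storage names, so two repositories can both use the name default for different backends without any conflict.

```python
# Hypothetical sketch: storage names are scoped per repository, so each
# repository's "default" can point at a different backend URL.
repositories = {
    "/folder/pictures": {"default": "one://backup.pictures"},
    "/folder/video": {"default": "one://backup.video"},
}

def resolve_storage(repo_path, name="default"):
    """Return the storage URL a command run in repo_path would use."""
    return repositories[repo_path][name]

print(resolve_storage("/folder/pictures"))  # one://backup.pictures
print(resolve_storage("/folder/video"))     # one://backup.video
```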

Can you run ~/apps/duplicacy_osx_x64_2.0.9 -d check -r 1 and post the output here?

$ ~/apps/duplicacy_osx_x64_2.0.9 -d check -r 1
Storage set to one://backup.pictures
Reading the environment variable DUPLICACY_ONE_TOKEN
Reading one_token from preferences
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures?select=id,name,size,folder
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/chunks?select=id,name,size,folder
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/fossils?select=id,name,size,folder
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/snapshots?select=id,name,size,folder
Reading the environment variable DUPLICACY_ONE_TOKEN
Reading one_token from preferences
Reading the environment variable DUPLICACY_PASSWORD
Reading password from preferences
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/config?select=id,name,size,folder
GET https://api.onedrive.com/v1.0/drive/items/root:/backup.pictures/config:/content
Compression level: 100
Average chunk size: 4194304
Maximum chunk size: 16777216
Minimum chunk size: 1048576
Chunk seed: 0d5bcfe234a860254831eda438159606c01f783ef1e43b7dcc2ba888bdb36f04
Reading the environment variable DUPLICACY_PASSWORD
Reading password from preferences
id: pictures, revisions: [1], tag: , showStatistics: false, checkFiles: false, searchFossils: false, resurrect: false
Listing all chunks
Listing chunks/
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/chunks:/children?top=1000&select=name,size,folder
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=NDAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=NTAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=NjAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=NzAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=ODAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=OTAwMQ
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTAwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTEwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTIwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTMwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTQwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTUwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTYwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTcwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTgwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTkwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjAwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjEwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjIwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjMwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjQwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjUwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjYwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjcwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjgwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MjkwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzAwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzEwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzIwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzMwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzQwMDE
GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MzUwMDE
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/fossils:/children?top=1000&select=name,size,folder
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/snapshots?select=id,name,size,folder
POST https://api.onedrive.com/v1.0/drive/items/9C448783DD0C4A72!4353/children
GET https://api.onedrive.com/v1.0/drive/root:/backup.pictures/snapshots/pictures/1?select=id,name,size,folder
Loaded file snapshots/pictures/1 from the snapshot cache
Chunk fa41ad7db45c3240b736fb9b5141d94fe181e98ab916fdc3d7095d6cc3f7ebff has been loaded from the snapshot cache

Then follow many, many messages like:
Chunk 3aadae365f22719caf91985f049910db65c7c74c78b59ed96d372ddeb718c411 referenced by snapshot pictures at revision 1 does not exist

Also note, in case it is relevant: while I was running these checks, a backup of /folder/video was in progress. I also tried counting the output lines of the command without -d, e.g.

~/apps/duplicacy_osx_x64_2.0.9 check -r 1 | wc

to see whether the number of error messages matched the number of chunks. It did not, and the count varied each time I ran it.

hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9 check -r 1 | wc
   11130  133542 1468890
hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9 check -r 1 | wc
   10853  130218 1432326

G.

I should also make clear my infrastructure in case relevant.

The root of the pictures repository, ‘/folder’, is a network share on a NAS drive, mounted on the Mac using autofs. The Mac and NAS are connected by wired gigabit Ethernet.
G.

Can you try this special build https://acrosync.com/duplicacy/duplicacy_osx_x64_2.0.9_debug?

cd /folder/pictures
/path/to/duplicacy_osx_x64_2.0.9_debug check -r 1

This build will print a lot of messages like “Chunk: xxxxx”, one for each chunk it finds in the storage. Then you can manually check if the missing chunk it reports actually exists.

hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug -d check -r 1
~/apps/duplicacy_osx_x64_2.0.9_debug: line 1: syntax error near unexpected token `newline'
~/apps/duplicacy_osx_x64_2.0.9_debug: line 1: `!<arch>'

and

hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug check -r 1
/Users/guy/apps/duplicacy_osx_x64_2.0.9_debug: line 1: syntax error near unexpected token `newline'
/Users/guy/apps/duplicacy_osx_x64_2.0.9_debug: line 1: `!<arch>'

Am I doing something wrong? The _debug version is smaller than duplicacy_osx_x64_2.0.9: 1.9 MB vs 21.4 MB.

G.

Sorry, my fault. I uploaded an executable built from ./src rather than from duplicacy/duplicacy_main.go.

https://acrosync.com/duplicacy/duplicacy_osx_x64_2.0.9_debug should work now. Can you try it again?

Tests ran as follows:

hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug check -r 1 > ~/dup-foo-5
hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug check -r 1 > ~/dup-foo-6
hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug -d check -r 1 > ~/dup-foo-7

The captured output of multiple runs of the _debug build lists the correct number of chunks, 35,526 (matching the file count shown in the OneDrive folder UI):

hostname:~ $ cat dup-foo-5 | grep Chunk: | wc
   35526   71052 2557872
hostname:~ $ cat dup-foo-6 | grep Chunk: | wc
   35526   71052 2557872
hostname:~ $ cat dup-foo-7 | grep Chunk: | wc
   35526   71052 2557872

The number of listed ‘missing’ chunks varies with each run of the command:

hostname:~ $ cat dup-foo-5 | grep not | wc
   14698  176376 1940136
hostname:~ $ cat dup-foo-6 | grep not | wc
   13220  158640 1745040
hostname:~ $ cat dup-foo-7 | grep not | wc
   12344  148128 1629408

However, in an as-yet-unrepeatable pattern, some chunks are listed more than once, for example:

hostname:~ $ cat dup-foo-7 | grep baec8d10af852d6209b1172590b89ed261a6374f6a91279179c25acc0681c216
Chunk: baec8d10af852d6209b1172590b89ed261a6374f6a91279179c25acc0681c216
Chunk: baec8d10af852d6209b1172590b89ed261a6374f6a91279179c25acc0681c216

Or 

hostname:~ $ cat dup-foo-5 | grep ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0
Chunk: ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0
Chunk: ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0
Chunk: ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0

But the chunk above is listed as missing in run 6:

hostname:~ $ cat dup-foo-6 | grep ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0
Chunk ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0 referenced by snapshot pictures at revision 1 does not exist

In all cases the chunks are present on OneDrive when searched for in the web UI.
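The cross-checking above can be sketched in a few lines of Python. This is a hypothetical reconstruction of the grep commands, not part of Duplicacy; the sample lines are taken from the runs above.

```python
from collections import Counter

# Count how often each chunk is listed, and collect the chunks reported
# missing, from captured output of the _debug check run.
sample_output = """\
Chunk: baec8d10af852d6209b1172590b89ed261a6374f6a91279179c25acc0681c216
Chunk: baec8d10af852d6209b1172590b89ed261a6374f6a91279179c25acc0681c216
Chunk: ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0
Chunk ac1e044855c3cb7960299300009c4877ed33f1e3566413194dde41d09b2730e0 referenced by snapshot pictures at revision 1 does not exist
"""

listed = Counter()
missing = set()
for line in sample_output.splitlines():
    if line.startswith("Chunk: "):
        listed[line.split()[1]] += 1
    elif "does not exist" in line:
        missing.add(line.split()[1])

duplicated = {chunk for chunk, n in listed.items() if n > 1}
print("listed more than once:", duplicated)
print("reported missing:", missing)
```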

With a bit more digging, the only other pattern so far is a definite change at the 1,000th ‘Chunk:’ entry: note how the listing is in order but then jumps at the 1,000-entry boundary. Also, the two fragments differ between runs, and I would expect them to be the same.

hostname:~ $ cat dup-foo-7 | grep Chunk: | head -1005
[lines removed]
Chunk: baf2cbe825454f31be275edc5c42911cf162a1a4534e287d1d3da5d815330861
Chunk: baf3e384e3d5273b2cfc55a88e49caa98cafc80ab2c4767b1760609657d252de
Chunk: baf74aaf4ddbdd79902061abc02827e15a59cbc15f0f4b1be16957e1203f5454
Chunk: baf90adae45ef6ece7b474dc3a8bcbbff5d56e2aa217f380292e12c30860d753
Chunk: baf487ba6d2433679eb587ca69abf67ea308aa1083e8ff6db796f184ad48ef2f
Chunk: 7fa52d6f9a59b1227444cdf5d896f11792c62fe24d0eba9ed087d04813f6a66c
Chunk: 7fa77f452f9556777e599de78449e38c73a433be631174e6186e1f8a163ce68a
Chunk: 7fb0bbde62b8eda6488b7cb65a5f543fa86423eef8730f83167983cba2cc0f2e
Chunk: 7fb2aeb0985052b4a0840ad492d9ac757fd18fcb30f29af53aeddea4004b224e
Chunk: 7fb5e1ad1fb36d545ae1f428efb3cbf08b4f180af37bf2eec08500b33fe6e764
hostname:~ $

hostname:~ $ cat dup-foo-6 | grep Chunk: | head -1005
[lines removed]
Chunk: 767268249e1f24daad920e41dce3c0f57ee37ef9f909eed058538b336637b834
Chunk: 768812783b41f20a3b3f778e91b483ff25a7d82c42e59a9cb6ed4646bf30ae49
Chunk: 0774018506ec38a84adea80353b54f6bf4d5d926c75caec7545a116cd498e716
Chunk: 774430939f857c88d5bcaa2dfa497f5970255b93737f92f1a83f2be41fdaa4da
Chunk: 778294737b65969c1e3b6f76826bdbb3456bbdfe116226084d974b4c91066dd1
Chunk: 15efd2e7f5dc3e4f37db2de273ad3e68629a9decbfcd72967acfbc4006a7701e
Chunk: 15f0c34092371f51161e76a1a15fafb8bab6a1b127ef10e5afd4888448bfd5f2
Chunk: 15f7be907218f5142b8366065dd207eda9f0a4ab94b80c6c885ba1e9f94cef66
Chunk: 15f7989af9018f3c23c5fca996cdcc5f6aa6f22111797caeef588c1a8747d892
Chunk: 16a2b25b5f9d744d0aeccf0df42ac6967183864bb8ab53f070c429067fdac207
hostname:~ $
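A small, hypothetical helper for spotting these discontinuities: it reports every position where a listing jumps lexically backwards, which in the runs above happens at the 1,000-entry page boundary. The names below are made-up short stand-ins for the real chunk IDs.

```python
# Return every index at which a listing jumps lexically backwards,
# i.e. a name sorts before its predecessor.
def order_breaks(names):
    return [i for i in range(1, len(names)) if names[i] < names[i - 1]]

# "7f00" sorts before "ab10", so the break is at index 3.
sample = ["aa01", "aa02", "ab10", "7f00", "7f05"]
print(order_breaks(sample))  # [3]
```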

To satisfy myself that it was not the /folder/video backup interfering at the API level, I stopped that backup and repeated the test twice more:

hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug check -r 1 > ~/dup-foo-8
hostname:pictures $ ~/apps/duplicacy_osx_x64_2.0.9_debug check -r 1 > ~/dup-foo-9

hostname:~ $ cat dup-foo-8 | grep Chunk: | wc
   35526   71052 2557872
hostname:~ $ cat dup-foo-9 | grep Chunk: | wc
   35526   71052 2557872
hostname:~ $ cat dup-foo-8 | grep not | wc
   12670  152040 1672440
hostname:~ $ cat dup-foo-9 | grep not | wc
   11992  143904 1582944

hostname:~ $ cat dup-foo-8 | grep Chunk: | head -1005
[lines removed]
Chunk: 90e7090e7d5920a22e35d3d424b2ef0d699be8503fd0baa02f787f226745663e
Chunk: 90e374636d0aa8e41baa15b1b2b5dc87ec30b8ed6dcae2ca1fdfecb3f907af5d
Chunk: 90ea81ccb5182c6821001ee58bf4a9e6f64159733474183af3409519662c777c
Chunk: 90eaf0bbee05a4e0114b16c2ab30c37508604a038292a13445d0991c6ffe8ba0
Chunk: 90eb6864eaa56ab823793427d9120ef3d7a9f6da958dd91139e58058be426061
Chunk: 5d64fb553873e9779786c9582cfcadcc307843909ce2bc0c8f45bbd5be1b49dc
Chunk: 5d71c281b9a12e65a43b4610381d2c994287a34b2235be86d066eb12f8d55cfb
Chunk: 5d088a10c39ff337d9ac76bc9a04d81013e0c87ee492b833c789347986d87e63
Chunk: 5d98c03525b72702ac813c9af2c2fc9957dab5cbd9dfbea9fbfc578d838b6b12
Chunk: 5d328d6e922cced4775a217145516060ca66fed030293205cbfd260917ca6d0a
hostname:~ $ cat dup-foo-9 | grep Chunk: | head -1005
[lines removed]
Chunk: 15e2931194c86883f779658fdfdc2a706f17e9bff67c3aa06ce4f3e8ec245bb3
Chunk: 15e8410174e4f4d8b080190682bbf45c7eab90a0d9afdb5a9b78b58214239715
Chunk: 15eac18cc0a722033cc09b7ee72b5d5f5613a486ee516f21ceda71209e551622
Chunk: 15edc3962d59c9df7d6e6d5f17a1a392b93d8fb416cd2398a4ac25a5b7d0b7e2
Chunk: 15efa5a286ee98174999b27fd74dd27254f7d4abb734a2ac415929d683797122
Chunk: 569fcc043692ce159d5164d64968f10f1e870e8e52b6b5c2a3b8134d1026f2b8
Chunk: 570a0da138556fabf79b0c210394c787d9ab2b21a946dfd0410b2b97562bf418
Chunk: 0571acb839cad9d493c5ac04b4c6acea0b73ee60bfb80ae6198a1cdf6994c532
Chunk: 0571f6c0c1919c9365e574aceb3d3c3b461061a839047e2b2f29d39a838253ce
Chunk: 0572b7ec571bd19339789c40f322d2d1095c72a159c035e4b5dda4f150052760
hostname:~ $ 

I’ll carry on digging a bit more later; whatever you want to try, just let me know.
Guy.

OK, I may be going down a blind alley, but here goes.

The output order of the chunk list seems inconsistent. It looks like the code requests the list of chunks in batches of 1,000, using the OneDrive API’s @odata.nextLink property to get the URL for the next batch. If the code simply asks the OneDrive API for the next batch using the URL contained in @odata.nextLink from the previous response, then it sounds as if the API, via the skiptoken, is not supplying the next list of chunks consistently. The naming and order of the skiptoken strings are consistent across all versions of my ‘~/dup-foo-’ files, which could point back to the way the entries are consolidated server-side. I also noted that the first returned URL for the second batch of 1,000 has the same skiptoken for different storage locations:

GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.pictures%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTAwMQ

and 

GET https://api.onedrive.com/v1.0/drives('me')/items('root%252Fbackup.users%252Fchunks')/children?$top=1000&$select=name,size,folder&$skiptoken=MTAwMQ

NB I am not a coder of Go or even a coder. A Python hobbyist at best.
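For what it’s worth, the paging pattern described above can be sketched like this (a hypothetical client; fetch stands in for an authenticated OneDrive API call, and the URLs are made up). The loop itself is correct even when following @odata.nextLink blindly; if pages still skip or repeat items, the fault is server-side.

```python
# Follow "@odata.nextLink" from page to page until the server stops
# returning one, accumulating the item names from each page.
def list_all_chunks(fetch, first_url):
    """fetch(url) -> dict with "value" (items) and optional "@odata.nextLink"."""
    chunks, url = [], first_url
    while url:
        page = fetch(url)
        chunks.extend(item["name"] for item in page["value"])
        url = page.get("@odata.nextLink")
    return chunks

# Simulated server returning two pages of results.
pages = {
    "https://api.example/chunks?top=2": {
        "value": [{"name": "aa"}, {"name": "bb"}],
        "@odata.nextLink": "https://api.example/chunks?top=2&skiptoken=2",
    },
    "https://api.example/chunks?top=2&skiptoken=2": {
        "value": [{"name": "cc"}],
    },
}
print(list_all_chunks(pages.get, "https://api.example/chunks?top=2"))
# ['aa', 'bb', 'cc']
```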

Hi again. The /folder/video backup has completed, and it shows the same symptoms (49,862 chunks). I’ll do some more analysis. I also ran a test generating a smaller backup of 2,005 chunks, and that one checks out OK.

This looks more like a OneDrive bug. The chunk names shouldn’t jump this much between pages:

hostname:~ $ cat dup-foo-8 | grep Chunk: | head -1005
[lines removed]
Chunk: 90e7090e7d5920a22e35d3d424b2ef0d699be8503fd0baa02f787f226745663e
Chunk: 90e374636d0aa8e41baa15b1b2b5dc87ec30b8ed6dcae2ca1fdfecb3f907af5d
Chunk: 90ea81ccb5182c6821001ee58bf4a9e6f64159733474183af3409519662c777c
Chunk: 90eaf0bbee05a4e0114b16c2ab30c37508604a038292a13445d0991c6ffe8ba0
Chunk: 90eb6864eaa56ab823793427d9120ef3d7a9f6da958dd91139e58058be426061
Chunk: 5d64fb553873e9779786c9582cfcadcc307843909ce2bc0c8f45bbd5be1b49dc
Chunk: 5d71c281b9a12e65a43b4610381d2c994287a34b2235be86d066eb12f8d55cfb
Chunk: 5d088a10c39ff337d9ac76bc9a04d81013e0c87ee492b833c789347986d87e63
Chunk: 5d98c03525b72702ac813c9af2c2fc9957dab5cbd9dfbea9fbfc578d838b6b12
Chunk: 5d328d6e922cced4775a217145516060ca66fed030293205cbfd260917ca6d0a

And they are out of order, too. I’ll report this to OneDrive.

Yes, /folder/video shows the same discontinuity:

hostname:~ $ cat dup-foo-20 | grep Chunk: | head -1005
[lines removed]
Chunk: 2c4b19875b7295e1fd482ce3ce2685ff29b1251c3f651aacdab53c273581b01b
Chunk: 2c4cbdc219e6af41d4cfa7e4a1cd26069d74cdd3703b9291a259f5ab84d0ede1
Chunk: 2c4cd02ac21a0ef864410c2f401515dbe9d5e540c1d1c338ad07b10cf4b8c700
Chunk: 2c4cd7fa4e09bb6192cf897b34b95f3b0971d0e615a60548765b348817cc541c
Chunk: 2c4fea19238e9523044aca844606b72fe5907bfe700a5d63f400982840275ef9
Chunk: 2c4fed9ac0a6173b2e4f9f15a2bc5ad0ab7b119bf0501b774c18532ce03b42e9
Chunk: 143f800a22dee663e53cfe0ac88fc48e9dca09b0c348a7fcb47d459dc9057f43
Chunk: 143f9822c855a3db0002b4a226548bc080e87150d6f7b366e7d2b7c6a0c65d0a
Chunk: 143fe9eb4865910066ce350717b240e9521c96b77b4ade4a8568b9012e7ff7ac
Chunk: 145a6fe3e14d1f8fc219b7c1ac4b3684ec59c16d380cbcdfd86b9c54194f770a
Chunk: 145dfb850e2dedb89030c3c5db838bcafa73f8b3a72e832d5f826da96c096558

Will this bug affect the other Duplicacy commands, or just check?

Mostly the check command. The -exhaustive option of the prune command will not work either. In addition, the initial backup becomes less efficient: because it fails to list all existing chunks, it has to issue an extra API call for each missed chunk to check whether it exists. Despite this inefficiency, all backups will still be good.
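A hypothetical sketch of that fallback (the names are illustrative, not Duplicacy’s actual code): chunks missing from the listing each cost one extra existence check, but nothing is lost.

```python
# When the storage listing is incomplete, every chunk absent from the
# listed set costs an extra existence check against the storage, even
# though it may in fact already be uploaded.
def chunks_needing_existence_check(chunks_to_back_up, listed_chunks):
    listed = set(listed_chunks)
    return [c for c in chunks_to_back_up if c not in listed]

# The listing dropped "b" and "c", so both trigger an extra API call.
print(chunks_needing_existence_check(["a", "b", "c"], ["a"]))  # ['b', 'c']
```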

The github issue is here: https://github.com/OneDrive/onedrive-api-docs/issues/740

Hi gchen, your comment in the OneDrive issue says ‘This doesn’t seem to affect everyone. At least for me the listing is still complete and in order.’ Can you confirm for what backup size and number of chunk files the listing remains complete? There are historical posts suggesting OneDrive and/or SharePoint had a 20,000-file limit. Other information suggests these limits were removed, but could there be some legacy constraint?

Thanks for the update and for continuing to work on this problem. The nesting sounds perfect: is it automatic, or is there a guide? I noticed you dropped a new release this morning, 2.0.10; is it in there? The release notes mention something about nesting. If not, are there instructions on how to build for macOS? I really am a bit of an amateur.

I’m also creating a new backup with a 1M chunk size to get nearer to 20k files quickly, and will then step up through 20k to see whether the limit is exactly 20k; just curious, unless you have managed to confirm it already?

Yes, the release 2.0.10 supports nested chunk levels by default and should be able to work around the OneDrive issue.
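For illustration, one level of nesting might map a chunk ID to a path like this (a hypothetical sketch; the exact layout and nesting depth Duplicacy 2.0.10 uses may differ). The point is that each directory then holds only a fraction of the chunks, keeping listings well below the problematic size.

```python
# Map a chunk ID to a nested path by peeling off fixed-width prefixes,
# one per nesting level, as subdirectory names.
def nested_chunk_path(chunk_id, levels=1, width=2):
    parts = [chunk_id[i * width:(i + 1) * width] for i in range(levels)]
    return "/".join(["chunks"] + parts + [chunk_id[levels * width:]])

print(nested_chunk_path(
    "fa41ad7db45c3240b736fb9b5141d94fe181e98ab916fdc3d7095d6cc3f7ebff"))
# chunks/fa/41ad7db45c3240b736fb9b5141d94fe181e98ab916fdc3d7095d6cc3f7ebff
```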

No, I only tested 13k and 26k chunks. It would be interesting to see what the exact number is.