Checking very large backup volumes is not only slow but can also incur extra cost, since many cloud storage services charge for downloads.
There should be an option to specify chunk IDs on the command line, and/or read them from a file or stdin, so that `duplicacy check` verifies only those chunks. Similar logic should be implemented for the `-files` option as well.
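A hypothetical invocation might look like the following. Note that the `-chunk-ids` flag and its file/stdin forms are made-up names for illustration only; they are not existing duplicacy options:

```
# verify only two specific chunks given on the command line
duplicacy check -chunk-ids <id1>,<id2>

# read the chunk ids to verify from a file, or from stdin
duplicacy check -chunk-ids @chunks.txt
cat chunks.txt | duplicacy check -chunk-ids -
```

Any of these forms would let a user re-verify a handful of suspect chunks without downloading the whole storage.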
Right now I am running `duplicacy check -chunks` even though I only need two chunks verified; there are 9k chunks in total for this single snapshot revision.