I set up Duplicacy to back up my movie libraries to iDrive e2, and I split the libraries to keep each backup job smaller. Even so, Duplicacy takes a long time to index each library, and the backup then fails when it compares the indexed files against what is already in the cloud.
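For context, each library is backed up as its own repository pointing at the same e2 bucket. A rough sketch of what the per-library jobs boil down to (the library paths, the split, and the thread count are placeholders; the real jobs run on a schedule inside the Docker container):

```python
#!/usr/bin/env python3
# Rough sketch of the per-library backup jobs; the library paths, the
# split, and the thread count below are placeholders.
import subprocess

LIBRARIES = [
    "/source/movies-a-m",
    "/source/movies-n-z",
]

for repo in LIBRARIES:
    # Each library is its own Duplicacy repository, so indexing only
    # has to walk that one library before uploading.
    subprocess.run(
        ["duplicacy", "backup", "-stats", "-threads", "4"],
        cwd=repo,      # the CLI picks up the .duplicacy config here
        check=True,    # stop the run if one library's backup fails
    )
```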
I suspect it fails because of the nightly reset of my IP address - a behaviour dictated by my ISP that I cannot change. The few seconds my connection drops because of this does not seem to affect uploads, but when it happens during indexing, I think that is what makes the run fail with the error: ERROR LIST_FILES Failed to list the directory chunks/: RequestError: send request failed
The directories in question range from 500GB to 5TB with 300 to 3500 files each. I am running Duplicacy in a Docker container under unRAID on shucked WD Elements 12TB drives with read speeds of 100-190MB/s (depending on where the data sits on the platter). The system has an i5-11600K and 32GB of RAM. During indexing I see around 2-4MB/s of reads from Duplicacy on the drive, with about 100MB of RAM and 0% CPU used according to unRAID. The indexing seems to be IOPS-limited, which I cannot change.
What is my best bet here? Should I split up my movie directories further to make indexing easier for Duplicacy? I'll be moving from 37Mb/s upload to 100Mb/s shortly - that might help too (rough math below), but ideally I'd have a complete backup before the move.
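For a sense of scale, here is my back-of-the-envelope math on upload times, assuming the link is fully saturated and ignoring any dedup or compression savings:

```python
# Back-of-the-envelope upload times for the smallest and largest
# library, assuming a fully saturated link and no dedup/compression.
SIZES_TB = [0.5, 5]           # 500GB and 5TB libraries
for mbit in (37, 100):        # current vs. upcoming upload speed
    mb_per_s = mbit / 8       # Mb/s -> MB/s
    for tb in SIZES_TB:
        days = tb * 1_000_000 / mb_per_s / 86_400
        print(f"{tb} TB at {mbit} Mb/s: ~{days:.1f} days")
```

By that math, even the largest library should finish in under a week on the new connection, versus close to two weeks on the current one.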