I have storage arrays that come online and go offline as needed. Sometimes weeks go by without them connecting to the server (they hold 20-24 spinning drives, over 100 TB of files, and pull a lot of power while running). I have scheduled (WebUI) backups set to run every few days that include source data living on those drives.
I noticed that if the schedule fires off a backup while the storage arrays are off, it results in a failed backup. Fair enough. However, when the next backup runs with the array connected, it scans things incredibly slowly (-hash?). Instead of the backup taking 20 minutes or so to scan contents and start uploading, it takes upwards of 12 hours just to scan.
If there is already a way to prevent this, I would love to know about it. Otherwise, it would be great if it handled this situation more gracefully / intelligently (along with incomplete snapshots, which also seem to force a full scan to determine what has and hasn't been uploaded).
I would hate to have to run all these backups manually just to avoid all that scan time.
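In the meantime, one possible workaround (a sketch, not a confirmed solution) is to wrap the scheduled backup in a script that checks whether the array is actually mounted before running, so an offline array means a clean skip rather than a failed run. The mount point and the commented-out backup command below are hypothetical placeholders for your setup:

```shell
#!/bin/sh
# Sketch: skip the backup cleanly when the storage array is offline,
# instead of letting the scheduler record a failed run.
# ARRAY_MOUNT is a hypothetical path; replace it with your real mount point.

array_online() {
    # mountpoint(1) from util-linux returns 0 only if the path is a mount point
    mountpoint -q "$1"
}

ARRAY_MOUNT="${ARRAY_MOUNT:-/mnt/array}"

if array_online "$ARRAY_MOUNT"; then
    echo "array online: running backup"
    # your-backup-command-here   # hypothetical; e.g. invoke the CLI backup job
else
    echo "array offline: skipping this run"
fi
```

You would point the scheduler (or cron) at this script instead of the backup command directly; whether that fully avoids the slow rescan on the next successful run would need testing.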
…thread with similar issues: