Duplicacy is great for versioning of data. It can chunk, de-duplicate, compress and encrypt your 1TB of data quite well and can keep a snapshot history.
For large media, though, I'd personally use a tool like Rclone for copying/synchronising to cloud storage, or just a file sync tool for local/external storage.
Local database? Duplicacy doesn’t really use one; you only need space for storage on the backup destination.
The initial backup can be interrupted, though restarting it will take a little longer. You won't lose much bandwidth by aborting a backup, but if you have a lot of data to back up and a slow uplink, you could reduce the initial size by temporarily moving files/folders out of the repository until that first backup completes. Then move more data in and run another backup. Rinse and repeat; a rough sketch of this staged approach is below.
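Something like this, for example (a rough sketch only; the snapshot ID, B2 storage URL and paths are all placeholders):

```bash
# One-time setup: make the folder a Duplicacy repository
cd ~/repo
duplicacy init my-snapshot-id b2://my-bucket

# Stage 1: temporarily move the bulk of the data out so the first backup stays small
mv ~/repo/big-folder ~/staging/
duplicacy backup -stats

# Stage 2: move some data back in and back it up; repeat until everything is included
mv ~/staging/big-folder ~/repo/
duplicacy backup -stats
```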
Duplicacy in fact does not use a local database, and it is precisely this design that makes it so reliable: it avoids the various problems that database-backed tools run into.
Rclone isn’t rsync. (Although I guess it does similar things.) However, you can ‘back up’ your large media files more efficiently with a simple Rclone copy, or perhaps a sync with the --backup-dir flag; see the sketch below.
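For example (the remote name and paths are placeholders):

```bash
# Plain copy: uploads new/changed files, never deletes anything on the destination
rclone copy /mnt/media remote:media --progress

# Sync: mirrors deletions too, but moves replaced/deleted files into a dated
# archive directory instead of discarding them
rclone sync /mnt/media remote:media --backup-dir remote:media-archive/$(date +%F) --progress
```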
Anyway, you asked for best practices and I gave you what I consider a better practice…
Duplicacy is fantastic for most user data (and, ridiculously, doesn’t have a local database), but the process packs files into chunks on the destination. That makes the backup cumbersome to access directly, and since large media files are usually already compressed, you likely won’t benefit much from de-duplication or compression.