I think you’re safe with your setup, I use basically the same, although I don’t use SFTP (I use B2 and S3).
Remember that the weak point of all encryption is the password. If the password is weak it can be obtained by brute force and other methods, although this is difficult to do with a chunk-based backup like Duplicacy’s.
In short, choose a good encryption password to use with your setup above and you will be safe.
I let my password manager choose the encryption password, so I guess it should be safe. Thanks for your answer! Would it be an idea to create a “best practice” guide for a safe and secure backup?
Fresh, up-to-date guides on security (especially) are always much needed. There’s a lot of outdated stuff out there, and just seeing an article or YouTube video dated relatively recently gives some added level of confidence.
Remember though, security can have multiple layers. So as well as good stuff like password managers with genuinely complex passwords, the more layers you can apply, the better.
Maybe even block all IPs except your ISP’s ranges (or country).
That last one would require a bit of extended research into whois, address pools and ranges.
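For example, with `ufw` on a Linux server the “allow only my ISP’s ranges” idea might look something like this - a minimal sketch, where 203.0.113.0/24 is a documentation placeholder you’d replace with the CIDR blocks you actually found via whois:

```shell
# Placeholder range -- substitute the CIDR blocks from your whois research.
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw enable
```

Test from a second, already-open session before logging out, so a typo in the range doesn’t lock you out.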
Oh and make sure you can access your password manager without your repository and/or backups - in case it’s a file-based password manager and it only exists in your repo.
Changing the port does nothing except break some old software that cannot talk to a non-standard port (some rsync clients, for example).
Fail2Ban is almost useless now; in my experience the server is probed not from a single IP but from a botnet, each bot trying once per day. Fail2Ban can’t do anything meaningful there. However, many modern next-gen firewalls such as Sophos UTM9 or XG can, by leveraging aggregate data from a multitude of their peers. It is a good idea to stick one into your network between the gateway and the first switch in L2 mode and let it protect your server. Many are free for home use.
Yes, absolutely disable password auth. It is impossible to guess a good key, so let them go ahead and brute-force it all day long; nobody would bother. Bots look for root:root and admin:admin type of stuff. It also helps to not log the failed logins - only log successes and watch those closely. That removes noise from the logs. An even better approach would be to only connect via VPN.
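To make the key-only part concrete, here’s a minimal sketch of the relevant directives (assuming OpenSSH, where the file is usually `/etc/ssh/sshd_config`):

```shell
# /etc/ssh/sshd_config -- key-only authentication
PasswordAuthentication no            # no password guessing possible
ChallengeResponseAuthentication no   # also disable keyboard-interactive prompts
PubkeyAuthentication yes             # keys only
PermitRootLogin prohibit-password    # root only with a key, if at all
```

Reload the daemon afterwards (e.g. `sudo systemctl reload sshd` on systemd distros) and keep an existing session open while you test, so you don’t lock yourself out.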
Good advice on allowing access only from select networks in your firewall.
I’m sure there is a better way, but this is how I do it; thought I’d share in case you guys find it useful:
Let’s say you want to find the network ranges that gmail/google servers reside in (as an example; I needed to be able to receive mail from gmail servers only).
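One way to pull those ranges is from Google’s published SPF records with `dig` - a sketch, assuming the `_spf.google.com` / `_netblocks*.google.com` zone names Google currently publishes (the exact record contents change over time, so always fetch them fresh):

```shell
# The top-level SPF record lists the netblock zones to expand:
dig +short txt _spf.google.com

# Expand each zone and keep only the IPv4 CIDR ranges:
for zone in _netblocks.google.com _netblocks2.google.com _netblocks3.google.com; do
  dig +short txt "$zone" | tr ' ' '\n' | grep -E '^"?ip4:' | cut -d: -f2
done
```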
Plug that into your firewall. You can of course prune this list further by removing the obvious DNS ranges at the top and other non-mail-related stuff, but it is already good enough.
I administer a couple dozen systems with SSH ports exposed and, in my experience, changing the port cuts down the probing a helluva lot. Yes, ‘security through obscurity isn’t’, but then again, I don’t tell everyone where I keep my house keys or wallet either. There’s simply no reason for me not to change it when it verifiably reduces the number of connection attempts, and I’ve encountered no software that can’t deal with a non-standard port.
While I don’t have too much experience with Fail2ban (most of these are Windows systems), I keep logs and grep them often… the amount of probing I’ve seen from the same IP is pretty astonishing, sometimes for days continuously.
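If you’re grepping on a Linux box rather than Windows, a quick way to see that per-IP probing is to count failed attempts per source address - a sketch assuming a Debian/Ubuntu-style `/var/log/auth.log` and the usual `sshd` “Failed password … from <ip> port <p> ssh2” line format:

```shell
# Top offending IPs by number of failed SSH logins.
# In "Failed password for USER from IP port P ssh2", the IP is the
# fourth-from-last field, hence $(NF-3).
grep 'Failed password' /var/log/auth.log \
  | awk '{print $(NF-3)}' \
  | sort | uniq -c | sort -rn | head
```

Each output line is a count followed by an IP, highest first - the same IPs hammering away for days stand out immediately.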
Now consider the above in light of the very recent discovery that OpenSSH had a 20-year-old flaw allowing username enumeration (possibly without triggering Fail2ban), plus older flaws that reduced randomness in key generation. I most definitely subscribe to the philosophy of layered security these days, no matter how insignificant each layer may seem in isolation.
That is very useful, thank you! Definitely gonna give this a try…