Everyone in my business knows that backups are so important you just do them. The more important the system, the more important the backup. For instance, backing up your web server/site is critical: if you run a business and rely on that server to communicate with your customers, then you must back it up.
What I’m writing about today isn’t how to back up your server or services; rather, it’s a tale about what’s involved in doing it, the considerations and the pitfalls.
Doing it right matters because it allows a quick recovery in the event of a potentially catastrophic failure of hardware or software. Part of what I’m pointing out is that doing it consistent with standards is also very important. Avoid funky scripts if you can, so there’s no need to hire some expensive guru to figure out who did what and how.
Along the way in this tale, a few additional things cropped up.
One of the important things in my shop is making sure I’m secure. As a consequence, I run a configuration that is difficult for the bad guys to maneuver through. The goal is to make the effort tremendously greater than the reward: if they have to fight to get anywhere and everywhere, they’ll likely just move on.
In past articles I’ve indicated that I have VLANs that separate parts of the network. The goal is that a service that just might get broken into sits on a part of the network that can’t become a gateway to other, more important parts containing sensitive data; an attacker would have to break into each segment separately. Security for me also means reports and logs that alert me if someone is trying to break in. My tolerance for break-in attempts is zero. If you try to break in and enter the wrong information, you are banned for a year. In addition, no one outside of the US can even talk directly to my server, thanks to pfSense with pfBlockerNG.
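I won’t claim this is the exact tool in play on my box, but a common way to get “one wrong guess and you’re banned for a year” is a fail2ban jail. A minimal sketch, assuming fail2ban watching sshd (the tool and all values here are assumptions, not my actual config):

    # /etc/fail2ban/jail.local -- hypothetical jail; tool and values are assumptions
    [sshd]
    enabled  = yes
    maxretry = 1        # zero tolerance: one failed attempt triggers the ban
    bantime  = 365d     # banned for a year (fail2ban 0.11+ accepts day suffixes)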
This is a story about how I use Proxmox to run many containers, how Proxmox backs those containers up to a different server, and how those backups are then pulled by another server on a different VLAN, along with the process I use.
I use SSH a lot; you can tell by reading past articles. You can also tell that I try to keep it as secure as possible. In fact, moving inside my LAN is severely limited, and contacting my servers from outside is also severely limited. Getting in is hard, and getting around once in is hard. You might think this is a burden for someone like me, and I’ll admit it is a slight one, but it is worth it. It is more important to be secure than to be easy.
Well, with Proxmox I set up daily backups for the days when I work. From Tuesday to Saturday, around 1:00 AM, the Proxmox server stops each container, copies it to another computer on the VLAN, restarts it, and then goes on to the next container. The backup process maintains two copies: the prior day’s and the one before that.
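I won’t show my exact job definition, but under the hood a Proxmox backup job runs vzdump, so an equivalent cron entry might look roughly like this (the storage name is a placeholder, and older Proxmox releases used --maxfiles 2 instead of --prune-backups):

    # /etc/cron.d/pve-backup -- hypothetical; Tuesday-Saturday at 1:00 AM
    0 1 * * 2-6  root  vzdump --all --mode stop --storage backup-nas --prune-backups 'keep-last=2'

The stop mode is what gives the stop/copy/restart behavior described above, one container at a time.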
I also wanted to have another copy of these containers on a computer on another VLAN, but if you remember, I said the primary VLAN doesn’t route into the LAN. That means the Proxmox server can’t initiate communication to the LAN; it doesn’t mean the LAN can’t initiate communication to the VLAN.
At first I thought I’d use SSHFS to mount a remote share and then copy the files from the mounted share to a local folder on the LAN-based computer. That worked, but it would require ensuring the mount happens on every reboot. What I decided to do instead was use rsync over SSH, connecting securely through the jump server I talked about in one of my other posts, and copy the files that way. This worked, but I noticed a couple of problems. The first was that I couldn’t see the progress, so I found out how to make rsync show it. The next problem was that I needed to increase the speed of the transfers: they were running at between 5 MB/s and 6 MB/s. The network is all gigabit, so the data should have been transferring at upwards of 90 MB/s.

I had an epiphany and looked into pfSense to see if it was slowing the transfer down. After some investigation I found settings I’d put in place years and years ago on that box to limit it to 5 MB/s. Why I set it is beyond me; I think I was trying to give VoIP traffic priority, so that if I were copying files and a phone call came in, VoIP would get the highest priority and bandwidth. I removed the limit, restarted the rsync, and saw six times the speed. My copy process went from 6 hours to 1 hour for one 100 GB file. I took that as a win.
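I don’t show the exact command above, so here is a hedged sketch of the pull; the jump host, backup host, user names, and paths are all placeholders:

    # Pull the Proxmox dumps through the jump host, with per-file progress.
    rsync -avh --progress \
        -e 'ssh -J admin@jump' \
        root@pve-backup:/mnt/backups/dump/ /srv/pve-backups/

The -J option tells SSH to hop through the jump server first, and --progress is what surfaces the transfer rate that exposed the 5 MB/s cap.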
A note about rsync: it will automatically use the full bandwidth unless told otherwise, and by default it copies only changed files, meaning every day this backup will only copy one of the two sets of backups done by the Proxmox backup routine. However, it also means I’ll accumulate backups from prior days and will need to delete files older than two days manually. I have enough space on the RAID array to handle many months of backups, but I’d rather keep it tidy.
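If I ever automate that cleanup, a one-liner like this would do it (the path is a placeholder, and -mtime +2 matches files last modified more than two days ago):

    # Prune dumps older than two days on the LAN copy.
    find /srv/pve-backups -type f -mtime +2 -delete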
There is also a --delete parameter for rsync that removes files from the destination that no longer exist on the source, keeping the two sides in sync. If too much accumulates and I need to free space, I can turn that on.
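With --delete added, the earlier sketch becomes a true mirror (same placeholder hosts and paths as before):

    # --delete removes destination files that no longer exist on the source.
    rsync -avh --progress --delete \
        -e 'ssh -J admin@jump' \
        root@pve-backup:/mnt/backups/dump/ /srv/pve-backups/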
One thing I wanted to ensure was that rsync, when using SSH, utilized my jump server configuration, since that will allow me to back up data from a remote site. I’ve tested it locally and it works, and I know the jump server works perfectly, so I don’t see any real issue backing up the remote server. Still, it was proper to test it first, and I’m happy with the result.
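Baking the jump into ~/.ssh/config keeps the rsync command clean; a minimal sketch, with hypothetical host names and addresses:

    # ~/.ssh/config -- hosts and addresses are assumptions
    Host pve-backup
        HostName 10.20.0.5
        User root
        ProxyJump admin@jump

With this in place, rsync can address the host directly, and the jump happens automatically: rsync -avh --progress pve-backup:/mnt/backups/dump/ /srv/pve-backups/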
At this point I started thinking about the travesty that was the Ubuntu 19.10 update. It moved MySQL from version 5.7 to version 8, and that caused me grief: none of my database accesses worked, even though I checked and the databases, tables, and data were all there. I also noted that Kodi failed to work, so I had to go back to an older version, which meant that Kodi machine could no longer play Netflix. Netflix on Kodi on a Linux-based media center.
About a week ago some updates to Kodi allowed me to install the latest version, but it still had problems with the shared MySQL database. I was happy to see it working at all, so I left it at that. What I did instead was create a Proxmox container running MySQL 5.7 and use that as my shared database for Kodi and for Subsonic.
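For anyone wiring this up themselves, Kodi reads its shared-database settings from advancedsettings.xml; a sketch with placeholder host and credentials (mine differ):

    <!-- ~/.kodi/userdata/advancedsettings.xml; host/user/pass are placeholders -->
    <advancedsettings>
      <videodatabase>
        <type>mysql</type>
        <host>10.20.0.7</host>
        <port>3306</port>
        <user>kodi</user>
        <pass>kodi-password</pass>
      </videodatabase>
      <musicdatabase>
        <type>mysql</type>
        <host>10.20.0.7</host>
        <port>3306</port>
        <user>kodi</user>
        <pass>kodi-password</pass>
      </musicdatabase>
    </advancedsettings>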
I wanted to do this at home, so I began to set up LXC, the container technology that Proxmox is built on. I was able to get it installed but found there were issues, and that I’d have to learn a lot about using LXC at the command line. I quickly picked up the commands to download and install a container. I chose Debian Buster (amd64) and tested to see if it could access the internet. Yep. Tested to see if I could access it from another computer on the LAN. Nope. I adjusted a few things and found that I’d disabled my own remote access to the machine. Oh well, that’s where I stand, and I’ll work tonight on getting it to function again.
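For reference, the plain-LXC commands I picked up look roughly like this (the container name is a placeholder):

    # Create a container from the 'download' template: Debian Buster, amd64.
    lxc-create -n testct -t download -- -d debian -r buster -a amd64

    # Start it, then run a quick connectivity test from inside.
    lxc-start -n testct
    lxc-attach -n testct -- ping -c 3 deb.debian.org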