Obviously, without the network nothing works on Proxmox: no way to get in, no way to serve pages, nothing is possible except going to the physical console and typing commands.
The upgrade process itself is pretty simple; you are actually upgrading the underlying Debian install to Bullseye (probably from Buster). Predictable issues can crop up, and the answers to those issues are not always what the developers state. The forums and other requests for help are typically not on point because they only cover the basics.
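For reference, the core of the procedure as described in the official upgrade guide is to repoint the apt repositories from buster to bullseye and dist-upgrade. A sketch (the enterprise repository file is shown; adjust if you use the no-subscription repo):

    # Make sure you are fully up to date on Proxmox 6.4 first
    apt update && apt dist-upgrade

    # Repoint the Debian repositories from Buster to Bullseye
    # (the security suite was renamed in Bullseye)
    sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list

    # Repoint the Proxmox repository as well
    sed -i 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-enterprise.list

    # Perform the actual upgrade
    apt update && apt dist-upgrade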
In our case we were upgrading 1 of 3 nodes in a cluster as a test; nothing in the documentation stated that all nodes need to be upgraded at once. We followed the online guide for the upgrade. There is some special word salad in it, so you have to be precise in your reading and interpretation. I was reading it with someone else, and as we discussed each section the two of us came away with different interpretations of the things outlined.
We paid special attention to the network section. Even so, it appeared so innocuous that we just went ahead. We ran the pve6to7 checker to test for issues. We were warned to shut down the VMs and containers before starting, and there was a reference to group permissions. We shut down the containers/VMs and then ignored the permissions warning, which turned out not to be an issue at all.
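The checker ships with Proxmox 6.4 and can be run before, during, and after the upgrade:

    # Run the full pre-upgrade checklist
    pve6to7 --full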
After we brought the server back up, all the containers and VMs appeared to start without issue. However, we ended up without any network. Reading online, we found discussions indicating that we might need to add the hwaddress to the /etc/network/interfaces file. We attempted this without luck.
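For anyone trying the same fix: the suggestion comes from Bullseye changing how a bridge picks its MAC address, and the idea is to pin the bridge to the old NIC MAC. A minimal sketch, assuming a bridge vmbr0 on a NIC eno1 (names, addresses, and the MAC below are placeholders, not our actual config):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # Pin the bridge MAC to the physical NIC's address (placeholder value)
        hwaddress aa:bb:cc:dd:ee:ff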
After some discussion of our interfaces file, we decided to remove the bond we were using for failover. Once we did, networking came back up and everything was communicating.
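Concretely, that meant going from a bridge on top of an active-backup bond to a bridge directly on one NIC; roughly like this (interface names and addresses are illustrative):

    # Before: vmbr0 bridged onto a failover bond
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    # After: bond0 stanza removed, vmbr0 bridged straight onto the NIC
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0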
I think there was another issue in the fact that the other two nodes in the cluster had not been updated. With the knowledge gained from our prior attempts, the other two nodes were then successfully updated: we removed the bond0 section from the interfaces file and rebooted each server. Everything worked on all 3 nodes (so far).
I did this on a 4th computer and, though I was successful, I noted that different things happened. I do a lot of updates and this one is pretty easy; however, having watched the three prior updates, I recognized certain messages, and on this last one I saw those messages and more. It's sometimes scary to start this stuff, so you have to plan it all out. The boot-up messages told me that some ZFS mounts were not mounted. I still have to investigate that.
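If you see the same ZFS messages at boot, a reasonable starting point for the investigation (commands I would try, not a confirmed fix) is:

    # Which datasets are and are not mounted?
    zfs list -o name,mounted,mountpoint

    # Did the mount service fail during boot?
    systemctl status zfs-mount.service

    # Try mounting everything that should be mounted
    zfs mount -a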
We were doing this upgrade because we were virtualizing pfSense, and the throughput measured in the pfSense VM was horrible, often 1/4 to 1/3 of the performance we get on bare hardware. With gigabit fiber, only getting 25-33% of the performance is a big hit. We were testing to see if we could come up with a solution. I brought one of my spare R720 computers over to the shop and we installed Proxmox 7 and pfSense 2.5.2 onto it. With these as clean installs, the throughput tests from the pfSense VM were almost as good as bare metal. The conclusion is that to get this throughput we need to be on Proxmox 7 and pfSense 2.5.2 or greater.
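For anyone repeating the comparison, a generic way to benchmark through the firewall is with iperf3 (not necessarily the exact tool we used; the host address is a placeholder):

    # On a machine behind pfSense, run the server
    iperf3 -s

    # On a machine on the other side of the firewall, push traffic
    # through it for 30 seconds over 4 parallel streams
    iperf3 -c 192.168.1.50 -t 30 -P 4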