I’m sure many of you follow me because you use Proxmox. It’s been a staple of my content for some time now. While working on the next episode of the Ceph series, I thought it would be good to do a separate segment on networking. So, here you have it: the basics of VLANs, bridges, and bonds in Proxmox VE. I’m only covering the native Linux versions, not Open vSwitch and VXLAN. I’m sure I’ll get around to a video on those topics someday.
So, what are the most important things to know when choosing a network topology for Proxmox (or any virtualization environment)? TRAFFIC! Where is traffic going, and how much of it is going everywhere?
- How much traffic is going to Proxmox itself? This includes the web UI and API (which should be minimal), but also SPICE sessions if you’re using SPICE for VDI.
- How much traffic is going from Proxmox to your storage solutions? If you’re using NFS / SMB / iSCSI, it could be a lot. Are you keeping your storage network separated, either physically or virtually (VLANs)? Proxmox will need an IP address on any network it uses to communicate with storage.
- How much traffic is going to your VMs? Do they need to be on specific VLANs?
- Do any VMs do routing or need access to a VLAN trunk port? If so, should they get open access, or be restricted to certain VLANs? Do you want to expose each VLAN as a separate virtual network interface, or trunk them over a single interface?
- Do you require high availability at the network level, i.e. bonded failover? Do you want to use a slower 1G network when your 10G network fails, or just lose connectivity altogether?
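To make the VLAN questions above concrete, here’s a minimal `/etc/network/interfaces` sketch of a VLAN-aware bridge, plus a VLAN sub-interface that gives the Proxmox host its own IP on a storage VLAN. The interface name (`eno1`), VLAN ID (20), and addresses are placeholders for illustration; adjust them to your environment.

```
auto lo
iface lo inet loopback

iface eno1 inet manual

# VLAN-aware bridge: guests can tag into any VLAN listed in bridge-vids,
# and you set the tag per-VM in the guest's network device settings.
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# Give the Proxmox host itself an IP on the storage VLAN (VLAN 20 here),
# since the host needs an address on any network it uses to reach storage.
auto vmbr0.20
iface vmbr0.20 inet static
        address 10.0.20.10/24
```

Restricting `bridge-vids` to a short list (e.g. `10,20,30`) instead of `2-4094` is one way to answer the “open access or restricted” question for VMs that see the trunk.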
Once you can answer these questions, you can decide how to arrange the physical interfaces you have (or are buying/adding) for the best performance for your use case.
In my test setup, I’m going to demonstrate bonding between identical NICs (two gigabit) and mismatched NICs (multi-gig + gigabit); the same concepts apply to 10G and faster networking as well.
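For the mismatched-NIC case, active-backup is the usual choice, since it fails over cleanly without requiring switch support. This is a sketch only; the NIC names (`enp2s0` for the multi-gig port, `eno1` for the gigabit port) and addresses are assumptions for illustration.

```
# Active-backup bond: traffic prefers the faster NIC (bond-primary);
# if its link drops, the bond fails over to the 1G NIC automatically.
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0 eno1
        bond-mode active-backup
        bond-primary enp2s0
        bond-miimon 100

# Bridge on top of the bond, so VMs survive a single link failure too.
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```

`bond-miimon 100` checks link state every 100 ms; with identical NICs and a supporting switch you could use LACP (`bond-mode 802.3ad`) instead for aggregation rather than pure failover.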
And a good reference from Proxmox: Proxmox Networking Admin Guide