After my experience with FreeBSD Jails and LXC containers, I wanted to get into ‘real’ virtualization - and all of the advantages that come with it, like VM snapshot and restore features, moving VMs between my workstation and production environment, and separating my storage from my compute. To this end, I built the Minilab, a small-scale virtualization lab that will be at home in any house or apartment.

The Choice of Hardware

I already had ‘production’ work running on my main server (including my very overworked automation server Telstar and my security camera recorder ZoneMinder), so I needed some new hardware to avoid downtime during the transition. I built my new virtualization server around an ASRock DeskMini A300, a small form factor barebones PC that supports fairly modern AMD Ryzen APUs. It has a few features that made it attractive to me as a minilab:

  • AMD Ryzen APUs are quite power efficient, and include a decently powerful Vega GPU (if I can ever pass that through to a VM without breaking the host)
  • One M.2 slot for WiFi cards and one M.2 slot for NVMe storage (but not SATA, as I would learn)
  • Space and cabling for a 2.5" SATA SSD
  • A full suite of video outputs (HDMI, DisplayPort, and VGA), which is important since I have an old VGA monitor next to my server rack
  • On-board Gigabit Ethernet, USB 2.0, USB 3.0, and USB-C
  • Single 19V power brick with all power supply circuitry on the motherboard
  • No extra IO that I don’t need

In all truth, I originally bought this SFF PC to be the basis of a 3D-printed PC project (since it’s small enough to 3D print a case for on my Prusa), but I never actually built that project, and it was available, so it became the virtualization lab. I fitted it out with 16GB of DDR4-3000, a 240GB 2.5" SATA SSD, and most importantly, an AMD Ryzen 2400G APU. I know the 2400G is older than the processor in my production server, but I bought the parts for this back when the 3400G was brand new and the 2400G was on sale, so it’s what I’m going with.

Setting up the Minilab

The first choice for software on the Minilab was XCP-NG, a hypervisor operating system based around the Xen hypervisor. Xen is the open-source basis of a number of hypervisors, including Citrix XenServer, the open-source XCP-NG, and the privacy-focused Qubes OS project, among others. XCP-NG combined with Xen Orchestra for management seemed like a good choice to get started with, given my desire to stay with open-source software (so Hyper-V and VMware are out) and the great snapshot and backup features that Xen Orchestra provides on top.

The Minilab has a 240GB SSD which is available for VMs, and the Xen Orchestra VM is stored there (so I can manage the Minilab even if the remote storage has an issue). I also attached a ZFS dataset on my primary server over the network for VM storage, and VMs can of course map their own network shares on that server as well. All seemed well at this point.
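
For reference, the network VM storage is roughly this shape: an NFS export of a ZFS dataset on the storage server, attached to XCP-NG as a shared storage repository. This is only a sketch - the pool/dataset names and IP addresses below are placeholders, not my actual configuration.

    # On the storage server: create a dataset and export it over NFS
    # ('tank/vmstore' and the addresses are placeholders)
    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore   # in practice, restrict this to the lab subnet

    # On the XCP-NG host: attach the export as a shared NFS storage repository
    xe sr-create type=nfs shared=true content-type=user \
        name-label="minilab-vmstore" \
        device-config:server=192.168.1.10 \
        device-config:serverpath=/tank/vmstore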

Setting up Home Assistant

The first ‘real’ VM I ran on the new Minilab was Home Assistant. I imported the Home Assistant OS disk image, expanded the filesystem to 32GB, and started playing around. I don’t have a project article on my initial dealings with Home Assistant (I’m certainly not an expert on the topic yet), but it was the first real test of the Minilab.
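
I didn’t keep detailed notes on the import, but the rough shape of it, assuming you start from the qcow2 image of Home Assistant OS, is to convert it to VHD for Xen Orchestra’s disk import and then grow the disk. The filename, version, and UUID below are placeholders:

    # Convert the Home Assistant OS qcow2 image to VHD for import into Xen Orchestra
    qemu-img convert -O vpc haos_ova-x.y.qcow2 haos.vhd

    # After importing the VHD, grow the disk to 32GB (size is 32 GiB in bytes)
    xe vdi-resize uuid=<vdi-uuid> disk-size=34359738368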

Setting up Frigate

After many months of using the Minilab with Home Assistant, I wanted to add Frigate NVR as a new VM. Home Assistant OS supports running it as a supervised add-on, but I wanted to store the video data on my primary storage pool over the network, and Home Assistant OS doesn’t allow any package management or file mapping within the OS (it’s purely a Docker host managed through the Home Assistant Supervisor). So, I created a new Ubuntu VM to run Frigate, and bought a pair of Coral AI devices to offload AI inferencing for Frigate. I was able to install and run Frigate, but with CPU-only detection it couldn’t handle my full suite of 5 security cameras. I kept it going with just one camera and let it run for a few weeks to get a feel for how well Frigate works with Home Assistant. This is the first time I’ve gone straight to the Minilab to test new software instead of running a VM on my workstation in VirtualBox, and I’m really enjoying having a compute server to use, even if it’s not very powerful.
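
The Ubuntu VM setup at that stage (CPU-only detection, no Coral yet) is simple enough to sketch: mount the storage pool over NFS, then run Frigate in Docker with the recordings directory pointed at that mount. The server address, export path, and names below are placeholders, and the compose file is trimmed down to the relevant parts:

    # /etc/fstab on the Ubuntu VM: mount the NVR share from the storage server
    192.168.1.10:/tank/frigate  /mnt/frigate  nfs  defaults  0  0

    # docker-compose.yml (trimmed): run Frigate with recordings on the NFS mount
    services:
      frigate:
        image: ghcr.io/blakeblackshear/frigate:stable
        restart: unless-stopped
        shm_size: "256mb"
        volumes:
          - ./config:/config
          - /mnt/frigate:/media/frigate
        ports:
          - "5000:5000"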

Unfortunately, when the two Coral AI devices arrived (one M.2 B+M key and one M.2 E key - finally a use for the WiFi slot on the DeskMini A300!), I struggled to get PCIe passthrough working in XCP-NG. I was able to hide the devices from dom0 (the Linux host which manages the Xen hypervisor), but couldn’t get them to pass through to a guest. Xen was not showing IOMMU as enabled, even though the CPU supports it, and IOMMU support is required for PCIe passthrough.
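
For anyone retracing my steps, the IOMMU check and the dom0 hiding looked roughly like this - the PCI addresses are placeholders, not the Minilab’s actual ones:

    # Check whether Xen sees the IOMMU - 'directio' should appear in virt_caps
    xl info | grep virt_caps
    xl dmesg | grep -i iommu

    # Hide the Coral devices from dom0 so they can be passed through
    # (PCI addresses are placeholders - find yours with lspci)
    /opt/xensource/libexec/xen-cmdline --set-dom0 \
        "xen-pciback.hide=(0000:01:00.0)(0000:02:00.0)"
    reboot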

Switching to Proxmox

Even though I was still overall happy with XCP-NG, I decided to switch to Proxmox to get better hardware passthrough support. It’s based on Debian Linux and uses the KVM hypervisor (the Linux Kernel-based Virtual Machine), so it should have the advantage of Linux’s excellent hardware support. As expected, I just had to enable the AMD IOMMU in the Linux boot options and it seemed to work correctly. The host Debian system kept trying to use the Corals itself, so I had to blacklist the drivers for them, but they still didn’t show up in the PCIe passthrough menu. I was able to get it done using the command line, but it wasn’t a smooth process. Now, though, I have an Ubuntu 20.04 VM with the two Coral AI accelerators available for use, and I can install Frigate.
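
The whole process boils down to something like the following. This is a sketch rather than my exact configuration - the VM ID and PCI addresses are placeholders:

    # /etc/default/grub - enable the IOMMU on the kernel command line,
    # then run 'update-grub' and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

    # /etc/modprobe.d/blacklist-coral.conf - stop the host from claiming the Corals
    # (gasket/apex are the Coral Edge TPU kernel modules); run 'update-initramfs -u' after
    blacklist gasket
    blacklist apex

    # Attach both Corals to the VM from the CLI, since the GUI wouldn't list them
    # (VM ID 100 and the PCI addresses are placeholders - find yours with 'lspci -nn')
    qm set 100 -hostpci0 01:00.0 -hostpci1 02:00.0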

Because I blew away my XCP-NG install, I had to back up and reload my Home Assistant installation across hypervisors. I made a backup in Home Assistant OS and also a backup in Xen Orchestra, so I’d have options in the restore process. When restoring, I created a new VM in Proxmox with Home Assistant OS, using the latest install image, and reloaded the backup file there. After a few minutes it had figured itself out and was back exactly as I’d left it, with no need for the disk image from Xen Orchestra.
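
For the curious, creating the Home Assistant OS VM in Proxmox from the qcow2 image comes down to something like this; the VM ID, resources, storage name, and image version are placeholders, and the Home Assistant backup itself gets restored through its own web UI afterwards:

    # Create an empty UEFI VM and import the Home Assistant OS disk image into it
    # (an EFI disk is also needed since HAOS boots via UEFI - I added that in the GUI)
    qm create 101 --name haos --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0 --bios ovmf
    qm importdisk 101 haos_ova-x.y.qcow2 local-lvm
    qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot order=scsi0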

The Final Verdict

I liked XCP-NG and Xen Orchestra, but I needed the better hardware passthrough support that Proxmox provides. I’m not sure which I’ll end up going with in my next homelab, but I’m certainly happy to get away from the console LXC commands of my current server. I’m not using the ZFS features in Proxmox since I already have a ZFS pool on the storage server, and I like the separation of compute from storage, because a lot of my storage is accessed directly rather than through the VMs. In my mind, this will become an advantage as I scale up and the load on the storage side of the system increases, but in reality it’s still mostly just a fictional future load for a future homelab.