In this project, I explore using low-cost thin clients as cluster nodes, the fundamentals of Proxmox clustering, redundant storage, and hyper-converged infrastructure using Proxmox and Ceph.

Setting up a Proxmox HA Cluster

In the first video, I take the Dell Wyse 5060 I bought before and … buy two more. Once I have three, I build a complete high-availability cluster out of them, demonstrating the very basics of Proxmox clustering, high-availability resources, and how Proxmox handles failure.
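For reference, the cluster setup in that video boils down to just a few commands. This is a minimal sketch rather than the exact steps from the video, and the cluster name, IP address, and VM ID are placeholders:

# On the first node, create the cluster (the name is arbitrary):
pvecm create wyse-cluster

# On each additional node, join using the first node's IP:
pvecm add 192.168.1.101

# Check membership and quorum from any node:
pvecm status

# Mark a guest as highly available so it restarts elsewhere after a node failure:
ha-manager add vm:100 --state started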

Small Proxmox Cluster Tips and Tricks, and QDevices

In this video, I walk through the nuances of 2- and 3-node Proxmox clusters, maintaining quorum, and installing a QDevice if you need one.
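Roughly, adding a QDevice comes down to the following, assuming a spare always-on Debian machine (a Raspberry Pi works fine) whose IP address here is just a placeholder:

# On the external QDevice host:
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# From any one cluster node, register the QDevice:
pvecm qdevice setup 192.168.1.50

# Confirm the extra vote shows up:
pvecm status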

Making the $250 Proxmox HA Cluster Hyperconverged

In this video, I add additional RAM and storage to each node, and turn it into a hyper-converged cluster using Ceph. Ceph is the filesystem of BIG DATA, scale-out solutions, so I’m excited to learn it. But Ceph is also a BIG topic, so in this episode I just focused on setting it up within Proxmox and creating a basic replicated pool for RBD (RADOS Block Device) storage of virtual machines, all features that can be done through the Proxmox GUI. I’ll dive into deeper Ceph topics in the future.
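The GUI steps from that video map onto Proxmox’s pveceph command line roughly like this. It’s a sketch, assuming a dedicated Ceph network of 10.10.10.0/24 and an empty data disk at /dev/sdb on each node, both of which are placeholders:

# On every node: install the Ceph packages
pveceph install

# On one node: initialize Ceph, pointing it at the Ceph network
pveceph init --network 10.10.10.0/24

# On each node: create a monitor, a manager, and one OSD per data disk
pveceph mon create
pveceph mgr create
pveceph osd create /dev/sdb

# Create a replicated pool and register it as RBD storage in one step
pveceph pool create vm-pool --add_storages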

All about POOLS | Proxmox + Ceph Hyperconverged Cluster fäncy Configurations for RBD

In this video, I expand on my hyper-converged Proxmox + Ceph cluster from the last video to create more custom pool layouts than Proxmox’s GUI allows. This includes setting the device class (HDD / SSD / NVMe), the failure domain, and even erasure coding of pools. All of this is then set up as an RBD (RADOS Block Device) storage location in Proxmox, so we can store VM disks on it.
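The gist of those layouts, done from a node’s shell rather than the GUI, looks roughly like the sketch below. The rule, profile, and pool names are placeholders, and the 2+1 erasure code matches the profile discussed next:

# CRUSH rule that keeps data on SSD OSDs, with host as the failure domain
ceph osd crush rule create-replicated ssd-only default host ssd

# Replicated pool pinned to that rule (also used for RBD metadata below)
ceph osd pool create vm-ssd 32 32 replicated ssd-only

# 2+1 erasure code profile: 2 data chunks + 1 parity chunk, one chunk per host
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host

# Erasure-coded data pool; RBD requires overwrites to be enabled on it
ceph osd pool create vm-ec-data 32 32 erasure ec-2-1
ceph osd pool set vm-ec-data allow_ec_overwrites true

# Tag both pools for RBD to avoid a health warning
ceph osd pool application enable vm-ssd rbd
ceph osd pool application enable vm-ec-data rbd

# Add it to Proxmox as RBD storage: metadata on the replicated pool,
# VM disk data on the erasure-coded pool
pvesm add rbd vm-ec --pool vm-ssd --data-pool vm-ec-data --content images,rootdir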

After all of this, I now have the flexibility to assign VM disks to HDDs or SSDs, and use erasure coding to get 66% storage efficiency instead of 33% (doubling my usable capacity for the same disks!). With more nodes and disks, I could improve both the storage efficiency and failure resilience of my cluster, but with only the small number of disks I have, I opted to go for a basic 2+1 erasure code.
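For reference, the math behind those numbers: a 3-way replicated pool stores every object three times, so the usable fraction of raw capacity is 1/3 ≈ 33%, while a k=2, m=1 erasure code stores k + m = 3 chunks for every k = 2 chunks of data, for k / (k + m) = 2/3 ≈ 66% — the doubling mentioned above.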
