All about POOLS | Proxmox + Ceph Hyperconverged Cluster Fancy Configurations for RBD

In this video, I expand on the last video of my hyper-converged Proxmox + Ceph cluster to create more custom pool layouts than Proxmox’s GUI allows. This includes setting the device class (HDD / SSD / NVMe), the failure domain, and even erasure coding of pools. All of this is then set up as a storage location in Proxmox for RBD (RADOS Block Device), so we can store VM disks on it.
read more →
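For reference, here is a minimal sketch of what that kind of custom pool setup looks like on the CLI. The pool and rule names, PG counts, and the 2+1 erasure-code profile are example placeholders, and the --data-pool option assumes a reasonably recent Proxmox VE release:

```bash
# Replicated pool restricted to the SSD device class, failure domain = host
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create ssd-pool 128 128 replicated ssd-only
ceph osd pool application enable ssd-pool rbd

# Erasure-coded data pool (k=2, m=1) on HDDs; RBD needs EC overwrites enabled
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 \
    crush-failure-domain=host crush-device-class=hdd
ceph osd pool create ec-data 128 128 erasure ec-2-1
ceph osd pool set ec-data allow_ec_overwrites true
ceph osd pool application enable ec-data rbd

# RBD still needs a small replicated pool for metadata when data lives on an EC pool
ceph osd pool create ec-meta 32 32 replicated ssd-only
ceph osd pool application enable ec-meta rbd

# Register both as Proxmox RBD storage (run on a PVE node)
pvesm add rbd ceph-ssd --pool ssd-pool --content images,rootdir
pvesm add rbd ceph-ec  --pool ec-meta --data-pool ec-data --content images,rootdir
```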

Making the $250 Proxmox HA Cluster Hyperconverged

I previously set up a Proxmox high availability cluster on my $35 Dell Wyse 5060 thin clients. Now I’m improving that cluster to make it hyperconverged. Hyperconvergence is a huge buzzword in the industry right now; essentially, it combines storage and compute in the same nodes, with each node contributing some of both, and clusters the storage and the compute together. In traditional clustering you have a separate storage system (a SAN) and a compute system (a virtualization cluster, Kubernetes, …). Merging the SAN into the compute nodes means all of the nodes are identical, and network traffic flows, in aggregate, from all nodes to all nodes without a bottleneck between the compute and storage tiers.
read more →
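As a rough sketch of what the convergence step involves on each Proxmox node (the 10.0.0.0/24 network and the /dev/sda disk are placeholders for your own cluster network and spare OSD disk):

```bash
# On every node: install the Ceph packages
pveceph install

# On the first node only: initialise Ceph with the cluster/public network
pveceph init --network 10.0.0.0/24

# On each node: add a monitor, a manager, and an OSD backed by the local disk
pveceph mon create
pveceph mgr create
pveceph osd create /dev/sda
```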

Hyper-Converged Cluster Megaproject

In this project, I explore using low-cost thin clients as cluster nodes, the fundamentals of Proxmox clustering, redundant storage, and hyper-converged infrastructure using Proxmox and Ceph.

Setting up a Proxmox HA Cluster

In the first video, I take the Dell Wyse 5060 I bought before and … bought two more. Once I had three, I built a complete high availability cluster with them, demonstrating the very basics of Proxmox clustering, high availability resources, and how Proxmox handles failure.
read more →
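For context, the CLI equivalent of that basic three-node HA setup looks roughly like this (the cluster name, the first node’s IP, and VM ID 100 are placeholders):

```bash
# On the first node: create the cluster
pvecm create wyse-cluster

# On the other two nodes: join it, pointing at the first node's IP
pvecm add 192.168.1.101

# Check quorum across the three nodes
pvecm status

# Make VM 100 a highly available resource so it restarts elsewhere on node failure
ha-manager add vm:100 --state started
ha-manager status
```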