Since I already talked about my home lab setup in detail, it’s probably a good idea to cover the hardware too.

Let’s start with the old:

The Hyper-V server uses an older Celeron SoC board. This is the same hardware I started my virtualization journey with. You can read more about that here.

Last week I upgraded its memory to 16GB, and I’m sure a storage upgrade will soon follow.

So why Hyper-V?

Note: this is the free standalone Hyper-V Server, not Windows Server with the Hyper-V role

The answer is simple:

  • It is a good virtualization platform and has matured well over the years
  • Installation is identical to the Windows Server family and does not require any extra drivers for the on-board graphics to work
  • There are no feature limitations in this free offering (unlike the free ESXi), so Veeam and other hypervisor-level backup software work as expected

Now the newer bits:

A Supermicro Xeon-D is the main beast. It hosts 12 to 19 VMs, including the master firewall. The initial build had 32GB of ECC RAM (a single module) and one 512GB NVMe M.2 drive, with an Nvidia GT710 graphics card passed through to a Windows VM that was connected to the TV as my media centre.

After a year I added another 32GB of RAM and 2 x 512GB SATA SSDs, and replaced the GT710 with a quad-port PCIe network card for pfSense.

The latest addition:

At the same time as I added more memory to the Celeron server, I also purchased my very first NAS device. This 3-bay QNAP NAS replaced my virtualized NAS solutions/experiments:

  • First, NAS4Free with passthru disks on ESXi, and then
  • an Ubuntu server with passthru disks on Hyper-V

As I expanded my lab and added more services, it became obvious that a separate, central storage device, independent of the hypervisor hosts, was very much needed.