A closer look at the components running my lab

Continuing from the last post, where I gave a detailed view of my home lab setup, I thought it would be helpful to look at each component up close.

So let’s start with the old:

The Hyper-V server uses an older Celeron SoC board. This is the same hardware I started my virtualization journey with. You can read about the whole setup and how I prepped it for ESXi on the Getting Started page.

Last week I upgraded the memory to 16GB (maxing out what the board can take), keeping the same 120GB SSD for the hypervisor and VM storage. Now that the RAM has doubled to allow for more VMs, I’m sure a storage upgrade will soon (inevitably) follow.

Why Hyper-V? (Reminder: I am using and talking about the free standalone Hyper-V Server.) The answer is simple: it is a good virtualization platform and has matured since its inception. Installation is identical to the Windows Server family’s and, unlike ESXi, it does not require any extra drivers to work with this older board. Graphics drivers also work as expected, so a monitor can be connected if and when needed. Honestly, anything labeled free and from Microsoft catches my attention. To top it off, unlike ESXi, the APIs are not locked down or limited in this free version, so you can use backup software (like Veeam) to back up your VMs.
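
To make that point concrete, here is a minimal sketch of talking to the same open WMI virtualization API that backup and management tools build on, just to list the VMs and their state. It assumes the third-party Python "wmi" package and that it runs on the Hyper-V host itself (or under credentials allowed to query it remotely).

```python
# Minimal sketch: enumerate VMs on a Hyper-V host through the public WMI
# virtualization API (root\virtualization\v2). Assumes the third-party
# "wmi" package (pip install wmi) and that this runs on the Hyper-V host.
import wmi

HYPERV_NAMESPACE = r"root\virtualization\v2"

conn = wmi.WMI(namespace=HYPERV_NAMESPACE)

# Msvm_ComputerSystem returns the host plus every VM; guests carry the
# caption "Virtual Machine".
for system in conn.Msvm_ComputerSystem():
    if system.Caption == "Virtual Machine":
        # EnabledState 2 means the VM is running.
        state = "running" if system.EnabledState == 2 else f"state {system.EnabledState}"
        print(f"{system.ElementName}: {state}")
```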

Now the newer bits:

The main beast is a Supermicro SuperServer on the Xeon-D platform. It runs 12 to 19 VMs, keeps my network running, and of course has newer and more capable hardware. When I built it a year and a half ago, I started with 32GB of RAM (one module) and one 512GB NVMe M.2 drive. I also passed through an Nvidia GT 710 graphics card to a Windows VM that was connected to the TV as an entertainment unit. Of course, passthrough for that video card is not officially supported, but you can always cheat and force it by editing the /etc/vmware/esx.conf file.
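
For illustration, here is a rough sketch of that cheat as a script you could run in the ESXi host shell (which ships a Python interpreter). The PCI address is a made-up placeholder, and the exact formatting of the device entries in esx.conf varies between ESXi versions, so treat this as the idea rather than a recipe; keep a backup and reboot the host afterwards.

```python
# Rough sketch: flip the GPU's owner to "passthru" in /etc/vmware/esx.conf so
# ESXi hands the card to a VM even though it is not on the supported list.
import re
import shutil

ESX_CONF = "/etc/vmware/esx.conf"
DEVICE_ADDRESS = "0000:02:00.0"  # placeholder PCI address for the GT 710; check lspci on the host

# Keep a copy of the original file in case the edit goes wrong.
shutil.copy2(ESX_CONF, ESX_CONF + ".bak")

with open(ESX_CONF) as f:
    lines = f.readlines()

with open(ESX_CONF, "w") as f:
    for line in lines:
        # Only touch the owner entry that belongs to our device; setting the
        # owner to "passthru" is what marks the card for passthrough.
        if DEVICE_ADDRESS in line and "/owner" in line:
            line = re.sub(r'"[^"]*"', '"passthru"', line)
        f.write(line)
```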

After a year I expanded to 64GB, added 2 x 512GB SATA SSDs, removed the graphics card, and added a PCIe network card for pfSense. The HTPC case now used for the Hyper-V setup was originally home to the Xeon-D board.

It has been running ESXi since the purchase and, as explained in the last post, hosts one of the firewalls and a whole bunch of Linux VMs supporting core network services. I started with the free ESXi license but upgraded to VMUG Advantage licensing 9 months ago. With VMUG Advantage you get vSphere and vCenter licenses, which unlock the APIs and provide a central management experience.
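
As a taste of what those unlocked APIs mean in practice, here is a small sketch (with placeholder hostname and credentials) that connects to vCenter with pyVmomi, VMware’s Python SDK, and lists the registered VMs.

```python
# Small sketch: list VMs via the vSphere API using pyVmomi (pip install pyvmomi).
# Hostname and credentials are placeholders for my lab's values.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab vCenter with a self-signed cert

si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Walk the whole inventory for VirtualMachine objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(f"{vm.name}: {vm.runtime.powerState}")
finally:
    Disconnect(si)
```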

Spoiler alert: both ESXi and Hyper-V will soon be replaced, but I will talk about that in another post.

The latest addition:

At the same time I upgraded the memory on the old Celeron server, I decided (against every fiber in my body) to buy a NAS device. I had always virtualized my NAS: for a short time with NAS4Free on ESXi with passthrough disks, and then as an Ubuntu NFS/Samba server with a passthrough disk on Hyper-V. But I eventually realized that I needed a standalone NAS that is not dependent on either of the virtual servers. The winner was the QNAP TS-328, yes, a 3-bay device (RAID 5) that received a very good review from STH and came with a reasonable price tag, along with 3 x 3TB WD NAS drives.