It’s nearing the end of 2022 and it’s time to refresh the homelab again. I recently moved two of the three custom-built Supermicro servers previously in the home network cabinet over to the Colo Cluster, creating a need to replace the empty rack units with something that could handle home services. Notice I used the word need there, although it’s really just a euphemism for want as I have plenty of servers that could handle this in the Data Center. Details aside, I started down the familiar research road to see what was suitable and “affordable”.

The main criteria were:

  • Short enough to fit into the network cabinet (20" max)
  • Beefy enough to run Proxmox LXC and VMs for home use
  • Flexible enough to upgrade further down the road (memory/disk)
  • Relatively quiet or able to be made so
  • Solid BMC/OOB management with modern creature comforts (HTML5 console, Redfish)
  • [ Good | Fast | Cheap ], pick any two

So after thinking about Dell and Supermicro again, I remembered I don’t have any HP systems in the mix, and having them would also help with some crossover work I occasionally do for the day job. It just so happens HP makes a machine that fits the bill reasonably well. Enter the HP DL20 Gen9:

HP DL20 Gen9

What about the G10 you ask? Well yes, I would have preferred to go with a G10 but one of the criteria above was cheap. Mostly cheap anyway, because I want to add some additional goodies. I managed to pick up two G9s in the configuration shown for around $300 each, whereas a similarly configured G10 is going for $1000+. Note that you can get CTO configurations of various kinds if you really want, although these were available on eBay and seemed reasonable. But I’m getting ahead of myself, because there was more research involved before making the decision.

The Research

The TL;DR here is that these are nice little machines that ultimately meet the criteria above, aside from a bit of work to make them more civilized with third-party components (fan noise, HBA mode for ZFS, etc.). If you’re reading this and getting your own ideas, remember that one Radiohead song. Do your own research and make sure it works for you too.

The Goodies

So I went cheap(ish) because I have other nice machines to do most of the heavy lifting, and the majority of the home services aren’t terribly demanding. But that doesn’t mean I can’t put some bling in them and have some fun.

Boot Drives

For boot drives, I always want a proper ZFS mirror setup. I picked up five SK Hynix 256GB NVMe M.2 cards for cheap. There will be two per server, and I’ll have one spare just in case. It doesn’t matter that these are MLC drives pulled from laptops, as they won’t see much daily write wear. Since these systems come with a 9.5mm slim optical drive, I wanted to make better use of the space as I have no need for CD/DVD media in my servers. Having used IcyDock products in the distant past, I went spelunking and discovered this gem that came out in 2021.

IcyDock ToughArmor MB852M2PO-B

Remember that research stuff? I got you covered.

Make no mistake, this is a premium option that requires additional adapters and cables, but being able to hot-swap your NVMe boot drives on a tiny little machine would be pretty neat and could make them more serviceable over the long run. I ended up trying out the following:

Admittedly I didn’t know what OCuLink was until I looked it up, but it’s similar to SAS and related cabling. The important thing is that you get an adapter card to plug into a PCIe slot and use a cable that can break out multiple lanes. In this case, a single x8 connection on the adapter breaks out into two x4 connections. The back of the IcyDock has one x4 connector per M.2 slot, and it also requires 12V from a SATA power connector, which I discovered too late.

None of this is really needed of course, but nooooo, I decided to spend an extra ~$275 per machine just to test OCuLink and hot-swap NVMe and try to make the front of the machine look cooler. Unfortunately, several issues prevented this setup from working:

  • DL20 Gen9 doesn’t support bifurcation as far as I can tell
  • Lack of bifurcation seems to prevent the OCuLink x8 card/NVMe SSDs from being recognized
  • The IcyDock requires 12V SATA power which can’t be conjured up without special cables

There are vastly cheaper options like SATA DOMs or regular drives or… you should do your research and pick what works for you. I ended up ordering a dual NVMe PCIe adapter that doesn’t require bifurcation.
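
Whichever adapter route you take, the end state is the same: a two-way ZFS mirror for the Proxmox boot pool. As a sanity check after install, something like the following should show both NVMe drives healthy; a minimal sketch, assuming Proxmox’s default pool name rpool and purely illustrative device names:

    # confirm both halves of the boot mirror are online
    zpool status rpool
    #   rpool          ONLINE
    #     mirror-0     ONLINE
    #       nvme0n1p3  ONLINE
    #       nvme1n1p3  ONLINE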

Data Drives

To make room for a 10GbE NIC, I removed the Smart Array card with battery backup and used a mini-SAS cable to connect the onboard SATA controller to the backplane. Note that the onboard controller only supports SATA, not SAS. I had some existing Samsung SM863a 480GB SATA SSDs sitting around that I configured as a mirrored pair per system. I can zfs replace those or add another mirror easily down the road if I need more, but most of the storage will come from the storage server running TrueNAS and exporting NFS and/or iSCSI as needed.
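
For the record, both of those future upgrade paths are one-liners. A rough sketch, assuming a hypothetical data pool named tank and placeholder /dev/disk/by-id names:

    # swap a 480GB SSD for a larger drive (ZFS resilvers automatically)
    zpool replace tank ata-SAMSUNG_SM863a_OLD ata-SAMSUNG_SM863a_NEW

    # or stripe a second mirrored pair into the pool for more capacity
    zpool add tank mirror ata-NEW_SSD_1 ata-NEW_SSD_2

With autoexpand=on set on the pool, replacing both members of a mirror with larger drives grows the usable capacity once the second resilver completes.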

Memory and CPU

Each machine came with 2x8GB PC4-2133 Unregistered ECC (UDIMMs), which is sufficient to play around with. I bought a 64GB kit to add an additional 32GB per machine for 48GB total. The E3-1220 V5 CPU that is included can only run memory at up to 2133MHz, but you can swap in E3 V6 CPUs and go up to 2400MHz if needed. I went ahead and bought a pair of E3-1270 V6 chips for fun and will sell the V5s.
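
After the CPU and DIMM shuffle it’s worth confirming the memory actually trains at the higher speed. A quick check, assuming a Linux host with dmidecode available (run as root; the grep pattern is just for readability):

    # show size and configured speed for each populated DIMM slot
    dmidecode -t memory | grep -E 'Size|Speed'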

Rackmount Rails

For a short-depth server like this and an equally short-depth rack, there are warnings that HP’s OEM rails are no shorter than 24", so they won’t work here. I’m not going to be pulling these out of the cabinet frequently, so I’m opting to let them rest on top of the UPS and PDU for the time being. The little bit of research I did turned up a company called Rack Solutions that makes a custom 1U Raven 76 rail that looks like it would work well enough. At ~$130 per set they seem expensive, but prices on eBay for HP rails are astronomical. These look like a very solid product, especially for short-depth rack applications.

Networking Etc.

I’m using existing Intel X520-DA2 dual-port 10Gb SFP+ cards with DAC cables connected to a UniFi XG-16 switch that acts as an aggregator between the house, FTTS, and the Data Center.