We’re nearing the end of 2021 and it’s time to refresh the homelab again. I recently moved two of the three custom-built Supermicro servers previously in the home network cabinet over to the Colo Cluster, creating a need to fill the empty rack units with something that could handle home services. Notice I used the word need there, although it’s really just a euphemism for want, as I have plenty of servers in the Data Center that could handle this. Details aside, I started down the familiar research road to see what was suitable and “affordable”.

The main criteria were:

  • Short enough to fit into the network cabinet (20” max)
  • Beefy enough to run Proxmox LXC and VMs for home use
  • Flexible enough to upgrade more down the road (memory/disk)
  • Relatively quiet or able to be made so
  • Solid BMC/OOB management with modern creature comforts (HTML5 console, Redfish)
  • [ Good | Fast | Cheap ], pick any two

So after thinking about Dell and Supermicro again, I remembered I don’t have any HP systems in the mix, and having them would also help with some crossover stuff I have to do for the day job occasionally. It just so happens HP makes a machine that fits the bill reasonably well. Enter the HP DL20 Gen9:

HP DL20 Gen9

What about the G10, you ask? Well, yes, I would have preferred to go with a G10, but one of the criteria above was cheap. Mostly cheap anyway, because I want to add some additional goodies. I managed to pick up two G9s in the configuration shown for around $300 each, whereas a similarly configured G10 is going for $1000+. Note that you can get CTO configurations of various kinds if you really want, although these were available on eBay and seemed reasonable. But I’m getting ahead of myself, because there was more research involved before making the decision.

The Research

The TL;DR here is that these are nice little machines, ultimately meeting the good criterion aside from a bit of work to make them more civilized with third-party components (fan noise, HBA mode for ZFS, etc.). If you’re reading this and getting your own ideas, remember that one Radiohead song. Do your own research and make sure it works for you too.

The Goodies

So I went cheap(ish) because I have other nice machines to do most of the heavy lifting, and the majority of the home services aren’t terribly demanding. But that doesn’t mean I can’t put some bling in them and have some fun.

Boot Drives

For boot drives, I always want a proper ZFS mirror setup. I picked up five SK Hynix 256GB NVMe M.2 drives for cheap. There will be two per server, and I’ll have one spare just in case. It doesn’t matter that these are MLC pulled from laptops, as they won’t see much daily write wear. Since these systems come with a 9.5mm slim optical drive, I wanted to make better use of that space as I have no need for CD/DVD media in my servers. Having used IcyDock products in the distant past, I went spelunking and discovered this gem that came out earlier in 2021.

IcyDock ToughArmor MB852M2PO-B

Remember that research stuff? I got you covered.

Make no mistake, this is a premium option that requires additional adapters and cables, which increases the price. But being able to hot-swap your NVMe boot drives on a tiny little machine is rather cool, I think, and it will make the machines more serviceable over the long run. I ended up using the following extra pieces per server:

Admittedly I didn’t know what OCuLink was until I looked it up, but it’s similar to SAS and related cabling. The important thing is that you get an adapter card that plugs into a PCIe slot, and use a cable that can break out multiple lanes. In this case, I’m going to take one x8 link on the adapter and break it out into two x2 links. The back of the IcyDock has one x2 connector per M.2 slot, and it reuses the existing SATA power connector that the optical drive used.
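Once everything is cabled up and Proxmox is installed with its ZFS RAID1 option across the two sticks, a quick sanity check confirms both drives showed up, the boot pool is mirrored, and each drive negotiated its x2 link. A rough sketch of that check (the PCIe bus address below is a placeholder for whatever your system enumerates):

    # list the NVMe namespaces the kernel sees (expect two)
    nvme list

    # confirm the Proxmox boot pool is a two-way mirror across them
    zpool status rpool

    # check the negotiated PCIe link width for one of the NVMe controllers;
    # substitute the bus address reported by `lspci | grep -i "non-volatile"`
    lspci -vv -s 01:00.0 | grep -i lnksta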

The cheaper and just as valid option that I strongly considered is:

No extra adapters or cables needed. But nooooo, I decided to spend an extra ~$275 per machine just to play with OCuLink and hot-swap NVMe and make the front of the machine look cooler. Ah well, such is the way of the home lab. Again, there are vastly cheaper options like SATADOM or using regular drives or… do your research and pick what works for you.

Data Drives

Because I spent more than I had planned on boot drives, I’m not going to be able to get nice Samsung 3.84TB Enterprise SSDs like I have in the serious machines. But I have some small HGST Enterprise 100GB SAS drives collecting dust that I’ll throw into two ZFS mirrored pairs for around 200GB of super fast local data storage. I can zpool replace those easily down the road if I need more, but most of the storage will come from the storage server running TrueNAS.
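For the curious, a minimal sketch of that layout (the pool name, device IDs, and replacement disks here are all hypothetical):

    # two mirrored pairs striped into one pool
    zpool create -o ashift=12 tank \
      mirror /dev/disk/by-id/scsi-HGST_DISK1 /dev/disk/by-id/scsi-HGST_DISK2 \
      mirror /dev/disk/by-id/scsi-HGST_DISK3 /dev/disk/by-id/scsi-HGST_DISK4

    # later: swap in larger disks one at a time and let the vdev grow
    zpool set autoexpand=on tank
    zpool replace tank scsi-HGST_DISK1 /dev/disk/by-id/scsi-BIGGER_DISK1
    # repeat for the second disk in the pair once resilvering completes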

Memory and CPU

Each machine came with 2 x 8GB PC4-2133 unregistered ECC DIMMs (UDIMMs), which is sufficient to play around with. I found a good deal on 4 x Samsung 16GB UDIMMs (M391A2K43BB1-CPB) and will upgrade each server to 32GB, then a full 64GB in time. The included E3-1220 v5 CPU can only use memory up to 2133MHz, but you can swap in E3 v6 CPUs and go up to 2400MHz if needed. I’m sure it’ll surprise you to hear me say I’m not sure I’ll need the extra power or can justify the expense. But remember, my criteria didn’t necessarily include fast.
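As a side note, dmidecode makes it easy to confirm what the DIMMs are actually running at once they’re installed. This is just a generic sanity check, nothing HP-specific:

    # rated vs. configured speed for each populated DIMM slot
    dmidecode -t memory | grep -E 'Size:|Speed:'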

Rackmount Rails

For a short-depth server like this in an equally short-depth rack, there are warnings that HP’s OEM rails are no shorter than 24”. I’m not going to be pulling these out of the cabinet frequently, so I’m opting to let them rest on top of the UPS and PDU. The little bit of research I did turned up a company called Rack Solutions that makes a custom 1U Raven 76 rail that looks like it would work well enough. At ~$130 per set, I’d look for alternatives before going that route, but it seems like a solid product.

Networking Etc.

I’m still working on getting these machines installed, but I’ll be using existing Intel X520-DA2 dual port 10Gb SFP+ cards with DAC cables connected to a Unifi XG-16 switch that acts as an aggregator between the house, FTTS, and Data Center.
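On the Proxmox side the plan is nothing fancy: a Linux bridge on one of the X520 ports for VM and container traffic. A sketch of what that will look like in /etc/network/interfaces (the interface name and addresses are placeholders; yours depend on slot enumeration and subnetting):

    # /etc/network/interfaces (excerpt)
    auto enp1s0f0
    iface enp1s0f0 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.21/24       # placeholder management address
        gateway 192.0.2.1
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0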

I’ll post more details soon.