There’s been an ongoing discussion on Twitter and in various blog posts about new home labs from several people, most notably @FrankDenneman and @ErikBussink. I love these discussions because building a home lab is a challenge I find interesting, so I wanted to continue my thoughts on how I’ve been proceeding with my own builds. Their builds are, for all intents and purposes, amazing: seriously badass hardware with massive scaling and flexibility. That does come at a cost, but I think the ultimate goal is to create a solid lab that will last for quite a while and supports everything you’d like to test. There’s no sense in putting together something that will only last a few months before you’re looking for the next upgrade. You can follow their builds in the links below.
I received all the parts that I originally discussed back in part 2 of the Home Lab post. I added a Samsung 840 EVO 120GB SSD so I can test things like PernixData FVP, and it’s also appropriately sized to serve as the SSD in a VSAN node alongside some leftover drives I have lying around. More on that later in the post.
When I originally spec’d out these new hosts, I knew I wanted three of them so I’d have some flexibility in the lab for diving into vCloud Director, vCAC, and even OpenStack. With VSAN recently going GA and requiring a minimum of three nodes, this seemed like a good way to try that out too, and I could fold it into my vCloud learning as well. Ultimately this could mean a truly agile home lab.
If you think about the needs I presented previously, the biggest and most important part of the entire build was the case. It needed to fit into the 13.1” x 13.1” shelf I had available. Then I thought about VSAN: how would I fit drives into this thing and still have some fun with VSAN? The 2.5” size of the SSD brought me to the ToughArmor MB994SP-4SB-1, a drive cage that holds four 2.5” drives in a single 5.25” slot. Perfect for my needs. I have enough old 300GB 10K RPM drives lying around to fill the other slots alongside the SSD, giving me 3x300GB and 1x120GB drives in each VSAN node. The ToughArmor cage connects via four 7-pin SATA connectors. A solid start to a VSAN node.
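As a quick sanity check on that disk layout, here’s a back-of-the-envelope sketch of what the cluster would yield. The assumptions are mine, not from the post: three nodes, VSAN’s default FTT=1 policy (every object mirrored once, so usable capacity is roughly half of raw), and VMware’s rough ~10% flash-to-capacity sizing guidance.

```python
# Back-of-the-envelope VSAN capacity check for this build.
# Assumptions (mine, not the author's): 3 nodes, default FTT=1
# storage policy (each object mirrored once), and the ~10% SSD-to-
# capacity rule of thumb from VMware's VSAN design guidance.

NODES = 3
HDD_GB_PER_NODE = 3 * 300   # 3x 300GB 10K SAS drives per node
SSD_GB_PER_NODE = 120       # 1x Samsung 840 EVO 120GB per node

raw_gb = NODES * HDD_GB_PER_NODE
usable_gb_ftt1 = raw_gb / 2                      # FTT=1 mirrors everything
ssd_ratio = SSD_GB_PER_NODE / HDD_GB_PER_NODE    # per-node flash ratio

print(f"Raw HDD capacity:         {raw_gb} GB")
print(f"Usable at FTT=1 (approx): {usable_gb_ftt1:.0f} GB")
print(f"SSD : HDD ratio per node: {ssd_ratio:.0%}")
```

So roughly 2.7TB raw, about 1.35TB usable at FTT=1, and a per-node flash ratio of about 13% — comfortably above the 10% rule of thumb for a lab.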
As many have pointed out, the next important part is ensuring you get a storage I/O controller that’s on the HCL. You can’t use AHCI with VSAN in GA; Duncan Epping has pointed out that it’s not a supported configuration. Even though this is a home lab, that could prove disastrous should you encounter one of the issues he describes. With that in mind, I had to find a controller that’s both cheap and fits the need. Enter the IBM M1015. These cards are widely used in the ZFS and other NAS/SAN communities because, once flashed with the IT-mode firmware, they become straight JBOD controllers, which is exactly what VSAN needs. The motherboard I chose, the Supermicro A1SAI-C2750F, has a single PCIe slot, and with a low-profile bracket the M1015 gives my nodes a storage controller that’s on the VSAN HCL. You can find these cards on eBay fairly easily, often with brackets and even breakout cables included. Expect to pay roughly $100–$130 per card.
Since I’d be adding components that require power, the 200W power supply in the In Win case had to be enough under full load to keep everything running smoothly. This took some digging and approximation, but I was able to find loaded power consumption numbers for basically all the products. I put them into a spreadsheet, and this is the build, per node:
Supermicro A1SAI-C2750F-O x1
8GB of Kingston KVR16SLE11/8 x4
Samsung 840 EVO 120GB x1
SanDisk Cruzer 16GB USB Flash Drive x1
In Win IW-BP655-200BL x1
ToughArmor MB994SP-4SB-1 x1
IBM M1015 x1
300GB 10k SAS drives x3
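To show why a 200W supply is plausible for this parts list, here’s a rough power-budget tally. The loaded-draw figures below are my own illustrative estimates for this class of hardware (an 8-core Avoton board, DDR3L SODIMMs, 2.5” SAS drives), not the numbers from the spreadsheet mentioned above, so treat the totals as a sketch.

```python
# Rough power-budget sanity check against the In Win case's 200W PSU.
# The loaded-draw figures below are illustrative estimates only,
# NOT the actual numbers from the author's spreadsheet.

loaded_watts = {
    "Supermicro A1SAI-C2750F (board + C2750 SoC)": 30,
    "Kingston KVR16SLE11/8 x4":                    12,
    "Samsung 840 EVO 120GB":                        3,
    "SanDisk Cruzer 16GB USB":                      1,
    "ToughArmor MB994SP-4SB-1 (cage electronics)":  2,
    "IBM M1015":                                   12,
    "300GB 10K SAS drives x3":                     24,
}

PSU_WATTS = 200
total = sum(loaded_watts.values())
headroom = PSU_WATTS - total

print(f"Estimated loaded draw: {total} W")
print(f"PSU headroom:          {headroom} W")
```

Even with pessimistic estimates, the build lands well under 200W, which matches the author’s conclusion that the In Win’s bundled PSU is sufficient.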
Once I’ve purchased all the parts, I’ll update this post with a total build price per node. Granted, I have some parts lying around, so it will be as accurate as possible, and I’ll note that in the edit. All in all, I think this is one of the most power-efficient and flexible platforms I could build that still doesn’t break the bank. If you have comments, I encourage them.