As I discussed in my previous three posts, I was planning to add three new hosts to my current vSphere home lab as a resource cluster for messing around with vCloud, vCAC and eventually looking into things like OpenStack. I was elected a 2014 vExpert, and I want to take advantage of as many of the perks as possible. You can take a quick look back at that post here to see my requirements and theorycrafting on the build.
I went ahead and stuck with mostly my original configuration. I had to make a few slight modifications due to a mistake on my part, but overall the configuration is solid and is currently up and running at my house.
I’m currently running three of the following:
Supermicro A1SAI-C2750F-O x1
8GB of Kingston KVR16SLE11/8 x4
Samsung 840 EVO 120GB x1
SanDisk Cruzer 16GB USB Flash Drive x1
In Win IW-BP655-200BL x1
ToughArmor MB994SP-4S x1
IBM M1015 x1
300GB 10k SAS drives x3
SATA power to 4-pin Molex adapter x1
I made a change to the ToughArmor cage, as the one I had previously listed was a SATA-only cage. Since I have SAS drives, those connectors would not work, so I had to change the configuration slightly. This also necessitated adding a SATA to 4-pin Molex adapter: the drive cage requires two power connections, and my case (read: cheap) only had one 4-pin Molex available, but plenty of SATA power. An easy changeover. Here's how the home lab looks now after meeting all the requirements I laid out back in Part 2.
Overall the build was successful and has netted me a nice 2.4TB datastore on which to build VMs and do integration testing with other applications, as well as extend my ability to test new VMware technologies at home.
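For the curious, that 2.4TB figure roughly checks out against the parts list. Here's a quick back-of-the-envelope sketch; the assumptions (all nine 300GB drives pooled into one datastore, vendors marketing drives in decimal gigabytes) are mine, not spelled out in the post:

```python
# Back-of-the-envelope check on usable datastore capacity.
# Assumptions: all nine 300GB SAS drives (3 hosts x 3 drives) are pooled,
# and "300GB" means decimal gigabytes (10^9 bytes) as drives are marketed.

HOSTS = 3
DRIVES_PER_HOST = 3
DRIVE_GB = 300  # decimal GB per drive

raw_gb = HOSTS * DRIVES_PER_HOST * DRIVE_GB   # total raw decimal GB
raw_tib = raw_gb * 10**9 / 2**40              # same capacity in binary TiB

print(f"Raw capacity: {raw_gb} GB (~{raw_tib:.2f} TiB)")
```

So roughly 2.46 TiB raw; after VMFS formatting overhead, landing around 2.4TB usable is about what you'd expect.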
If I have one regret, it's that I wish I had left some room for growth in the nodes. Given that they have only one PCIe slot, which is consumed by the M1015 card, there is no room for 10GbE opportunities. The more I thought about this, the more possibilities came up for future lab expansion and changes. There are other places in the lab where 10GbE could be added at some point when prices come down. For now, 1GbE will be more than sufficient for my needs. The next size up in motherboards would not have fit into a small enough case either, so that would not have met my requirement to fit everything on one shelf.
This is just the start of a series of posts about the lab. I plan to show a quick step-by-step of configuring my nodes, some power consumption results during idle and load testing, as well as anything else I find helpful. As a quick tidbit: while testing power consumption during an ESXi install on one of the nodes, I was pulling only 55.7W, so the theorycrafting numbers are looking really good.