All 0s UUID, PernixData and the AMIDEDOS Fix

Let me start off by saying this isn’t an issue directly with PernixData FVP.  This is a subsequent issue with PernixData FVP caused by Supermicro not putting a UUID on the motherboards I bought. Two problems for the price of one.

I ran into an issue with the three whitebox Supermicro boards that I purchased for my home lab. I was attempting to install PernixData FVP on them to do some testing when I noticed something strange. After I installed the host extension VIB on all my machines, only one of them would show up in the PernixData tab in vCenter, and when I rebooted them or uninstalled the host extension, a different one would show up.

Given that it's a simple VIB install command, I didn't think the installation itself was to blame, but I uninstalled it anyway, and by paying very close attention during the uninstall I found my issue right away: the host UUID of the system was all 0s.

I opened up 'prnxcli' on one of my other hosts and verified my guess. As you can see, both UUIDs are all 0s. This was playing hell with the FVP Management Server, and my guess is it didn't know which host was which.
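If you want to check for this without opening prnxcli, the BIOS UUID is also visible from the ESXi shell; on the hosts I've used, `esxcli hardware platform get` prints it, though treat that as an assumption for your hardware. Here is a minimal sketch of the check; the esxcli call is shown as a comment since it only runs on an ESXi host:

```shell
# On the ESXi host itself you would pull the UUID with something like:
#   esxcli hardware platform get | grep -i -i uuid
# Below is a small, portable test for the broken all-zeros value:
is_zero_uuid() {
  [ "$1" = "00000000-0000-0000-0000-000000000000" ]
}

uuid="00000000-0000-0000-0000-000000000000"   # the value my boards reported
if is_zero_uuid "$uuid"; then
  echo "UUID is all zeros - fix it in the BIOS before installing FVP"
fi
```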

I did some quick searching and found this KB article discussing the issue, but it didn't give me much in the way of a fix other than to contact the manufacturer. Given that the system runs an American Megatrends Inc. BIOS, I searched around and found a utility that will auto-generate a UUID and hopefully resolve the issue. Finding the download was kind of a pain, so I improvised: I found a link in a Lenovo posting and then located the file on the Lenovo driver site. All you need is the AMIDEDOS.EXE file, nothing more, so you can get it from the following BIOS download. Just download the file and extract the executable. I put it on a DOS USB key that I formatted using Rufus.

Then I just booted up my host to the USB key and ran the following command:

 amidedos.exe /su auto

According to the instruction file, this will auto-generate a new UUID for the system. You should see a screen similar to the following after it performs the change.

I went ahead and booted up my host to see if the change took effect, and it looks like it did!

I went ahead and changed the UUID on the other two hosts and booted them all back up. When I got into vCenter, I noticed that the PernixData Management Server was still seeing strange SSDs from my hosts. I removed all three hosts from vCenter and re-added them, restarted the PernixData Management Server, and now magically all the host SSDs showed up correctly when I went to add them to the Flash Cluster.

All in all, this was a perfect storm, which I seem very good at creating from time to time. As much as I cursed trying to figure it out at first, it was fun learning about something I'd never run across.

Home Lab 4.1 (supplemental) – Checking M1015 Queue Depth/Flashing IT Mode

The IBM M1015 SAS controller is famous within the ZFS community: after you flash it with IT-mode firmware, it turns into a JBOD/passthrough controller that offers amazing performance for the price point.

It just so happens I'm using this controller in each of my three nodes. The controller normally ships in MFI mode, offering some RAID options, but VSAN is best served by JBOD passthrough of the disks from the controller into the system. MFI mode is perfectly capable of passing the disks in JBOD mode up through the hypervisor, ready for VSAN; I know because I initially configured my VSAN nodes with the controllers in MFI mode. But MFI mode has an issue, and now I understand why you want to flash this card to IT mode: queue depth.

This thread should emphasize why you want a controller with as high a queue depth as possible. VMware recommends a storage controller with a queue depth >= 250. I pulled the following stats from ESXTOP on a host before and after I flashed the controllers. A picture is worth a thousand words.
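For reference, here's where those numbers come from in esxtop. The interactive keystrokes only apply on an ESXi host, so the batch capture command is echoed as a dry run; which driver module backs each mode is my assumption, so confirm on your own hosts.

```shell
# In esxtop on the ESXi host: press 'd' for the disk adapter view; the
# AQLEN column is the adapter queue depth (the stat in the screenshots).
# On my hosts the MFI-mode driver reported a small AQLEN, while the
# IT-mode (mpt2sas) driver reported one well above the >=250 guidance.
# A non-interactive snapshot you could grep through instead (dry run):
echo "esxtop -b -n 1 > esxtop_snapshot.csv"
```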

IBM M1015 running in MFI mode queue depth:

IBM M1015 running in IT mode queue depth:

If you're planning to use the IBM M1015, make sure you flash it with IT-mode (LSI 9211-8i) firmware. I followed this blog post on how to flash the card, and since my boards have UEFI, it was brutally simple.

I downloaded the latest firmware from LSI and put it on the FreeDOS USB boot disk. If you want to replace the files they give you in the zip, swap in the '2118it.bin' and 'mptsas2.rom' files you downloaded. The rest of the instructions are fine and you can follow them exactly.
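The flash sequence itself is short. This is a sketch of the UEFI-shell version based on the common M1015 crossflash guides, not the exact steps from the post I followed; the erase step and the SAS-address restore in particular are assumptions you should confirm against your guide, and the address shown is a placeholder. The commands are echoed as a dry run since they only make sense from the UEFI shell with the card installed.

```shell
# Dry-run echo of the typical UEFI-shell crossflash sequence (assumed steps):
echo "sas2flash.efi -listall"                          # confirm the controller is visible
echo "sas2flash.efi -o -e 6"                           # erase the existing (MFI) flash region
echo "sas2flash.efi -o -f 2118it.bin -b mptsas2.rom"   # write IT firmware + boot ROM
echo "sas2flash.efi -o -sasadd 500605bxxxxxxxxx"       # restore the SAS address (placeholder)
```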

Here's the difference in boot screens that you'll see once you flash the card:

MFI Mode

IT Mode

Let me also state that I changed the controller firmware one node at a time and it did not harm the VSAN volume. After I rebooted each node, they worked perfectly. Awesome stuff.

Home Lab – Part 4.2: VSAN Configuration

Continuing with the VSAN lab series: now that my three new nodes are built and powered on, it's time to get VSAN configured. This is an extremely simple process; however, there are still a few things you have to do before you can click that one checkbox to turn it on.

HA has to be turned off on the cluster

So we'll turn off HA and let the cluster disable it.

You will need a VMkernel port on all your hosts for VSAN traffic. If you don't turn this on, you'll get a 'Misconfiguration' warning on the VSAN General tab; the help notification will tell you that some or all of the hosts cannot talk to each other. You'll also notice that the VSAN datastore is only the size of a single node and that you only have access to it on each node separately. This was something I forgot when I set this up initially, but the error messages quickly showed me how to resolve it.

I like to break my traffic out by VLAN, so I'm using VLAN 8 on all my hosts. You could quickly deploy this with a host profile if you want; I only have three hosts, so I just did it manually.

Create a new VMkernel adapter

I run all my storage traffic through a separate vSwitch, vSwitch1, so I'm going to put the port group on that switch.

Set the name of the port group to 'VSAN', enter a VLAN number of 8, and check the 'Virtual SAN' box. Rinse and repeat on the other two nodes.
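If you'd rather script it than click through three times, the same steps map onto esxcli. This is a hedged sketch: vmk2, the host numbering, and the 192.168.8.x addressing are my assumptions, and the commands are printed rather than executed so you can review each set before running it on the matching host.

```shell
# Build a per-host command plan for the VSAN VMkernel port (assumed names).
VLAN=8
plan=""
for N in 1 2 3; do
  plan="$plan
# --- host esx0$N ---
esxcli network vswitch standard portgroup add -p VSAN -v vSwitch1
esxcli network vswitch standard portgroup set -p VSAN --vlan-id $VLAN
esxcli network ip interface add -i vmk2 -p VSAN
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.$VLAN.10$N -N 255.255.255.0 -t static
esxcli vsan network ipv4 add -i vmk2"
done
echo "$plan"
```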

Now we can actually turn on VSAN. I don't like automatic selection of anything, so I'm not planning to let VSAN choose the disks, even though there aren't any other disks it could claim. This is just a personal preference, so YMMV.

I'm running 1 SSD and 3 SAS drives in each disk group, one disk group per host. We should now see 3 SSDs and 9 eligible disks for VSAN.

We'll go ahead and select all the eligible disks across all three hosts, and we should see confirmation that the configuration is correct.

Once the configuration task completes, we should see the following screen, and when we look at the VSAN General tab we should see a new 2.4TB VSAN datastore.
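The 2.4TB figure checks out with quick math: only the nine SAS drives contribute capacity (the SSDs act as the cache tier in VSAN), and a "300GB" label means decimal gigabytes, which shrinks once the client reports it in binary units. A quick sanity check, with a little of the raw number lost to on-disk overhead:

```python
# Nine 300GB capacity drives; the SSDs are cache tier, so they don't count.
drives = 9
label_bytes = 300 * 10**9                   # "300GB" on the label is decimal
total_tib = drives * label_bytes / 2**40    # vSphere reports binary units
print(f"raw capacity: {total_tib:.2f} TiB") # ~2.46, shown as the ~2.4TB datastore
```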


Home Lab – Part 4.1: VSAN Home Build

As I discussed in my previous three posts, I planned to add three new hosts to my current vSphere home lab as a resource cluster for messing around with vCloud, vCAC, and eventually things like OpenStack. I was also selected as a 2014 vExpert, and I want to take advantage of as many of the perks as possible. You can take a quick look back at that post here to see my requirements and theorycrafting on the build.

I went ahead and stuck with mostly my original configuration. I had to make a few slight modifications due to a mistake on my part, but overall the configuration is solid and is currently running at my house.

I’m currently running three of the following:

Supermicro A1SAI-C2750F-O x1
8GB of Kingston KVR16SLE11/8 x4
Samsung 840 EVO 120GB x1
SanDisk Cruzer 16GB USB Flash Drive x1
In Win IW-BP655-200BL x1
ToughArmor MB994SP-4S x1
IBM M1015 x1
300GB 10k SAS drives x3
Normal SATA power to 4 pin Molex connector

I made a change to the ToughArmor cage, as the one I had previously listed was a SATA-only cage. Since I have SAS drives, the connectors don't work, and I had to change the configuration slightly. This also meant adding a SATA-to-4-pin-Molex power adapter: the drive cage requires two power connections, and my case (read: cheap) only had one 4-pin Molex but plenty of SATA power. Easy changeover. Here's how the home lab looks now, after meeting all the requirements I laid out back in Part 2.


Overall the build was successful and has netted me a nice 2.4TB datastore on which to build VMs and do integration testing with other applications, as well as extend my ability to test new VMware technologies at home.

If I had one regret, it's that I wish I had left some room for growth in the nodes. Given that they have only one PCIe slot, which is consumed by the M1015 card, there is no room for 10GbE. The more I thought about this, the more possibilities came up for future lab expansion and changes. There are other ways to add 10GbE to the lab at some point when prices come down; for now, 1GbE will be more than sufficient for my needs. The next size up in motherboards would not have fit into a small enough case, either, so it would not have met my requirement to fit on one shelf.

This is just the start of a series of posts about the lab. I plan to show a quick step-by-step of configuring my nodes, some power consumption results during idle and load testing, and anything else I find helpful. Just a quick tidbit: while testing power consumption during an ESXi install on one of the nodes, I was pulling only 55.7W, so the theorycrafting numbers are looking really good.