Home Lab – Part 3: VSAN Theorycrafting

There’s been an ongoing discussion via Twitter and various blog posts about new home labs from several people, most notably @FrankDenneman and @ErikBussink. I love these discussions because building home labs is a challenge I find interesting, so I wanted to continue my thoughts on how I’ve been proceeding with my own builds. Their builds are, for all intents and purposes, amazing. They encompass some seriously badass hardware with massive scaling and flexibility. That does come at a cost, but I think the ultimate goal is to create a solid lab that will last for quite a while and supports everything you’d like to test. There’s no sense in putting together something that will only last a few months before you’re looking for the next upgrade. You can follow their builds in the links below.

ErikBussink – Homelab 2014 upgrade
FrankDenneman – vSphere 5.5 Home lab

I received all the parts that I had originally discussed back in part 2 of the Home Lab series. I added in a Samsung 840 EVO 120GB SSD so I can test things like PernixData FVP, and it’s appropriately sized to serve as the SSD for a VSAN node alongside some leftover drives I have laying around. More on that later in the post.

When I originally spec’d out these new hosts, I knew I wanted three of them so I could have some flexibility in the lab for diving into vCloud Director, vCAC and even OpenStack. With VSAN also recently going GA and the minimum number of nodes being three, I thought this would be a good way to try that out as well, and I could even fold it into my vCloud learning. Ultimately this could mean a truly agile home lab.

If you think about the needs I presented previously, the biggest and most important part of the entire build was the case. The case needed to fit into the 13.1” x 13.1” shelf I had available. Then I thought about VSAN: how would I fit drives into this thing and still have some fun with it? The 2.5” form factor of the SSD brought me to the ToughArmor MB994SP-4SB-1. This drive cage supports four 2.5” drives in one 5.25” slot, which is perfect for my needs. I have enough old 300GB 10K RPM drives laying around to fill the other slots, with the SSD in the remaining one, giving me 3x 300GB and 1x 120GB drives in each VSAN node. The ToughArmor drive cage connects via four 7-pin SATA connectors. A solid start to a VSAN node.
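For the curious, once ESXi can see those drives behind a supported controller (more on that in a moment), claiming them into a VSAN disk group by hand from the host shell should look something like the sketch below. The device IDs are placeholders, the Web Client can do the claiming for you automatically, and it’s worth checking esxcli vsan storage add --help before running anything:

```
# Find the naa.* identifiers for the SSD and the three spinners
esxcli storage core device list

# Claim the SSD plus the three 300GB drives into a single disk group (placeholder IDs)
esxcli vsan storage add -s naa.SSD_DEVICE_ID -d naa.HDD1_ID -d naa.HDD2_ID -d naa.HDD3_ID

# Confirm the disk group
esxcli vsan storage list
```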

As many have pointed out, the next important part is ensuring you get a storage I/O controller that’s on the HCL. You can’t use AHCI with VSAN in the GA release; Duncan Epping pointed out that AHCI is not a supported configuration. Even though this is a home lab, that could prove disastrous should you encounter one of the issues he talks about. With that in mind, I had to come up with a controller that’s both cheap and fits the need. Enter the IBM M1015. These cards are widely used within the ZFS and other NAS/SAN communities because they become amazing cards once you flash them with the IT version of the firmware, which turns the card into a JBOD controller and is exactly what’s needed for VSAN. The motherboard I chose, the Supermicro A1SAI-C2750F, has one expansion slot, and with a low-profile bracket the M1015 gives my nodes a storage controller that’s on the VSAN HCL. You can find these controllers on eBay fairly easily, often with brackets and even breakout cables included, for approximately $100-$130 per card.
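For reference, the IT-mode crossflash itself is typically done from a DOS or UEFI shell with the LSI tools. The sequence most community guides walk through looks roughly like the lines below; the firmware file names and SAS address are placeholders, some boards need the UEFI flasher instead of sas2flsh, and you should follow a current guide for your exact hardware:

```
rem Clear the IBM SBR so the card will accept LSI firmware, then wipe the flash and reboot
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

rem After the reboot, flash the LSI 9211-8i IT firmware (the boot ROM is optional),
rem then re-apply the SAS address printed on the card's sticker
sas2flsh -o -f 2118it.bin -b mptsas2.rom
sas2flsh -o -sasadd 500605bxxxxxxxxx
```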

Since I would be adding components that require power, the 200W power supply in the In Win case would have to be enough under full load to keep all the parts running smoothly. This required some digging and approximations, but I was able to find loaded power consumption numbers for basically all the products. I put them into a spreadsheet, and this is what I came up with power-wise per node:
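The spreadsheet itself is nothing fancy: add up the worst-case draw of each component and compare the total against the 200W supply. The snippet below shows the idea; the wattages are rough placeholder figures to illustrate the math, not the numbers from my spreadsheet.

```powershell
# Placeholder loaded-draw estimates in watts -- swap in your own measured/datasheet numbers
$parts = @{
    'A1SAI-C2750F board, CPU and RAM' = 40
    'Samsung 840 EVO 120GB SSD'       = 4
    '300GB 10K SAS drives (x3)'       = 3 * 8
    'IBM M1015'                       = 10
    'ToughArmor cage and case fan'    = 8
}
$total = ($parts.Values | Measure-Object -Sum).Sum
"{0}W estimated load against the 200W supply" -f $total
```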

[image: vsan_power (estimated per-node power consumption)]

All in all, this looks to be a viable solution for a VSAN node as well. My build would now look like this:

Supermicro A1SAI-C2750F-O x1
8GB of Kingston KVR16SLE11/8 x4
Samsung 840 EVO 120GB x1
SanDisk Cruzer 16GB USB Flash Drive x1
In Win IW-BP655-200BL x1
ToughArmor MB994SP-4SB-1 x1
IBM M1015 x1
300GB 10k SAS drives x3

Once I purchase all the parts, I’ll edit this post with a total build price per node. Granted, I have some parts laying around, but I’ll make it as accurate as possible and reflect that in the edit. All in all, I think this is probably one of the most power-efficient and flexible platforms I could build that still doesn’t break the bank. If you have comments, I encourage them.


Automating NetApp SnapMirror and PernixData FVP Write-Back Caching v1.0

I wanted to toss up a script I’ve been working on in my lab that automates transitioning SnapMirrored volumes on a NetApp array from Write Back caching with PernixData’s FVP to Write Through, so you can properly take snapshots of the underlying volumes for replication purposes. This is just a v1.0 script and I’m sure I’ll modify it more going forward, but I wanted to give people a place to start.

Assumptions made:

  • You’re accelerating entire datastores and not individual VMs.
  • The naming schemes for LUNs in vCenter and volumes on the NetApp filer closely match.

Requirements:

  • You’ll need the Data ONTAP PowerShell Toolkit 3.1 from NetApp. It’s community-driven, but you’ll still need a NetApp login to download it; signing up should be free. Here’s a link to it.
  • You’ll need to do some credential and password building first; the instructions are in the comments of the script, and a quick example follows this list.
  • You’ll need to be running FVP version 1.5.
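For the credential piece, one common pattern with the toolkit is to export an encrypted credential once and then import it for unattended runs. A minimal sketch, with a placeholder controller name of filer01, looks like this:

```powershell
Import-Module DataONTAP

# Run once, interactively, to store an encrypted credential (tied to this user and machine)
Get-Credential | Export-Clixml -Path .\filer01-cred.xml

# Inside the script: load the credential and connect to the controller
$cred = Import-Clixml -Path .\filer01-cred.xml
Connect-NaController filer01 -Credential $cred
```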

What this script does (a skeleton is sketched after the list):

  • Pulls the SnapMirror information (specifically source and destination) from a NetApp controller, filtered on ‘Idle’ status and ‘LagTimeTS’. The ‘LagTimeTS’ threshold is adjustable so you can focus on SnapMirrors that have a distinct schedule based on lag time and aren’t currently in a transferring state.
  • Takes the names of the volumes in question and passes them to the PernixData Management Server to transition from Write Back to Write Through, then waits an adjustable amount of time for the volumes to change to Write Through and for the cache to de-stage back to the array.
  • Performs a SnapMirror update on the same volumes originally pulled and waits an adjustable amount of time for the snapshots to take place.
  • Resets the datastores back into Write Back with 1 Network peer (adjustable).
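To make that flow concrete, here’s a rough skeleton of how the steps hang together. This is not the full script: the two Set-Lab* functions are hypothetical stand-ins for the FVP 1.5 PowerShell cmdlets, the controller name and wait times are placeholders, and the Data ONTAP cmdlet parameters should be double-checked against the toolkit help.

```powershell
Import-Module DataONTAP

# Hypothetical stubs -- replace these with the real FVP 1.5 PowerShell cmdlets
function Set-LabWriteThrough { param($Datastore) Write-Host "Write Through -> $Datastore" }
function Set-LabWriteBack    { param($Datastore, $Peers) Write-Host "Write Back ($Peers peer) -> $Datastore" }

$destageWait = 300                      # seconds to let FVP de-stage the cache
$snapWait    = 300                      # seconds to let the SnapMirror update run
$maxLag      = New-TimeSpan -Hours 24   # adjustable lag-time filter

Connect-NaController filer01 -Credential (Import-Clixml .\filer01-cred.xml) | Out-Null

# 1. Idle SnapMirror relationships inside the lag window
$mirrors = Get-NaSnapmirror | Where-Object { $_.Status -eq 'idle' -and $_.LagTimeTS -lt $maxLag }

foreach ($m in $mirrors) {
    # Volume name doubles as the datastore name (the naming-scheme assumption above)
    $datastore = ($m.Source -split ':')[-1]

    # 2. Transition the datastore to Write Through and let the cache de-stage
    Set-LabWriteThrough -Datastore $datastore
    Start-Sleep -Seconds $destageWait

    # 3. Kick off the SnapMirror update and wait for the transfer/snapshots
    Invoke-NaSnapmirrorUpdate -Destination $m.Destination | Out-Null
    Start-Sleep -Seconds $snapWait

    # 4. Back to Write Back with one network peer
    Set-LabWriteBack -Datastore $datastore -Peers 1
}
```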

Comments and suggestions are always welcomed. I’m always open to learning how to make it more efficient and I’m sure there are several ways to tackle this.

You can download the script from here.

 

Home Lab – Part 2: Future State

The overall goal of the lab is to expand the options available to me for trying new technologies, especially those within the VMware vCloud offerings.  The lab also needed to incorporate the two old hosts that I currently have.  The motherboards and CPUs in those systems are older; I researched upgrading some of the components, but it wouldn’t flesh out the way I really wanted the lab to ultimately become.

Plan

  • Take the two existing hosts and turn them into a management cluster for the lab.  This means I don’t have to do anything to them for them to remain viable in the lab.  RAM could be upgraded later if necessary, but 32GB across the two hosts should be enough to run all the required management services.
  • Add three new hosts over time for Cloud resource consumption.  This will provide a plethora of abilities in the lab.  This would also provide the ability to test other products like VSAN down the line with some simple additions of components to the hosts.

Since this stuff is still going into a home lab, there are some requirements that have to be taken into consideration.  I want to create the smallest, most power-efficient, and most feature-friendly hosts I can.

Requirements

  • The systems have to remain power usage friendly.  I’m not running a datacenter so I don’t want a power bill that resembles one.
  • The systems have to be as quiet as possible.  I sit right next to my lab.  The two Cisco switches are loud enough as it is so I don’t want to hear more noisy fans.
  • The systems have to support at least 32GB of RAM, have an IPMI-type interface (I don’t want to have to plug in a monitor except the one time to configure them), and have at least dual onboard NICs plus an expansion slot for any other need.
  • The systems should be able to fit into a 13.1” x 13.1” space if possible.  The IKEA Expedit shelf I currently have has one open spot for putting these into.  The cases will have to be as airflow efficient as possible as that is a very small space to put three hosts into.

The requirements list is pretty extensive.  However, I was following @FrankDenneman on Twitter and ran across a conversation he was having with @wazoo a few weeks ago about the new Asrock C2750D4I motherboard.  A quick search showed that it was basically everything I wanted in a motherboard for this expansion, and I found some great reviews of the board on ServeTheHome.com.  That article then led me to another great ServeTheHome article about the Supermicro A1SAi-C2750F.  Given that I already have two Supermicro motherboards and love their products, I began comparing the two boards.

[image: comparisons]

At this point it was a matter of splitting hairs and going back and forth on possible future expansions.  Which offering would provide the better long-term investment and expansion possibilities?

Asrock

  • Would need another NIC if I wanted to keep four in each host.  While two 1Gb NICs are good enough, the use cases are limited with fewer NICs.  It would also take up the only expansion slot, ensuring that nothing further could be done with the hosts.
  • Could be maxed out on RAM immediately for maximum benefit.  Also uses standard ECC UDIMMs that the other lab hosts currently use.

Supermicro

  • Could use the expansion slot for something else, e.g. a SAS JBOD card for a VSAN configuration.
  • Uses SODIMMs, which are non-standard and couldn’t be re-used anywhere else in the lab.  Also can’t be maxed out currently, as there are no 16GB ECC SODIMMs available for purchase, so the lab would be stuck at 32GB per host.

Case

With all of those comparisons done, I decided it was time to find the right case to put all of this in.  The case that I settled on is BP655.200BL from In Win.

[image: inwin_case]

The biggest need it meets is that it’s 3.9” wide and 10.4” tall, and the fan is on the side of the case.  This means I can stand the case up on end, have the fan blow the air out of the top with room to spare, and fit three of them side by side with 1” to spare in the last section of my shelf.  It has a 200W power supply, which is more than enough to run either motherboard since both draw roughly 35-40W under load.

Pricing

I figured at this point it was time to do some comparison shopping and look at what I’m getting for the price.  I took these screenshots today, since prices have fluctuated since I first thought of this a few weeks ago and these are the current numbers.

Asrock build

[image: asrock_build]

Supermicro build

[image: supermicro_build]

Conclusions

Both boards are great and fit the needs very well.  You really have two amazing choices, either of which should give you a nice quiet, power-efficient and feature-rich lab host.  As I stated before, it really does come down to splitting hairs.  Much love to the crew over at ServeTheHome; there’s a tremendous amount of information on that site and they’re directly responsible for a ton of the info that helped me make my decisions.  If you didn’t know about them, I suggest you go fill yourself in.  In the end I’m settling on the Supermicro board.  The availability of the expansion slot for something other than another NIC was the compelling reason.  Another post is coming later, after I get the parts and put it all together, along with my vision of this platform and some VSAN theorycrafting.

Home Lab – Part 1: Present State

A little background on my home lab, since that’s something I haven’t really discussed here before, and after some Twitter chats this morning I figured I would toss it up as a reference for a few future posts about how my lab is going to evolve.  It’s a pretty simple lab, but it can still accommodate nearly every vSphere feature I need for learning and certification purposes.  The only exception would be my networking gear; it’s a bit more elaborate, but it provides nearly everything I need to scale any networking needs.

Networking

  • Cisco Catalyst 3750G-24TS x2 switches with stacking cables

These are a bit loud, I won’t lie (not 1U-server-fan loud), but they’re full layer 3 and I’m very familiar with the Cisco IOS CLI. They’re stacked for cross-stack EtherChannel configurations and for testing different NIC failover types.

Compute

Two ESXi hosts based on the Baby Dragon concept by RootWyrm

Storage

  • QNAP TS-459 Pro II x1
  • Western Digital 1TB Black drives x4

Housing

All of these components fit nicely into my IKEA Expedit shelving unit.  The networking switches sit on top of it, but that’s OK; some other work-provided equipment for working from home sits up there as well.

Each one of the Lian Li cases fits perfectly into one of the shelves of the cube, and my QNAP does as well.  This leaves me one open spot to expand my lab a bit further, and I have room on the other side of my desk for another cube if I need it.  I’d like to add a few more hosts, as I’m going to be digging deep into the vCloud offerings from VMware.  In part 2, I’ll discuss my goals and requirements for evolving the lab.

VCAP5-DCA Thoughts and experiences

There have been a ton of write-ups on the VCAP5-DCA exam.  Any quick Google search will pull up several people’s experiences, and if you follow the #VCAP hashtag on Twitter, you can join the conversations and see posts there as well.  Given that I took the exam at the end of February and just found out my results, I figured I’d toss up my thoughts and experience for anyone who might want to read about them.

Impressions

I sat the exam in the afternoon back in February.  The testing center was close to my office, so traveling there wasn’t bad.  I have to say the testing center was significantly better than the one I normally take my exams in, which is closer to my house.  The PC I was given was just fine and the latency to the testing environment was bearable.  Overall I thought the test was fair.  Just like others have suggested, if you follow the blueprint and can do all the things listed on it, you’ll be just fine.

Practice

I suggest coming up with practice scenarios for each topic and changing them up as much as possible so you get used to doing the tasks in as many ways as you can.  I think one area most people forget about is fundamentals.  The VCAP5-DCA exam is an advanced-professional-level test, sure, but you still need a solid fundamental understanding of vSphere.  Don’t just focus on the advanced topics; make sure your basics are solid.  It will help ensure you don’t make silly mistakes.  For the last several months I had been migrating our remote offices to new infrastructure builds, so reinforcing the basics was pretty easy for me since I had already done quite a few of them 5-6 times.  This also helped build my confidence and speed with basic tasks, as well as fast GUI navigation rather than hunting and pecking.

Strategy

My strategy was simple: write down 1-26 on the dry-erase board I was given, with the number of tasks in each question and the overall topic off to the side of each number.  I then attacked each question based on how fast I thought I could get through each piece.  When I finished one, I simply crossed the question and topic off the board; that way I knew whether I had completed a question or was still missing something.  I had to consult the PDFs a couple of times.  I suggest knowing exactly where to look if you need to hit these and using the Search function; scrolling through the PDFs means you’re going to have a bad, bad day.  As just about everyone who has written up their experience will attest, I ran out of time, but I was pretty confident I had accomplished enough of the tasks, and parts of others, to pass.  I received my results 9 business days later, 6 days sooner than the turnaround VMware quoted, and was happy to see ‘PASS’ in the attachment.