NSX Data Center for vSphere to NSX-T Data Center Migration – Part 3

Planning and preparation are complete and the Healthcare organization is now ready to proceed with Part 3 of the NSX Data Center for vSphere to NSX-T Data Center migration.

Researching the migration from NSX Data Center for vSphere to NSX-T Data Center breaks down into the following steps.  These efforts will be covered over a series of blog posts, one for each step in the process:

  • Understanding the NSX Data Center for vSphere Migration Process – Part 1
    • Checking Supported Features
    • Checking Supported Topologies
    • Checking Supported Limits
    • Reviewing the Migration Process and the prerequisites
  • Preparing to Migrate the NSX Data Center for vSphere Environment – Part 2
    • Prepare a new NSX-T Data Center Environment and necessary components
    • Prepare NSX Data Center for vSphere for Migration
  • Migration of NSX Data Center for vSphere to NSX-T Data Center – Part 3

As when they started the process in Part 1, consulting the official documentation for the process and the steps to perform is recommended.

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/migration/GUID-78947686-CC6C-460B-A185-7E2EE7D3BCED.html

MIGRATION OF NSX DATA CENTER FOR VSPHERE TO NSX-T DATA CENTER

The migration to NSX-T Data Center is a multi-step process.  The steps are outlined below:

  • Import the NSX Data Center for vSphere Configuration
  • Resolve Issues with the NSX Data Center for vSphere Configuration
  • Migrate the NSX Data Center for vSphere Configuration
  • Migrate NSX Data Center for vSphere Edges
  • Migrate NSX Data Center for vSphere Hosts
  • Finish the NSX Data Center for vSphere Migration

Upon further review of each step, the organization deployed two NSX-T Data Center Edge Nodes which will be used as replacements for the NSX Data Center for vSphere Edge Services Gateways.  These Edge Nodes were deployed using the official documentation and added to the NSX-T Manager.

migration_coordinator_process_pic8

migration_coordinator_process_pic9

IMPORT THE NSX DATA CENTER FOR VSPHERE CONFIGURATION

To begin the process, the organization needs to enable the Migration Coordinator on the NSX-T Manager that they deployed.  A quick SSH session into the NSX-T Manager using the admin account provides the means to run the command that starts the Migration Coordinator service and enables the user interface in the NSX-T Manager that will be used for the migration:

migration_coordinator_start
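For reference, both the start command and a follow-up status check are run from the NSX-T CLI over that SSH session.  A minimal sketch of the session:

  start service migration-coordinator
  get service migration-coordinator

The second command simply confirms that the service reports as running before heading back to the user interface.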

Now that the Migration Coordinator service is running, the user interface in the NSX-T Manager will be enabled.

migration_coordinator_process_pic1

The next step in the process is to authenticate to the NSX Manager and the vCenter Server.

migration_coordinator_process_pic2

migration_coordinator_process_pic3

With the NSX Data Center for vSphere Manager and vCenter Server added in, the organization can start the import configuration step.

migration_coordinator_process_pic4

The organization sees a status of ‘Successful’ after importing the existing configuration into NSX-T Data Center.  There is also an option to ‘View Imported Topology’, which gives them a nice visual diagram of the configuration details that were imported.

migration_coordinator_process_pic5

A successful import allows the organization to proceed with the next step in the migration process.

RESOLVE ISSUES WITH THE NSX DATA CENTER FOR VSPHERE CONFIGURATION

Moving to the next step, the organization is presented with all of the ‘issues’ that need to be resolved before the migration can move forward.  The total number of inputs requiring resolution is listed, and as inputs are resolved, the resolved count is listed as well.

migration_coordinator_process_pic6

Several of the issues appear to be items that the organization already has configured.  Each issue comes with a recommendation from the Migration Coordinator for the organization to consider before moving forward with the migration.  The more important issues are the ones in the ‘EDGE’ category, as those result in the new NSX-T Data Center Edge Nodes being deployed to replace the existing Edge Services Gateways.

migration_coordinator_process_pic7

After selecting the EDGE category of issues to resolve, the organization was met with the following items to remediate before it was able to proceed to the next step.

migration_coordinator_process_pic10

  • IP addresses for TEPs on all Edge transport nodes will be allocated from the selected IP Pool. You must ensure connectivity between Edge TEPs and NSX for vSphere VTEPs.

This issue requires putting in the TEP_POOL that was created for the Edge Nodes already.

  • An NSX-T Edge node will provide the connectivity to replace NSX-v edge. Enter an IP address for the uplink.

This issue requires putting in a valid uplink IP address for the NSX-T Edge Node.  The organization will want to use the same IP address that the NSX Data Center for vSphere Edge Services Gateway is currently using since the TOR is statically routed to that IP address.

  • An NSX-T Edge node will provide HA redundancy for NSX-v edge. Enter an IP address for the uplink on this Edge node. This IP address must be in the same subnet as the uplink of the other NSX-T Edge used to replace this edge.

This issue requires putting in a valid IP address for the HA redundancy that the Edge Node will provide.

  • An NSX-T Edge node will provide HA redundancy for edge replacing NSX-v edge. Enter an unused fabric ID for Edge node. See System > Fabric > Nodes > Edge Transport Nodes.

This issue requires selecting, from the NSX-T Edge Node UUIDs that were imported, which Edge Node will be replacing the NSX Data Center for vSphere Edge Services Gateway.

  • An NSX-T Edge node will provide the connectivity to replace NSX-v edge. Enter an unused fabric ID for this Edge node. See System > Fabric > Nodes > Edge Transport Nodes.

This issue is similar to the one above but requires selecting the second NSX-T Edge Node UUID instead.

  • An NSX-T Edge node will provide the connectivity to replace NSX-v Edge. Enter a VLAN ID for the uplink on this Edge node.

This issue requires putting in the VLAN ID of the uplink adapter that will be used.

With all of the items resolved, the organization is ready to proceed with the actual migration.  Because the Edge Services Gateways need to be migrated to NSX-T Gateways, some data plane outages will occur during this process, so the organization has decided to perform the migration during a scheduled maintenance window.

MIGRATE THE NSX DATA CENTER FOR VSPHERE CONFIGURATION

Pressing start, the Migration Coordinator begins migrating the configuration over to the NSX-T Data Center Manager.  This part of the process does not incur an outage, as it is only a copy of the configuration.

migration_coordinator_process_pic11

Once the configuration has been copied over, the organization can now see all of the components that have been created in NSX-T Data Center from the configuration imported.

NETWORKING

The organization can see that a new Tier-0 Gateway has been created and has the routing configuration that the Edge Services Gateways had.

networking

networking2

networking3

GROUPS

The organization checks the new Group objects and can see that those new Inventory objects have been created.

groups1

SECURITY

Lastly, the organization checks the security objects, specifically that their Distributed Firewall and Service Composer rulesets are migrated over properly.

security1

MIGRATE NSX DATA CENTER FOR VSPHERE EDGES

The next part will incur an outage as this is the process of migrating the NSX Data Center for vSphere Edge Services Gateways over to the NSX-T Data Center Edge Nodes.  This will involve moving the IP addressing over.

migration_coordinator_process_pic12

migration_coordinator_process_pic13

Once the Edges have been migrated over, the organization can see that a new Transport Zone, a new Edge Node Cluster, and a new N-VDS switch have been created.

MIGRATE NSX DATA CENTER FOR VSPHERE HOSTS

The next step involves swapping out the NSX Data Center for vSphere software components on the ESXi hosts and replacing them with NSX-T Data Center components.

hosts1

With the ESXi hosts migrated, the organization has now successfully migrated from NSX Data Center for vSphere over to NSX-T Data Center.

finished1.png

Now that the Healthcare organization has migrated over to NSX-T Data Center, they can start the decommissioning of the NSX Data Center for vSphere components that are no longer needed.  The topology of their data center environment with NSX-T Data Center now looks like this.

finish_topology


NSX Data Center for vSphere to NSX-T Data Center Migration – Part 2

Part 2 of the NSX Data Center for vSphere to NSX-T Data Center migration for the Healthcare organization centers on preparing the new NSX-T Data Center environment by deploying, installing, and configuring the necessary components.

Researching the migration from NSX Data Center for vSphere to NSX-T Data Center breaks down into the following steps.  These efforts will be covered over a series of blog posts, one for each step in the process:

  • Understanding the NSX Data Center for vSphere Migration Process – Part 1
    • Checking Supported Features
    • Checking Supported Topologies
    • Checking Supported Limits
    • Reviewing the Migration Process and the prerequisites
  • Preparing to Migrate the NSX Data Center for vSphere Environment – Part 2
    • Prepare a new NSX-T Data Center Environment and necessary components
    • Prepare NSX Data Center for vSphere for Migration
  • Migration of NSX Data Center for vSphere to NSX-T Data Center – Part 3

As when they started the process in Part 1, consulting the official documentation for the process and the steps to perform is recommended.

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.4/migration/GUID-78947686-CC6C-460B-A185-7E2EE7D3BCED.html

PREPARE A NEW NSX-T DATA CENTER ENVIRONMENT AND NECESSARY COMPONENTS

Preparing a new NSX-T Data Center environment involves deploying the NSX-T Manager.  Installation of the NSX-T Manager is beyond the scope of this blog post, as the official documentation covers the necessary steps.  The key piece of information for this part of the migration process is to deploy the NSX-T Manager appliance(s) on ESXi hosts that are NOT part of the NSX Data Center for vSphere environment that’s being migrated.  The Healthcare organization deployed the new NSX-T Manager on the same hosts that the NSX Data Center for vSphere Manager is currently deployed on.

before_topology.with.nsxt

The next step is to add the vCenter Server that is associated with the NSX Data Center for vSphere environment.  NSX-T Data Center has a completely separate user interface to manage the NSX-T installation, which does not conflict with the NSX Data Center for vSphere user interface that is added as a plug-in to the vSphere Client.  The steps to add the vCenter Server as a Compute Manager in NSX-T are documented in part 2 of the same official migration documentation.  Once it is added into NSX-T, this is what the organization sees:

nsxt_compute_manager_added

There is a recommendation to add more NSX-T Managers to form a cluster for a proper production deployment, but since the Migration Coordinator is only run on one of the NSX-T Manager appliances, they can be added later.

The last step to prepare the NSX-T side of the migration process for the organization is to create an IP Pool for the Edge Tunnel Endpoints (TEPs).  The organization already has a VLAN network for the VXLAN Tunnel Endpoints on the ESXi hosts for NSX Data Center for vSphere.  The VLAN is scoped to an IP range, and part of that range will be assigned to the Edge TEPs as well as to the host TEPs that will also need to be created.

tep_pool_pic1

A TEP pool is created that the organization will reference during the migration.

tep_pool_pic2

An IP range of addresses within the VLAN network is allocated and verified to not be in use by any other devices in the range.
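The organization created the pool through the NSX-T user interface, but the same pool could also be created against the NSX-T Manager API.  The sketch below is only illustrative and assumes the /api/v1/pools/ip-pools endpoint with made-up addressing; substitute the real VLAN subnet and range:

  # Hypothetical example: create the Edge TEP pool via the NSX-T Manager API
  curl -k -u admin -X POST https://nsxt-manager.local/api/v1/pools/ip-pools \
    -H 'Content-Type: application/json' \
    -d '{
          "display_name": "TEP_POOL",
          "subnets": [{
            "cidr": "172.16.50.0/24",
            "gateway_ip": "172.16.50.1",
            "allocation_ranges": [{ "start": "172.16.50.100", "end": "172.16.50.120" }]
          }]
        }'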

PREPARE NSX DATA CENTER FOR VSPHERE FOR MIGRATION

With the NSX-T Data Center environment set up and the steps followed, the next part of the migration process involves preparing the NSX Data Center for vSphere environment.

The first step involves configuring any hosts that might not already be added to a vSphere Distributed Switch.  The Healthcare organization has moved all of the data center hosts over to a vSphere Distributed Switch so this part of the process is not applicable to them.

The second step of this part of the migration process involves checking the Distributed Firewall Filter Export Version of the virtual machines.  This involves checking the ESXi hosts where these workloads reside and running a few simple commands.  The workloads and the hosts they reside on can be seen in the vSphere Client, so the organization knows which hosts to check the filter export versions on.

vcenter_vm_inventory

Now that the information on the virtual workload is confirmed, a simple SSH session into the ESXi host will determine if the export version is correct or needs to be modified to support the migration process.

export_filter_check
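For reference, the checks in that session look roughly like the following.  This is a sketch: the filter name is hypothetical and comes from the summarize-dvfilter output for the workload’s vNIC, and 1000 is the export version the migration guide expects.

  # Find the firewall (vmware-sfw) dvfilter name attached to the workload's vNIC
  summarize-dvfilter
  # Check the export version of that filter
  vsipioctl getexportversion -f nic-38549-eth0-vmware-sfw.2
  # If it is not the expected version, it can be changed
  vsipioctl setexportversion -f nic-38549-eth0-vmware-sfw.2 -e 1000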

The check of the workload shows that the Distributed Firewall Filter Export Version is the correct version for this workload.  The organization can now check all of the other workloads to ensure this is the case with those as well.  This is the last step in part 2 of the process, and once it is fully completed the Healthcare organization can move to Part 3 and begin the actual migration process.

 

 

All 0s UUID, PernixData and the AMIDEDOS Fix

Let me start off by saying this isn’t an issue directly with PernixData FVP.  This is a subsequent issue with PernixData FVP caused by Supermicro not putting a UUID on the motherboards I bought. Two problems for the price of one.

I ran across an issue with the three whitebox Supermicro boards that I purchased for my home lab. I was attempting to install PernixData FVP on them to do some testing when I noticed something strange. After I installed the host extension VIB on all my machines, only one of them would show up in the PernixData tab in vCenter. And when I would reboot them or uninstall the host extension, one of the other ones would show up.

Given that it’s a simple VIB install command, I didn’t figure it had anything to do with the installation itself, but I uninstalled it anyway. By uninstalling and paying very close attention, I found my issue right away: the host UUID of the system was all 0s.

uuid_pic1

I opened up the ‘prnxcli’ on one of my other hosts and verified my guess. As you can see, both UUIDs are all 0s. This was playing hell with the FVP Management server and my guess is it didn’t know which host was which.

uuid_pic2

I did some quick searching and found this KB article discussing the issue, but it didn’t give me much in the way of how to fix the problem other than to contact the manufacturer. Given that the system is running an American Megatrends Inc. BIOS, I did some more searching around and found a utility that will auto-generate a UUID and hopefully resolve the issue. Finding the download was kind of a pain, so I improvised: I found a link in a Lenovo posting and then found the file on the Lenovo driver site. All you need is the AMIDEDOS.EXE file, nothing more, so you can get it from the following BIOS download. Just download this file and extract the executable. I put it on a DOS USB key that I formatted using Rufus.

uuid_pic3

Then I just booted up my host to the USB key and ran the following command:

 amidedos.exe /su auto

According to the instruction file, this will auto-generate a new UUID for the system. You should see a similar screen after it performs the change.

uuid_pic4

I went ahead and booted up my host to see if the change took effect, and it looks like it did!
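If you would rather confirm the change from the ESXi shell than from the BIOS screen, a quick check along these lines should echo the new UUID back (a sketch using esxcli):

  # Print the hardware platform details, including the SMBIOS UUID
  esxcli hardware platform get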

uuid_pic5

I went ahead and changed the UUID on the other two hosts and booted them all back up. When I got into vCenter, I noticed that the PernixData Management Server was still seeing strange SSDs from my hosts. I removed all three hosts from vCenter and re-added them, restarted the PernixData Management Server and now magically all the host SSDs showed up correctly when I went to add them to the Flash Cluster.

uuid_pic6

All in all, this was a perfect storm, which I seem very good at creating from time to time. As much as I cursed trying to figure it out at first, it was fun learning about something I’d never run across.

QNAP vSphere Plug-in Basics

I have a QNAP TS-459 Pro II which I use in my home lab to serve both NFS and iSCSI. QNAP provides a vSphere plug-in to help with provisioning and attaching storage to your ESXi hosts. It can really help make things go a bit faster when you want to add a new LUN or NFS datastore. It only works with the C# client, unfortunately/fortunately (whichever way you prefer to think of it).

Installation/Add device for provisioning

The plug-in requires QTS version 3.8 or higher on your device. You can download the plug-in from the following link. The install is very straightforward and uneventful really.

http://www.qnap.com/v3/useng/product_x_down/

We’re looking for this download

qnap_download_pic1

Once downloaded we’ll run the installation

qnap_download_pic2

Take the defaults for all the install properties. Open up the vSphere Client and go to Plugins>Manage and enable the QNAP vSphere Plug-in

qnap_download_pic3

We need to add a QNAP Storage device to the interface so we can manage and deploy datastores from it. I’m going to add it in from the Home>Solutions and Applications>QNAP application.

qnap_download_pic4

Click on ‘Add QNAP Storage’ in the upper right-hand corner

qnap_download_pic5

Put in the IP address of the QNAP and put in the admin password. Click ‘Add’ when done.

qnap_download_pic6

Now we can see the storage added

qnap_download_pic7

 

Add NFS/iSCSI datastore

We should now see that the QNAP has been added to the ‘QNAP Storages’ tab. Let’s add an NFS and an iSCSI datastore.

I want to add two new datastores for my ‘MGMT_CLUSTER’ group of hosts. First iSCSI:

I right-click on ‘MGMT_CLUSTER’ and go down to the QNAP option and select ‘Create a datastore’

qnap_download_pic8

I can see that it picks up both of the hosts and it has selected the ‘SLAVE1’ QNAP device. Ensure that you CTRL+click to highlight both hosts.

qnap_download_pic9

We’ll select the ‘iSCSI’ option.

qnap_download_pic10

We’ll set the name to ‘qnap_test’, put the size at 5GB and enable ‘Thin Provisioning’.

qnap_download_pic11

We’ll confirm all our settings and click on ‘Finish’.

qnap_download_pic12

We’ll see that a static target is created (if necessary) to the QNAP, and the datastore is created and formatted with VMFS.

qnap_download_pic13

Now we’ll quickly add an NFS datastore following the same process as above, except we’ll select the ‘NFS’ option

qnap_download_pic14

Then we’ll call the datastore ‘qnap_test2’

qnap_download_pic15

Verify our settings and click ‘Finish’

qnap_download_pic16

We can see that the datastore is built and attached as NFS to our hosts

qnap_download_pic17

We can verify that the new datastores are attached via the QNAP tab.

qnap_download_pic18

Add Datastore to another host

Let’s say we wanted to add one of those datastores to another host. We can do that easily by using the ‘Connect the datastore to a new ESX host’ option

qnap_download_pic19

We’ll select the NFS datastore and select the ‘esx03.vwilmo.local’ host

qnap_download_pic20

Verify our settings

qnap_download_pic21

We can now see that the NFS datastore is attached to 3 hosts.

qnap_download_pic22

Disconnect/Destroy Datastore

To disconnect a datastore we simply select the host we want to remove it from and follow the same path as before.

qnap_download_pic23

We select the ‘qnap_test2’ datastore and verify that there are no VMs on that datastore before removing it.

qnap_download_pic24

Confirm and it will be removed

qnap_download_pic25

Verify that it’s been removed

qnap_download_pic26

To destroy a datastore, we follow the same path as before.

qnap_download_pic27

Select the ‘qnap_test’ datastore

qnap_download_pic28

Verify there are no VMs attached before we destroy it.

qnap_download_pic29

Confirm and it will be destroyed

qnap_download_pic30

Overall, it’s a pretty decent plug-in with the absolute minimum of features, but it’s enough to warrant using it in the home lab for quickly adding storage to your environment.

 

 

 

 

 

User-Defined Network Resource Pools

This isn’t anything profound, but as I was going through my VCAP-DCA5 materials, one of the objectives concerns user-defined network resource pools.  While creating them is well documented, the study guides I was using made no mention of where you turn these on.  It’s pretty simple actually.  Here’s a quick recap of what we’re trying to accomplish:

Create a new user-defined network resource pool.

user_net_resource_step1

Name the pool

user_net_resource_step2

Assign the pool to a dvPortGroup

user_net_resource_step3

Confirm

user_net_resource_step4

You can even do it from another interface as well by clicking on the ‘Manage Port Groups’ link in the same window

user_net_resource_step5

Another objective point taken care of.  Easy stuff.

Enabling VAAI for NFS on NetApp

A quick post about VAAI, NetApp and VMware.  I was messing around with VAAI and installing the NetApp plugin for NFS in my VCAP test environment.  This is a pretty simple process and can be deployed automatically via the NetApp VSC, if installed.  Since I’m working on making sure I can do this from the CLI (think VCAP-DCA5), I’m practicing doing it from there.  I’m using the NetApp Simulator, which is probably the best thing that NetApp could have done for its customers.  It’s simply an amazing piece of software that offers nearly all the functionality of a NetApp filer.

vaai_nfs_plugin_step1

I placed the NetApp_bootbank_NetAppNasPlugin_1.0-018.vib on a datastore so that I could access it from the CLI

vaai_nfs_plugin_step2

Easy stuff, nothing new here.  Now let’s get this thing installed from the CLI
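The install itself is a single esxcli command, something along these lines (the datastore name below is just an example; point it at wherever you copied the VIB):

  # Install the NetApp NAS VAAI plugin from the datastore it was copied to
  esxcli software vib install -v /vmfs/volumes/datastore1/NetApp_bootbank_NetAppNasPlugin_1.0-018.vib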

vaai_nfs_plugin_step3

Looks like we got it.  Reboot the host and we’re golden.  Check vCenter to see if our Hardware Acceleration is working.

vaai_nfs_plugin_step4

Hmmm, not working as it should.  Let’s check to make sure the vib is showing up in the vib list.
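That check is just a filter on the installed VIB list, roughly:

  # Confirm the NetApp NAS plugin shows up in the installed VIB list
  esxcli software vib list | grep -i netapp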

vaai_nfs_plugin_step5

Yeah we got it there.  What could be the problem?  Well, let’s check the NetApp to see the NFS options.

vaai_nfs_plugin_step6

Ah ha!  There’s our problem right there.  There are a few options not enabled.  These have to be enabled to allow VAAI functions on the NetApp for NFS datastores.  So let’s enable them.
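On a 7-Mode controller (and the simulator), that comes down to a couple of options commands; a sketch, with nfs.vstorage.enable being the usual VAAI-for-NFS switch and nfs.v4.enable the simulator-only quirk covered in the EDIT at the end of this post:

  options nfs.vstorage.enable on
  options nfs.v4.enable on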

vaai_nfs_plugin_step7

Now that they’re enabled, let’s go check our vCenter and NFS datastores again.  A quick rescan of the datastores and we’re working as intended.

vaai_nfs_plugin_step8

Or we can check it from the CLI, since we’re concerned about knowing how to check this from there anyway
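From the ESXi shell that check is roughly the following; the Hardware Acceleration column in the output reflects whether VAAI is active for each NFS datastore (a sketch using esxcli):

  # List NFS datastores along with their Hardware Acceleration status
  esxcli storage nfs list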

vaai_nfs_plugin_step9

EDIT – So I was building a new controller out in the lab and I confirmed Mike’s assumption.  The need for ‘options nfs.v4.enable on’ is only a requirement of the version of the NetApp simulator that I’m using.  This option is NOT required for a real set of controllers.

VMware VirtualCenter Management Webservices won’t start

I promise to get back to my NCDA study guide when time permits, but I’m currently building two data centers; it’s consuming my days, and my evenings have been more for relaxing.

This isn’t anything ground-breaking, but since it seemed kind of obscure, I figured I’d post it, if for nothing else, as a reminder to myself to check this if I see it again.  As I was building out my second vCenter in my other primary data center, I noticed when I linked the two sites together that I was having issues with the vCenter Hardware Status plugin on my remote vCenter when I’d launch the vSphere Client.

plugin_error

Looking into it, I noticed that the VMware VirtualCenter Management Webservices service was set to Automatic but was not started.  When I attempted to start the service, I was met with a very vague error.

service_start_error

Doing some quick searching brought me to this KB article.  However, the memory size was exactly the same on my local vCenter, and it was running just fine.  As I started to look through the other settings in the Tomcat configuration, comparing the broken vCenter to the working one, I noticed this.

Working:

tomcat_working

Not working:

tomcat_not_working

A simple copy and paste, and the service fired right up.  Again, not ground-breaking stuff, but irritating.