NSX-T Data Center 2.5 Upgrade – Planning

NSX-T Data Center 2.5 Upgrade – Practical

Upgrades to software-based products are generally thought of as just ‘next, next, next’ and done.  When you’re using a software platform to run your most critical business workloads, though, you’re usually a bit more careful.

The Healthcare organization has been looking at the most recent release of NSX-T Data Center, version 2.5, having recently deployed NSX-T Data Center 2.4.1.  Several new use cases have come up that the organization feels 2.5 can address, so they’re looking into what is necessary to upgrade their current NSX-T deployment.

Requirements:

  • Upgrade the NSX-T deployment from 2.4.1 to 2.5.0
  • Note any outages that may occur

Steps:

  1. Check VMware Product Interoperability Matrix
  2. Check VMware Product Interoperability Matrix – Upgrade Path
  3. Take Backup of NSX-T Manager
  4. Check NSX-T 2.5 Upgrade Guide Official Documentation
    1. Perform upgrade – Steps 4a – 4l
  5. Post-upgrade tasks
  6. Troubleshoot upgrade errors

Step 1 – Check VMware Product Interoperability Matrix

One of the first things to do is to check the VMware Product Interoperability Matrix to ensure that the versions of ESXi and vCenter Server are compatible with NSX-T Data Center 2.5.  The organization’s infrastructure is running ESXi 6.7 U2 and vCenter Server 6.7 U3.

upgrade_pic1

From this chart, it appears that NSX-T Data Center 2.5.0 supports the version of ESXi and vCenter Server necessary.

Step 2 – Check VMware Product Interoperability Matrix – Upgrade Path

While on the same web page, by clicking on the Upgrade Path tab, the admin can see what versions of NSX-T Data Center are supported upgrade paths.

upgrade_pic6

Step 3 – Take Backup of NSX-T Manager

The admin runs a current backup of the NSX-T Manager in case a restore is necessary.

restore_pract_pic1

Step 4 – Check NSX-T 2.5 Upgrade Guide Official Documentation

Digging into the NSX-T 2.5 Upgrade Guide, the admin finds that VMware has provided a checklist of items to review for upgrading NSX-T Data Center.

upgrade_pic2

Each of these tasks has sections to follow for performing the upgrade of NSX-T.  The admin will add these steps to the existing steps as part of the overall plan.

Step 4a – Review the known upgrade problems and workarounds documented in the NSX-T Data Center release notes.

Doing a quick search in the Release Notes for NSX-T Data Center 2.5 for the word ‘upgrade’, the admin starts looking through the matches to see any changes that might impact the upgrade process.

There are a few items that stand out:

  • Messaging Port Change – There is a port change in the messaging channel from the NSX-T Manager to the Transport and Edge Nodes. This TCP port has changed from TCP 5671 to TCP 1234.
  • Upgrade Order Change – When upgrading to NSX-T 2.5, the new upgrade order is Edge-component upgrade before Host-component upgrade. This enhancement provides significant benefits when upgrading the cloud infrastructure by allowing optimizations to reduce the overall maintenance window.

The admin notes these impacts, reviews the other, smaller upgrade issues in the release notes, and records them in case any are encountered during the upgrade.

Step 4b – Follow the system configuration requirements and prepare your infrastructure.

Reviewing the NSX-T Data Center 2.5 Installation Guide, the admin takes a look at the following components:

  • NSX-T Manager – no changes in the requirements from 2.4.1 to 2.5 in terms of size, disk, vCPU, or memory requirements
  • ESXi Hypervisors – no changes in the requirements from 2.4.1 to 2.5 and the admin verified that the ESXi version was listed on the VMware Product Interoperability Matrix.

Step 4c – Evaluate the operational impact of the upgrade.

  • Manager Upgrade – TCP port 1234 will replace TCP port 5671 from NSX-T Manager to Edge Nodes and Transport Nodes. There should be no impact as there is currently no firewall between the NSX-T Manager and the Transport or Edge Nodes.
  • Edge Cluster Upgrade – One of the notable impacts called out in the official upgrade documentation is a possible disruption to the North-South datapath during the upgrade of the Edge Nodes, as well as possible disruption to East-West datapath traffic. This possible traffic disruption will require the admin to notify change management and perform the upgrade during a maintenance window where a disruption has minimal impact on the business.
  • Hosts Upgrade – All ESXi hosts are in a DRS-enabled cluster, so hosts will be placed into maintenance mode and evacuated before being upgraded. No impact to the running VMs is anticipated.

Step 4d – Upgrade your supported hypervisor.

The admin confirms that they are running the appropriate version of VMware vSphere that is supported by NSX-T Data Center 2.5.

upgrade_pic3

upgrade_pic4

Provided from – https://kb.vmware.com/s/article/2143832

Step 4e – Verify that the NSX-T Data Center environment is in a healthy state.

To perform this step, the admin logs into the NSX-T Manager and checks the following locations for errors:

  • Overview dashboard

upgrade_pic8

  • Fabric Nodes
    • Host Transport Nodes

upgrade_pic9

  • Edge Transport Nodes

upgrade_pic10

  • Edge Clusters

Checking the Edge cluster status and high availability for the cluster requires using the CLI on one of the Edge Nodes in the cluster.  Log in as ‘admin’ via SSH to one of the Edge Nodes and run the following command – ‘get edge-cluster status’.

upgrade_pic11

Then the admin will double check from a VM:

  • North-South connectivity
  • East-West connectivity

With a quick RDP session into one of the production servers, both connectivity needs can be tested.

upgrade_pic7
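The same health checks can also be captured from a script if the admin wants a record of the pre-upgrade state.  The snippet below is only a rough sketch using PowerShell Core and the public NSX-T REST API; the manager FQDN and credentials are placeholders, and the endpoints and field names should be confirmed against the API guide for the deployed version.

# Placeholders - replace with the environment's NSX-T Manager and credentials
$nsxManager = "nsxmgr-01a.corp.local"
$cred       = Get-Credential
$pair       = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Management cluster health (GET /api/v1/cluster/status)
$clusterStatus = Invoke-RestMethod -Uri "https://$nsxManager/api/v1/cluster/status" -Headers $headers -SkipCertificateCheck
$clusterStatus.mgmt_cluster_status

# List the transport nodes so each one can be spot-checked before the upgrade starts
$transportNodes = Invoke-RestMethod -Uri "https://$nsxManager/api/v1/transport-nodes" -Headers $headers -SkipCertificateCheck
$transportNodes.results | Select-Object display_name, id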

Step 4f – Download the latest NSX-T Data Center upgrade bundle.

The admin visits http://my.vmware.com, logs in, and downloads the appropriate upgrade bundle for NSX-T Data Center 2.5.  The file has a .mub extension.

upgrade_pic5

Step 4g – If you are using NSX Cloud for your public cloud workload VMs, upgrade NSX Cloud components.

The organization is not currently using any cloud-based workloads, so this step is not applicable at this time.

The steps that follow are part of the actual upgrade process.  Those steps will be continued in the next post.

This blog goes through the typical checks performed during upgrade planning.  There may be other processes that organizations follow prior to upgrading, and this blog is not meant to encompass every step another organization may take.

Using NSX-T to Test NSX-T and Virtual Machine Recovery with Automation – Practical

Part 1 – Windows SFTP Backup Targets

Part 2a – Using NSX-T to Test NSX-T and Virtual Machine Recovery with Automation – Concept

Part 2b – Using NSX-T to Test NSX-T and Virtual Machine Recovery with Automation – Practical

In Part 2a, the Healthcare organization admins created several scripts using VMware PowerCLI, PowerShell Core 6, OVF Tool, and the NSX-T Policy REST APIs.  Those scripts are located at https://github.com/vwilmo/NSXT_RESTORE_TESTING for other community admins to consume as well.

The original requirements that were put forth for the admins to design against were:

Requirements:

  1. Use NSX-T to build a production replica network to test restores of the NSX-T Manager and show virtual machines can also be restored and tested on the same network
  2. Use Veeam to restore the following virtual machines:
    1. Backup Server – Will be used to run automation scripts from
    2. Active Directory – Will be needed for DNS purposes
    3. SFTP Server – Hosts the NSX-T backups that restores will be tested from
  3. Deploy a new NSX-T Manager to test the restore process to it
  4. Use automation wherever possible to continue expanding automated techniques

To meet these requirements, the admin designed the following topology:

finish_topology

  • Standalone Tier-1 Gateway – not connected to any Tier-0 Gateway, preventing northbound communications that would conflict with the production networking
  • Restore Network Segment – Provides a logical network for the restored VMs to attach to
  • Restored Domain Controller – One of the organization’s domain controllers that will provide DNS for the replica network and the VMs attached
  • Restored Backup Server – Hosts the PowerShell scripts that are necessary for scripting part of the deployment on the restored NSX-T Manager. Some of the scripts will need to be run from the Production Backup Server and some from the Restored Backup Server, since there will be no outside communication with the Restore environment other than vCenter Server direct console access
  • Restored SFTP Server – Hosts the backups of the NSX-T Manager
  • Restored NSX-T Manager – Will be used to test its own restores. An NSX-T Manager restore requires that the new NSX-T Manager have the same IP address as the production copy.  To test this appropriately, we have to create a copy of the production network and IP addressing
  • vCenter Server B – Manages the Compute Cluster B
  • Compute Cluster B – Provides a non-production host for the restored systems to be placed on that’s not managed by the production vCenter Server A.

For further details on the reasoning behind this topology, you can take a look at Part 2a, referenced at the top of this post.

With the scripts created, it’s now time for the admin to work through the workflow processes and test that this strategy will meet the requirements in practice.  This is a review of the workflow process:

restores_pic10

Step 1 – Copy scripts to BACKUP-01a – GitHub download and copy

The scripts just need to be pulled down from GitHub and copied to a location on the BACKUP-01a server.
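As a quick sketch, pulling the repository down with git and staging it in a working folder is all this step involves (the destination path below is just an example):

# On BACKUP-01a: clone the published scripts into a working folder (example path)
git clone https://github.com/vwilmo/NSXT_RESTORE_TESTING C:\Scripts\NSXT_RESTORE_TESTING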

Step 2 – Copy NSX-T OVA to BACKUP-01a – Download and copy

Another straightforward step: download the NSX-T OVA that matches the exact version of the current NSX-T Manager and copy it to a location on BACKUP-01a.

Step 3 – Install PowerShell Core 6, PowerCLI, and OVF Tool – Download installers and install

restore_pract_pic2
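For reference, once PowerShell Core 6 itself is installed, the PowerCLI piece is a single command from the PowerShell Gallery (OVF Tool is a separate installer downloaded from my.vmware.com).  A minimal sketch:

# From a PowerShell Core 6 (pwsh) session on BACKUP-01a
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# Lab convenience: don't fail on the self-signed certificates used by vCenter and NSX-T
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false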

Step 4 – Perform a Backup of the NSX-T Manager – Native Backup Tool

A pretty simple step: go into the NSX-T Manager, open the Backup & Restore tab, press the ‘BACKUP NOW’ button, and verify its completion.

restore_pract_pic1
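The same on-demand backup can also be triggered through the NSX-T API instead of the UI button.  This is a hedged sketch only (the manager FQDN and credentials are placeholders; confirm the endpoint against the API guide for the deployed version):

# Kick off an on-demand backup to the configured SFTP target
# (POST /api/v1/cluster?action=backup_to_remote)
$nsxManager = "nsxmgr-01a.corp.local"   # placeholder
$cred       = Get-Credential
$pair       = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

Invoke-RestMethod -Method Post -Uri "https://$nsxManager/api/v1/cluster?action=backup_to_remote" `
    -Headers $headers -SkipCertificateCheck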

Step 5 – Backup SFTP-01a, AD-01a, BACKUP-01a – Single Veeam Backup Job

Once all of the components needed for the remaining workflows are installed and configured, the backups of the necessary virtual machines, especially the BACKUP-01a machine, can be taken.

restore_pract_pic3

Step 6 and 7 – Deploy Testing Tier-1 Gateway and Segment – NSX-T Policy API via PowerCLI

From the BACKUP-01a production server, the admin runs 01_NSXT_DEPLOY.ps1, which builds the Tier-1 Gateway and Segment and then launches the OVF Tool to deploy the NSX-T Manager OVA file to Compute Cluster B.

restore_pract_pic4

The Tier-1 Gateway has been created and is not linked to a Tier-0 Gateway, preventing northbound connectivity with the overlapping production network, and ‘nsxt-restore-segment’ has been created for the virtual machines and the new NSX-T Manager to attach to.

restore_pract_pic5

restore_pract_pic6

The admin can also see that the new NSX-T Manager, connected to the ‘nsxt-restore-segment’, is being deployed.

restore_pract_pic7

Step 8 – Adjust NSX-T CPU/Mem Resources and Power-On – PowerCLI

Once the new NSX-T Manager is deployed, the admin wants to adjust the memory reservation so that they can start the NSX-T Manager without running into memory constraints since the test environment is rather limited.  The deployed NSX-T Manager is in ‘small’ form factor, but still has a 16GB Memory reservation on it.  From the BACKUP-01a production server, the admin runs the 02_NSXT_RESERVATION_ADJUST.ps1 to adjust the memory reservation down to 8GB and then power on the appliance.

restore_pract_pic8

Step 9 – Restore VMs to NSX-T Testing Segment – Veeam Restore Job

To get the virtual machines necessary to help in the NSX-T restore process, and to prove that the admins can restore NSX-T and virtual machines from native and Veeam backups respectively, the admin runs a Restore Entire VM job for the three VMs previously backed up, and…

  • Points the Veeam restores to the Compute Cluster B host
  • Places them on the VM Network
  • Appends ‘_restored’ to each of their VM names
  • Leaves them powered off so that, once restored, the admin can adjust their network configurations to attach them to the ‘nsxt-restore-segment’.

restore_pract_pic9

Step 10 – Change Restored VMs networking to NSX-T Testing Segment – vCenter Server network vMotion

The restored VMs can easily be moved in bulk to the ‘nsxt-restore-segment’ by using the Migrate VMs to Another Network option.

restore_pract_pic10
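If the admin wanted to fold this move into the scripts as well, a rough PowerCLI sketch could look like the following; it assumes the ‘_restored’ naming from the previous step and that the segment is visible in vCenter as a network named ‘nsxt-restore-segment’.

# Assumes an existing Connect-VIServer session to vCenter Server B
# Reconnect every NIC on the restored VMs to the NSX-T restore segment
$restoredVMs = Get-VM -Name "*_restored"
foreach ($vm in $restoredVMs) {
    Get-NetworkAdapter -VM $vm |
        Set-NetworkAdapter -NetworkName "nsxt-restore-segment" -Confirm:$false
    # Depending on how the segment is presented in vCenter, -Portgroup (from Get-VDPortgroup)
    # may be needed instead of -NetworkName
}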

Once the VMs are restored and moved to the ‘nsxt-restore-segment’, they can be powered on and the next step can proceed.

Step 11 – Add NSX-T Restore Config – NSX-T Policy API via PowerCLI

Now that the restored VMs are all added to the ‘nsxt-restore-segment’ and the new NSX-T Manager is online and attached as well, the admin can access these VMs by using the vSphere Client and using a direct console to the BACKUP-01a_restored VM.  It’s critical to run the remaining scripts from that machine, as there is no outside network access to the new NSX-T Manager appliance, as intended.

Consoling into the BACKUP-01a_restored server, the admin can make some checks to see if network connectivity is indeed limited to the ‘nsxt-restore-segment’.  Checking the IP configuration (ipconfig) of the BACKUP-01a_restored server and running a few pings, the admin can see that the default gateway of the network cannot be reached; however, the other VMs and the NSX-T Manager (which has the same IP address as the production NSX-T Manager) respond.

restore_pract_pic11

The admin can also log into the UI of the NSX-T Manager from the BACKUP-01a_restored server and see that this is a brand-new deployment with no configuration.

restore_pract_pic12

The admin can also see that the restore configuration is no longer present.  The next step is to put the configuration for restoring the NSX-T Manager back into the new NSX-T Manager.  This NSX-T Manager already has the same IP address and name as the production version, which is a requirement for restoration.

restore_pract_pic13

With connectivity to the NSX-T Manager, and confirmation that there is no existing configuration, the admin can proceed with running the PowerCLI script 03_NSXT_RESTORE_CONFIG.ps1 to add the restore configuration to the NSX-T Manager.

restore_pract_pic14

A quick run of the script and a refresh of the NSX-T Manager UI, and the admin can see that the SFTP server configuration is back and all of the backups that have been taken are showing up as well.

restore_pract_pic15

After checking the backup files, the admin picks the first one in the list of Available Backups and clicks on the restore button to apply the configuration.  During the restore process, since this is not a full restore and components such as Edge Nodes and Transport Node hosts are not contactable, the admin may get a few error messages that they can skip through.  Once the restore is done, the admin can take a look at the restored configuration and see that the NSX-T Manager configuration matches the production instance and the restore was successfully finished and validated.

restore_pract_pic16

restore_pract_pic17

With a successful test and the requirements accomplished, the admin can now perform the final steps by running the last two scripts on the BACKUP-01a production server.  One of the scripts, 04_NSXT_RESTORE_CLEANUP.ps1, will shut down and then forcibly delete all of the restored virtual machines and the NSX-T Manager.  The last script, 05_NSXT_DEPLOY_CLEANUP.ps1, runs a Policy API REST command to remove the Tier-1 Gateway and Segment, bringing the entire deployment back to its original, clean state.

restore_pract_pic18

restore_pract_pic19

restore_pract_pic20

The last two posts have shown the Healthcare organization the power of NSX-T and how, even with a small amount of automation, it can be used to accomplish several use cases and provide real value to an organization that is required to test its backups.

Using NSX-T to Test NSX-T and Virtual Machine Recovery with Automation – Conceptual

In the last post, the Healthcare organization configured their NSX-T Manager to send its backups to an SFTP backup server so they can perform restores if necessary.  The Healthcare organization also utilizes Veeam Backup & Replication to provide virtual machine-based backups for their virtual infrastructure.  Unfortunately, backing up the NSX-T Manager with Veeam is not supported; a restore requires deploying a fresh NSX-T Manager installation and restoring a backup configuration to it, and the Healthcare organization would like to test restores of the NSX-T Manager.

Configuring and actually backing up the NSX-T Manager configuration or a virtual machine is one thing; actually being able to test the backups is another.  A backup is no good if you can’t restore from it.  The organization has found a way to test both their NSX-T backups and their virtual machine backups at the same time to meet the requirements.  Taking some pointers from what they’ve learned previously about using automation tools, they plan to expand their automation learning with this same process.

NSX-T can provide exact copies of production environments running on top of the same underlying physical network with no changes to the physical network.  The Healthcare organization has placed a very large bet on NSX-T being the networking and security platform for their infrastructure, and is looking to use this capability to provide an isolated environment to test restoring their backups.  Keeping an automation mindset in place, the Healthcare organization admins take a look at the requirements they’ll need to accomplish the tasks:

Requirements:

  1. Use NSX-T to build a production replica network to test restores of the NSX-T Manager and show virtual machines can also be restored and tested on the same network
  2. Use Veeam to restore the following virtual machines:
    1. Backup Server – Will be used to run automation scripts from
    2. Active Directory – Will be needed for DNS purposes
    3. SFTP Server – Hosts the NSX-T backups that restores will be tested from
  3. Deploy a new NSX-T Manager to test the restore process to it
  4. Use automation wherever possible to continue expanding automated techniques

The admin draws out the following topology, which ensures they can rebuild a production network replica without overlapping with the actual production networking.  This topology consists of the following constructs to build out the production replica network:

restores_pic3

  • Standalone Tier-1 Gateway – not connected to any Tier-0 Gateway, preventing northbound communications that would conflict with the production networking
  • Restore Network Segment – Provides a logical network for the restored VMs to attach to
  • Restored Domain Controller – One of the organization’s domain controllers that will provide DNS for the replica network and the VMs attached
  • Restored Backup Server – Hosts the PowerShell scripts that are necessary for scripting part of the deployment on the restored NSX-T Manager. Some of the scripts will need to be run from the Production Backup Server and some from the Restored Backup Server, since there will be no outside communication with the Restore environment other than vCenter Server direct console access
  • Restored SFTP Server – Hosts the backups of the NSX-T Manager
  • Restored NSX-T Manager – Will be used to test its own restores
  • vCenter Server B – Manages the Compute Cluster B
  • Compute Cluster B – Provides a non-production host for the restored systems to be placed on that’s not managed by the production vCenter Server A.

Before any automation can begin, the admin needs to understand all of the workflow steps that will be necessary and how to perform them so they can put automation around each workflow process.

restores_pic10

 

NSX-T 2.4.x provides a hierarchical intent-based Policy API for customers to use for automation techniques.  The admin takes a look at the NSX-T API official documentation on the Policy API and finds a few REST API commands that could be useful for creating the necessary constructs.  From the configuration of the backup of the NSX-T Manager in the previous post, the admin can also use the information collected there and REST API commands to automate adding the restore configuration into the NSX-T Manager that will be deployed.

The NSX-T Manager comes from the VMware download site as an OVA.  A tool such as OVF Tool can be used to help automate the process of deploying the NSX-T Manager to the new network that will be created.

To wrap all of these different automation techniques into scripts that the admin can use, they’re planning to use PowerCLI and PowerShell Core 6 to build scripts that can be run to automate as much of this process as possible.

The admin performs the following actions to be able to use PowerShell Core 6 and VMware PowerCLI on the Backup Server.  The Backup Server will host the scripts and also be the server where the scripts are run from, both in production and in the restored segment.

Install Prerequisites:

The post isn’t going to go into how to install these items as they are fairly simple to install with mostly click, click, next.

Each of these processes will be necessary to meet all of the requirements.  There are specific portions of the workflow where processes can be joined together into singular scripts and the admin will attempt to do so within their experience.

The first and second workflow processes in the table consist of building a Veeam backup job around all of the virtual machines needed, and ensuring that NSX-T is sending backups to the SFTP server.

Requirement 2 – Use Veeam to restore the following virtual machines

  • Backup Server – Will be used to run automation scripts from
  • Active Directory – Will be needed for DNS purposes
  • SFTP Server – Hosts the NSX-T backups that restores will be tested from

Regardless of the order of the requirements, first and foremost the admin needs to ensure that they have backups of the systems they’re planning to perform restore testing on.  The admin also hops into the NSX-T Manager console and checks that the latest backup job has completed, or presses the ‘BACKUP NOW’ button to start a fresh backup to the SFTP server.

restores_pic2

For the process of testing backups in this use case, the admins have configured a separate backup job in Veeam that has the 3 virtual machines that will be used for this testing procedure.

restores_pic1

The admin waits to start the backup job in Veeam until the scripts are all built, as they’ll be needed once the Backup Server is restored.  In the meantime, the admin can start to look at how to build an isolated NSX-T copy of the production network.

Requirement 1 – Use NSX-T to build a production IP-based isolated network to test restores to NSX-T and show virtual machines can also be restored and tested on the same network

Requirement 3 – Deploy a new NSX-T Manager to test the restore process to it

Requirement 4 – Use automation wherever possible to continue expanding automated techniques

The process of building the production replica network can be accomplished using the NSX-T REST API.  The admin has taken a look at the NSX-T REST API official documentation and found an example of using the hierarchical intent-based API to build the Tier-1 Gateway and the Segment that will be used.  The next process is around using the OVF Tool to deploy the NSX-T Manager to the same segment previously created.  Since these processes can be called from PowerCLI, the admin decides to combine these two workflows into one script.

The code that was built for this resembles the following:

restores_pic4

restores_pic5

This script builds the Tier-1 Gateway and Segment using the NSX-T Policy API, then immediately jumps to using the OVF Tool to deploy the new NSX-T Manager to the previously created Segment.  You can find the actual script in the GitHub repository – https://github.com/vwilmo/NSXT_RESTORE_TESTING.  For ease of reading, the OVF arguments were word-wrapped in the screenshot; normally they need to be on one line.
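For readers who cannot make out the screenshots, the sketch below shows the general shape of the Policy API portion.  It is not the actual script (that lives in the GitHub repository); the manager address, credentials, object names, gateway address, and transport zone path are placeholders, and the OVF Tool invocation that follows in the real script is omitted here.

# Placeholders - replace with the environment's own values
$nsxManager = "nsxmgr-01a.corp.local"
$cred       = Get-Credential
$pair       = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers    = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# 1. Standalone Tier-1 Gateway: no tier0_path is set, so nothing can route northbound
$tier1 = @{ display_name = "restore-t1" } | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri "https://$nsxManager/policy/api/v1/infra/tier-1s/restore-t1" `
    -Headers $headers -ContentType "application/json" -Body $tier1 -SkipCertificateCheck

# 2. Segment attached to that Tier-1, reusing the production gateway addressing (placeholders below)
$segment = @{
    display_name        = "nsxt-restore-segment"
    connectivity_path   = "/infra/tier-1s/restore-t1"
    transport_zone_path = "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>"  # placeholder
    subnets             = @(@{ gateway_address = "192.168.110.1/24" })                                       # placeholder
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://$nsxManager/policy/api/v1/infra/segments/nsxt-restore-segment" `
    -Headers $headers -ContentType "application/json" -Body $segment -SkipCertificateCheck

# The actual 01_NSXT_DEPLOY.ps1 then shells out to ovftool.exe to deploy the NSX-T Manager OVA
# onto Compute Cluster B with its NIC mapped to 'nsxt-restore-segment'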

The next process is around changing the memory resources of the NSX-T Manager.  Typically, the NSX-T Manager has a memory reservation to ensure enough memory is available for it to run.  Given this is a testing-environment restore, the admin wants to reduce this reservation so they can start the NSX-T Manager without running into issues.  The admin builds another script to adjust this and start the VM.

The code that was built for this resembles the following:

restores_pic6

This script adjusts the memory reservation down to 8GB and then starts the NSX-T Manager VM.
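A minimal PowerCLI sketch of that adjustment is shown below (the VM name is a placeholder; the working version is 02_NSXT_RESERVATION_ADJUST.ps1 in the repository):

# Assumes an existing Connect-VIServer session to vCenter Server B
# Drop the NSX-T Manager memory reservation to 8GB and power the appliance on
$nsxVM = Get-VM -Name "nsxtmgr-restore"    # placeholder VM name
Get-VMResourceConfiguration -VM $nsxVM |
    Set-VMResourceConfiguration -MemReservationGB 8
Start-VM -VM $nsxVM -Confirm:$false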

The next piece of scripting the admin chooses to do is around putting the restore configuration into the new NSX-T Manager virtual machine using PowerCLI and the REST API.  The code that was built for this resembles the following:

restores_pic7

This script sends a REST API command to put the restore server configuration into the NSX-T Manager so it can see the NSX-T backups on the restored SFTP-01a, and the admin can choose which one to test the restore from.
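In essence, the restore configuration is a single PUT against the backup configuration endpoint of the new manager.  The sketch below reuses the $nsxManager and $headers variables from the Tier-1/Segment sketch above and is hedged (the server, directory, fingerprint, credentials, and passphrase are placeholders, and the body layout should be confirmed against the NSX-T backup API documentation); 03_NSXT_RESTORE_CONFIG.ps1 in the repository is the working version.

# Push the SFTP backup/restore configuration into the freshly deployed NSX-T Manager
# ($nsxManager here is the restored manager, which has the same IP/name as production)
$backupConfig = @{
    backup_enabled     = $false                  # only pointing at the target so existing backups become visible
    passphrase         = "<backup-passphrase>"   # placeholder
    remote_file_server = @{
        server         = "sftp-01a.corp.local"   # placeholder: the restored SFTP server
        port           = 22
        directory_path = "/backups/nsx"          # placeholder
        protocol       = @{
            protocol_name         = "sftp"
            ssh_fingerprint       = "SHA256:<fingerprint>"   # placeholder
            authentication_scheme = @{ scheme_name = "PASSWORD"; username = "backupuser"; password = "<password>" }
        }
    }
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Put -Uri "https://$nsxManager/api/v1/cluster/backups/config" `
    -Headers $headers -ContentType "application/json" -Body $backupConfig -SkipCertificateCheck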

The final script the admin decides to build is for cleanup of all of the virtual machines and networking components created for testing.  The code built for this resembles the following:

restores_pic8

This script powers down and deletes all of the restored virtual machines and the NSX-T Manager, and then uses the NSX-T Policy API to remove the Tier-1 Gateway and testing Segment, resetting the infrastructure back to its original configuration.
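A condensed sketch of what the two cleanup scripts do is below (the VM names are placeholders, and the $nsxManager and $headers variables from the earlier sketch are reused; 04_NSXT_RESTORE_CLEANUP.ps1 and 05_NSXT_DEPLOY_CLEANUP.ps1 in the repository are the working versions):

# Assumes an existing Connect-VIServer session to vCenter Server B
# Power off and permanently delete the restored VMs and the test NSX-T Manager
Get-VM -Name "*_restored", "nsxtmgr-restore" | ForEach-Object {
    if ($_.PowerState -eq "PoweredOn") { Stop-VM -VM $_ -Confirm:$false }
    Remove-VM -VM $_ -DeletePermanently -Confirm:$false
}

# Remove the testing Segment first (it references the Tier-1), then the Tier-1 Gateway
Invoke-RestMethod -Method Delete -Uri "https://$nsxManager/policy/api/v1/infra/segments/nsxt-restore-segment" `
    -Headers $headers -SkipCertificateCheck
Invoke-RestMethod -Method Delete -Uri "https://$nsxManager/policy/api/v1/infra/tier-1s/restore-t1" `
    -Headers $headers -SkipCertificateCheck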

restores_pic9

There are obviously several areas where the scripting can be improved and even further simplified.  This is a good first start for the admin to meet the requirements and grow their automation skills and further refine the scripting.  In the next post, the admin will put all of these scripts and processes to work and test the full process.  The screenshots of the script code may be tough to read, so the admin has uploaded all of the scripts to this location – https://github.com/vwilmo/NSXT_RESTORE_TESTING 🙂

 

The VMware NSX Platform – Healthcare Series – Part 7: Secure End User Concept

As more and more organizations look to bring edge computing closer to the data center, this can include the desktop systems that end users utilize to access the organization’s systems.  Bringing these systems into the data center can introduce vulnerabilities and exposures that would typically be constrained to the physical desktop device.  When these systems are virtualized, a new security posture is required to protect the critical data and assets in the data center.

When visiting most Healthcare organizations during my pre-sales days, I found that many of them were using some form of Virtual Desktop Infrastructure (VDI) or Remote Desktop Session Host (RDSH) technology to present applications to their clinicians.  Regardless of the overarching technology providing these services, whether Horizon or even Citrix, the VMware NSX platform provides several business values for securing Healthcare customers running these platforms.

Revisiting the nine NSX use cases we previously identified:

NSX_EUC_pic6.png

The use case of Secure End User with NSX can be further broken down into six unique use cases.  A Healthcare organization does not have to use all six of them, but can start with one or more as their needs dictate.  Let’s break down each use case and explain when each can meet a business need.

NSX_EUC_pic2

In the majority of Secure End User use cases, we can apply the concept of micro-segmentation to provide granular security for a Healthcare organization by protecting East-West traffic between VDI desktops, implementing Identity-based Firewall, separating desktop pools, and even providing 3rd-party integrations such as agent-less Anti-Virus or Anti-Malware into the NSX platform.  Protecting the desktop or RDS hosts is straightforward, but we can apply the same security concepts to protect the infrastructure that manages the VDI or RDSH environments as well.

NSX_EUC_pic1

Micro-segmentation

From the virtual desktop standpoint, micro-segmentation provides a means by which we can control East-West traffic between the desktop systems.  It also means that if an organization needs separate pools of desktops or even RDSH systems, NSX can provide security within each pool and between pools, separately.  In Healthcare environments, there may be a need for external coders to provide services for the organization.  A new desktop pool, specifically for those external coders, could be created and secured with NSX to allow access only to necessary systems.

Edge Services

NSX is a platform product.  This means that it has capabilities that span more than just security.  NSX can also provide NAT and Load Balancing services for the Edge management components of a VDI and RDSH infrastructure.  This added benefit helps customers reduce the complexity of having multiple interfaces with which to manage their infrastructure servers.  Healthcare systems require high availability and maximum uptime for their patient-facing systems.  The NSX Edge can be put into high availability and provide Load Balancing services to meet this use case, without the additional cost of 3rd-party products.  These features come with even the standard version of NSX.

Network Virtualization

The ability to create logical networking constructs dynamically is a principal use case for NSX.  NSX can faithfully recreate production networks, even with the same IP addressing, and isolate each network from talking with the others.  For Healthcare organizations where application uptime means patient care, the ability to quickly spin up these network reproductions means that copies of production applications can be placed into the isolated copy network, and things like upgrades and security changes can be tested prior to deployment into the production workloads.

Protecting VDI Infrastructure

There’s no doubt that the virtual desktops and RDSH servers are key to a VDI deployment, but the back-end management components that provide the means to ‘spin up’ these desktops and servers can also be protected by NSX.  These systems provide desktop interfaces for clinicians and hospital staff.  If staff are unable to access the applications and systems they need to perform their jobs, it could directly affect patient care for the organization.  The back-end systems which facilitate these desktops are just as critical as the desktops themselves.

Protecting Desktop Pools

NSX provides 3rd-party partners with the ability to plug into the NSX framework using the NetX or EPSec APIs.  These APIs give partners the ability to integrate products such as Next-Gen Firewalls, Intrusion Detection and Prevention solutions, as well as Anti-Virus and Anti-Malware products.  By integrating with NSX, these products can remove the need for traditional in-guest agent approaches.  Doing so can greatly reduce the overall performance impact and resource requirements on each of the ESXi hosts these services reside on.

User-based Access Control

Regardless of whether a Healthcare organization uses one or all of the use cases in their environment, each use case provides a unique value and a layered approach to securing virtualized desktops or remote session hosts.  With the proximity these systems now have to the internal data center systems, their protection is very important to ensure a compromise or attack on one of them doesn’t allow further access into the data center and to vital patient information.

Over the next several blog posts, we’ll dive deep into each of these concepts and show how to practically apply these use cases to common scenarios that a Healthcare organization may run into.

Deploying F5 Virtual Edition for vSphere

During the rebuild of my home lab, I was bound and determined to do things as close to a production deployment as possible. This includes the introduction of a load balancer into my lab. I will preface this post with ‘I have no clue how to operate a load balancer at all’. That has never stopped me from trying to accomplish something and it certainly won’t now. There were some trials and tribulations when attempting to set this up so I wanted to talk about what I experienced during my deployment.

I’m going to be using the F5 BIG-IP LTM Virtual Edition trial load balancer. I’m going to start off by using it to load balance my Platform Services Controllers (PSCs) for my vCenter deployment at my primary site. VMware was gracious enough to include how to set up high availability of the PSCs in this document. However, the part that’s lacking is how to properly deploy the F5 and get it to the point where you can actually use it for load balancing the PSCs.  I couldn’t find a definitive step-by-step source for deploying the F5, so I thought I’d just do it myself.

Information:

  • vwilmo.local – 192.168.0.111 – Deploy to Host
  • vwilmo.local – 192.168.1.4 – Primary Node
  • vwilmo.local – 192.168.1.5 – Secondary Node
  • vwilmo.local – 192.168.1.6 – Virtual IP address of the HA pair

All entries have forward and reverse DNS entries.

Download the BIG-IP LTM VE 11.3 from here. You’ll need to create an account and log in to download the trial, then generate license keys. You should be able to generate 4 keys of 90 days each for the trial.

The file you’re looking to download is – BIGIP-11.3.0.39.0-scsi.ova

Once downloaded you simply need to run through the deployment of the OVA.

  • Open the vSphere Client and connect to one of your hosts. Since we do not have vCenter set up yet (we’re configuring the HA PSCs prior to installing vCenter), you’re just going to have to pick one host to deploy this on
  • Select File > Deploy OVF Template
  • Browse for the BIGIP-11.3.0.39.0-scsi.ova file you downloaded

f5_deploy_pic1

  • Verify that the details are what they say they are. You may notice an invalid publisher certificate. This is OK

f5_deploy_pic2

  •  Accept the EULA

f5_deploy_pic3

  • Name the appliance

f5_deploy_pic4

  • Select the datastore to store it on

f5_deploy_pic5

  • Select provisioning type

f5_deploy_pic6

  • Map networks – You have to be careful here. What happened to me was putting the Management and the Internal interfaces of the appliance on the same VM Network and VLAN. This creates an issue when you put in a Self-IP for the appliance during configuration. Select two DIFFERENT networks for both Management and Internal. The others are inconsequential to us right now. This is an internal-only load balancer and I’m not doing an HA configuration of the F5.

f5_deploy_pic7

  •  Confirm and Finish deployment.

f5_deploy_pic8
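If you’d rather script the OVA deployment than click through the wizard, PowerCLI can do the same job. This is only a rough sketch (the VM name, OVA path, and datastore are placeholders, and the OVF network mapping keys vary by OVA, so inspect $ovfConfig before running Import-VApp):

# Deploy the BIG-IP OVA directly to a host with PowerCLI (placeholders throughout)
Connect-VIServer -Server 192.168.0.111            # the 'Deploy to Host' from the table above

$ovaPath   = "C:\Downloads\BIGIP-11.3.0.39.0-scsi.ova"
$vmHost    = Get-VMHost
$datastore = Get-Datastore -Name "datastore1"     # placeholder

# Review the OVA's configurable options (EULA acceptance, network mappings, etc.)
$ovfConfig = Get-OvfConfiguration -Ovf $ovaPath
$ovfConfig.ToHashTable()                          # set the Management/Internal network mappings based on these keys

Import-VApp -Source $ovaPath -OvfConfiguration $ovfConfig -Name "f5-bigip-01" `
    -VMHost $vmHost -Datastore $datastore -DiskStorageFormat Thin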

Now that the F5 is deployed, we’ll go ahead and boot it up and run through the initial configuration for getting into the management interface.

The default login for the appliance is ‘root’ and ‘default’

f5_deploy_pic9

Once you’re logged in, type ‘config’ to go through setting up the Management interface.

f5_deploy_pic10

You can either input your own IP address, or let the appliance pull from DHCP.

f5_deploy_pic11

We can now browse back to the IP address of the appliance via HTTPS. The username and password to log in here are ‘admin’ and ‘admin’.

f5_deploy_pic12

We can now go ahead and start the initial configuration of the appliance from the GUI. The first thing we need to do is Activate a license

f5_deploy_pic13

Copy and Paste one of the license keys you received from F5 into the ‘Base Key’ field. Check the interface to make sure that it has access to the Internet to activate the key.

At this point I don’t mess around with configuring any other sections using the wizards. I go through the regular interfaces to finish it up. The next thing we need to do to make sure that this thing will actually load balance is to configure the VLANs, Self IPs, and network interfaces. You do this starting in the ‘Network’ tab.

Select VLAN > VLAN List > and click on the ‘+’

f5_deploy_pic14

Fill in the information for the ‘Internal’ network you selected, which should be different from the ‘Management’ network. This was the only way I could get this to work properly. Select the 1.1 interface, as that corresponds to the ‘Internal’ NIC of the VM.

f5_deploy_pic15

Select Self IP > and click on the ‘+’.  This is the part where I screwed up (coupled with having the 1.1 Internal interface on the same network as the PSCs): I never did this step.

f5_deploy_pic16

Make sure to select the VLAN name you created when you configured the previous setting. This is the IP that the load balancer will use to direct traffic to this network.

Now that those are configured, I finish up the configuration by adding in DNS and NTP settings to ensure proper time and resolution states for the appliance.

Select System > Configuration > Device > NTP/DNS

f5_deploy_pic17

f5_deploy_pic18

That’s the basic configuration necessary to use the F5 for load balancing. In the next post I’ll go through how to set up the PSCs in HA and use the F5 to facilitate load balancing for the deployment.

When I first attempted to load balance the PSCs with the F5, I had the following perfect storm:

  • Both Management and Internal NICs on the same VLAN
  • No Self-IP for VLAN2 because you can’t add another IP on the same VLAN if it matches the VLAN the management interface is on.

As soon as I made those changes, I was able to get the PSCs to use the F5 properly.  Hopefully my pain is your gain.  Good luck.

NCDA Study Part 2.8 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section covers Qtrees.  Qtrees are another logical subdivision of a volume used to hold data.  Think of them as directories inside a volume with files in them.  They are integral components of products like SnapMirror and necessary components of SnapVault.  They can have quotas and different backup and security styles than their volume counterparts.  You can have 4994 qtrees per volume on a NetApp system.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities exist to do in both:

  • Create Qtree
  • Copy Qtree
  • Rename Qtree
  • Move Qtree
  • Change Qtree security
  • Delete Qtree

Create Qtree – CLI

Creating a qtree is a very simple process.  You first need a volume to put the qtree in.  From ONTAP1, we’ll use the test_vol Volume and add a qtree named test_qtree.

qtree create /vol/test_vol/test_qtree
qtree status

cli_qtree_create_step1

Create Qtree – System Manager

Creating a qtree from System Manager is easy as well but does require a few more steps.  Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.

osm_qtree_create_step1

Click on Create, name the qtree appropriately, and then click Browse to locate the test_vol volume.  Click OK and then Create again.  This creates a default-style qtree in test_vol.

osm_qtree_create_step2

Copy Qtree – CLI

Copying a qtree is also pretty easy but is done in a different way.  Qtrees can’t natively be copied using a copy command; you can only copy them using either Qtree SnapMirror or NDMPCOPY.  Qtree SnapMirror is covered in a later topic, so we’ll just use NDMPCOPY.  NDMPCOPY will require downtime while the copy is performed.  Qtree SnapMirror can be used to sync the qtree and then cut over to it, and it is the much more elegant solution for qtrees with a large amount of data in them.

 ndmpcopy /vol/test_vol/test_qtree /vol/test_vol2/test_qtree
qtree status

cli_qtree_copy_step1

Copy Qtree – System Manager

You cannot copy a qtree from within System Manager

Rename Qtree – CLI

Renaming a qtree is not that hard either.  However, you can only do it from within the advanced privilege mode in Data ONTAP (priv set advanced).

priv set advanced
qtree rename /vol/test_vol/test_qtree /vol/test_vol/test_qtree_new
qtree status

cli_qtree_rename_step1

Rename Qtree – System Manager

You cannot rename a qtree from within System Manager

Move Qtree – CLI

Moving a qtree is no different than copying a qtree; the same rules apply.  You can only use Qtree SnapMirror or NDMPCOPY.  After you’re done, you simply delete the old qtree location.  This obviously means that you’ll need to ensure that you have enough room to keep both copies on the filer until you can delete the old one.  There’s no reason to show how to copy it; I already did that above.  Below we’ll see how to delete a qtree, which would complete a ‘move’ process.

Move Qtree – System Manager

You cannot move a qtree from within System Manager

Change Qtree Security – CLI

By default, qtrees take on the security style of the volume they’re in.  The default volume security style is determined by the wafl.default_security_style option.  By default this is UNIX, which means all volumes created, and their subsequent qtrees, will use UNIX security.  This can be overridden to provide NTFS, UNIX, or MIXED security on a qtree.  Below we’ll change the qtree security style to NTFS.

qtree status
qtree security /vol/test_vol/test_qtree ntfs
qtree status

cli_qtree_security_step1

Change Qtree Security – System Manager

Changing the security of a qtree is one of the only things you can actually do to a qtree from System Manager once it’s created.  Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Since we changed it to NTFS before, we’ll change it to Mixed now.

osm_qtree_security_step1Delete Qtree – CLI

Much the same as with renaming a qtree, you can only delete a qtree from the CLI in the advanced privilege mode.

priv set advanced
qtree delete /vol/test_vol/test_qtree

cli_qtree_delete_step1

Delete Qtree – System Manager

Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Select the test_qtree and click on the Delete button.  Check the OK to delete the Qtree(s) option and click Delete.

osm_qtree_delete_step1