The VMware NSX Platform – Healthcare Series – Part 7: Secure End User Concept

As more and more organizations look to bring edge computing closer to the data center, this can include the desktop systems that end users utilize to access the organization's systems. Bringing these systems into the data center can introduce vulnerabilities and exposures that would otherwise be constrained to the physical desktop device. When these systems are virtualized, a new security posture is required to protect the critical data and assets in the data center.

When visiting Healthcare organizations during my pre-sales days, I found that many of them were using some form of Virtual Desktop Infrastructure (VDI) or Remote Desktop Session Host (RDSH) technology to present applications to their clinicians. Regardless of the overarching technology providing these services, Horizon or even Citrix, the VMware NSX platform offers several business benefits for securing Healthcare customers running these platforms.

Revisiting the nine NSX use cases we previously identified:

NSX_EUC_pic6.png

The use case of Secure End User with NSX can be further broken down into six unique use cases.  A Healthcare organization does not have to use all six of them, but can start with one or more as their needs dictate.  Let’s break down each use case and explain when each can meet a business need.

NSX_EUC_pic2

In the majority of Secure End User use cases, we can apply the concept of micro-segmentation to provide granular security for a Healthcare organization: protecting East-West traffic between VDI desktops, implementing Identity-based Firewall, separating desktop pools, and even integrating 3rd party services such as agent-less Anti-Virus or Anti-Malware into the NSX platform. Protecting the desktop or RDSH hosts is straightforward, and we can apply the same security concepts to protect the infrastructure that manages the VDI or RDSH environments.

NSX_EUC_pic1

Micro-segmentation

From the virtual desktop standpoint, micro-segmentation provides a means by which we can control East-West traffic between the desktop systems. It also means that if an organization needs separate pools of desktops or even RDSH systems, NSX can provide security within each pool and between pools, separately. In Healthcare environments, there may be a need for external coders to provide services to the organization. A new desktop pool, specifically for those external coders, could be created and secured with NSX to allow access only to the necessary systems.
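
If you want to script this rather than click through the UI, the sketch below adds a Distributed Firewall rule that blocks an external-coder pool from reaching a clinical desktop pool, using the NSX-v REST API from Python. The manager address, credentials, section ID, and security group IDs are lab assumptions, and the rule XML is a minimal shape; verify both against your NSX Manager's API guide before relying on it.

# Minimal sketch (assumptions noted above): add a DFW rule denying traffic from
# an "external coders" desktop pool to a "clinical" desktop pool via NSX-v REST.
import requests

NSX_MGR = "https://nsxmgr.vwilmo.local"    # hypothetical NSX Manager address
SECTION_ID = "1007"                        # DFW layer-3 section ID (assumed)
CODER_SG = "securitygroup-10"              # security group for the coder pool (assumed)
CLINICAL_SG = "securitygroup-11"           # security group for the clinical pool (assumed)

session = requests.Session()
session.auth = ("admin", "VMware1!")       # lab credentials only
session.verify = False                     # lab only; use trusted certs in production

# The DFW API uses optimistic locking, so read the section first to get its ETag.
section_url = f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config/layer3sections/{SECTION_ID}"
etag = session.get(section_url).headers["ETag"]

rule_xml = f"""
<rule disabled="false" logged="true">
  <name>Deny coder pool to clinical pool</name>
  <action>deny</action>
  <sources excluded="false">
    <source><type>SecurityGroup</type><value>{CODER_SG}</value></source>
  </sources>
  <destinations excluded="false">
    <destination><type>SecurityGroup</type><value>{CLINICAL_SG}</value></destination>
  </destinations>
</rule>"""

resp = session.post(f"{section_url}/rules", data=rule_xml,
                    headers={"If-Match": etag, "Content-Type": "application/xml"})
print(resp.status_code, resp.text)

The same pattern, with an allow action and a narrower destination group, is how the coder pool would be limited to only the systems it needs.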

Edge Services

NSX is a platform product. This means it has capabilities that span more than just security. NSX can also provide NAT and Load Balancing services for the Edge management components of a VDI or RDSH infrastructure. This added benefit helps customers reduce the complexity of having multiple interfaces through which to manage their infrastructure servers. Healthcare systems require high availability and maximum uptime for their patient-facing systems. The NSX Edge can be deployed in high availability and provide Load Balancing services to meet this use case, without the additional cost of 3rd party products. These features are included even in the standard version of NSX.
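
As a quick sanity check of this use case, the hedged sketch below reads an NSX Edge's load balancer and high availability configuration over the NSX-v REST API. The edge ID and manager details are lab assumptions.

# Minimal sketch (lab assumptions): confirm load balancing and HA are enabled on
# the NSX Edge that fronts the VDI/RDSH management components.
import requests

NSX_MGR = "https://nsxmgr.vwilmo.local"    # hypothetical NSX Manager address
EDGE_ID = "edge-1"                         # Edge Services Gateway ID (assumed)

session = requests.Session()
session.auth = ("admin", "VMware1!")       # lab credentials only
session.verify = False                     # lab only

lb = session.get(f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/loadbalancer/config")
ha = session.get(f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/highavailability/config")

# Both calls return XML; a substring check is enough for a quick lab verification.
print("Load balancer enabled:", "<enabled>true</enabled>" in lb.text)
print("Edge HA enabled:", "<enabled>true</enabled>" in ha.text)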

Network Virtualization

The ability to create logical networking constructs dynamically is a principal use case of NSX. NSX can faithfully recreate production networks, even with the same IP addressing, while keeping each network isolated from the others. For Healthcare organizations where application uptime means patient care, the ability to quickly spin up these network reproductions means that copies of production applications can be placed into the isolated copy network, and things like upgrades and security changes can be tested prior to deployment into the production workloads.
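
To make the idea concrete, here is a hedged sketch of creating one of those isolated copy networks as an NSX logical switch through the NSX-v REST API. The transport zone ID, switch name, and manager details are assumptions; the copies of the production workloads would then be attached to this switch with no uplink back to production.

# Minimal sketch (lab assumptions): create an isolated logical switch to hold a
# copy of a production application network for upgrade/security testing.
import requests

NSX_MGR = "https://nsxmgr.vwilmo.local"    # hypothetical NSX Manager address
TRANSPORT_ZONE = "vdnscope-1"              # transport zone ID (assumed)

switch_xml = """
<virtualWireCreateSpec>
  <name>prod-app-copy-isolated</name>
  <description>Isolated copy of a production application network for testing</description>
  <tenantId>vdi-lab</tenantId>
</virtualWireCreateSpec>"""

resp = requests.post(f"{NSX_MGR}/api/2.0/vdn/scopes/{TRANSPORT_ZONE}/virtualwires",
                     data=switch_xml,
                     headers={"Content-Type": "application/xml"},
                     auth=("admin", "VMware1!"),
                     verify=False)          # lab only
# On success the body contains the new logical switch ID (e.g. virtualwire-42).
print(resp.status_code, resp.text)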

Protecting VDI Infrastructure

There’s no doubt that the virtual desktops and RDSH servers are key to a VDI deployment, but the back-end management components that provide the means to ‘spin up’ these desktops and servers can also be protected by NSX. These systems provide the desktop interfaces for clinicians and hospital staff. If those staff are unable to access the applications and systems they need to perform their jobs, it can directly affect patient care for the organization. The back-end systems that facilitate these desktops are just as critical as the desktops themselves.
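
One hedged way to treat those management components as a single protected unit is to put them in an NSX security group, which the Distributed Firewall rules from the micro-segmentation example can then reference. The grouping endpoint and names below are assumptions against the NSX-v API.

# Minimal sketch (lab assumptions): create a security group for the VDI/RDSH
# management servers so DFW rules can protect them as a unit.
import requests

NSX_MGR = "https://nsxmgr.vwilmo.local"    # hypothetical NSX Manager address

group_xml = """
<securitygroup>
  <name>SG-VDI-Management</name>
  <description>Horizon/RDSH management components (brokers, composer, databases)</description>
</securitygroup>"""

resp = requests.post(f"{NSX_MGR}/api/2.0/services/securitygroup/bulk/globalroot-0",
                     data=group_xml,
                     headers={"Content-Type": "application/xml"},
                     auth=("admin", "VMware1!"),
                     verify=False)          # lab only
# The response body is the new group ID; use it as a source or destination in DFW rules.
print(resp.status_code, resp.text)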

Protecting Desktop Pools

NSX provides 3rd party partners with the ability to plug into the NSX framework using the NetX or EPSec APIs. These APIs give partners the ability to integrate products such as Next-Gen Firewalls, Intrusion Detection and Prevention solutions, as well as Anti-Virus and Anti-Malware products. By integrating with NSX, these products can remove the need for traditional in-guest agent approaches. Doing so can greatly reduce the performance overhead and resource requirements on each of the ESXi hosts where these services reside.

User-based Access Control

With NSX Identity Firewall, Distributed Firewall rules can be tied to Active Directory users and groups, so a clinician's access to clinical applications follows the person logged into the desktop or RDSH session rather than the IP address of the machine they happen to be using.

Regardless of whether a Healthcare organization uses one or all of these use cases in their environment, each provides unique value and a layered approach to securing virtualized desktops or remote session hosts. With the proximity these systems now have to internal data center systems, protecting them is critical to ensuring that a compromise or attack on one of them doesn't allow further access into the data center and to vital patient information.

Over the next several blog posts, we’ll dive deep into each of these concepts and show how to practically apply these use cases to common scenarios that a Healthcare organization may run into.


Deploying F5 Virtual Edition for vSphere

During the rebuild of my home lab, I was bound and determined to do things as close to a production deployment as possible. This includes the introduction of a load balancer into my lab. I will preface this post with ‘I have no clue how to operate a load balancer at all’. That has never stopped me from trying to accomplish something and it certainly won’t now. There were some trials and tribulations when attempting to set this up so I wanted to talk about what I experienced during my deployment.

I’m going to be using the F5 BIG-IP LTM Virtual Edition trial load balancer. I’m going to start off by using it to load balance the Platform Services Controllers (PSCs) for my vCenter deployment at my primary site. VMware was gracious enough to include how to set up high availability of the PSCs in this document. However, the part that’s lacking is how to properly deploy the F5 and get it to the point where you can actually use it for load balancing the PSCs. I couldn’t find a definitive step-by-step source for deploying the F5, so I thought I’d just write one myself.

Information:

  • vwilmo.local – 192.168.0.111 – Deploy to Host
  • vwilmo.local – 192.168.1.4 – Primary Node
  • vwilmo.local – 192.168.1.5 – Secondary Node
  • vwilmo.local – 192.168.1.6 – Virtual IP address of the HA pair

All entries have forward and reverse DNS entries.

Download the BIG-IP LTM VE 11.3 trial from here. You’ll need to create an account and log in to download it. Generate license keys; you should be able to generate four 90-day keys for the trial.

The file you’re looking to download is – BIGIP-11.3.0.39.0-scsi.ova

Once it’s downloaded, you simply need to run through the deployment of the OVA.

  • Open the vSphere Client and connect to one of your hosts. Since we don’t have vCenter set up yet (we’re configuring the HA PSCs prior to installing vCenter), you’ll just have to pick one host to deploy this on.
  • Select File > Deploy OVF Template
  • Browse for the BIGIP-11.3.0.39.0-scsi.ova file you downloaded

f5_deploy_pic1

  • Verify that the details are what they say they are. You may notice an invalid publisher certificate. This is OK

f5_deploy_pic2

  •  Accept the EULA

f5_deploy_pic3

  • Name the appliance

f5_deploy_pic4

  • Select the datastore to store it on

f5_deploy_pic5

  • Select provisioning type

f5_deploy_pic6

  • Map networks – You have to be careful here. What happened to me was that I put the Management and Internal interfaces of the appliance on the same VM Network and VLAN, which creates an issue when you assign a Self IP to the appliance during configuration. Select two DIFFERENT networks for the Management and Internal interfaces. The others are inconsequential to us right now; this is an internal-only load balancer and I’m not doing an HA configuration of the F5.

f5_deploy_pic7

  •  Confirm and Finish deployment.

f5_deploy_pic8

Now that the F5 is deployed, we’ll go ahead and boot it up and run through the initial configuration for getting into the management interface.

The default login for the appliance is ‘root’ and ‘default’

f5_deploy_pic9

Once you’re logged in, type ‘config’ to walk through setting up the Management interface.

f5_deploy_pic10

You can either input your own IP address, or let the appliance pull from DHCP.

f5_deploy_pic11

We can now browse to the IP address of the appliance via HTTPS. The username and password to log in here are ‘admin’ and ‘admin’.

f5_deploy_pic12

We can now go ahead and start the initial configuration of the appliance from the GUI. The first thing we need to do is activate a license.

f5_deploy_pic13

Copy and paste one of the license keys you received from F5 into the ‘Base Key’ field. Make sure the interface has access to the Internet so the key can be activated.

At this point I don’t mess around with configuring any other sections using the wizards; I go through the regular interfaces to finish it up. The next thing we need to do, to make sure this thing will actually load balance, is configure the VLANs, Self IPs, and network interfaces. You do this starting in the ‘Network’ tab.

Select VLAN > VLAN List > and click on the ‘+’

f5_deploy_pic14

Fill in the information for the ‘Internal’ network you selected earlier (the one separate from the ‘Management’ network). These should be on different networks; this was the only way I could get this to work properly. Select the 1.1 interface, as that corresponds to the ‘Internal’ NIC of the VM.

f5_deploy_pic15

Select Self IP > and click on the ‘+’. This step, coupled with having the 1.1 Internal interface on the same network as the PSCs, is where I screwed up: I never did it.

f5_deploy_pic16

Make sure to select the VLAN name you created when you configured the previous setting. This is the IP that the load balancer will use to direct traffic to this network.
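
If you’d rather script these two steps than click through them, the sketch below creates the same VLAN and Self IP using the iControl REST API from Python. Note that iControl REST only appeared in later 11.x BIG-IP releases, so it may not exist on this 11.3 build; the management address, interface number, and Self IP address are lab assumptions.

# Minimal sketch (lab assumptions): create the internal VLAN on interface 1.1 and
# a Self IP on the PSC network via iControl REST (available on newer BIG-IP builds).
import requests

BIGIP = "https://192.168.0.115"            # hypothetical BIG-IP management address
session = requests.Session()
session.auth = ("admin", "admin")          # default credentials from the initial setup
session.verify = False                     # self-signed certificate on a fresh appliance

# VLAN bound to interface 1.1 (the "Internal" NIC of the VM).
session.post(f"{BIGIP}/mgmt/tm/net/vlan", json={
    "name": "internal",
    "interfaces": [{"name": "1.1", "untagged": True}],
})

# Self IP on that VLAN; it must sit on the PSC network, not the management network.
session.post(f"{BIGIP}/mgmt/tm/net/self", json={
    "name": "self-internal",
    "address": "192.168.1.10/24",          # assumed free address on the PSC network
    "vlan": "internal",
    "allowService": "default",
})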

Now that those are configured, I finish up by adding DNS and NTP settings to ensure proper time and name resolution for the appliance.

Select System > Configuration > Device > NTP/DNS

f5_deploy_pic17

f5_deploy_pic18
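
The same hedged iControl REST approach covers the DNS and NTP settings; the server address and timezone below are lab placeholders.

# Minimal sketch (lab assumptions): set DNS and NTP on the appliance via iControl REST.
import requests

BIGIP = "https://192.168.0.115"            # hypothetical BIG-IP management address
session = requests.Session()
session.auth = ("admin", "admin")
session.verify = False                     # lab only

session.patch(f"{BIGIP}/mgmt/tm/sys/dns", json={"nameServers": ["192.168.0.10"]})
session.patch(f"{BIGIP}/mgmt/tm/sys/ntp", json={"servers": ["192.168.0.10"], "timezone": "America/Chicago"})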

That’s the basic configuration necessary to use the F5 for load balancing. In the next post I’ll go through how to set up the PSCs in HA and use the F5 to facilitate load balancing for the deployment.

When I first attempted to load balance the PSCs with the F5, I had the following perfect storm:

  • Both Management and Internal NICs on the same VLAN
  • No Self-IP for VLAN2 because you can’t add another IP on the same VLAN if it matches the VLAN the management interface is on.

As soon as I corrected those two things, I was able to get the PSCs to use the F5 properly. Hopefully my pain is your gain. Good luck.

NCDA Study Part 2.8 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section covers qtrees. Qtrees are another logical segregation of a volume, used to hold further data within it. Think of them as directories inside a volume with files in them. They are integral components of products like SnapMirror and necessary components of SnapVault. They can have quotas and different backup and security styles from their parent volume. You can have 4994 qtrees per volume.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities exist to do in both:

  • Create Qtree
  • Copy Qtree
  • Rename Qtree
  • Move Qtree
  • Change Qtree security
  • Delete Qtree

Create Qtree – CLI

Creating a qtree is a very simple process.  You first need a volume to put the qtree in.  From ONTAP1, we’ll use the test_vol Volume and add a qtree named test_qtree.

qtree create /vol/test_vol/test_qtree
qtree status

cli_qtree_create_step1

Create Qtree – System Manager

Creating a qtree from System Manager is easy as well but does require a few more steps. Log into System Manager and the Data ONTAP instance. Expand the device, click on Storage, and click on Qtrees.

osm_qtree_create_step1

Click on Create, name the qtree appropriately, and then click Browse to locate the test_vol volume. Click OK and then Create again. This creates a default type qtree in the test_vol volume.

osm_qtree_create_step2

Copy Qtree – CLI

Copying a qtree is also pretty easy, but it is done in different ways. Qtrees can’t natively be copied using a copy command; you can only copy them using either Qtree SnapMirror or NDMPCOPY. Qtree SnapMirror is covered in a later topic, so we’ll just use NDMPCOPY here. NDMPCOPY requires downtime while the copy is performed. Qtree SnapMirror can be used to sync the qtree and then cut over to it, which makes it the more elegant solution for qtrees with a large amount of data in them.

 ndmpcopy /vol/test_vol/test_qtree /vol/test_vol2/test_qtree
qtree status

cli_qtree_copy_step1

Copy Qtree – System Manager

You cannot copy a qtree from within System Manager

Rename Qtree – CLI

Renaming a qtree is not that hard either. However, you can only do it from within the advanced shell in Data ONTAP (priv set advanced).

priv set advanced
qtree rename /vol/test_vol/test_qtree /vol/test_vol/test_qtree_new
qtree status

cli_qtree_rename_step1

Rename Qtree – System Manager

You cannot rename a qtree from within System Manager

Move Qtree – CLI

Moving a qtree is no different than copying a qtree. The same rules apply: you can only use Qtree SnapMirror or NDMPCOPY. After you’re done, you simply delete the old qtree location. This obviously means you’ll need enough room to keep both copies on the filer until you can delete the old one. There’s no reason to show how to copy it; I already did that above. Below we’ll see how to delete a qtree, which completes the ‘move’ process.

Move Qtree – System Manager

You cannot move a qtree from within System Manager

Change Qtree Security – CLI

By default, qtrees take on the security style of the volume they’re in. The default volume security style is determined by the wafl.default_security_style option. By default this is UNIX, which means all volumes created, and their subsequent qtrees, will have UNIX security. This can be overridden to give a qtree NTFS, UNIX, or MIXED security. Below we’ll change the qtree security style to NTFS.

qtree status
qtree security /vol/test_vol/test_qtree ntfs
qtree status

cli_qtree_security_step1

Change Qtree Security – System Manager

Changing the security style is one of the only things you can actually do to a qtree from System Manager once it’s created. Log into System Manager and the Data ONTAP instance. Expand the device, click on Storage, and click on Qtrees. Since we changed it to NTFS before, we’ll change it to Mixed now.

osm_qtree_security_step1

Delete Qtree – CLI

Much the same as with renaming a qtree, you can only delete a qtree from the CLI in the advanced shell.

priv set advanced
qtree delete /vol/test_vol/test_qtree

cli_qtree_delete_step1

Delete Qtree – System Manager

Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Select the test_qtree and click on the Delete button.  Check the OK to delete the Qtree(s) option and click Delete.

osm_qtree_delete_step1