The VMware NSX Platform – Healthcare Series – Part 6: DMZ Anywhere Practical

Continuing our discussion of Healthcare and the DMZ use case, we’re going to put these concepts into practice.  With Healthcare systems, patients want quick access to their information, and not necessarily from within the four walls of a Healthcare organization.  This means the information needs to be served securely to Internet-facing devices.  Below is a typical layout, with an Internet-facing EMR Patient Portal delivered using traditional methods.

Traditional Model

dmz_ms_pic1

For this post, we’re going to use a physical Perimeter Firewall, and an NSX Edge Services Gateway (ESG) as the Internal Firewall to separate the DMZ systems from the Internal data center systems.

In our concept post, we talked about how NSX can augment an existing DMZ approach and simplify restricting communications between the systems that reside there.  For Healthcare providers, the EMR Internet-facing Web Servers should not be allowed to communicate with one another; if one Web Server is compromised, lateral movement must be restricted.  Traditionally, restricting intra-server traffic between the EMR Web Servers would require blocking the communication at Layer 2, using MAC addresses.  With NSX, we can instantiate a firewall at the virtual machine layer, regardless of the servers being on the same Layer 2 network, and stop the Web Servers from talking to each other without needing to know their MAC addresses or hairpinning the intra-server traffic through an external firewall.  This is the same East-West Micro-Segmentation concept covered in previous posts, applied here to DMZ workloads.

Let’s lay out the requirements from the customer for the first use case.

VMware NSX – DMZ Augment Model

dmz_ms_pic2

Use case – Augment the existing DMZ to remove communications between DMZ systems.

  • Block all EMR Web Servers from talking to each other
  • Maintain the existing infrastructure as much as possible and without major changes

Technology used

Windows Clients:

  • Windows 10 – Management Desktop – Jumpbox-01a (192.168.0.99)

VMware Products

  • vSphere
  • vCenter
  • NSX
  • Log Insight

Application in question

Open Source Healthcare Application:

  • OpenMRS – Open Source EMR system
    • Apache/PHP Web Server
    • MySQL Database Server

Let’s start things off like we normally do, with the layout of our methodology for writing our rule sets.

dmz_ms_table1

 

When we put in NSX, we can write one rule and get the following result.

dmz_ms_pic3

The rule is very simple to write: we add the DMZ systems to a Security Group, set that Security Group as both the Source and the Destination, and apply a Block action.

dmz_ms_pic4
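For anyone who prefers to drive this from automation rather than the UI, the same rule can also be pushed through the NSX Manager REST API. The sketch below is a rough illustration only, assuming the NSX-v 6.x DFW API: the Manager address, credentials, section ID, and securitygroup object ID are placeholders you would look up in your own environment, and the XML schema should be checked against the API guide for your version.

```python
import requests

NSX_MGR = "https://nsxmgr-01a.corp.local"  # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")               # placeholder lab credentials
SECTION_ID = "1007"                        # placeholder DFW section ID
DMZ_SG = "securitygroup-10"                # placeholder object ID for DMZ-SG-ALL

# Fetch the target section first; the DFW API expects the current ETag for optimistic locking.
section = requests.get(
    f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config/layer3sections/{SECTION_ID}",
    auth=AUTH, verify=False)
etag = section.headers["ETag"]

# One deny rule with DMZ-SG-ALL as both source and destination blocks all intra-DMZ traffic.
rule_xml = f"""
<rule disabled="false" logged="true">
  <name>DMZ-Block-Intra</name>
  <action>deny</action>
  <sources excluded="false">
    <source><type>SecurityGroup</type><value>{DMZ_SG}</value></source>
  </sources>
  <destinations excluded="false">
    <destination><type>SecurityGroup</type><value>{DMZ_SG}</value></destination>
  </destinations>
</rule>
"""

resp = requests.post(
    f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config/layer3sections/{SECTION_ID}/rules",
    auth=AUTH, verify=False, data=rule_xml,
    headers={"Content-Type": "application/xml", "If-Match": etag})
print(resp.status_code)
```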

Once this rule is in place, any virtual machine we place into the DMZ-SG-ALL Security Group will be blocked from talking to the others.  Let’s verify this is working.

dmz_ms_pic24

As we can see, the Web Servers are no longer allowed to talk to each other.  We have produced the same result with less complexity, better scalability, and greater operational ease, without changing the existing infrastructure at all.

For the next use case, collapsing the traditional hardware DMZ back into the data center, the goal is to remove the need for the NSX Edge Services Gateway to provide an Internal Firewall and use the NSX Distributed Firewall (DFW) to handle access between the DMZ and the internal data center systems.

VMware NSX – DMZ Anywhere (Collapsed) Model

dmz_ms_pic6

You may notice that the ESG is still in place.  That’s because the Internal data center is running on VXLAN and there still needs to be an off-ramp router to get to the physical network.  However, we have disabled the ESG’s firewall to demonstrate removing the Internal Firewall separation and allowing the DFW to handle the restrictions.

Let’s lay out the use case and the requirements from the customer.

Use case – Collapse the existing DMZ back into the data center while still maintaining the same security posture as when it was isolated.

  • Restrict External Patients to connect only to the EMR DMZ Web Servers
  • Restrict Internal Clinicians to connect only to the internal EMR Web Server
  • Allow all EMR Web Servers to connect to the EMR DB Server
  • Block all EMR Web Servers from talking to each other
  • Maintain DMZ isolation of the EMR System from the HR System

Technology used

Windows Clients:

  • Windows 10 – Clinician Desktop – Client-01a (192.168.0.36)
  • Windows 10 – HR Desktop – Client-02a (192.168.0.33)
  • iPad – External Patients – (External IP)

VMware Products

  • vSphere
  • vCenter
  • NSX
  • Log Insight

Application in question

Open Source Healthcare Application:

  • OpenMRS – Open Source EMR system
    • Apache/PHP Web Server
    • MySQL Database Server
  • IceHRM – HRHIS (Human Resource for Health Information System)
    • Apache/PHP Web Server
    • MySQL Database Server

Let’s start things off like we normally do, with the layout of our methodology for writing our rule sets.  I’m not going to go through how to get these flows.  Please reference one of my previous posts around using Log Insight, Application Rule Manager, and vRealize Network Insight to gather this information.

 

dmz_ms_table2

A few things of note.  We created an RFC 1918 IP Set in these groupings so that only external IP addresses can reach the EMR DMZ Web Servers; we don’t want our internal Clinicians connecting to them.  By blocking the entire RFC 1918 range, we should never see a connection from an internal system to the DMZ systems.  To do this, we create an IP Set containing all three RFC 1918 ranges, create a Security Group with that IP Set in its Inclusion Criteria, and then write a rule that blocks those ranges above an ANY rule to filter the types of traffic that should hit the DMZ Web Servers.
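To make the intent of that RFC 1918 IP Set concrete, here is a small, purely illustrative Python sketch of the check the block rule is effectively performing: any source address that falls inside one of the three private ranges is dropped before the ANY rule is ever evaluated.

```python
import ipaddress

# The three RFC 1918 private ranges that make up the IP Set.
RFC1918_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(source_ip: str) -> bool:
    """Return True if the source falls inside any RFC 1918 range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in RFC1918_RANGES)

# An internal Clinician desktop is caught by the block rule...
print(is_internal("192.168.0.36"))   # True  -> blocked before the ANY rule
# ...while an Internet patient device falls through to the ANY rule.
print(is_internal("172.221.12.80"))  # False -> allowed to reach the DMZ Web Servers
```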

dmz_ms_table3

 

Let’s put our rules in the appropriate places on the appropriate firewalls and do some testing to verify the traditional method is working properly.

dmz_ms_pic7

NSX Edge Services Gateway Firewall Policy

This rule is in place to allow the EMR DMZ Web Servers to talk to the backend Database only.  We have to use an IP Set here because the DMZ Web Servers are outside the scope of NSX and do not have a firewall applied to them yet.  However, we can control what talks to the EMR-SG-DB Security Group from the physical environment.

dmz_ms_pic8

Physical Firewall Policy

We’re going to forward our DMZ Web Servers through our Physical Firewall to accept traffic on TCP 8080.  With this change we should be able to access our OpenMRS EMR system from the Internet.  Let’s verify.

dmz_ms_pic9

As you can see from the address bar, we’re able to hit one of the DMZ Web Servers from the Internet.  I’m using an iPad to demonstrate that the device doesn’t matter at this point.  We can also verify that our NSX ESG Firewall is being hit by the DMZ Web Servers.  Using Log Insight, we can verify this quickly.

dmz_ms_pic10

We can see that the DMZ Servers are hitting our rule and that the destination being hit is 172.16.20.11, which is the EMR-DB-01a server.

Let’s put our rules for inside the data center into the NSX DFW.

dmz_ms_pic11

This type of configuration represents how we’d have to build our rule sets to accommodate a segregated DMZ environment.  Let’s verify that our EMR DMZ and Internal EMR Web Servers can still hit the EMR DB and that our Clinician Desktop and HR Desktops cannot browse to their respective systems.

Clinician Desktop to Internal EMR

dmz_ms_pic12

HR Desktop to HRHIS

dmz_ms_pic13

We’ve confirmed that all the rules in place are working and the traditional approach still works.  Let’s collapse those two Web Servers back into the data center and show how we can still provide a similar security posture, without the physical hardware isolation.

To do this, we need to move the two EMR DMZ Web Servers back into our data center.  I’m going to create a new VXLAN network for them to live on that mimics their physical VLAN configuration, so we can still keep network isolation.  Keeping the same network doesn’t technically matter, since we can still control the traffic, but most production Healthcare organizations would rather not change the IP addresses of their production systems if they can help it.

dmz_ms_pic14

dmz_ms_pic15

As you can see, the EMR-DMZ-WEB-01a/02a machines are now inside the Compute cluster in my data center.  They’re also on the same Layer 2 network they were on before, under hardware isolation.

We’ve disabled the Firewall on the ESG as well.

dmz_ms_pic16

And here are our modified DFW rule sets, accommodating a collapsed DMZ environment similar to the hardware-isolated configuration.

dmz_ms_pic17

So, here’s what we added and changed:

  • We added our RFC1918 Security Group so that internal systems cannot connect to the DMZ Web Servers.
  • We also created a PERIMETER-IPSET for the Physical Firewall. The ports for the EMR DMZ Web Servers are being NAT’d through the Perimeter Firewall, so communications to the EMR DMZ Web Servers appear to come from an interface on that device.  Since that interface is on an RFC 1918 network, we add it to the RFC1918 Security Group as an excluded host address.
  • We added DMZ Security Tags so that any newly built system can have the DMZ-ST-ALL Security Tag applied, which puts it into the DMZ-SG-ALL Security Group and blocks intra-server communications immediately (a rough API sketch of tagging a VM follows this list).
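As a rough illustration of that last point, a provisioning workflow could attach the tag through the NSX Manager API as soon as a new web server is built. This is a sketch only, assuming the NSX-v security tag API; the Manager address, credentials, tag ID, and VM ID are placeholders to resolve in your own environment.

```python
import requests

NSX_MGR = "https://nsxmgr-01a.corp.local"  # placeholder NSX Manager address
AUTH = ("admin", "VMware1!")               # placeholder lab credentials
TAG_ID = "securitytag-12"                  # placeholder object ID of the DMZ-ST-ALL Security Tag
VM_ID = "vm-215"                           # placeholder managed object ID of the newly built web server

# Attaching the tag puts the VM into DMZ-SG-ALL, so the intra-DMZ block applies immediately.
resp = requests.put(
    f"{NSX_MGR}/api/2.0/services/securitytags/tag/{TAG_ID}/vm/{VM_ID}",
    auth=AUTH, verify=False)
print(resp.status_code)  # 200 indicates the tag was applied
```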

Now that all of our changes in architecture are in place, we can go through and verify that all the requirements are being accounted for.  Let’s revisit the requirements.

Use case – Collapse the existing DMZ back into the data center while still maintaining the same security posture as when it was isolated.

  • Restrict External Patients to connect only to the EMR DMZ Web Servers

dmz_ms_pic18

dmz_ms_pic19

dmz_ms_pic20

We can see that our external device, with an IP of 172.221.12.80, is connecting to our EMR-DMZ-WEB-01a server.  We can also see that the Web Server is talking to the backend EMR-DB-01a server.

  • Restrict Internal Clinicians to connect only to the internal EMR Web Server

dmz_ms_pic21

dmz_ms_pic22

dmz_ms_pic23

Here we can see that our Internal Clinician Desktop is able to connect to the Internal EMR Web Server, but when it attempts to connect to one of the DMZ Web Servers, it’s blocked.

  • Allow all EMR Web Servers to connect to the EMR DB Server

dmz_ms_pic18

dmz_ms_pic21

This requirement appears to be functioning as expected as well.

  • Block all EMR Web Servers from talking to each other

dmz_ms_pic24

A quick cURL to the Web Servers shows that the Internal and DMZ Web Servers are not communicating with each other.  A connection attempt from EMR-DMZ-WEB-02a to EMR-DMZ-WEB-01a fails as well.
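If you’d rather script those spot checks than run cURL by hand, a short sketch like the one below, run from one of the Web Servers, does the same thing. The hostnames and the OpenMRS port are assumptions based on this lab; adjust them for your own environment.

```python
import socket

# (description, target-host, port) pairs mirroring the manual cURL checks.
CHECKS = [
    ("EMR-DMZ-WEB-01a -> EMR-DMZ-WEB-02a", "emr-dmz-web-02a.corp.local", 8080),
    ("EMR-DMZ-WEB-02a -> EMR-DMZ-WEB-01a", "emr-dmz-web-01a.corp.local", 8080),
]

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; a timeout or refusal means the DFW block is working."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for label, host, port in CHECKS:
    state = "OPEN (unexpected!)" if can_connect(host, port) else "blocked as expected"
    print(f"{label}: {state}")
```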

  • Maintain DMZ isolation of the EMR System from the HR System

dmz_ms_pic25

Another cURL attempt shows that the EMR-DMZ-WEB-01a server is not able to communicate with the HRHIS System.  This completes the requirements set forth by the customer.  Access to patient information is now limited to the EMR system, and a compromise of any adjacent system within the Healthcare organization will not allow communication between that system and the EMR.  We have effectively reduced the attack surface and added defense-in-depth security with minimal effort.

As we look back, there are several ways to architect a DMZ environment.  Traditional hardware isolation can be augmented without massive infrastructure changes to an existing DMZ.  Customers looking to remove the hardware isolation altogether can do so by collapsing the DMZ environment back into the data center while still maintaining the same level of control over the communications both in and out of DMZ systems.  With NSX, the DFW and its ability to control security from an East-West perspective can be overlaid on top of any existing architecture.  This software-based approach helps secure a Healthcare organization’s most critical externally facing patient systems and reduces exposure from adjacent threats in the data center.

 

 

 

 


The VMware NSX Platform – Healthcare Series – Part 5: DMZ Anywhere Concept

Healthcare organizations are being asked to expose Internet-based services and applications to their patients more than ever.  In Healthcare, exposure of PHI and PII is of the utmost concern, and because the perimeter of the organization needs to be as secure as possible, externally exposed systems and applications fall under this scope as well.  Traditional DMZ approaches are hardware-centric, costly, and operationally difficult to use in most modern data centers.  With VMware NSX, we can take the concept of the DMZ and either augment a current DMZ approach or collapse the DMZ back inside the data center while still providing the robust security posture necessary for Internet-facing applications.

Let’s revisit the nine NSX use cases we identified previously.

dmz_aw_pic1

DMZ Anywhere is a use case our customers are looking at that augments traditional hardware-based approaches and leverages the Distributed Firewall to segment how traffic is allowed to flow between systems anywhere in the data center.  Let’s be clear, VMware NSX is not in the business of replacing a hardware perimeter firewall.  But with NSX, you can fundamentally change how you design the DMZ environment once you’re inside the perimeter firewall, providing a solution that is much easier to manage and scale overall.  You can review previous posts on how Micro-segmentation works here:  https://vwilmo.wordpress.com/category/micro-segmentation/

Let’s take a quick look at traditional approaches to building a DMZ environment with physical devices.

dmz_aw_pic2

Traditional hardware-based approaches can leverage either Zone-based logical firewalling or physically independent firewalls to separate out a specific section, called the DMZ, for Internet-facing applications to sit in. These zones are built to only allow specific sets of communication flows from the Internet-facing systems to their backend components. The systems are typically on their own separate networks.  Typical applications exposed to the Internet are web-based front ends for major systems.  These systems can comprise several Web servers, all of which can be used to provide multi-user access to the application.

If customers want to keep the same traditional approaches using Zone-based firewalling, NSX can help block East-West movement between the virtual systems that reside within the DMZ.  In most cases, the systems that sit in the DMZ are Web-based systems.  These types of systems typically do not require communications between the Web servers, or even between disparate applications.

dmz_aw_pic3

In the above examples, all the DMZ Servers can initiate a conversation bi-directionally with each other.  This is inherently insecure, and the only way to secure these flows is to send all the East-West traffic through the firewall.  When you add more systems, you add more rules, and the problem continues to compound the larger the DMZ gets.  What if you have multiple networks and systems in the DMZ?  That will require significantly more rules and more complexity.  If you need to scale out this environment, it becomes even more operationally difficult.  How can NSX plug into this scenario, reduce this complexity, and still provide a similar level of security?
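To put a number on how quickly that compounds, compare blocking every server-to-server conversation with per-pair rules against a single group-based rule. A quick back-of-the-envelope calculation, illustrative only:

```python
# Blocking every server-to-server conversation with per-pair rules needs n*(n-1)/2 entries
# (one per unordered pair); a Security Group used as both source and destination needs one rule.
for n in (5, 20, 50, 100):
    pairwise_rules = n * (n - 1) // 2
    print(f"{n:>3} DMZ servers: {pairwise_rules:>5} pairwise block rules vs. 1 group-based rule")
```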

With NSX, we can provide the East-West firewalling capabilities in both scenarios to protect the applications from each other in the event of a compromise.  If one system is breached, the attack surface for lateral movement is removed, because the systems don’t even know the other systems exist.

Putting in NSX, we’re now blocking the systems from talking to each other without changing any aspect of the underlying infrastructure to do so.  We’re placing an NSX firewall at the virtual machine layer and blocking traffic.  As you can see, NSX can be made to fit nearly any DMZ infrastructure architecture.

dmz_aw_pic4

Here we have our Electronic Medical Records application with an Internet-facing Patient Access Portal.  With a traditional approach, the Patient Portal may be on separate hardware, situated between two sets of hardware Firewalls or behind one set of internally zoned Firewalls, and on a completely different logical network.  The backend systems required by the DMZ EMR systems sit behind another internal firewall along with the rest of the data center, in this case shared infrastructure systems and the EMR backend database system.  Neither of these systems should have contact with the internal HR Web or DB Server.  If they did, a compromise of the DMZ environment could give an attacker access to other sensitive internal systems like the HR system.

Now let’s look at how NSX can change the traditional design of a DMZ and collapse it back into the data center while allowing the same level of security as traditional methods, but with a software-based focus.

dmz_aw_pic5

Using NSX in this approach, we’re doing the same thing we did when we augmented the existing hardware approach: placing a software-based Firewall on each Virtual Machine in the data center.  This fundamentally means that every VM has its own perimeter, and we can programmatically control how each of those VMs talks, or doesn’t talk, to the others.  This approach lets a Healthcare organization pull the hardware-isolated DMZ back into their data center compute clusters and apply DMZ-level security to those specific workloads, thereby collapsing the isolation into software constructs instead of hardware ones.  In the collapsed DMZ model, we have no separate infrastructure to support a DMZ environment; we simply control the inbound traffic from the perimeter through the physical firewall as we normally would, and apply VM-level security using NSX between the systems that would have been separated out.  The DMZ EMR Web Servers are still restricted from accessing the HR system even though they technically live next to it within the internal data center.

Let’s contrast a software-based approach versus traditional hardware methods.

Hardware-based

  • Some organizations purchase multiple Firewalls at the perimeter for HA configurations. If they separate their DMZ using two sets of Firewalls, they’ll need to purchase at least four Firewalls for this configuration; for Zone-based firewalling on a single hardware appliance, this is much less of an issue.
    • New features and functions with hardware products can be tied to the hardware itself. Want these new items?  That could require a new hardware purchase.
  • Scale
    • Hardware-based scaling is generally scale-up. If the Firewall runs out of resources or begins to be over-utilized, it could require moving to larger Firewalls overall to accommodate. This means a rip and replace of the existing equipment.
  • Static
    • A hardware-based DMZ is very static. It doesn’t move within the data center, and the workloads have to be positioned in accordance with the network functions it provides.  In modern data centers, workloads can exist anywhere and on any host in the data center.  They can even exist between data centers.  Uptime is critical for Healthcare providers, as is maintaining data security.  Wherever the workload may end up, it requires the same, consistent security policy.
  • Cost
    • Buying multiple hardware Firewalls is not cheap. If the organization needs to scale up, ripping and replacing the existing Firewalls for new ones can be costly and incur downtime.  For Healthcare organizations, downtime affects patient care.  Some DMZ architectures have separate hardware to run only the workloads in the DMZ environment.  This separates the management of that environment from the internal data center environment.  It also means that, when architecting a hardware-based DMZ, you may end up with compute resources that are costly and underutilized, a concept that goes against virtualization in general and leads to higher operating costs in the data center and wasted resources.
  • Operationally difficult
    • If the customer is going with the multiple-Firewall method, then to configure the allowed and disallowed traffic, the customer needs to go into two sets of Firewalls. Hardware Firewalls for the DMZ will require MAC addresses for all the workloads going into them.  DMZ networks may be a few networks, but usually Web Servers exist on the same logical network.  Healthcare systems can have massive Internet-facing infrastructures to provide for their patients.

Software-based

  • By placing the Firewall capabilities within the ESXi kernel, we’re able to ensure security simply by virtue of the workload residing on any host that is running the vSphere hypervisor. When it comes to new features and functions, where you might need to upgrade proprietary Firewall hardware, NSX runs on any x86 hardware, and new features simply require an update to the software packages, reducing the possibility of ripping and replacing hardware.  For Healthcare customers, where downtime is at a premium, this reduces or eliminates the downtime required to keep systems up to date.
  • Scale
    • The nature of NSX being in every hypervisor means the Firewall scales linearly as you add more hypervisors to a customer environment. It also means that instead of having to purchase large physical Firewalls for all your workloads, the DFW provides throughput and functionality for whatever your consolidation ratio is on your vSphere hosts.  Instead of a few physical devices supporting security for hundreds to thousands of virtual machines, each host running the vSphere hypervisor supports security for the VMs residing on it.  With a distributed model that scales as you add hosts, this creates a massive-scale platform for security needs.  Physical Firewalls with high-bandwidth ports are very expensive, and generally don’t have nearly as many ports as you can have in a distributed model across multiple x86 hardware platforms.
  • Mobility
    • Hardware-based appliances are generally static. They don’t move in your data center, although the workloads you’re trying to protect may.  These workloads, when virtualized, can move to any number of hosts within the data center and even between data centers.  With NSX, the Firewall policy follows the virtual workload no matter the location.  Healthcare providers care about uptime; the ability to move sensitive data systems around to maintain uptime, while maintaining security, is crucial.
  • Cost-effective
    • Software-based solutions only need to be licensed for the hosts that the workloads will reside on. There’s no need to purchase licensing for hosts where protected workloads may never run.  Healthcare organizations can focus on the workloads that house their patients’ sensitive data and the systems that interact with them.
    • No need to spend money on separate hardware just for a DMZ. Collapse the DMZ workloads back to the compute environments and reduce wasted resources.
  • Operationally easier
    • By removing another configuration point within the security model, NSX can still provide the same level of security around DMZ workloads even if they sit on the same host as a non-DMZ workload, all while keeping them logically isolated rather than physically isolated.  With NSX, there’s no reason to use multiple networks to segment DMZ traffic and the workloads on those segments.  NSX resolves the IP and MAC addresses so that rule and policy creation is much simpler and can be applied programmatically rather than through traditional manual methods.

When it comes to DMZ architecture, the traditional hardware approaches followed in the past can be too static and inflexible for modern workloads.  Healthcare customers need uptime and scale, as medical systems that house patient data are not getting smaller and patients’ requirements for access to their information continue to grow.  With NSX, we can augment a current DMZ strategy, or even collapse the physical DMZ back into the virtual compute environment, and still provide the same levels of security and protection as hardware-based approaches, at a lower cost and with easier maintenance.

The VMware NSX Platform – Healthcare Series – Part 4.3: Micro-segmentation Practical – vRealize Network Insight

In the last post, we showed the methodology to building out Micro-segmentation rules for the OpenMRS EMR application in the test environment.  This was a straightforward process using the VMware NSX Distributed Firewall and vRealize Log Insight. The process became even simpler when we leveraged the new NSX 6.3 feature – Application Rule Manager. More importantly it gives us a starting point for applications to provide a Zero-Trust security posture.

As we continue the series, we’re going to expand the Healthcare organization’s environment to include other systems that are typically running.  As Healthcare professionals, we know that while the EMR is a critical application, it’s not a standalone system.  There are many systems within the Healthcare organization that connect to and pull information from the EMR system, and there are other systems in that environment beyond the clinical-facing ones.  All of these systems require the same amount of security as the EMR system does.  In this post, we’re going to leverage the VMware NSX Platform, the foundational micro-segmentation methodology we built out in the first post, and vRealize Network Insight to help build our NSX DFW rules for several new systems we’re adding in.  This will complete the environment build-out and conclude the Micro-segmentation NSX use case for the series.

Let’s start with a layout of the environment and the systems we added in to show just how complex this type of environment could be.

Lab_topology.png

  • We have our EMR system as we had in the previous posts.
  • We’re now going to add a DCM4CHEE PACS system that our OpenMRS EMR can forward events to. Our PACS system has 2 modalities, a CT and MRI scanner that talk to the PACS system, simulated using the DTKv modality emulator.  These systems simulate physical devices out in a clinical setting.
  • We’re introducing an HL7 Interface Engine, Mirth Connect, that is pulling data from our EMR and sending that information via SFTP over to our Data Warehouse DB server for processing into a population health application. When the SFTP job is complete, the HL7 system sends an email notification of completion to our hMail email server.
  • Finally, we’re going to connect the systems to the organization’s Active Directory, DNS, and NTP servers for the shared infrastructure services these systems require.

 The Healthcare organization has asked that we expand the Zero Trust security model using NSX to their entire environment.  They have given us some details about the applications and requirements on what we need to accomplish.  Given the size of the environment and number of applications now in-scope, the customer has asked for a scalable way to operationalize Micro-segmentation.  The customer has added several new integrations to the EMR and between other application systems in the infrastructure.  We need to find out what the applications are, verify with Application Teams, block/allow as necessary per Application Team.

 Expanded Use case – Provide a Zero Trust security model using Micro-segmentation around a Healthcare organization’s data center applications.  Facilitate only the necessary communications both to the applications and between the components of the applications.

  • Allow EMR Client Application to communicate with EMR Web/App Server
  • Allow EMR Web/App Server to communicate with EMR Database Server
  • Allow PACS Client Application to communicate with PACS Web/App Server
  • Allow PACS Web/App Server to communicate with PACS Database Server
  • Allow PACS Web/App Server to communicate with EMR Web/App Server
  • Allow PACS Modalities (CT Scan/MRI) to communicate with the PACS Web/App Server
  • Allow HL7 Server to communicate with whatever systems it requires
    • This system is the ‘bridge’ between ancillary applications and the EMR system
  • Allow HRHIS Client Application to communicate with HRHIS Web/App Server
  • Allow HRHIS Web/App Server to communicate with HRHIS Database Server
  • Allow EMAIL messages to be sent as necessary. Certain applications are emailing their status updates.
  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.
  • Block any unknown communications except the actual application traffic and restrict access to the HRHIS application to only a HRHIS Desktop system running the HRHIS Client Application.
  • Allow bi-directional communication between the Infrastructure Services and all applications that require access to those services

Problem – The Healthcare organization still does not have a full understanding of how their applications communicate within and outside the organization. We have some details listed above, but nothing about the flows or ports and protocols to restrict traffic to only necessary communications.  With the expanded use case, the Healthcare organization thinks it will be difficult to operationalize this security model.

Technology used –   

Windows Clients:

  • Windows 10 – Clinician Desktop – Client-01a (192.168.0.36)
  • Windows 10 – HR Desktop – Client-02a (192.168.0.33)
  • Windows 10 – Jumpbox-01a – (192.168.0.99)

VMware Products:

  • vSphere
  • vCenter
  • NSX
  • vRealize Network Insight

Applications in question –

Open Source Healthcare Applications:

  • OpenMRS – Open Source EMR system
    • Apache/PHP Web Server
    • MySQL Database Server
  • DCM4CHEE – PACS (Picture and Archiving Communication System)
    • Apache/PHP Web Server
    • MySQL Database Server
  • IceHRM – HRHIS (Human Resource for Health Information System)
    • Apache/PHP Web Server
    • MySQL Database Server
  • Weasis – DICOM (Digital Imaging and Communications in Medicine) system
    • Runs on a Windows-based system
  • DTKv – DICOM Emulator
    • Runs on a Windows-based system
    • Used to simulate a modality for the PACS system
      • Modality – MRI, CAT Scan, etc.
  • Mirth Connect – HL7 Interface Engine
    • Apache/Java/MySQL Server
  • Data Warehouse DB server

Infrastructure Applications:

  • LDAP
  • DNS
  • NTP

Enterprise Applications

  • hMail Email Server

After sitting down with each Application owner and asking questions about their application, we were able to at least identify the virtual machines associated with each application.  Using vRealize Network Insight (vRNI from here on), we’re going to map out the flows of all the applications that are in-scope here. Now that we have all the Applications defined, we’ll go ahead and build all the constructs within NSX for use with our rules.

If you’ve heard me speak before, you’ll know that I harp on creating naming schemes that are intuitive.  What do I mean by that?  Intuitive, in that just looking at the name will tell you the meaning of whatever you’re looking at.  In any setting, making changes to rules, services, ports, groups, etc. can have a profound impact on the infrastructure.  In the Healthcare setting, this could mean patient lives.  It’s important that we adopt a naming scheme, and adhere to it, even if the services already exist within NSX.  I’ve settled on these examples for this blog post:

vrni_ms_table1

INFRA Analysis and Rule Building:

Similar to the last post with ARM, we’re going to start with Infrastructure services.  These are services that all of these systems depend upon, and they represent rules that will encompass the entire environment.  When we open up vRNI, we’re going to look at everything within the vCenter for Site 1.  You can go more granular if you wish, but with my environment being nested I have to look a bit higher.  I’m also only going to look at flows over the last 7 days.  In a production deployment, you’d want to examine each application and understand it; if it’s a payroll processing application that does month-end processing, you may want to look at the last 30 days.  When we change the micro-segments to ‘by Application’, we can see all of the custom applications we created and their flows, both within and outside the virtual environment.

Servers:

  • Active Directory – AD-DNS-01a
  • DNS – AD-DNS-01a
  • NTP – NTP-01a

These are the servers in scope for the infrastructure.  We’ll need to create our structure for writing our rules, and we’re going to combine all of these servers into one Infrastructure grouping as necessary.  To start building our rules, we’ll need to ‘Plan Security’ using vRNI for these VMs.

We’ll start by examining the flows of the AD-DNS-01a VM to other VMs.  When we log into vRNI the top option on the left side menu is ‘Plan Security’.  When we select that, we get a pop-up box that allows us to choose several Entities to plan security around as well as to examine the flows over a duration of time.  vRNI will allow you to examine flows from the last 30 days.  Again, understand the application and the necessary time that would be needed to realize the flows.

vrni_ms_pic1.png

When we select Analyze, we’ll be given a report of information about the VM in question.

vrni_ms_pic2

We’ll need to change a few settings to bring the information we’re looking for into view.  In this case, we’re going to change the ‘Group By’ to ‘by VM’.  We’ll also need to change the ‘Also show groups for’ to ‘Other Virtual’.  This will show us all VMs that are in scope for this planning.

vrni_ms_pic3

From this view, we can see all of the ‘Other Virtual’ machines that the AD-DNS-01a server talks to.  To dig into the flows we’ll need for building our rules, we’ll focus on the AD-DNS-01a wedge.  Double-clicking on that wedge brings us to this screen.

vrni_ms_pic4

This screen shows us all of the flows captured over the last 7 days, both to and from AD-DNS-01a, along with the port and protocol information.  We can use filters to break the flows down into smaller groups and dig into them specifically.  In this case, we want to look at what vRNI recommends we do to create NSX Distributed Firewall (DFW) rules.  To see that, we’ll select the ‘Recommended Firewall Rules’ tab.

vrni_ms_pic5

vRNI has shown us that it’s recommending 17 rules for our flows.  It should be said that Infrastructure Services will account for the bulk of the flows in this lab; nearly every VM needs access to some or all of these services.  vRNI will show very granular rules for us to build.  The thing to understand is that these are recommendations, not absolutes.  Looking at the recommended rules, it looks like we might be able to combine a few and make things a bit simpler.  To do that, we’re going to extract these rules from vRNI.

vrni_ms_pic6

In the upper right corner of vRNI, there’s an option to ‘Export as CSV’.  This will take all of those recommendations and put them in a format we can modify to our liking.

vrni_ms_pic7

vRNI has also given us some recommended Security Group structures.  You can choose to use this structure or not; in my case, I’m going with the structure I created above, so I’ve gone through and replaced the groups with my custom ones.
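That substitution and consolidation can also be scripted against the exported CSV. The sketch below is illustrative only: the file name, the column names, the delimiter for multi-service lines, and the group mapping are all assumptions about the export format and my naming scheme, so adjust them to match your own file.

```python
import csv
from collections import defaultdict

# Hypothetical mapping from vRNI's recommended groups to the custom Security Groups we created.
GROUP_MAP = {
    "AD-DNS-01a": "INFRA-SG-AD-DNS",
    "NTP-01a": "INFRA-SG-NTP",
    "EMR-WEB-01a": "EMR-SG-WEB",
    "EMR-DB-01a": "EMR-SG-DB",
}

consolidated = defaultdict(set)  # (source, destination) -> set of services

with open("vrni_recommended_rules.csv", newline="") as f:  # placeholder export file name
    for row in csv.DictReader(f):
        src = GROUP_MAP.get(row["Source"], row["Source"])
        dst = GROUP_MAP.get(row["Destination"], row["Destination"])
        # Collapse one-service-per-line recommendations into a single rule per source/destination pair;
        # assumes multiple services on one line are semicolon-separated in the export.
        for svc in row["Services"].split(";"):
            consolidated[(src, dst)].add(svc.strip())

for (src, dst), services in sorted(consolidated.items()):
    print(f"ALLOW  {src} -> {dst}  [{', '.join(sorted(services))}]")
```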

 

We see a few Destinations in this output that don’t give us an explicit location, DC-Physical and Internet.  How do we handle those? What we need to do is go back to the wheel depiction, and hover over the DC-Physical portion.  For the Internet flows, we’re going to block any communications to the Internet.

vrni_ms_pic8

When we do this, we can see the flows just between AD-DNS-01a and DC-Physical.  If we want to see what the Destination of these flows is, we can click on the green line, which will open up those flows.  When we get into this screen, we’re going to select the DC-Physical to AD-DNS-01a tab.

vrni_ms_pic9

From this output, we’re given the port and protocol, the number of flows, and the amount of traffic that’s been sent over the 7-day period.  If we drill into the ‘Count of Flow’ of ‘3’, this will give us the information we need.

vrni_ms_pic10

We now see that the 3 flows are coming from:

  • 192.168.0.33 – HR Desktop
  • 192.168.0.36 – Clinician Desktop
  • 192.168.0.99 – Management Jumpbox

Now that we have this information, we can create IP Sets to match these sources and add them to our rules.  Since we’re not going to explicitly allow any Internet flows, we don’t have to worry about creating rules for that traffic; the Block All rules will catch anything not explicitly allowed.

Let’s look at our vRNI output, albeit modified.

vrni_ms_pic11

So, you’ll notice a few things are different from the original output from vRNI.

  • Subbed in custom Security Groups with appropriate naming
  • Added a column for Combined Security Groups and Combined Service Groups. This will help us slim down the number of rules we need to write overall.  Remember, vRNI will give us very granular rules, one by one.  We can consolidate those down to bigger groups and smaller numbers of rules.
  • Split all Services out if multiples were showing per line and created Services in NSX.
  • Color-coded things so that you can see where we can group Security Groups together to make one rule to cover each line instead of a single line for each.

What does this all mean?  Well this is what things will look like for our rules now:

vrni_ms_pic12

Let’s move to the rules for NTP.  We’re going to change the VM to NTP-01a and run the Analyze again.

vrni_ms_pic13

We’ll get a similar output to what we got for the AD-DNS-01a VM.  Drilling into the NTP-01a wedge will show us more details.

vrni_ms_pic14

We can see that there are 10 flows both incoming and outgoing, and vRNI is recommending 10 rules for us to build to accommodate.  Let’s drill into the ‘Recommended Firewall Rules’ tab.

vrni_ms_pic15

This set of ‘Recommended Firewall Rules’ is much simpler than the AD-DNS-01a set.  One of these rules, the rule for DNS, is already covered by the rules built above.  The rest are 9 rules from different Sources going to the same Destination, which we can reduce down to one rule.

vrni_ms_pic16

vrni_ms_pic17

As you can see, we’ve significantly reduced the number of rules we’ll need to cover all the recommended rules just by combining Security Groups and nesting.  This will make our DFW policies much more efficient.  Let’s take this info and put it into our tables so we can visualize what all these groupings have inside.

vrni_ms_table2

Now that we have our structure in order, we start building our Security Groups, Security Tags, Services, and Service Groups in NSX.  I’m not going to go through the process of creating the objects within NSX.  The processes are very straightforward and my previous blogs discuss this very process.  Here are the results.

Security Tags:

vrni_ms_pic19

IP Sets:

vrni_ms_pic20

Security Groups:

vrni_ms_pic21

Services:

vrni_ms_pic22

Service Groups:

vrni_ms_pic23

Let’s build the rules that we had planned above now.  Again, I’ve already shown how to build rules in previous posts, so I’m not going to go through that here either. Here’s what we end up with in the DFW.

DFW Rules:

vrni_ms_pic24

All in all, we were able to take 27 recommendations and bring that all down to 4 Allow rules, and 2 Block rules that cover all of our Infrastructure Services.  Let’s move onto the next application and requirements, our EMR.

EMR Analysis and Rule Building:

 Requirements to meet:

  • Allow EMR Client Application to communicate with EMR Web/App Server
  • Allow EMR Web/App Server to communicate with EMR Database Server
  • Allow PACS Web/App Server to communicate with EMR Web/App Server
  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.

In the previous post we went through and put in our rules for the EMR application.  I have since removed those rules so we can leverage vRNI to show us what to put in.  Since we’ve added several new systems that may communicate with the EMR, we’ll be making some changes anyway.

Similar to what I did in the Infrastructure Services section, I’m going to select the EMR-WEB-01a VM and then show the micro-segments ‘by VM’.

vrni_ms_pic25

We’re going to dig into the VM and look at the recommended rules just like we did for the Infrastructure VMs.

vrni_ms_pic26

What we will notice is that there are a few different rules for different connections. We see new connections from the PACS-WEB-01a VM to our EMR-WEB-01a server.  This would imply that these two systems shared information over TCP 9696 and TCP 2575.  We also see TCP 8080 and TCP 22 connections from DC-Physical which means that external systems are accessing the EMR-WEB-01a VM.  Lastly, we see that the EMR-WEB-01a system talks to its EMR-DB-01a system across TCP 3306 like we’d discovered before.  Since these devices are from outside the NSX environment, we’ll need to look at the Flows section to see what these IP addresses are.

These seem to correlate with at least one known IP address.  The 192.168.0.99 address is my Jumpbox system from the last blog post, but the 192.168.0.36 and 192.168.0.33 systems are unfamiliar.  Talking with the Application Team, we learned:

  • The 192.168.0.18 system is an outside system that is accessing the EMR-WEB-01a via SSH on TCP 22, and also accessing the EMR itself, over TCP 8080.
  • The 192.168.0.36 system is a Clinician Desktop
  • The 192.168.0.99 Jumpbox should have access for management purposes to the EMR.
  • TCP 9696 is used by an integration plugin that now exists between the PACS and EMR systems to route data between them. This is the DICOM MPPS port.
  • TCP 2575 is also part of that integration plugin. This port is used by the EMR system to send Radiology orders to the PACS system.

One of our requirements was to allow only Clinician Desktops access to the EMR and block any other connections that are not identified. If we don’t explicitly allow a communication, our catch-all Block rules will stop it.

Extracting these rules from the interface gives us this format in CSV.

vrni_ms_pic27

We’re going to swap in our groupings and remove any rules already covered, like NTP and DNS.  First, we need to figure out these DC-Physical flows.  We’re going to go back to the Flows tab and filter the results down to only the 8080 and 22 flows corresponding to the recommended rules.

vrni_ms_pic28

From this page, we’re seeing:

  • Two unknown IP address flows from 192.168.0.18 on 8080 and 22.
  • One unknown IP address flow from 192.168.0.24 on 8080.
  • One known IP address flow from 192.168.0.99 (Jumpbox) on 8080.

Looking at these, only one is a legitimate flow that we should account for.  The others have been identified as unnecessary.  Now we can finish up our rules.

vrni_ms_pic29

Legend

Yellow with dots – Already covered by another rule

Blue – Rules we’ll write

Let’s move onto EMR-DB-01a and get our recommended rules.  Same process as before.  We’re going to analyze the VM named EMR-DB-01a.

vrni_ms_pic30

vrni_ms_pic31

vrni_ms_pic32

 

This brings us to the output below.

vrni_ms_pic33

vrni_ms_pic34

This leaves us with only a couple of rules for our EMR.  Now that we know the communications that are necessary and are known good, we’ll go ahead and write our rules. Let’s lay out what those rules should look like:

vrni_ms_table3

Most of our groupings have already been created, so we’ll finish out whatever is left.

Security Tags:

vrni_ms_pic35

Security Groups:

vrni_ms_pic36

Services:

vrni_ms_pic37

With all this info, we build our DFW rules.

DFW Rules:

vrni_ms_pic38

This completes the EMR section of the DFW.  We can move to the next application and its requirements, the PACS system.

PACS Analysis and Rule Building:

PACS applications have physical devices, called modalities, that connect to them to send images and information.  In this case, we have an MRI and CT Scanner emulator playing that role.  While they are VMs themselves, they are outside the NSX environment, so we’ll have to use IP Sets to accommodate them, similar to the Clinicians’ physical desktops.  The requirements are similar to all the other applications.

Requirements to meet:

  • Allow PACS Client Application to communicate with PACS Web/App Server
  • Allow PACS Web/App Server to communicate with PACS Database Server
  • Allow PACS Modalities (CT Scan/MRI) to communicate with the PACS Web/App Server

We’ll start by running the analysis against PACS-WEB-01a and then on PACS-DB-01a.

vrni_ms_pic39

vrni_ms_pic40

vrni_ms_pic41

Let’s clarify our DC-Physical flows:

vrni_ms_pic42

Talking with the Application owners, the Modalities are shown to be the systems connecting on port TCP 6060.  The Application owners have confirmed that this is the DICOM Echo port for the modalities.  The other system is our Jumpbox, as usual.

vrni_ms_pic43

vrni_ms_pic44

We’re all squared away on the PACS-WEB-01a server.  Let’s work on the PACS-DB-01a.

vrni_ms_pic45

vrni_ms_pic46

vrni_ms_pic47

We’re going to start noticing a trend here: the recommended rules are increasingly covered by previous recommendations as we continue down the list of applications.  This is good, because it simplifies our rule sets overall.

vrni_ms_pic48

vrni_ms_pic49

It looks like every rule recommended here is already covered by a previous rule.  Let’s build the groupings and our rules.

vrni_ms_table4

Security Tags:

vrni_ms_pic50

IP Sets:

vrni_ms_pic51

Security Groups:

vrni_ms_pic52

Services:

vrni_ms_pic53

DFW Rules:

vrni_ms_pic54

This completes our rule build out for the PACS system.  Let’s move to the HL7 integration engine, Mirth Connect.

HL7 Analysis and Rule Building:

 Requirements to meet:

  • Allow HL7 Server to communicate with whatever systems it requires
    • This system is the ‘bridge’ between ancillary applications and the EMR system

Talking with the Application Team for the HL7 system, we learned that it’s all running on one server, HL7-01a.  With that being the case, there will be no intra-group communications necessary for this application to function.  However, the HL7 system talks with many other systems in the infrastructure.  It’s a broker of communications between disparate systems.  In our case, we’re going to take a look at what other systems this one brokers communications between.

vrni_ms_pic55

vrni_ms_pic56

vrni_ms_pic57

Let’s dig into the DC-Physical side so we can understand the IP-based flows to build out our full rule sets.

vrni_ms_pic58

vrni_ms_pic59

We see a port 22 connection.  Talking with the Application owners, we’ve determined that this connection is the HL7 system using SFTP to transfer a file generated from a MySQL query.  Let’s clean these up a bit and classify them.

vrni_ms_pic60

The flow identified in green we’re going to put into the EMAIL Group, as it’s accessing the EMAIL server specifically.  Now that we’re all cleaned up, we can build out our rules and groupings.

vrni_ms_table5

Security Tags:

vrni_ms_pic61

Security Group:

vrni_ms_pic62

Services:

vrni_ms_pic63

Service Groups:

vrni_ms_pic64

DFW Rules:

vrni_ms_pic65

EMAIL Analysis and Rule Building:

 Requirements to meet:

  • Allow EMAIL messages to be sent as necessary. Certain applications are emailing their status updates.

As with the HL7 system, the EMAIL application runs on a single server, so there’s no need for any intra-group communications.  Let’s analyze the flows.

vrni_ms_pic66

vrni_ms_pic67

vrni_ms_pic68

As usual, we need to dig into the DC-Physical flows to get the IP addresses of these systems.

vrni_ms_pic69

It appears that we have the two Desktop machines connecting to the EMAIL-01a server over IMAP, and we have a NETBIOS-DGM port hitting the broadcast address of the VLAN it’s on.  Let’s start making our groupings.

vrni_ms_pic70

Let’s refine this.

vrni_ms_pic71

vrni_ms_table6.png

Security Tags:

vrni_ms_pic72

Security Groups:

vrni_ms_pic73

Services:

vrni_ms_pic74

DFW Rules:

vrni_ms_pic75

Let’s finish up with our HR System now.

HRHIS Analysis and Rule Building:

 Requirements to meet:

  • Allow HRHIS Client Application to communicate with HRHIS Web/App Server
  • Allow HRHIS Web/App Server to communicate with HRHIS Database Server
  • Block any unknown communications except the actual application traffic and restrict access to the HRHIS application to only a HRHIS Desktop system running the HRHIS Client Application.

Again, we’re going to follow the same process to finish our build out.

vrni_ms_pic76

vrni_ms_pic77

vrni_ms_pic78

vrni_ms_pic79

It looks like our Clinician Desktop has been attempting access to the HR system.  There doesn’t appear to be much traffic, but that’s definitely a system we don’t want having access to the HR web site, so we’ll be blocking that traffic from that system.

vrni_ms_pic80

vrni_ms_pic81

These are pretty simple rules.  Half of them are covered already in Infrastructure Services.  A quick analysis run against the DB:

vrni_ms_pic82

A quick glance will show that there are no recommendations that aren’t already covered by another rule we’ve accounted for.  We’re all set to finalize our last application.

vrni_ms_table7.png

Let’s build our groupings and rule sets.

Security Tags

vrni_ms_pic83

Security Groups:

vrni_ms_pic84

Services:

vrni_ms_pic85

DFW Rules:

vrni_ms_pic86

With the rules all in place, we will go through and check our communications with our applications and verify they’re all still working correctly per the requirements given by the customer.

Let’s bring back the requirements we needed to fulfill and demonstrate how we’re accomplishing these as specified.

  • Allow EMR Client Application to communicate with EMR Web/App Server
  • Allow EMR Web/App Server to communicate with EMR Database Server

vrni_ms_pic87

Verified – Able to log into the EMR and pull up a patient record.

  • Allow PACS Client Application to communicate with PACS Web/App Server
  • Allow PACS Web/App Server to communicate with PACS Database Server
  • Allow PACS Web/App Server to communicate with EMR Web/App Server

vrni_ms_pic88

Verified – Able to log into the PACS system and run a DICOM Echo from the PACS system to the EMR.

  • Allow PACS Modalities (CT Scan/MRI) to communicate with the PACS Web/App Server

vrni_ms_pic89

vrni_ms_pic90

Verified – The PACS modality physical emulators are able to DICOM Echo to the PACS system successfully.

  • Allow HL7 Server to communicate with whatever systems it requires
    • This system is the ‘bridge’ between ancillary applications and the EMR system

vrni_ms_pic91

vrni_ms_pic92

Verified – We’re seeing successful connections from the HL7-01a system: it connects to EMR-DB-01a, runs a MySQL query, outputs the result to a file on the HL7-01a server, and then SFTPs it to DW-DB-01a for import.  It then emails the ‘dwftpservice’ Email account with the status.
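For context, the job we just verified boils down to three flows the DFW has to permit: MySQL on TCP 3306 to EMR-DB-01a, SFTP on TCP 22 to DW-DB-01a, and SMTP to the hMail server. The sketch below is purely illustrative of that kind of job; in reality Mirth Connect channels do this work, and every hostname, credential, query, and path shown is a hypothetical placeholder.

```python
import smtplib
from email.message import EmailMessage

import mysql.connector  # third-party MySQL client
import paramiko          # third-party SSH/SFTP client

# All hostnames, credentials, queries, and paths are hypothetical placeholders.
# 1. Pull data from the EMR database (TCP 3306).
conn = mysql.connector.connect(host="emr-db-01a.corp.local", user="hl7",
                               password="***", database="openmrs")
cur = conn.cursor()
cur.execute("SELECT patient_id, encounter_datetime FROM encounter LIMIT 100")
with open("/tmp/emr_extract.csv", "w") as out:
    for row in cur:
        out.write(",".join(str(v) for v in row) + "\n")
conn.close()

# 2. SFTP the extract to the Data Warehouse DB server (TCP 22).
transport = paramiko.Transport(("dw-db-01a.corp.local", 22))
transport.connect(username="dwftp", password="***")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("/tmp/emr_extract.csv", "/import/emr_extract.csv")
transport.close()

# 3. Email a completion notice via the hMail server (SMTP).
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "hl7@corp.local", "dwftpservice@corp.local", "DW import complete"
msg.set_content("EMR extract transferred to DW-DB-01a for import.")
with smtplib.SMTP("email-01a.corp.local") as smtp:
    smtp.send_message(msg)
```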

  • Allow HRHIS Client Application to communicate with HRHIS Web/App Server
  • Allow HRHIS Web/App Server to communicate with HRHIS Database Server

vrni_ms_pic93

Verified – We’re able to access the IceHRM HRHIS system and log in.  We’re also able to pull up Employee data, all from the HR Desktop system.

  • Allow EMAIL messages to be sent as necessary. Certain applications are emailing their status updates.

vrni_ms_pic92

Verified – This is the same confirmation email that was sent from the HL7-01a system.

  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.

vrni_ms_pic94

  • Block any unknown communications except the actual application traffic and restrict access to the HRHIS application to only a HRHIS Desktop system running the HRHIS Client Application.

vrni_ms_pic95

Verified – We’ve attempted access to the HRHIS system from the Clinician Desktop.  You can see that we’re not allowed access from this machine.

  • Allow bi-directional communication between the Infrastructure Services and all applications that require access to those services

vrni_ms_pic96

vrni_ms_pic97

That completes our verifications.  As you can see, we were able to use vRNI to do Micro-segmentation at scale and get to a Zero-Trust model faster than with traditional methods.

The VMware NSX Platform – Healthcare Series – Part 4.2: Micro-segmentation Practical with Application Rule Manager

Originally this series on Micro-segmentation was only going to cover Log Insight, vRealize Network Insight (vRNI), and VMware NSX.  With the release of VMware NSX 6.3, there is a new toolset within NSX that can be leveraged for quick micro-segmentation planning: the Application Rule Manager.  The Application Rule Manager (ARM) provides a new way to create security rule sets quickly for new or existing applications, on a bigger scale than Log Insight but a smaller scale than vRNI.  With that in mind, we’re going to take the previous post using Log Insight and perform the same procedures with ARM in NSX, creating our rule sets using the same basic methodology.

The Application Rule Manager in VMware NSX leverages real-time flow information to discover the communications in, out of, and between the workloads of an application so a security model can be built around it.  ARM can monitor up to 30 VMs in one session, with up to 5 sessions running at a time.  The beauty of ARM is that it correlates the information you would typically have to look up in Log Insight to create your rule sets, significantly reducing time to value.  ARM can also show you blocked flows and the rules that are doing the blocking.

Let’s bring back our use case from the previous post and sub in ARM instead.

Use case – Provide a Zero Trust security model around a Healthcare Organization’s EMR system.  Facilitate only the necessary communications both to the application and between the components of the application.

  • Allow EMR Client Application to communicate with EMR Web/App Server
  • Allow EMR Web/App Server to communicate with EMR Database Server
  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.
  • Allow bi-directional communication between the Infrastructure Services and the entire EMR Application

Problem – The Healthcare organization does not have a clear understanding of how the application communicates within and outside the organization.  The organization wants to lock down the EMR application so that only known good workstations can access it.

Technology used –   

Windows Clients:

  • Windows 10 – Clinician Desktop – Client-01a (192.168.0.36)
  • Windows 10 – HR Desktop – Client-02a (192.168.0.33)
  • Mac OSX – Unauthorized Desktop – (192.168.0.18)

VMware Products

  • vSphere
  • vCenter
  • NSX and Application Rule Manager

Application in question –

Open Source Healthcare Application:

  • OpenMRS – Open Source EMR system
    • Apache/PHP Web Server
    • MySQL Database Server

Infrastructure Services:

  • NTP

The first thing we need to do to utilize ARM is establish the VMs whose flows we’re going to look at.  These systems make up the session that we will build and run for a period of time to examine what flows are coming in, going out, and moving between these systems.

Since Infrastructure Services are generally global services that affect more than one system within a data center, we’re going to build the rules that accommodate our NTP virtual machine first.  We’re going to see flows to other VMs in the lab, but we’ll be narrowing it down to just the EMR systems.  Then, once we’ve got those working, we’ll do the EMR-based rules.

First, we need to establish our naming scheme and methodology for writing our rule sets.  Below is a chart I use when creating my rules for applications.  It helps me lay things out logically and then apply them quickly in the DFW.

ms_table_pic1

You will notice that I like to nest my Security Groups, Services, and Service Groups.  I do this because, at scale, it makes changing things much simpler and more efficient.  If I have to swap in a service or change a port, I want the change reflected in all affected groupings down the table without having to seek out where else I might need to make it.  This has consequences as well: you must pay attention to what you’re doing when you make a change, because it can affect other applications.  This is why I name things the way I do, to ensure that anyone making a change understands the implications.
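
To illustrate why the nesting pays off, here’s a minimal sketch, using the group and service names from my table, of how nested groupings resolve to their effective members.  This is not NSX code, just a model of the idea: swap a single member and every rule that references the parent grouping picks up the change.

```python
# A minimal sketch (not an NSX API) of why nested groupings help at scale:
# changing a member in one place is reflected everywhere the parent is used.
security_groups = {
    "EMR-SG-WEB": ["EMR-WEB-01a"],
    "EMR-SG-DB":  ["EMR-DB-01a"],
    "EMR-SG-ALL": ["EMR-SG-WEB", "EMR-SG-DB"],   # nested Security Groups
}

service_groups = {
    "EMR-SVG-WEB": ["EMR-SVC-TCP-8080"],
    "EMR-SVG-DB":  ["EMR-SVC-TCP-3306"],
}

def effective_members(group, groups):
    """Recursively expand nested groups down to individual members."""
    members = []
    for item in groups.get(group, [group]):
        if item in groups:
            members.extend(effective_members(item, groups))
        else:
            members.append(item)
    return members

# Swapping a service inside EMR-SVG-WEB automatically changes every rule
# that references the Service Group -- no per-rule edits required.
print(effective_members("EMR-SG-ALL", security_groups))   # ['EMR-WEB-01a', 'EMR-DB-01a']
print(effective_members("EMR-SVG-WEB", service_groups))   # ['EMR-SVC-TCP-8080']
```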

Now that we have all our custom services and naming convention in place, we can go back to ARM and start resolving Services and Service Groups and building out rules.  We’re going to take these flows and squeeze them down even further to create very succinct rules with as few entries as possible.

We’re going to create a session called INFRA MONITOR and add the NTP-01a server into the monitor so we can see all the flows coming in and out of it.

arm_ms_pic1

The amount of time that you collect flows for is up to you.  My advice is the following:

  • Understand the application you’re planning for. If the application is a billing application that runs on 30-day monthly cycles, ARM may not be the tool to use.  ARM was built to monitor flows in real time over a maximum 7-day period.  If the application you’re looking at has periodic flows that occur over a longer period of time, I suggest using vRealize Network Insight for those applications, as it has a longer retention period for looking up flows.
  • Keep the applications small. ARM has a 30 VM upper limit on flow monitoring per session, with up to 5 sessions.  While most applications fall into this range, some have more than 30 machines.  In that case, my suggestion is to break the application into smaller chunks and run the chunks in different sessions to keep the numbers down (see the sketch after this list).
  • ARM uses NSX Manager resources. Just like Flow Monitoring, ARM uses resources from the NSX Manager to collect flow data.  Be mindful of NSX Manager utilization during this process.  Unlike Flow Monitoring, if you exit the interface, ARM will continue to run until you stop the session collection.
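
For the second point, here’s a quick helper sketch, purely hypothetical, that splits a large application’s VM list into chunks that fit ARM’s per-session limit.  The 72-VM billing application is made up for illustration.

```python
# A quick helper sketch for the "keep the applications small" advice above:
# split a large application's VM list into chunks that fit ARM's per-session
# limit (30 VMs), keeping in mind only 5 sessions can run at a time.
MAX_VMS_PER_SESSION = 30
MAX_CONCURRENT_SESSIONS = 5

def plan_sessions(vm_names, chunk_size=MAX_VMS_PER_SESSION):
    """Return lists of VM names, each list sized for one ARM session."""
    return [vm_names[i:i + chunk_size] for i in range(0, len(vm_names), chunk_size)]

app_vms = [f"BILLING-APP-{n:02d}" for n in range(1, 73)]   # hypothetical 72-VM app
sessions = plan_sessions(app_vms)

print(f"{len(app_vms)} VMs -> {len(sessions)} sessions")
if len(sessions) > MAX_CONCURRENT_SESSIONS:
    print("More than 5 chunks: run them in batches rather than concurrently.")
```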

Once we hit OK, the flow collections will begin.

arm_ms_pic2

When we’re comfortable that we’ve captured all the flows from the VM, we can stop the collection, and ARM will take the captured flows and de-duplicate them down to unique flows for us.

arm_ms_pic3

Now we’re going to analyze our flows.  This will resolve any VM names it can, as well as resolve the ports to the services they could correspond to.

arm_ms_pic4

I’ve hidden the flows to other systems within the lab for simplicity’s sake for now, and focused only on finding the flows to the EMR system.  You’ll also notice a column that I added into the View Flows tab: Rule ID.  The Rule ID column shows the applied rule that affects each flow in the View Flows table.

arm_ms_pic5

As you can see from the output, all the flows appear to be hitting Rule ID 1001.  If you click on the Rule ID 1001 link, it will pop open the details of the rule from the DFW.  While this is showing an allowed flow, we will see later that we can also see flows being blocked by a Rule ID.

arm_ms_pic6

The View Flows tab only resolves to VM names.  We’ll need to take this information and create appropriate Security Group, Service, and Service Group names for our rules.  We’ll click on the gear icon next to the EMR-DB-01a VM and select ‘Create Security Group and Replace’.

arm_ms_pic7

We’ll name the Security Group EMR-SG-DB, following the naming scheme you’ll find in the table below, and we’ll statically add in the VM EMR-DB-01a.

arm_ms_pic8

What we’ll find is that the Security Group now replaces the virtual machine as the Source.  But since both of the VMs that make up the EMR are talking to the same destination, we’ll create an EMR-SG-ALL Security Group and nest the individual VM Security Groups within it.  To do this, we can click on the gear for the Source we just changed and select ‘Create Security Group and Replace’ same as before, but this time we’re going to add the EMR-SG-DB Security Group to the EMR-SG-ALL Security Group we’re creating.

arm_ms_pic9

We should now see that the Source has changed to the EMR-SG-ALL group.

arm_ms_pic10

For the EMR-WEB-01a VM, we’ll do the same thing.  Create a Security Group called EMR-SG-WEB and add that VM to it.

arm_ms_pic11

We’ll then click on the gear and select ‘Add to existing Security Group and Replace’.  Then select the EMR-SG-ALL group, and this will nest the EMR-SG-WEB Security Group into the main group.

arm_ms_pic12

We repeat this same process for the NTP-01a VM according to the layout above.  Create an INFRA-SG-NTP Security Group and add the NTP-01a system.  Create an INFRA-SG-ALL Security Group and add the INFRA-SG-NTP Security Group to it.  This will allow us to add more Security Groups to the main group later as needed without having to write another rule to do it.  When done, it should look like this.

arm_ms_pic13
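
As a side note, this Security Group creation and nesting doesn’t have to be clicked through in ARM; it can also be scripted against the NSX Manager REST API.  The sketch below is a rough illustration only: the endpoint paths and the minimal XML payloads are from my memory of the NSX for vSphere API guide, and the NSX Manager hostname and credentials are placeholders, so verify everything against the API documentation for your release before using anything like this.

```python
# Rough illustration only -- double-check endpoints and XML against the
# NSX for vSphere API guide for your release before relying on this.
import requests

NSX_MGR = "https://nsxmgr-01a.corp.local"   # placeholder NSX Manager
AUTH = ("admin", "VMware1!")                 # placeholder credentials
HEADERS = {"Content-Type": "application/xml"}

def create_security_group(name):
    """Create an empty Security Group and return its objectId."""
    body = f"<securitygroup><name>{name}</name></securitygroup>"
    r = requests.post(f"{NSX_MGR}/api/2.0/services/securitygroup/bulk/globalroot-0",
                      data=body, headers=HEADERS, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.text.strip()   # NSX returns the new objectId (e.g. securitygroup-42)

def add_member(parent_id, member_id):
    """Nest a Security Group (or add a VM moref) into a parent Security Group."""
    r = requests.put(f"{NSX_MGR}/api/2.0/services/securitygroup/{parent_id}/members/{member_id}",
                     auth=AUTH, verify=False)
    r.raise_for_status()

# Build the same nesting we did in the UI: INFRA-SG-NTP inside INFRA-SG-ALL.
ntp_sg = create_security_group("INFRA-SG-NTP")
all_sg = create_security_group("INFRA-SG-ALL")
add_member(all_sg, ntp_sg)
```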

You only need to do this to one flow because, as you’ll see later, we’ll grab both of these flows to write one rule.  Now we focus in on the Services that were found.  If you click on the ‘2 Services’ link, you’ll see which protocols ARM resolved the ports to.

arm_ms_pic14

I like creating my own Services and Service Groups.  This way, if we make changes, we know what could possibly be impacted.  So with this, we’re going to perform a similar workflow to the one above: Create a Service and Replace, then Create a Service Group and Replace.  This will add the custom Service we create to the custom Service Group we create.

arm_ms_pic15arm_ms_pic16arm_ms_pic17arm_ms_pic18

Leaving us with this final result.

arm_ms_pic19

We’re now ready to create our rule from the two flows.  Selecting both flows, we right-click and select ‘Create Firewall Rule’.

arm_ms_pic20

We’ll see some unresolved items in the list, but we’re going to make modifications so that this one rule covers both flows.  We’re going to:

  • Remove the NTP-01a as the Destination leaving the INFRA-SG-ALL
  • Remove the UDP 123 Service leaving the INFRA-SVG-ALL
  • Remove the vNIC for NTP-01a and add EMR-SG-ALL and INFRA-SG-ALL

arm_ms_pic21arm_ms_pic22

Once we click on OK, we’ll have a new rule created in the ‘Firewall Rules’ tab.

arm_ms_pic23

Double check our rule and make sure it looks correct and then hit ‘Publish’.  We’re then prompted to create a section name and select where to insert it.

arm_ms_pic24

We should get confirmation that the publish was successful and we can go to the ‘Firewall’ interface and verify our work.

arm_ms_pic25

Everything looks good!

Now we can focus on the EMR system.  Talking with our applications team, we have determined that the VMs in question are:

  • EMR-WEB-01a
  • EMR-DB-01a

Now that we know the names of the VM’s, we can go into the Flow Monitoring section of the NSX Management console and select Application Rule Manager.  From here we can Start New Session.

We’ll select the VMs we discussed above, EMR-WEB-01a and EMR-DB-01a, and add them to the session.  This will start the collection of flow data from the vNICs of these VMs and post it in the View Flows table.

From here we will create a new session, calling it EMR MONITOR, and add our VMs to the session.  Once we hit OK, the collection will start, and we can stop it when we’re comfortable with the period of time we’ve collected for.  In this instance, I’m going to collect data for 15 minutes.  I have automated tasks running that generate traffic to the EMR every 5 minutes, so this should be long enough for this demonstration.

arm_ms_pic26

Once we have stopped the collection, we should see the View Flows table has several flows showing.  ARM will attempt to de-duplicate repeated flows as much as possible.

arm_ms_pic27

Analyze the output.

arm_ms_pic28

Now that ARM has analyzed our flow data and matched what it can, we can see a few things:

  • Direction – This tells us the direction of the flow relative to the monitored VMs (see the sketch after this list)
    • IN – Inbound
    • OUT – Outbound
    • INTRA – Between the monitored VMs
  • Source – Resolved to a VM name if the VM falls under the scope of the vCenter/NSX Manager relationship. If an IP address is shown, it represents an external system that is not resolvable within the vCenter/NSX Manager relationship.
  • Destination – Resolved to a VM name if the VM falls under the scope of the vCenter/NSX Manager relationship. If an IP address is shown, it represents an external system that is not resolvable within the vCenter/NSX Manager relationship.
  • Service – Resolved against the services that exist within NSX. If more than one service is shown, the user will need to manually pick the service it correlates to, because NSX has more than one service definition with that corresponding port number. If you want to create a custom Service or Service Group, which we will do shortly, you can do that from here as well.
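
Here’s a small illustrative sketch, not ARM’s actual logic, of how that Direction column relates to the set of VMs in the monitoring session.

```python
# Illustrative only: how a flow's direction relates to the set of VMs in the
# monitoring session. This mirrors the Direction column described above.
MONITORED_VMS = {"EMR-WEB-01a", "EMR-DB-01a"}

def classify(source, destination, monitored=MONITORED_VMS):
    src_in = source in monitored
    dst_in = destination in monitored
    if src_in and dst_in:
        return "INTRA"   # between two monitored VMs
    if dst_in:
        return "IN"      # inbound to a monitored VM from outside the session
    if src_in:
        return "OUT"     # outbound from a monitored VM to something outside
    return "UNRELATED"

print(classify("192.168.0.36", "EMR-WEB-01a"))   # IN
print(classify("EMR-WEB-01a", "EMR-DB-01a"))     # INTRA
print(classify("EMR-WEB-01a", "NTP-01a"))        # OUT
```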

With the information shown we have the following:

  • Inbound 192.168.0.99 > EMR-WEB-01a TCP 8080
    • 192.168.0.99 is identified as the Management Jumpbox for the infrastructure.
  • Inbound 192.168.0.36 > EMR-WEB-01a TCP 8080
    • 192.168.0.36 is identified as the Clinician Desktop
  • *Inbound HL7-01a > EMR-DB-01a TCP 3306 – Skipping this for now
  • *Outbound EMR-DB-01a > NTP-01a UDP 123 – Rule ID 1028
  • *Outbound EMR-WEB-01a > NTP-01a UDP 123 – Rule ID 1028
  • Intra EMR-WEB-01a > EMR-DB-01a TCP 3306

What you may notice, with the Rule ID column unhidden, is that the two flows we already built rules for in the Infrastructure section above are showing Rule ID 1028, which is the Rule ID for that Infrastructure rule.  As I said before, ARM can show you any flow and the corresponding rule in the DFW that it’s hitting.  All of the rest of the flows are hitting the Default Allow Rule ID 1001, just as the Infrastructure flows did before we started.

arm_ms_pic29

What this means for us is that we’re already covered by Rule ID 1028 for NTP Services, so we can hide these two flows.  We’re also going to hide the HL7-01a flow so we can see another interesting feature that ARM can show us later.

arm_ms_pic30

This leaves us with three flows we need to write rules for.  Using the same methodology as above, we’re going to create our naming table and apply it to the VMs, Services, Security Groups, and Service Groups for the EMR.

ms_table_pic2

The good news is that we already created Security Groups for the EMR system when we created the Infrastructure rules.  We just need to ‘Replace with Membership’ on each VM with the appropriate Security Group.

arm_ms_pic31

When you select ‘Replace with Membership’, ARM will show you all the Security Groups that the VM belongs to.  You’ll notice the differences when you change EMR-WEB-01a and EMR-DB-01a.

arm_ms_pic32arm_ms_pic33

We do, however, need to create IP Sets for the desktop systems that we’re going to allow to access the EMR.  We do this by clicking on the gear and selecting ‘Create IP Set and Replace’.

arm_ms_pic34arm_ms_pic35arm_ms_pic36

We then create the EMR-SG-ACCESS Security Group and add these two IP Sets to it and replace.

arm_ms_pic37arm_ms_pic38

We need to resolve our Services now as well.  We have two unresolved ports of 3306 and 8080.  Again, we’re going to refer to the table and create new Services and Service Groups for these.

arm_ms_pic39arm_ms_pic40arm_ms_pic41

There’s no reason to resolve the service for the second IP-based flow; the services are the same, and since the two flows are fundamentally alike we’re going to create one rule for them like we did above.  Time to swap in Service Groups.

arm_ms_pic42arm_ms_pic43arm_ms_pic44

We’re now left with this output.

arm_ms_pic45

Let’s create our rules.  We’re going to select both of the IP-based flows and combine them.

arm_ms_pic46arm_ms_pic47

Cleaned up, it looks like this:

arm_ms_pic48

Now to create our INTRA communication rule between the EMR-WEB-01a and EMR-DB-01a VMs.

arm_ms_pic49

Cleaned up, it looks like this:

arm_ms_pic50

We now have our two rules that we can publish to the DFW for the EMR application.

arm_ms_pic51

We’ll create a new section called ‘EMR’ and place it below our Infrastructure Section.

arm_ms_pic52

Double check our work.

arm_ms_pic53

Things look correct.  Let’s add in our Block Rules so we can do our testing to make sure we did this correctly and it meets the requirements.

arm_ms_pic54

Here’s the list of requirements the customer gave us:

  • Allow EMR Client Application to communicate with EMR Web/App Server
  • Allow EMR Web/App Server to communicate with EMR Database Server
  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.
  • Allow bi-directional communication between the Infrastructure Services and the entire EMR Application

With ARM, we can check very quickly that these requirements are hitting the correct rules.  We’re going to create another monitor session to monitor EMR-WEB-01a, EMR-DB-01a, and NTP-01a.  When we see the flows come in, we should see the rules that they hit.  This will quickly tell us whether things are working correctly.  I’m also going to generate traffic that should hit the block rules because it doesn’t meet the requirements.  Let’s start!

arm_ms_pic55

arm_ms_pic56

We can see flows hitting the Rule IDs to the right, which is good.  We’re going to stop the collection and do an analysis on the flows captured.

arm_ms_pic57

We can see the flows captured and the Rule IDs associated with them.  Nothing appears to be hitting the Default Allow Rule ID 1001, which is good.  The application is functional and NTP is accepting connections from the VMs.  Let’s take a look at the first requirement:

  • Allow EMR Client Application to communicate with EMR Web/App Server

arm_ms_pic58arm_ms_pic59

You can see that both of the IP-based flows are captured under Rule ID 1030, which, when you click on it, shows the correct DFW rule.  Let’s check the next requirement.

  • Allow EMR Web/App Server to communicate with EMR Database Server

arm_ms_pic60

You can see, by clicking on the Rule ID of 1029 for the EMR-WEB-01a to EMR-DB-01a flow, that the traffic is hitting the correct rule.  Let’s check the next requirement.

  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.

arm_ms_pic61

When we look for block-based rules, we’re looking for anything hitting a block rule in the DFW.  In this case, Rule ID 1032 is a Block rule, and we have two flows that are hitting it.

arm_ms_pic62

When we drill into Rule ID 1032, we see that it is indeed our Block rule.  Let’s look at the last requirement.

  • Allow bi-directional communication between the Infrastructure Services and the entire EMR Application

arm_ms_pic63

From this picture, we can see that there are two flows from the EMR VMs to the NTP server, both hitting Rule ID 1028.  When we open up that rule, we see it is indeed the correct rule allowing NTP traffic to flow from the EMR to the NTP server.

arm_ms_pic64

This completes the requirements set forth by the customer to secure the EMR application.

As you can see, ARM is very adept at helping simplify micro-segmentation with VMware NSX.  Taking the concepts we learned and leveraged with Log Insight, we can remove quite a bit of the manual process involved in resolving services and VM IP addresses to their names.  I was able to perform this process, outside of flow capture time, in about 15-20 minutes from start to finish, and that’s what makes ARM another very useful toolset for reducing the complexity of micro-segmentation.

The VMware NSX Platform – Healthcare Series – Part 4.1: Micro-segmentation Practical

In the previous blog post, we discussed how the concept of micro-segmentation provides a Zero-Trust security model for Healthcare applications.  We also discussed how that model applies to security around a Healthcare organization’s EMR/EHR.  In this post we’re going to take those concepts and actually apply them to a Healthcare lab environment to show how we functionally achieve this outcome.  With some applications, however, organizations are not privy to the details of the communication flows for the application.  In this post, we’ll be leveraging VMware tools to figure out how the application actually communicates so we can write our rule sets.  We’ll be focusing on using the NSX DFW and Log Insight to create our rules for the application.  The next blog post will cover using Service Composer and vRealize Network Insight to build rules at scale.

Picture1.png

Use case – Provide a Zero Trust security model around a Healthcare Organization’s EMR system.  Facilitate only the necessary communications both to the application and between the components of the application.

  • Allow EMR Client Application to communicate with EMR Web/App Server
  • Allow EMR Web/App Server to communicate with EMR Database Server
  • Block any unknown communications except the actual application traffic and restrict access to the EMR application to only a Clinician Desktop system running the EMR Client Application.
  • Allow bi-directional communication between the Infrastructure Services and the entire EMR Application – We’re going to skip this part for now as we’ll add it in later when we expand the use case.

Problem – The Healthcare organization does not have a clear understanding of how the application communicates within and outside the organization.  Organization wants to lock down the EMR application so that only known good workstations can access.

Technology used –   

Windows Clients:

  • Windows 10 – Clinician Desktop (my jump box system)
  • Windows 7 – Non-Clinician Desktop (random system on the network)

VMware Products

  • vSphere
  • vCenter
  • NSX
  • vRealize Log Insight

Application in question –

Open Source Healthcare Application:

  • OpenMRS – Open Source EMR system
    • Apache/PHP Web Server
    • MySQL Database Server

The EMR system I have deployed, OpenMRS, consists of two systems: the Web/App server and the Database server.  The Web/App server runs queries and application-specific functions against the Database server.

I’m not going to go through how to deploy and install the VMware NSX Platform.  Suffice it to say, that’s covered very well in many other blog posts and deployment is rather trivial.  In this environment, I have 3 ESXi servers in my Compute1 Cluster.  All three hosts are prepared with the VMware NSX Distributed Firewall (DFW) software bundle.  Once the NSX DFW has been deployed, all virtual machines that reside on those hosts are covered by the Layer 2-4 stateful Distributed Firewall that NSX provides at the virtual network card level of each virtual machine.

To start the process of locking down the communications between the systems, we first need to come up with our methodology for doing so.  Since the NSX Manager is connected to the vCenter Server, we can use vCenter-type objects to build our rules; in this case we’re going to use VM names.  When we move these systems to different networks and the rules still work, this will make more sense and show the agility of the VMware NSX Platform.

Picture2.png

For applications where we’re unsure how they interact, there are tools and methods to help build your rule sets for the NSX DFW.  I’m going to use vRealize Log Insight and also vRealize Network Insight.  Both provide very granular ways to help build your rule sets and offer slightly different approaches overall.  When you write your rules, you can leverage either the NSX DFW or Service Composer.

First, let’s start with the methodology I use to build rules using the NSX DFW and Log Insight:

  • Create NSX Security Groups and Security Tags for each of the different ‘tiers’ of the application in question. The Security Tags will be applied to the VMs of each tier, and the Security Tag will be the criterion which places the VMs into the Security Group.  There are many different ways to include VMs in a Security Group, and tags are just one; we’ll be looking at more in future posts.  (The tagging workflow can also be scripted; see the sketch after this list.)  The IP addresses are there for reference purposes, not as criteria for inclusion. 
    • Security Group – Application
      • EMR-SG-WEB
      • EMR-SG-DB
    • Security Tag – Application
      • EMR-ST-WEB
        • EMR-WEB-01a – 192.168.0.25
      • EMR-ST-DB
        • EMR-DB-01a – 192.168.0.27
  • Create an NSX Security Group for the entire application. This will be used to nest the different tier Security Groups into.
    • Security Group
      • EMR-SG-APP
        • EMR-SG-WEB
        • EMR-SG-DB
  • Create firewall rules in the NSX DFW interface. We start with very general rules for the entire application.  Once we take a look at the flows within Log Insight, we can write more specific rules for the application that comprise only the necessary ports and protocols for the application to function.
    • Allow All Inbound Log
      • Rule ID 1008 – DFW
    • Allow All Outbound Log
      • Rule ID 1007 – DFW
    • Block All Inbound Log
      • Rule ID 1006 – DFW
    • Block All Outbound Log
      • Rule ID 1005 – DFW
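
Since the Security Tag workflow above is something you may repeat for many applications, it’s worth noting it can be scripted as well.  The following is a minimal sketch, not a definitive implementation: the endpoints are the NSX-v Security Tag calls as I recall them, the NSX Manager host and credentials are placeholders, and the VM managed object reference (vm-101) is hypothetical, so check the API guide for your version before relying on any of it.

```python
# Rough sketch only -- endpoints and payloads are from memory of the NSX-v
# API guide; verify against your version. Hostname, credentials, and the
# VM managed object reference (vm-101) are placeholders.
import requests

NSX_MGR = "https://nsxmgr-01a.corp.local"
AUTH = ("admin", "VMware1!")
HEADERS = {"Content-Type": "application/xml"}

def create_security_tag(name):
    """Create a Security Tag and return its objectId."""
    body = f"<securityTag><objectTypeName>SecurityTag</objectTypeName><name>{name}</name></securityTag>"
    r = requests.post(f"{NSX_MGR}/api/2.0/services/securitytags/tag",
                      data=body, headers=HEADERS, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.text.strip()

def tag_vm(tag_id, vm_moref):
    """Attach the Security Tag to a VM by its vCenter managed object reference."""
    r = requests.put(f"{NSX_MGR}/api/2.0/services/securitytags/tag/{tag_id}/vm/{vm_moref}",
                     auth=AUTH, verify=False)
    r.raise_for_status()

web_tag = create_security_tag("EMR-ST-WEB")
tag_vm(web_tag, "vm-101")   # placeholder moref for EMR-WEB-01a
```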

Security Groups

Picture3.png

Security Tags

Picture4.png

DFW Rules

Picture5.png

This methodology allows us to see all the traffic coming in, going out, and moving between the application components, and it pipes it all to our logging application.  As the application generates traffic, we’ll be able to use either Log Insight or Network Insight to see it.  We simply apply the Security Tag to VMs and they’re placed into the appropriate Security Groups, and the NSX DFW rules are subsequently applied.

Now that we have the application traffic being logged and the rules placed into the NSX DFW, we will bring up vRealize Log Insight and take a look at the traffic patterns.

Picture6.png

As we can see, we’re getting traffic connection hits on the two rules we should be getting hits on: 1007 and 1008, which are the Allow All Inbound/Outbound Log rules.  This is exactly what we should be seeing.  When we dig into the connections and do an Interactive Analysis on each of the hits on these two rules, we see the following in the Field Table:

picture7

We can see that our rules are working: when traffic is generated to the application, we see the connections being established within the application and to the application.  With Log Insight we’re constrained to seeing only the IP addresses in the logs, not the DNS names.  We can now pick out the connections to start writing our more specific rules (a small parsing sketch follows the list):

  • 192.168.0.99 > 192.168.0.25 over TCP 8080
    • 192.168.0.99 is the Clinician Desktop that’s accessing the EMR WEB server. Since we were unsure who was accessing the EMR and we want to lock it down so only specific machines can access it, we’ll leverage an IP Set here since this machine is outside the NSX environment.
  • 192.168.0.25 > 192.168.0.27 over TCP 3306
  • 192.168.0.27 > 192.168.0.25 over TCP 3306
    • This is an example of a stateful TCP flow. The application server established a connection over 3306 to the database server and the database server responded back on 3306.  Since the NSX DFW is stateful, we can write one rule to allow the application server to talk to the database over 3306, and the database can respond back on the same port without our needing to open additional ports or write any rules for the database server for that communication to occur.
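
Because Log Insight gives us raw log entries with IP addresses, I sometimes find it handy to pull the relevant fields out of the DFW packet log lines programmatically.  The sketch below is purely illustrative: the sample lines only approximate the dfwpktlogs format, which varies by NSX release, so adjust the pattern to match the fields you actually see in your own Interactive Analysis output.

```python
# Illustrative only: pulling the fields we care about (rule ID, action,
# protocol, source/destination IP and port) out of DFW packet log lines.
# The sample lines below only approximate the dfwpktlogs format; check your
# own Log Insight output, as the exact format varies by NSX release.
import re

sample_logs = [
    "dfwpktlogs: INET match PASS domain-c7/1007 IN 60 TCP 192.168.0.99/52144->192.168.0.25/8080 S",
    "dfwpktlogs: INET match PASS domain-c7/1007 IN 60 TCP 192.168.0.25/49313->192.168.0.27/3306 S",
]

pattern = re.compile(
    r"match (?P<action>\w+) \S+/(?P<rule_id>\d+) (?P<dir>\w+) \d+ (?P<proto>\w+) "
    r"(?P<src>[\d.]+)/(?P<sport>\d+)->(?P<dst>[\d.]+)/(?P<dport>\d+)"
)

for line in sample_logs:
    m = pattern.search(line)
    if m:
        f = m.groupdict()
        print(f"Rule {f['rule_id']} {f['action']}: {f['src']} -> {f['dst']} {f['proto']}/{f['dport']}")
```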

We’ll quickly add a new Security Group and IP Set for the Jump Box system so we can build a rule that only allows this system to communicate with the EMR.  I have another Windows 7 desktop called CWS-01a, 192.168.0.33, from which we will attempt a connection to the EMR to show how we can restrict who can actually reach the EMR application’s web interface.

Picture8.png

Before we write our rules, note that port 8080 does not match anything in the NSX Services list specifically for this application.  There is, however, a listing for 3306, MySQL.  Given that this is the most critical Healthcare application, I recommend creating your own Services and Service Groups within NSX to accommodate these ports regardless of whether or not they’re in the Services list.  This provides a visual cue to anyone looking at modifying a service in NSX.  They will immediately know that this service is in use by the EMR and that they should be very careful about making changes or removing it from NSX.

Services and Service Groups behave in a similar fashion to Security Groups.  You simply create the Services you want and add them as members of the Service Group.  You can find a post I did a while back on how to go through this process here.  I’ve already gone ahead and built and added them.
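
For completeness, here’s what building those custom Services and Service Groups might look like scripted instead of clicked through.  Treat it as a sketch under assumptions: in the NSX-v API, Services are modeled as "applications" and Service Groups as "applicationgroups", the endpoints and XML are from memory, and the NSX Manager host and credentials are placeholders, so verify them against the API guide for your release.

```python
# Rough sketch only -- verify endpoints and XML against the NSX-v API guide
# for your release. Host and credentials are placeholders.
import requests

NSX_MGR = "https://nsxmgr-01a.corp.local"
AUTH = ("admin", "VMware1!")
HEADERS = {"Content-Type": "application/xml"}

def create_service(name, protocol, port):
    """Create a custom Service (protocol + port) and return its objectId."""
    body = (f"<application><name>{name}</name>"
            f"<element><applicationProtocol>{protocol}</applicationProtocol>"
            f"<value>{port}</value></element></application>")
    r = requests.post(f"{NSX_MGR}/api/2.0/services/application/globalroot-0",
                      data=body, headers=HEADERS, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.text.strip()

def create_service_group(name):
    """Create an empty Service Group and return its objectId."""
    body = f"<applicationGroup><name>{name}</name></applicationGroup>"
    r = requests.post(f"{NSX_MGR}/api/2.0/services/applicationgroup/globalroot-0",
                      data=body, headers=HEADERS, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.text.strip()

def add_to_service_group(group_id, service_id):
    """Nest a Service inside a Service Group."""
    r = requests.put(f"{NSX_MGR}/api/2.0/services/applicationgroup/{group_id}/members/{service_id}",
                     auth=AUTH, verify=False)
    r.raise_for_status()

web_svc = create_service("EMR-SVC-TCP-8080", "TCP", 8080)
web_svg = create_service_group("EMR-SVG-WEB")
add_to_service_group(web_svg, web_svc)
```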

picture9

picture10

When we take all this information and put it into NSX DFW rules we get the following configurations:

Picture11.png

Let’s break down this output and explain this in simple terms.

  • Rule 1 – Restrict EMR Access
    • EMR-SG-ACCESS (IP SET = 192.168.0.99) > EMR-SG-WEB over EMR-SVG-WEB (TCP 8080) is Allow and Applied To EMR-SG-WEB

Functionally, we are allowing a system outside the NSX environment, my jump box with an IP of 192.168.0.99, access to the EMR-WEB-01a system over TCP port 8080.  We’re applying this rule only to the EMR-SG-WEB Security Group.  This means that only EMR-WEB-01a will get this rule in its firewall, which avoids pushing rules to VMs where they’re not needed.  The power of NSX in this scenario is that if the EMR system adds another WEB server that needs to talk to the DB, we can simply apply the EMR-ST-WEB Security Tag and that system will immediately be put into the EMR-SG-WEB Security Group and the above NSX DFW rules will apply to that machine too!  Consistently applying security through operational simplicity is just one of the many benefits NSX provides.
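
To make the ‘Applied To’ behavior concrete, here’s a minimal sketch, not NSX code, of how rules land only on the VMs whose Security Groups appear in the Applied To field, and how tag-driven membership extends that automatically.  EMR-WEB-02a is a hypothetical second web server used only for illustration.

```python
# A minimal sketch of the "Applied To" idea described above: rules are only
# programmed onto VMs whose Security Group appears in the rule's Applied To
# field, and tag-driven membership extends that automatically.
tag_assignments = {
    "EMR-WEB-01a": {"EMR-ST-WEB"},
    "EMR-WEB-02a": {"EMR-ST-WEB"},   # hypothetical second web server
    "EMR-DB-01a":  {"EMR-ST-DB"},
}

group_criteria = {"EMR-SG-WEB": "EMR-ST-WEB", "EMR-SG-DB": "EMR-ST-DB"}

rules = [
    {"name": "Restrict EMR Access", "applied_to": {"EMR-SG-WEB"}},
    {"name": "Web to DB",           "applied_to": {"EMR-SG-WEB", "EMR-SG-DB"}},
]

def groups_for(vm):
    """Resolve a VM's Security Groups from its Security Tags."""
    return {g for g, tag in group_criteria.items() if tag in tag_assignments[vm]}

for vm in tag_assignments:
    applied = [r["name"] for r in rules if r["applied_to"] & groups_for(vm)]
    print(f"{vm}: {applied}")
# Tagging EMR-WEB-02a with EMR-ST-WEB is enough for it to receive both rules.
```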

We then applied a similar rule to only allow EMR-WEB-01a to communicate with EMR-DB-01a over 3306 and applied those rules to each of the Security Groups to reduce rule sprawl.  Going back to the discussion about stateful firewalling above, we only needed to add this one rule; when the WEB system communicates over 3306 to the DB, a separate rule for the response traffic back to the WEB system is not necessary.

Finally, we have the last 4 rules for logging inbound and outbound communications for the entire application.  Now that we have more granular rules and have reduced the number of open ports down to the two necessary, we can turn off our Allow All rules and ensure the application is functioning.  At this point we’re not seeing any communication in or out of the application to any other systems, so we can comfortably test the application.  Let’s go back to our use cases and requirements to see if we fulfilled them properly:

  • Allow EMR Client Application to communicate with EMR Web/App Server
    • Built EMR-SG-ACCESS > EMR-SG-WEB over 8080
  • Allow EMR Web/App Server to communicate with EMR Database Server
    • Built EMR-SG-WEB > EMR-SG-DB over 3306
  • Block any unknown communications except the actual application traffic and restrict access to the application to only a Clinician Desktop system running the EMR Client Application.
    • Restricted access to only 192.168.0.99
      • Test attempt to login from another system, 192.168.0.33, and verify that the access is blocked and hitting Rule ID 1006 that blocks unauthorized access to the application.
      • Test attempt to SSH from Web to DB
  • Allow bi-directional communication between the Infrastructure Services and the entire EMR Application – We’re going to skip this part for now as we’ll add it in later when we expand the use case.
    • We’re going to follow up on this requirement in the next post as we expand the environment out further to include more applications.

I’ve turned off the Allow All rules, and we can now test our results to ensure the application works and that we fulfill the requirements.  Disabling the rules turns them grey.

picture12

Allow EMR Client Application to communicate with EMR Web/App Server – Rule 1009

Allow EMR Web/App Server to communicate with EMR Database Server – Rule 1010

Picture13.png

picture15picture14

We can access the application from the Clinician Desktop as expected, and the application is working; otherwise we would not be able to log in and perform a patient lookup.

Let’s see if the block requirements are working properly.

Block any unknown communications except the actual application traffic and restrict access to the application to only a Clinician Desktop system running the EMR Client Application.  Rule – 1006

  • Restricted access to only 192.168.0.99
    • Test attempt to login from another system, 192.168.0.33, and verify that the access is blocked and hitting Rule ID 1006 that blocks unauthorized access to the application.
    • Test attempt to SSH from Web to DB

Picture16.png

Trying to SSH from WEB to DB

Picture17.png

Let’s look in Log Insight to confirm

Picture18.png

Looks like we have covered all the bases with the rule sets we built.  The application is functional and the appropriate systems are able to access the EMR application.

I know this post was fairly long for an application that only had two ports, but in the next post we’re going to be adding more complexity to this environment.  The fundamentals are sound, and we’ll be applying them to the rest of the applications we introduce.  Remember, the Healthcare EMR is connected to many ancillary systems in a Healthcare organization; it doesn’t just function on its own.  We’ll be adding in systems that will talk to our EMR and showing how to do micro-segmentation at scale using vRealize Network Insight and then leveraging Service Composer with NSX.  This will set up the foundation for subsequent posts in the series.

The VMware NSX Platform – Healthcare Series Part 3 – Micro-Segmentation Concept

When using an application-based policy approach, security is a critical part of the application workload.  Security is just as important as how much CPU or RAM you give an application workload.  The VMware NSX Platform introduces 3 primary use cases when it comes to security for application workloads.  We’re going to focus on the first use case: Micro-segmentation and how it relates to Healthcare organizations.

picture1

A quick background on why Micro-segmentation is important and on security trends in modern data centers.  In most modern data centers, there has been a large uptick in the amount of traffic that occurs between systems rather than inbound and outbound of systems.  This is referred to as East-West traffic within the data center, versus North-South traffic in and out of the data center.  In the hardware-based world, security for East-West traffic is sometimes handled either by sending the traffic from the applications to the perimeter firewalls or by purchasing hardware appliances to put inside the data center between the applications.  This form of security works through isolation and segmentation of the applications.  You can do this at the entire application level through concepts such as Trust Zones, achieving what some call macro-segmentation, but when we place security at the workload level we achieve what’s called micro-segmentation.  Micro-segmentation facilitates a Zero-Trust security model.  Zero-Trust means that unless communications between systems are explicitly trusted, they are implicitly untrusted.  Applying micro-segmentation using a hardware-based approach creates a two-fold problem:

Lack of interior controls and security – If an organization does nothing, making no use of micro-segmentation, to secure the East-West traffic within its data center, the perimeter firewall becomes the single point of entry and security for the environment from North-South.  This type of defense still provides protection; however, once an attacker is able to break through that perimeter, unfettered movement around the inside of the environment is rather easy.  With no lateral controls, the attack surface available to attackers is enormous.

Picture2.png

Operationally infeasible and lacking scale – If an organization puts in hardware appliances or uses the external firewall to facilitate micro-segmentation for both East-West and North-South firewalling, those systems become operationally difficult to manage.  Hardware appliances are costly and only scale to a point.  Multiple user interfaces and policies don’t scale as you add new workloads or modify existing workloads in your data center, and they certainly don’t provide mobility in a virtualized environment.  As you add new applications, you may need to add more firewalls.  If a workload needs to move, you may need to move or change the rules associated with that application.  And what happens when that application needs to talk to another application?  All those rules need to change as well.  While this approach can reduce the attack surface of the application, it’s operationally infeasible to support and lacks the scale to be a long-term option for customers.

Picture3.png

How does the VMware NSX Platform provide a business value around these issues? The VMware NSX Platform uses a software-defined micro-segmentation approach applied at the Virtual Machine workload level to facilitate a Zero-Trust security model.  This security is built into the vSphere hypervisor creating a distributed and scale-out firewalling architecture.  This architecture provides kernel-level performance and scales as your organization and workload requirements increase without the need to add more specific hardware appliances to the environment.

Picture4.png

Let’s focus on Healthcare customers specifically.  A recent study by the Ponemon Institute and IBM for 2016 shows that a security breach and exposure of a patient health record now averages $355 per record.  The average total cost to an organization in the US was $7.01 million, and the average number of records breached was around 29,000.  Traditional methods of security, like those listed above, are no longer sufficient to prevent attacks.  While there is no ‘silver bullet’ for security, Healthcare organizations can take a layered approach to security that helps reduce their overall attack surface and the potential for exposure.  The VMware NSX Platform, through the use of Micro-segmentation, allows Healthcare organizations to accomplish this.

The VMware NSX Platform can provide an application-based security policy around the critical and patient information sensitive applications within the data center.  This provides us the ability to effectively control all communication paths both in and out of the application, thus reducing the attack surface of that application immensely.

The EMR/EHR system for Healthcare organizations represents a mission-critical application and houses the majority of patient record information.  For this example, we’re going to look at a typical installation of an Electronic Medical/Health Records (EMR/EHR) system and how traffic in, out, and between the servers within the application is secured.  These systems can be comprised of several Windows/Linux and appliance-based systems.  Below is a typical example of the layout of an EMR/EHR system server architecture.  Most consist of a client application that connects to the Application Server, which has a connection to a Database Server where the data is stored.

Picture5.png

First, let’s take a look at traditional security approaches to East-West and North-South traffic isolation.  You’ll see below that for North-South traffic, the end user workstation could traverse either a perimeter firewall or an internal data center firewall before it gets to the presentation layer of the EMR/EHR.  From an East-West perspective, to secure communications between the servers within the application and Shared Services, the traffic will need to traverse those same firewalls to either allow or block the communications necessary for the application to function.  This creates a hairpinning effect that is operationally inefficient.

Picture6.png

With the Physical Firewall Policy, we’re now sending all the traffic through the external firewall to do the segmentation for the applications.  This could also be an internal firewall between the applications; nevertheless, the premise stands.  Sending all the traffic through that firewall will not scale out as your workloads increase, and this is just one application in this environment.  Most Healthcare organizations have hundreds of applications they need to secure.

With the VMware NSX Platform, we instantiate a stateful, Layer 2-4 firewall at the virtual machine’s virtual network card level, which allows us to create security policies based on the application that can secure the application in the host itself, rather than traversing to an external firewall.  This reduces the dependency on the external and internal physical firewalls for allowing and disallowing traffic in, out, and between the EMR/EHR system and provides a much more operationally efficient configuration for both network and operational resources.

Picture7.png

As you can see, the VMware NSX Platform has Security Policies created for each of the different application components, in this case the EMR App Server, the EMR DB Server, and the Infrastructure Services servers.  Through micro-segmentation, we can set up NSX Security Policies that only allow the traffic that needs to occur within the application to actually occur.  This enforcement is done in the hypervisor with no need to traverse to a hardware firewall device to secure the workloads.  What we see here is the following (a small sketch after the list models these allowed paths):

  • The EMR Client Application initiates a connection to the EMR App Server.
  • The EMR App Server allows inbound communications to occur with the EMR Client Application and also allows communication to the EMR DB server.
  • The EMR DB Server only allows connections inbound from the EMR App Server. This functionally secures the EMR application to only allow the communications that are necessary for the application to function, and the EMR Client Application to connect to the system securely.
  • The EMR App and DB Servers are also allowing both inbound-and-outbound, communications to the Infrastructure Services servers.
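
If it helps to see the Zero-Trust idea in the list above as data, here’s a tiny model, not NSX itself, of explicit-allow, implicit-deny evaluation.  The service and port pairings are illustrative placeholders for the actual EMR services.

```python
# A simple model (not NSX itself) of the Zero-Trust behavior in the list above:
# only explicitly allowed paths pass; everything else is implicitly blocked.
ALLOWED = {
    ("EMR Client",     "EMR App Server",          "TCP/8080"),
    ("EMR App Server", "EMR DB Server",           "TCP/3306"),
    ("EMR App Server", "Infrastructure Services", "UDP/123"),
    ("EMR DB Server",  "Infrastructure Services", "UDP/123"),
}

def evaluate(src, dst, service):
    """Return ALLOW only for explicitly trusted paths; everything else is blocked."""
    return "ALLOW" if (src, dst, service) in ALLOWED else "BLOCK"

print(evaluate("EMR Client", "EMR App Server", "TCP/8080"))   # ALLOW
print(evaluate("EMR Client", "EMR DB Server", "TCP/3306"))    # BLOCK (never allowed directly)
```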

Using the VMware NSX Platform, Healthcare organizations can implement security at a much more granular level, providing a simple way to secure their application workloads and reduce their attack surface.  Security is now implemented at the virtual machine workload level using Application-Based Policy control.  This new model scales as application workloads scale in the data center environment while consistently providing the same security posture.