Manually Register NetApp VSC on Win2k12 R2

I was doing some lab work today when I ran across an issue with Windows Server 2012 R2 and the NetApp Virtual Storage Console 5.0 installation. It appears that the VSC 5.0 installation doesn’t work out of the box or is not supported on Windows 2012 R2 just yet. Regardless of what the reason is, let’s make it work.

Run the installation for the VSC as you normally would. Select your options as necessary.

You’ll notice during the installation that the installer hangs and then displays a warning about manually opening the page for registering the VSC with vCenter Server.

Attempts to browse to the site will turn up nothing.

In the VSC 5.0 Admin guide there’s a reference to manually registering the plugin with vSphere. You don’t actually have to do the entire procedure; you only need to perform a manual setup of the SSL certificate. You’ll first need to stop the following service:

Then open an Administrative Command Prompt and run the following command:

vsc ssl setup -cn <insert vCenter Server FQDN>

This will bring up a prompt for the keystore password. By default, the first password is ‘changeit’. Enter your own password at the next prompt and you’ll be set.

You can now restart the service and attempt to register the plugin.

We should now be able to browse to the site and register the plugin with vCenter. ***Be sure to browse to ‘https://localhost:8141/Register.html’. Do not use a lowercase ‘R’ or you’ll get a 403 Forbidden page instead!***

Looks like we’re good to go.

Automating NetApp SnapMirror and PernixData FVP Write-Back Caching v1.0

I wanted to toss up a script I’ve been working on in my lab that automates transitioning SnapMirror volumes on a NetApp array from Write Back caching with PernixData’s FVP to Write Through, so you can properly take snapshots of the underlying volumes for replication purposes. This is just a v1.0 script and I’m sure I’ll modify it more going forward, but I wanted to give people a place to start.

Assumptions made:

  • You’re accelerating entire datastores and not individual VMs.
  • The naming schemes between LUNs in vCenter and Volumes on the NetApp Filer are close.


  • You’ll need the DataONTAP PowerShell Toolkit 3.1 from NetApp. It’s community-driven, but you’ll still need a NetApp login to download it. It should be free to sign up. Here’s a link to it.
  • You’ll need to do some credential and password building first, the instructions are in the comments of the script.
  • You’ll need to be running FVP version 1.5.
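That second assumption, naming schemes being only “close”, is worth a quick illustration. Here’s a minimal sketch of fuzzy-matching datastore names to volume names; the sample names and cutoff are hypothetical, and the actual script is PowerShell rather than Python:

```python
import difflib

def match_datastore_to_volume(datastore, volumes, cutoff=0.6):
    """Pair a vCenter datastore name with the closest NetApp volume name.

    Case-insensitive fuzzy match; returns None when nothing is close enough.
    """
    lowered = {v.lower(): v for v in volumes}
    hits = difflib.get_close_matches(datastore.lower(), list(lowered), n=1, cutoff=cutoff)
    return lowered[hits[0]] if hits else None

# Hypothetical names -- substitute your real datastores and volumes.
volumes = ["vm_prod_01", "vm_dev_01", "sql_data"]
print(match_datastore_to_volume("VM_PROD_01_DS", volumes))
```

If your naming is tighter than “close”, a plain prefix or suffix strip is safer than a similarity cutoff.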

What this script does:

  • Pulls the SnapMirror information from a NetApp controller, specifically source and destination information based on ‘Idle’ status and ‘LagTimeTS’. The ‘LagTimeTS’ threshold is adjustable so you can focus on SnapMirrors that have a distinct schedule based on lag time and aren’t currently in a transferring state.
  • Passes the names of the volumes in question to the PernixData Management Server to transition them from Write Back to Write Through, then waits an adjustable amount of time for the volumes to change to Write Through and for the cache to de-stage back to the array.
  • Performs a SnapMirror update of the same volumes originally pulled and waits an adjustable amount of time for the snapshots to take place.
  • Resets the datastores back into Write Back with 1 Network peer (adjustable).
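Put together, one cycle of the script follows the sequence below. This is only a sketch in Python; the function bodies are stand-ins for the DataONTAP Toolkit and FVP cmdlet calls the real PowerShell script makes, and all names and sample values are hypothetical:

```python
import time

# Stand-ins for the DataONTAP PowerShell Toolkit / FVP cmdlets the real
# (PowerShell) script calls; names and return values are hypothetical.
def get_idle_snapmirrors(min_lag_seconds):
    """Source volumes whose SnapMirror status is Idle and lag exceeds the threshold."""
    return ["vm_prod_01", "vm_prod_02"]

def set_write_policy(datastore, policy, peers=0):
    print(f"{datastore}: -> {policy}" + (f" ({peers} network peer)" if peers else ""))

def snapmirror_update(volume):
    print(f"{volume}: snapmirror update issued")

def run_cycle(destage_wait=0.1, snapshot_wait=0.1, peers=1):
    volumes = get_idle_snapmirrors(min_lag_seconds=3600)
    for vol in volumes:                  # 1. drop datastores to Write Through
        set_write_policy(vol, "write-through")
    time.sleep(destage_wait)             # 2. let FVP de-stage cache to the array
    for vol in volumes:                  # 3. replicate clean volumes
        snapmirror_update(vol)
    time.sleep(snapshot_wait)            # 4. wait for the snapshots/transfer
    for vol in volumes:                  # 5. restore Write Back with peers
        set_write_policy(vol, "write-back", peers=peers)
    return volumes

run_cycle()
```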

Comments and suggestions are always welcome. I’m always open to learning how to make it more efficient, and I’m sure there are several ways to tackle this.

You can download the script from here.


Enabling VAAI for NFS on NetApp

A quick post about VAAI, NetApp and VMware.  I was messing around with VAAI and installing the NetApp plugin for NFS in my VCAP test environment.  This is a pretty simple process and can be automatically deployed via the NetApp VSC, if installed.  Since I’m working on making sure I can do this from the CLI, think VCAP-DCA5, I’m practicing doing it from there.  I’m using the NetApp Simulator, which is probably the best thing that NetApp could have done for its customers.  Simply amazing piece of software that offers nearly all the functionality of a NetApp filer.

I placed the NetApp_bootbank_NetAppNasPlugin_1.0-018.vib on a datastore so that I could access it from the CLI.

Easy stuff, nothing new here.  Now let’s get this thing installed from the CLI.

Looks like we got it.  Reboot the host and we’re golden.  Check vCenter to see if our Hardware Acceleration is working.

Hmmm, not working as it should. Let’s check to make sure the VIB is showing up in the VIB list.

Yeah we got it there.  What could be the problem?  Well, let’s check the NetApp to see the NFS options.

Ah ha! There’s our problem right there. There are a few options not enabled. These have to be enabled to allow VAAI functions on the NetApp for NFS datastores. So let’s enable them.

Now that they’re enabled, let’s go check our vCenter and NFS datastores again. A quick rescan of the datastores and we’re working as intended.

Or we can check it from the CLI, since we’re concerned with knowing how to check this from there anyway.

EDIT – So I was building a new controller out in the lab and I confirmed Mike’s assumption. The need for ‘options nfs.v4.enable on’ is only a requirement of the version of the NetApp simulator that I’m using. This option is NOT required on a real set of controllers.

NCDA Study Part 2.8 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section covers Qtrees.  Qtrees are another logical segregation of a volume used to hold further information within.  Think of them as directories inside a volume with files in them.  They are integral components of products like SnapMirror and necessary components of SnapVault.  They can have quotas and different backup and security styles than their volume counterparts.  You can have 4994 qtrees per volume, per NetApp system.

We’re going to perform the following labs in both the CLI and System Manager, where the capability exists in both:

  • Create Qtree
  • Copy Qtree
  • Rename Qtree
  • Move Qtree
  • Change Qtree security
  • Delete Qtree

Create Qtree – CLI

Creating a qtree is a very simple process.  You first need a volume to put the qtree in.  From ONTAP1, we’ll use the test_vol Volume and add a qtree named test_qtree.

qtree create /vol/test_vol/test_qtree
qtree status

Create Qtree – System Manager

Creating a qtree from System Manager is easy as well but does require a few more steps.  Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees

Click on Create, name the qtree appropriately, and then click Browse to locate the test_vol volume. Click OK and then Create again. This creates a default-type qtree in test_vol.

Copy Qtree – CLI

Copying a qtree is also pretty easy but can be done in a couple of different ways. Qtrees can’t natively be copied using a copy command; you can only copy them using either Qtree SnapMirror or NDMPCOPY. Qtree SnapMirror is covered in a later topic so we’ll just use NDMPCOPY. NDMPCOPY will require downtime while the copy is performed. Qtree SnapMirror can be used to sync the qtree and then cut over to it, and it is the much more elegant solution for qtrees with a large amount of data in them.

ndmpcopy /vol/test_vol/test_qtree /vol/test_vol2/test_qtree
qtree status

Copy Qtree – System Manager

You cannot copy a qtree from within System Manager

Rename Qtree – CLI

Renaming a qtree is not that hard either.  However, you can only do it from within the advanced shell in Data ONTAP, priv set advanced.

priv set advanced
qtree rename /vol/test_vol/test_qtree /vol/test_vol/test_qtree_new
qtree status

Rename Qtree – System Manager

You cannot rename a qtree from within System Manager

Move Qtree – CLI

Moving a qtree is no different than copying a qtree; the same rules apply. You can only use Qtree SnapMirror or NDMPCOPY. After you’re done, you simply delete the old qtree location. This obviously means that you’ll need to ensure you have enough room to keep both copies on the filer until you can delete the old one. There’s no reason to show how to copy it again; I already did that above. Below we’ll see how to delete a qtree, which would complete a ‘move’ process.

Move Qtree – System Manager

You cannot move a qtree from within System Manager

Change Qtree Security – CLI

By default, qtrees take on the security style of the volume they’re in. The default volume security style is determined by the wafl.default_security_style option. By default this is UNIX, which means all volumes created, and their subsequent qtrees, will use UNIX security. This can be overridden to provide NTFS, UNIX or MIXED access to a qtree. Below we’ll change the qtree security style to NTFS.

qtree status
qtree security /vol/test_vol/test_qtree ntfs
qtree status

Change Qtree Security – System Manager

Changing the security of a qtree from within System Manager is one of the only things you can actually do to a qtree from System Manager once it’s created.  Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Since we changed it to NTFS before, we’ll change it to Mixed now.

Delete Qtree – CLI

Much the same as with renaming a qtree, you can only delete a qtree from the CLI in the advanced shell.

priv set advanced
qtree delete /vol/test_vol/test_qtree

Delete Qtree – System Manager

Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Select the test_qtree and click on the Delete button.  Check the OK to delete the Qtree(s) option and click Delete.


NCDA Study Part 2.7 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This next section focuses on LUN setup and configuration. LUNs are the targets of block-level storage protocols, which includes both iSCSI and Fibre Channel. When discussing LUNs in terms of Data ONTAP, they are really nothing more than a file that sits within a volume, or within a qtree within a volume. However, this file is accessed completely differently than, say, an exported NFS volume. It is mapped to an igroup, which grants another device access to the LUN, and the file behaves just like a disk drive. This is a pretty dumbed-down definition of a LUN. I’m not discussing multipathing, which is something that you absolutely need; more in-depth discussion will occur in the SAN section. For now, this is all basic configuration of LUNs on a NetApp storage device. NetApp recommends a 1:1 relationship between volumes and LUNs.

We’re going to perform the following labs in both the CLI and System Manager, where the capability exists in both.

  • Configure igroup for LUN connectivity/Add LUN to igroup
  • Create/Expand/Delete/Rename/Move LUN
  • Discuss Geometry issues with LUNs

Before we start any configuration, to perform the majority of these labs we’ll need to have an initiator and a target so we can see how a LUN shows up.  We’re going to do this with iSCSI since the simulator doesn’t have Fibre Channel capabilities and we don’t have an FC network either.  We’re going to use the following parts of the lab:

  • Student VM – added an extra NIC on the 192.168.175.x/24 network for iSCSI traffic. (Typically iSCSI traffic would be on a separate VLAN altogether, but this still does what we need it to do and also provides the necessary NIC that Windows Server 2008 R2 needs to run the MS iSCSI initiator over.) I gave the second NIC the IP address of so that we can use that as the initiator IP address in the igroup configuration.
  • CentOS VM – added an extra NIC on the 192.168.175.x/24 network for iSCSI traffic.  I gave the second NIC the IP address of so that we can use that as the initiator IP address on the igroup configuration.
  • ONTAP1

To setup the iSCSI initiator within Windows Server 2008 R2, you need to perform the following steps:

Log in to the VM and in the search bar type ‘iscsi’. You’ll see two options; you can select either.

You’ll be shown a warning message about the iSCSI service not running.  Click Yes to start the service and bring up the initiator configuration menu.

To continue we’ll need a bit of information from the NetApp, i.e. the target IQN of the device.  First confirm that the iSCSI service is running and then display the node name.  This is the IQN we need to verify we are connected to the correct device.

iscsi status
iscsi nodename

Now that we have ensured that iSCSI is running and we know the target name, we can finish the Windows configuration. In the iSCSI configuration menu, on the Targets tab, we can put in the IP address of the ONTAP1 filer. Press Quick Connect and we should be connected. Verify the IQN is correct. Click Done once verified.

Since we haven’t configured any disks yet, we won’t see anything show up in the Devices tab. We will, however, need the IQN from the initiator to configure the igroup on the NetApp. This is under the Configuration tab under Initiator Name. Typically this name is a combination of the initiator’s computer name and a randomly generated alphanumeric combination at the end. For simplicity I shortened it to just ‘’. This makes configuration a bit easier. Make sure you reboot the device after changing the name. In a production environment, this value has to be unique. Typically there’s no reason it shouldn’t be, as each machine should be named differently, but it could happen. So be careful changing the initiator’s name.

Now we’re going to configure the iSCSI initiator for CentOS. You can use this guide to help with the configuration. Here’s exactly what I did from the terminal window in the CentOS VM:

# su root
# yum install iscsi-initiator-utils
# /etc/init.d/iscsi start
# ping
# iscsiadm -m discovery -t sendtargets -p
Starting iscsid:                                                                   [   OK   ],1000
# cat /etc/iscsi/initiatorname.iscsi

Yahtzee!  It works and is connected.  Now that we have both initiators started and connected, we can go ahead with creating the igroup on the NetApp and the LUNs that we will associate to each device.

Configure igroup for LUN connectivity – CLI

This is pretty easy to do. We’re going to create two igroups, one for our Windows machine and one for our Linux machine, then verify that the groups are created.

igroup create -i -t windows
igroup create -i -t linux

Add LUN to igroup – CLI

Confirm the names of the LUNs we’re going to add to the igroup

lun show

We have two specific LUNs, each configured with the appropriate OS type and 3GB in size. We can also see that they differ slightly from the other two LUNs in the display: the top two do not show as ‘mapped’. This means they are not added to any igroup, which is what we’re about to do, and you’ll see the change. Now we can add them to the igroup, then display the mappings to ensure they are correctly configured.

lun map /vol/STUDENT/STUDENT_lun
lun map /vol/CENTOS/CENTOS_lun
lun show -m

Now we can rescan for changes on the Windows and CentOS machines to see the volumes show up for us.  Format them and use them.

Open the iSCSI Initiator Properties and on the Targets tab click on Devices.  This should show us our new LUN attached to the system.

Now we just need to open up Disk Management and Rescan if necessary to see the volume show up.  Format it and we’re all set.

We can also verify from the CLI that we’re going across the second NIC that we put on the VM.

iscsi session show -v

All looks good. Now we can do the Linux machine. This is pretty simple: you just need to restart the iSCSI daemon and it will send a discovery and pull in the LUN.

su root
/etc/init.d/iscsi restart
fdisk -l

That’s it. You can see that the 3GB volume is now showing up in fdisk.
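As a side note, the mapped/unmapped distinction that ‘lun show’ reports lends itself to scripting. Here’s a rough Python sketch that filters unmapped LUN paths out of ‘lun show’-style output; the sample text is approximate, not verbatim Data ONTAP output:

```python
# Filter unmapped LUNs from `lun show`-style output. The SAMPLE text is
# approximate and illustrative, not verbatim Data ONTAP output.
SAMPLE = """\
/vol/test_vol/lun0        3g (3221225472)   (r/w, online)
/vol/test_vol2/lun1       3g (3221225472)   (r/w, online)
/vol/STUDENT/STUDENT_lun  3g (3221225472)   (r/w, online, mapped)
/vol/CENTOS/CENTOS_lun    3g (3221225472)   (r/w, online, mapped)
"""

def unmapped_luns(lun_show_output):
    """Return LUN paths whose status line lacks the 'mapped' flag."""
    result = []
    for line in lun_show_output.splitlines():
        if not line.strip():
            continue
        path = line.split()[0]           # first column is the LUN path
        if "mapped" not in line:
            result.append(path)
    return result

print(unmapped_luns(SAMPLE))
```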

Configure igroup for LUN connectivity – System Manager

I’m not going to go through how to do it with Linux as it’s no different than doing it for Windows using System Manager.

Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on LUNs.  In the LUNs section, there is an Initiator Groups tab.

Click on Create, name the group STUDENT, select the operating system Windows, and the type iSCSI.

Click on the Initiators tab, put in the IQN of the initiator for the STUDENT machine, and click Create.

Add LUN to igroup – System Manager

Login to System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on LUNs.  In the LUNs section, there is a LUN Management tab.

Click on the STUDENT_lun LUN and click on Edit. There is an Initiator Groups tab within Edit LUN. This will bring up all the igroups whose type matches the operating system type of the LUN. In other words, if the LUN is a Windows LUN and the igroup is Windows, you’ll see the Windows initiator groups. Check the Map box for STUDENT and click Save and Close.

From here you can create a new igroup if you want, show all the groups and even assign a custom LUN ID for the LUN.  If you don’t select a LUN ID, ONTAP will do it for you.


That takes care of that.

Create LUN – CLI

Creating a LUN from the CLI can be done two different ways: you can use ‘lun create’ or ‘lun setup’. The ‘lun setup’ command takes you step by step through the process, while ‘lun create’ expects that you know the steps.

We’ll create a new LUN, 1GB in size, Windows 2008 capable, residing in /vol/test_vol3, space-reserved, attached to igroup STUDENT, with a LUN ID of 1, and named ‘setuplun’

The ‘lun setup’ process:


Open up the STUDENT machine and rescan for the 1GB LUN.  That’s it.

Expand LUN – CLI

Expanding a LUN is a simple process.  You first need to ensure that the encapsulating volume has the appropriate room for the LUN to expand into.  After that, it’s a simple command and rescan on the host to see the new size.  The command isn’t much different than the ‘vol resize’ command.

df -m /vol/test_vol3
lun resize /vol/test_vol3/setuplun +1g

Now that we have expanded the LUN, we’ll rescan the STUDENT machine to see the change in drive size.

Delete LUN – CLI

Deleting a LUN is very simple. You have to unmap it before you can destroy it, similar to taking a volume offline before being able to destroy it.

lun show -m
lun unmap /vol/test_vol3/setuplun STUDENT
lun destroy /vol/test_vol3/setuplun
lun show

Verify the LUN is gone from the STUDENT machine by running a rescan.

Rename LUN – CLI

Renaming a LUN is done, somewhat non-intuitively, with the ‘lun move’ command. We’re going to use that command to ‘move’ the LUN to the same volume in which it’s stored. This effectively renames the LUN. We’ll change the name to ‘setuplunrn’.

lun show -m /vol/test_vol3/setuplun
lun move /vol/test_vol3/setuplun /vol/test_vol3/setuplunrn
lun show /vol/test_vol3/setuplunrn
lun show -m

Since we can’t see LUN names in Windows, there’s no reason to rescan the STUDENT host.

Move LUN – CLI

Like I stated above, the move command is used for two different types of tasks. LUN moves can only occur within the same volume; however, you can move them between qtrees within that volume. If you want to move a LUN from volume to volume, you’ll need to do something like ‘lun clone’, then split it out and remove the old LUN after you’ve verified it’s working properly. That’s a deeper discussion further along. We created a couple of new qtrees to move the LUN to, qtree1 and qtree2. I already moved the LUN into qtree1.

lun show -m
lun move /vol/test_vol3/qtree1/setuplun /vol/test_vol3/qtree2/setuplun
lun show -m

The key thing to realize here is that we moved this LUN into a completely different qtree without disruption.

Create LUN – System Manager

Creating a LUN in System Manager is a pretty easy process.  We’re going to use an existing volume, test_vol3, to create a LUN in and map it to STUDENT.  Start off by opening System Manager and navigating to Storage>LUNs

Click on Create and name the LUN ‘smlun’.  Make it for Windows and thin provision it.

Select the volume /vol/test_vol3 from the menu and click OK.

Select the STUDENT initiator group to map the LUN to.

Verify the settings are accurate and click Next and Finish.

Verify the LUN shows up

Verify the LUN shows up on the STUDENT machine by rescanning for storage.

Expand LUN – System Manager

To expand a LUN start by navigating to Storage>LUNs and clicking on the LUN you want to expand and clicking Edit

Change the value to 2GB and click Save and Close. Since we thin-provisioned the LUN, you will see a couple of messages noting that we’ll need to rescan for the space to show up. Verify the amount of disk space changed to 2GB on the STUDENT machine by rescanning and checking the size.

Delete LUN – System Manager

To delete a LUN start by navigating to Storage>LUNs and clicking on the LUN you want to delete and clicking Edit.  Click on the Initiator Groups tab at the top and de-select the STUDENT map selection.  Click on Save and Close when finished.  You should always un-map a LUN before deleting it to ensure that nothing is connected to it.

Offline the LUN by clicking on the Status button and selecting Offline.

Delete the LUN by clicking on the Delete button, checking the box to engage the Delete button, and clicking on Delete.

Rename LUN – System Manager

To rename a LUN start by navigating to Storage>LUNs and clicking on the LUN you want to rename and clicking Edit

Change the name by adding a ‘2’ to the end and clicking Save and Close.  Verify the name changed.

Move LUN – System Manager

You cannot move a LUN from System Manager

Discuss Geometry issues with LUNs

LUNs cannot be expanded beyond what their geometry allows. Since not everyone remembers what the original size of a LUN was, especially if you find yourself expanding things all the time and have tons and tons of LUNs, you can find out how large you can grow the LUN in question by going into the diagnostic area of Data ONTAP. Warning: going in here is not recommended outside a lab unless you know what you’re doing. You can really screw something up. You’ve been warned.

To access the diagnostic mode, you need to run the following:

priv set diag
lun geometry /vol/test_vol3/smlun

From here you can see that the LUN can be grown to a maximum size of 502GB. If you need to exceed that amount, you will have to create a new LUN and copy the data to it.
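That ceiling falls out of the LUN’s geometry being fixed at creation: the maximum size is essentially the cylinder size times the maximum cylinder count that ‘lun geometry’ reports. A quick worked example with illustrative numbers (not taken from a real filer):

```python
def max_lun_size_bytes(bytes_per_sector, sectors_per_track, tracks_per_cylinder, max_cylinders):
    """Maximum resize target: cylinder size times the maximum number of cylinders."""
    cylinder_size = bytes_per_sector * sectors_per_track * tracks_per_cylinder
    return cylinder_size * max_cylinders

# Illustrative geometry: 1MiB cylinders, 514048 max cylinders -> 502 GiB.
limit = max_lun_size_bytes(512, 128, 16, 514048)
print(limit // 2**30)  # 502
```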


NCDA Study Part 2.6 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section covers volumes and how they pertain to Data ONTAP. Data ONTAP supports three different volume types: root, traditional and flexible volumes (FlexVols). The root volume houses special folders and files to manage the system; every system must have a root volume. Traditional volumes are closely tied to their aggregates: a traditional volume is the only volume that can sit in its aggregate. It’s a 1:1 relationship; no other volumes can reside in that aggregate. Flexible volumes, or FlexVols, have relieved this restriction by allowing multiple FlexVols to reside in a single aggregate.

We’re going to perform the following labs in both the CLI and System Manager, where the capability exists in both.

  • Create 32bit/64bit
  • Show Attributes
  • Root
    • Create/Copy/Modify/Move
  • Traditional
    • Create/Copy/Modify/Move
  • Flexible
    • Create/Copy/Modify/Move

Create 32bit/64bit – CLI

Creating a volume is a very simple procedure. Traditional volumes are always 32bit, but FlexVols can be either 32 or 64bit. However, you can’t create a 32bit FlexVol in a 64bit aggregate. The only way to end up with a 32bit FlexVol within a 64bit aggregate is to upgrade the aggregate itself to 64bit. Even if you ‘vol move’ a 32bit volume to a 64bit aggregate, the move will upgrade it. Below is just a simple ‘vol create’ command. The output shows how you can see the different types: if you run the same exact commands on both aggregates, you get two different volumes, one 32bit and one 64bit.

aggr status
vol create testvol64 aggr0 1g
vol create testvol32 aggr32 1g
vol status

As you can see, the type of volume created depends on the underlying aggregate.

Create 32bit/64bit – System Manager

This is also a pretty simple process. I’m not going to go through creating both 32 and 64bit volumes in System Manager; the process is exactly the same. You simply choose the appropriate aggregate. This means knowing which aggregates are 32bit and which are 64bit if your naming schemes don’t denote that, because you can’t tell what type an aggregate is from System Manager. There’s no status or option that shows it.

Log in to System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Volumes.


Click on Create, name the volume appropriately, select the size and snapshot reserve as well as whether you want the volume thin provisioned.  The key part is choosing the right aggregate.  We’re going to create another 32bit volume so we want to select the ‘aggr32’ aggregate.  This will ensure that our volume is 32bit.

When done, click on Create. You’ll now see the new volume in the status window. That’s it.

Show Attributes – CLI

There are several attributes that you can set for a volume.  They are:

  • Name
  • Size
  • Security style
  • CIFS Oplocks?
  • Language
  • Space Guarantees
  • Quotas
  • Snapshots and scheduling
  • Root volume?

The best place to see all the attributes is from the CLI.  System Manager doesn’t show them as well.  So I’m only going to do the CLI.  We’re going to create a volume with the following attributes:

  • Name – attributevol
  • Size – 1GB
  • Security style – UNIX
  • CIFS Oplocks? – enabled
  • Language – Italian
  • Space Guarantees – volume
  • Quotas – user, all, 50MB
  • Snapshots and scheduling – every 1 hour
  • Root volume? – no

Doing all of this will require several commands, since not all of the attribute settings are available from the ‘vol create’ command. To start, we can create a new volume called ‘attributevol’, set the language of the volume to Italian, set the space guarantee to ‘volume’ and set the size to 1GB.

vol create attributevol -l it -s volume aggr0 1g

This will do all of the above.  We can confirm this by running the following command:

vol status attributevol -v
vol lang attributevol

Now to change the security style to UNIX, enable CIFS oplocks, enable and set up the quota for all users to 50MB, and create the snapshot schedule. Changing the security style is not intuitive: there’s no volume security command in Data ONTAP. To change volume security you have to use the qtree security command instead. CIFS oplocks behave in the same manner. Quotas are managed through the /etc/quotas file. This file can be a pain to edit manually; there are two good ways to do it. One is through System Manager, which keeps very good track of it and is super easy. The other is to browse to the location on the filer and edit it with something like Notepad. You can also use ‘wrfile -a’ if you’d like, although that can be dangerous. Use with caution.

qtree security /vol/attributevol unix
options cifs.oplocks.enable on
qtree oplocks /vol/attributevol enable
snap sched -V attributevol 0 6 12

To enable quotas we first have to edit the /etc/quotas file. Browse to \\\c$ and this will bring up the ability to access the /etc folder. From here we can open the quotas file and paste in our quota configuration.

#Quota target type disk files thold sdisk sfile
#----------- ---- --- ----- ---- ----- -----
* user@/vol/attributevol 50M

Save the file and now we can enable quotas. When you enable quotas in Data ONTAP, the first thing the filer does is read the /etc/quotas file. If no entries exist, quotas will not be enabled. Since we’ve already done this, turning them on for the volume is simple.

quota on /vol/attributevol

Confirm by running the following command:

quota report -t

We confirm snapshot scheduling by running the following command:

snap sched -V attributevol

That takes care of that. As you can see, it’s tough to see all the attributes for a volume in one place, but the commands are pretty simple to set up and use.
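Since /etc/quotas is column-oriented and easy to mangle by hand, entries like the one used above can also be generated instead of typed. A minimal sketch; the field widths are arbitrary, and only the target, type and disk-limit columns are filled in:

```python
def quota_entry(target, qtype, disk_limit):
    """Build one /etc/quotas line: target, type, disk limit.

    The files/thold/sdisk/sfile columns are left unset; column widths
    are arbitrary (the file is whitespace-delimited).
    """
    return f"{target:<12} {qtype:<28} {disk_limit}"

line = quota_entry("*", "user@/vol/attributevol", "50M")
print(line)
```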

Create Root volume – CLI

Creating a root volume is no different than creating any other volume. However, the root volume is special: it houses all the configuration files Data ONTAP runs with, and only one root volume can be in use on a filer at a time. In the current simulator, the root volume is 850MB. This is not even close to the minimum it should be (refer to the Storage Management Guide for root volume minimums), but we’re going to create another one 1GB in size.

Create volume named ‘newroot’, set the space guarantee to volume and make the size 1GB.  All root volumes should have the space guarantee set to volume.

vol create newroot -s volume aggr0 1g
vol options newroot root
<confirm with y>

You will receive a warning that you should copy over the contents from the current root to the new root volume. If you don’t and you reboot the filer, it will be like you started from scratch. Only after a reboot will this new volume become the root volume.

Copy Root volume – CLI

Continuing after the creation of the new root volume, we want to copy the root.  You can do this copy for any volume and it works in exactly the same way.  We’re simply doing it with the root volume, much in the same way we did back when we built the lab and expanded the disks for our filers.  You can do this two different ways, vol copy or ndmpcopy.  I’m using ndmpcopy.  Make sure it’s enabled first.

ndmpd on
ndmpcopy /vol/vol0 /vol/newroot
<await completion>

Now that the root volume has been copied, you can reboot the filer at will and it will boot from the newroot volume instead. This enables us to destroy the old /vol/vol0 and rename /vol/newroot in its place.

Modify Root volume – CLI

Realistically, the only modifications that one would do to the root volume would be increasing the size if needed.  This is very basic.  We’re going to increase the size of newroot from 1GB to 2GB.

vol size newroot +1g

That’s it.  The response from Data ONTAP confirms that it increased.

Move Root volume – CLI

Volume moves are done between two aggregates within the same controller.  The only reason I could see for moving a root volume from one aggregate to another would be if a controller had two aggregates with different disk types and you wanted to move the Data ONTAP configuration files to less expensive disk.  There may be more reasons, but right now I can’t think of any.  Below are the steps to move a volume to another aggregate.  It’s not the newroot volume we just created, but the steps are exactly the same.

Now you see me here…

cli_root_move_step1Now you see me there….

vol move start testvol32 aggr0
aggr show_space -h

cli_root_move_step2You’ll notice in the event messages that came up during the migration that the move kicked off a 64-bit upgrade of the 32-bit volume, since we were moving from a 32-bit to a 64-bit aggregate.

Root volume – System Manager

You can’t create a root volume in System Manager.  You can create a new volume and then go to the CLI and make it the root volume, but System Manager itself doesn’t allow anything beyond creating the volume.  The only changes you can make to a root volume are the security style, thin provisioning, and resizing it.  I’m not going to go through modifying those attributes.  I’ve done it several times in the guide and frankly, it’s just too simple.

Create Traditional volume – CLI

As discussed before, traditional volumes are tightly integrated with their aggregate.  When you create a traditional volume, you do so using the ‘aggr create’ command strings.  For this, we’re going to create a traditional volume called ‘tradvol’ with its own set of disks.  Traditional volumes can only be 32-bit.  When a traditional volume is created, you won’t see the aggregate it actually sits in; you only see the volume itself.

aggr create tradvol -v -l en -T FCAL 4
aggr show_space -h
vol status tradvol

cli_trad_create_step1As you can see, when you run a normal ‘aggr’ command, the traditional volume doesn’t show up as having an aggregate that it sits in.  You will notice in ‘vol status’ that it shows up as ‘trad’ rather than ‘flex’, and you can see how it differs from a normal aggregate and volume.

Copy Traditional volume – CLI

Copying a traditional volume is no different than copying a FlexVol or root volume.  However, there are several restrictions.  You cannot use ‘ndmpcopy’ or ‘vol copy’ to copy a traditional volume to a FlexVol; traditional volumes can only be copied to another traditional volume.

I created a second traditional volume called ‘tradvolcopy’ to copy ‘tradvol’ over to.  You can’t use ‘ndmpcopy’ to perform the copy; you have to use ‘vol copy’, and to use ‘vol copy’ you need to restrict the destination volume first.  The first screenshot below shows what happens when you try to ‘vol copy’ a traditional volume to a FlexVol.

cli_copy_trad_step1Here’s what happens when you perform the copy from traditional to traditional volume:

vol restrict tradvolcopy
vol copy start tradvol tradvolcopy

cli_copy_trad_step2Modify Traditional volume – CLI

The only real modifications you can do to a traditional volume are the same ones you can do to aggregates.  You can add a RAID group, or add a disk to the existing RAID group to make it larger, but you can’t remove a RAID group to make it smaller.

aggr add tradvol -T FCAL 1

cli_modify_trad_step1Move Traditional volume – CLI

Traditional volumes can’t be moved either.

Traditional volume – System Manager

Traditional volumes show up in System Manager in both the Aggregates section and the Volumes section.  You can’t make any modifications to them, but you can verify that it is a traditional volume.  Open System Manager and expand the ONTAP device, select Storage, then Aggregates.  In this view you can select the ‘tradvol’ aggregate and, in the Details section, see the type is ‘Traditional Volume’.  From the Disks section, you can select ‘Add to Aggregate’ and then select any spare disk that’s available.


osm_modify_trad_step2Create Flexible volume – CLI

Creating a FlexVol from the CLI is the same process as earlier in the post.

vol create flexvol1 -s volume aggr0 1g
aggr show_space -h

cli_flexvol_create_step1Copy Flexible volume – CLI

To make a copy of a FlexVol, you can use both ‘ndmpcopy’ and ‘vol copy’.  The destination volume needs to be created beforehand.  We’re going to copy flexvol1 to flexvol2 using both ‘ndmpcopy’ and ‘vol copy’.

ndmpcopy /vol/flexvol1 /vol/flexvol2


vol restrict flexvol2
vol copy start flexvol1 flexvol2

cli_flexvol_copy_step2Modify Flexible volume – CLI

We’re going to change the size of ‘flexvol1’ to 2GB, and set the space guarantee to file.
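The commands behind the screenshot below would look something like this (a sketch in 7-Mode syntax, using the ‘flexvol1’ volume created earlier):

```shell
# Resize the volume to an absolute size of 2GB
vol size flexvol1 2g
# Change the space guarantee from 'volume' to 'file'
vol options flexvol1 guarantee file
```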

cli_flexvol_modify_step1Move Flexible volume – CLI

We’re going to move ‘flexvol1’ from aggr0 to aggr1.  To move a FlexVol, the space guarantee needs to be set to ‘volume’ or ‘none’.

vol move start flexvol1 aggr1

cli_flexvol_move_step1One thing I noticed when I tried to do a ‘vol move’ was that if the volume guarantee is set to ‘file’, you have to change it to ‘volume’ or ‘none’ before you can perform the move.  I included the output in the screenshot above.
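Putting that together, moving a volume whose guarantee is set to ‘file’ becomes a two-step affair (a 7-Mode sketch; ‘volume’ would work in place of ‘none’):

```shell
# Temporarily relax the guarantee so the move is allowed
vol options flexvol1 guarantee none
# Kick off the move to the destination aggregate
vol move start flexvol1 aggr1
# Check on progress
vol move status
```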

Create Flexible volume – System Manager

We’re going to create a FlexVol called ‘osmflexvol’ in aggr0, size it to 1GB, disable the Snap Reserve and enable Thin Provisioning.  Start by opening System Manager and expanding the ONTAP filer, selecting Volumes and then clicking on Create.

osm_flexvol_create_step1Input the information and click on Create.  That’s it.

Copy Flexible volume – System Manager

You cannot copy a FlexVol from System Manager.

Modify Flexible volume – System Manager

We’re going to remove Thin Provisioning and change the security style from UNIX to NTFS.  Start by opening System Manager and expanding the ONTAP filer, selecting Volumes, selecting the ‘osmflexvol’ volume, and clicking Edit.

osm_flexvol_modify_step1Change the items and click on Save and Close.

Move Flexible volume – System Manager

You cannot move a FlexVol to another aggregate from System Manager.

Replacing failed NetApp drive

I know this isn’t a study guide post.  I promise to continue as I get time but this one is a short and sweet one for future reference to myself and hopefully anyone else who needs it.

Each morning when I wake up, before I get started with my day, I tend to check my email on my phone.  Typically it’s filled with event messages about jobs that ran overnight or the occasional disk space warning.  However, this time I was greeted with a nice AutoSupport message to the tune of:


That sucks.  This filer resides at my other datacenter.  A quick login to the console and this is what I find:

FILER1> vol status -f
Broken disks
RAID Disk      Device  HA  SHELF BAY CHAN Pool Type RPM   Used (MB/blks)    Phys (MB/blks)
-------------- ------- --- ----- --- ---- ---- ---- ----- ----------------- -----------------
not responding 0a.10.7 0a  10    7   SA:A 0    SAS  10000 560000/1146880000 572325/1172123568

Looks like a failed drive.  Double check and make sure that the spare has taken over and is reconstructing:

FILER1> sysconfig -r
data      0a.10.3 0a    10  3   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568 (reconstruction 8% completed)

Looks like the filer is doing everything it’s supposed to do just fine.  Since there’s nothing I can really do, I notify our sysadmin in that office and I go ahead with my morning routine and head into the office.  The beautiful thing about AutoSupport is that it goes ahead and creates a ticket with NetApp support and I just wait for the phone call from the technician concerning my 4-hour response replacement.

When I arrived at the office, our sysadmin tells me that he can’t locate the broken drive as it’s not blinking in the chassis.  This seems strange.

This is easily fixable.  From the CLI there’s an option to blink the LED on any drive in the array.  Since we know that 0a.10.7 is the failed drive, I go ahead and set the drive LED to blink for our sysadmin so he’s completely sure he’s replacing the correct drive.

 FILER1> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by NetApp

FILER1*> blink_on 0a.10.7
<drive is now blinking and is then replaced by sysadmin>
FILER1*> Mon Apr  8 12:03:00 EDT [FILER1:monitor.globalStatus.ok:info]: The system's global status is normal.
FILER1*> blink_off 0a.10.7
FILER1*> priv set
FILER1> disk show -n

DISK     OWNER      POOL  SERIAL NUMBER  HOME
-------- ---------- ----- -------------- -------------
0a.10.7  Not Owned  NONE  PPWSDPRD

FILER1> disk assign 0a.10.7
Mon Apr  8 12:04:57 EDT [FILER1:diskown.changingOwner:info]: changing ownership for disk 0a.10.7 (S/N PPWSDPRD) from unowned (ID 4294967295) to FILER1 (ID XXXXXXXXXX)

And that takes care of that.  A pretty easy thing to fix, especially if you’re not on-site and you have to direct someone on which drive to change out remotely.