NCDA Study Part 2.6 – Data ONTAP: Configuration

As always, download the newest version of the blueprint breakdown from here.

This section covers volumes and how they pertain to Data ONTAP.  Data ONTAP supports three different volume types: root, traditional, and flexible volumes (FlexVols).  The root volume houses the special folders and files used to manage the system; every system must have one.  Traditional volumes are closely tied to their aggregates: a traditional volume is the only volume that can sit in its aggregate, a strict 1:1 relationship, and no other volumes can reside in that aggregate.  Flexible volumes, or FlexVols, remove this restriction by allowing multiple FlexVols to reside in a single aggregate.
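
A quick way to tell the types apart from the CLI is 'vol status', which reports each volume's type, 'flex' for a FlexVol and 'trad' for a traditional volume:

vol status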

We’re going to perform the following labs in both the CLI and System Manager, wherever both support the task.

  • Create 32bit/64bit
  • Show Attributes
  • Root
    • Create/Copy/Modify/Move
  • Traditional
    • Create/Copy/Modify/Move
  • Flexible
    • Create/Copy/Modify/Move

Create 32bit/64bit – CLI

Creating a volume is a very simple procedure.  Traditional volumes are always 32-bit, but FlexVols can be either 32-bit or 64-bit.  However, you can’t create a 32-bit FlexVol in a 64-bit aggregate.  The only way to end up with a 32-bit FlexVol inside a 64-bit aggregate is to upgrade the aggregate itself from 32-bit to 64-bit.  Even a ‘vol move’ of a 32-bit volume to a 64-bit aggregate will upgrade the volume.  Below is just a simple ‘vol create’ command.  The output shows how you can see the different types: run the exact same commands against both aggregates and you get one 32-bit and one 64-bit volume.

aggr status
vol create testvol64 aggr0 1g
vol create testvol32 aggr32 1g
vol status

[screenshot: 32_64_bit_volume_step1]

As you can see, the type of volume created depends on the underlying aggregate.

Create 32bit/64bit – System Manager

This is also a pretty simple process.  I’m not going to go through creating both 32-bit and 64-bit volumes in System Manager; the process is exactly the same.  You simply choose the appropriate aggregate, which means knowing which aggregates are 32-bit and which are 64-bit if your naming scheme doesn’t denote it.  The reason is that you can’t tell what type an aggregate is from System Manager; there’s no status or option that shows it.
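
If you’re unsure, check from the CLI before you start.  On my systems a plain ‘aggr status’ shows the aggregate’s block format (64-bit aggregates list ‘64-bit’ among the options); the exact output varies by ONTAP release:

aggr status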

Log in to System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Volumes.


Click on Create, name the volume appropriately, select the size and snapshot reserve as well as whether you want the volume thin provisioned.  The key part is choosing the right aggregate.  We’re going to create another 32bit volume so we want to select the ‘aggr32’ aggregate.  This will ensure that our volume is 32bit.

[screenshot: osm_volume_create_step2]

When done, click on Create.  And now you’ll see the new volume in the status window.  That’s it.

[screenshot: osm_volume_create_step3]

Show Attributes – CLI

There are several attributes that you can set for a volume.  They are:

  • Name
  • Size
  • Security style
  • CIFS Oplocks?
  • Language
  • Space Guarantees
  • Quotas
  • Snapshots and scheduling
  • Root volume?

The best place to see all the attributes is from the CLI; System Manager doesn’t show them as completely, so I’m only going to use the CLI.  We’re going to create a volume with the following attributes:

  • Name – attributevol
  • Size – 1GB
  • Security style – UNIX
  • CIFS Oplocks? – enabled
  • Language – Italian
  • Space Guarantees – volume
  • Quotas – user, all, 50MB
  • Snapshots and scheduling – every 1 hour
  • Root volume? – no

Doing all of this will require several commands, since not all of the attribute settings are available from the ‘vol create’ command.  To start, we can create a new volume called ‘attributevol’, set the language of the volume to Italian, set the space guarantee to ‘volume’ and set the size to 1GB.

vol create attributevol -l it -s volume aggr0 1g

This will do all of the above.  We can confirm this by running the following commands:

vol status attributevol -v
vol lang attributevol

[screenshot: cli_attributes_step1]

Now to change the security style to UNIX, enable CIFS oplocks, enable and set up the 50MB quota for all users, and create the snapshot schedule.  Changing the security style is not intuitive: there’s no volume security command in Data ONTAP, so you have to use the qtree security command against the volume instead.  CIFS oplocks behave in the same manner.  Quotas are managed through the /etc/quotas file, which can be a pain to edit manually.  There are two good ways to do it.  One is through System Manager, which keeps very good track of it and is super easy.  The other is to browse to the location on the filer and edit it with something like Notepad.  You can also use ‘wrfile -a’ if you’d like, although that can be dangerous; use with caution.

qtree security /vol/attributevol unix
options cifs.oplocks.enable on
qtree oplocks /vol/attributevol enable
snap sched -V attributevol 0 6 12

[screenshot: cli_attributes_step2]

To enable quotas we first have to edit the /etc/quotas file.  Browse to \\\c$ and this will bring up the ability to access the /etc folder.  From here we can open the quotas file and paste in our configuration for quotas.

#Quota target  type                    disk  files  thold  sdisk  sfile
#------------  ----------------------  ----  -----  -----  -----  -----
*              user@/vol/attributevol  50M

[screenshot: cli_attributes_step3]

Save the file and now we can enable quotas.  When you enable quotas in Data ONTAP, the first thing the filer does is read the /etc/quotas file; if no entries exist, quotas will not be enabled.  Since we’ve already done this, turning them on for the volume is simple.

quota on /vol/attributevol 

[screenshot: cli_attributes_step4]

Confirm by running the following command:

quota report -t

[screenshot: cli_attributes_step5]

We confirm snapshot scheduling by running the following command:

snap sched -V attributevol

[screenshot: cli_attributes_step6]

That takes care of that.  As you can see, it’s tough to see all the attributes for a volume in one place, but the commands are pretty simple to set up and use.

Create Root volume – CLI

Creating a root volume is no different than creating any other volume.  However, the root volume is special: it houses all the configuration files Data ONTAP runs with, and only one root volume can be in use on a filer at a time.  In the current simulator, the root volume is 850MB.  This is not even close to the minimum it should be (refer to the Storage Management Guide for root volume minimums), but we’re going to create another one 1GB in size.

Create volume named ‘newroot’, set the space guarantee to volume and make the size 1GB.  All root volumes should have the space guarantee set to volume.

vol create newroot -s volume aggr0 1g
vol options newroot root
<confirm with y>

[screenshot: cli_root_create_step1]

You will receive a warning that you should copy the contents from the current root to the new root volume.  If you don’t and you reboot the filer, it will be like you started from scratch.  Only after a reboot will this new volume become the root volume.

Copy Root volume – CLI

Continuing after the creation of the new root volume, we want to copy the root.  You can do this copy for any volume and it works in exactly the same way; we’re simply doing it with the root volume, much as we did back when we built the lab and expanded the disks for our filers.  You can do this two different ways: ‘vol copy’ or ‘ndmpcopy’.  I’m using ‘ndmpcopy’.  Make sure it’s enabled first.

ndmpd on
ndmpcopy /vol/vol0 /vol/newroot
<await completion>

[screenshot: cli_root_copy_step1]

Now that the root volume has been copied, you can reboot the filer at will and it will boot from this newroot volume instead.  That frees us to destroy the old /vol/vol0 and rename /vol/newroot in its place.

Modify Root volume – CLI

Realistically, the only modification one would make to the root volume is increasing its size if needed.  This is very basic.  We’re going to increase the size of newroot from 1GB to 2GB.

vol size newroot +1g

[screenshot: cli_root_modify_step1]

That’s it.  The response from Data ONTAP confirms that it increased.

Move Root volume – CLI

Volume moves are done between two aggregates within the same controller.  The only reason I could see for moving a root volume from one aggregate to another would be a controller with two aggregates on different disk types, where you wanted to move the Data ONTAP configuration files to less expensive disk.  There may be more reasons, but right now I can’t think of any.  Below are the steps to move a volume to another aggregate.  It’s not the newroot volume we just created, but the steps are exactly the same.

Now you see me here…

[screenshot: cli_root_move_step1]

Now you see me there….

vol move start testvol32 aggr0
aggr show_space -h

[screenshot: cli_root_move_step2]

You’ll notice in the event messages that came up during the migration that the move kicked off a 64-bit upgrade of the 32-bit volume, since we were moving from a 32-bit to a 64-bit aggregate.

Root volume – System Manager

You can’t create a root volume in System Manager.  You can create a new volume and then go to the CLI to make it the root volume, but System Manager doesn’t allow anything beyond creation.  The only changes you can make to a root volume there are the security style, thin provisioning, and resizing.  I’m not going to go through modifying those attributes; I’ve done it several times in this guide and frankly, it’s just too simple.

Create Traditional volume – CLI

As discussed before, traditional volumes are tightly integrated with their aggregate; you actually create one using the ‘aggr create’ command with the ‘-v’ flag.  For this, we’re going to create a traditional volume called ‘tradvol’ with its own set of disks.  Traditional volumes can only be 32-bit.  When a traditional volume is created, you won’t see the aggregate it’s actually in; you only see the volume itself.

aggr create tradvol -v -l en -T FCAL 4
aggr show_space -h
vol status tradvol

[screenshot: cli_trad_create_step1]

As you can see, when you run a normal ‘aggr’ command, the traditional volume doesn’t show up as having an aggregate it sits in.  In ‘vol status’ it does show up, with type ‘trad’ instead of ‘flex’, and you can see how it differs from a normal aggregate and volume.

Copy Traditional volume – CLI

Copying a traditional volume is no different than copying a FlexVol or root volume.  However there are several restrictions.  You cannot use ‘ndmpcopy’ or ‘vol copy’ on a traditional volume if you’re trying to copy that volume to a FlexVol.  Traditional volumes can only be copied to another traditional volume.

I created a second traditional volume called ‘tradvolcopy’ to copy ‘tradvol’ over to.  You can’t use ‘ndmpcopy’ to perform the copy; you have to use ‘vol copy’, and to use ‘vol copy’ you need to restrict the destination volume first.  Below is what happens when you try to ‘vol copy’ a traditional volume to a FlexVol.

vol restrict tradvolcopy
vol copy start tradvol tradvolcopy

[screenshot: cli_copy_trad_step1]

Here’s what happens when you perform the copy from traditional volume to traditional volume:

vol restrict tradvolcopy
vol copy start tradvol tradvolcopy

[screenshot: cli_copy_trad_step2]

Modify Traditional volume – CLI

The only real modifications you can make to a traditional volume are the ones you can make to aggregates: you can add a RAID group, or add a disk to the existing RAID group to make it larger, but you can’t remove a RAID group to make it smaller.

aggr add tradvol -T FCAL 1

[screenshot: cli_modify_trad_step1]

Move Traditional volume – CLI

Traditional volumes can’t be moved between aggregates; ‘vol move’ only works on FlexVols.

Traditional volume – System Manager

Traditional volumes show up in System Manager in both the Aggregates section and the Volumes section.  You can’t make any modifications to them beyond adding disks, but you can verify that a volume is traditional.  Open System Manager and expand the ONTAP device, select Storage, then Aggregates.  In this view you can select the ‘tradvol’ aggregate and, in the Details section, see that the type is ‘Traditional Volume’.  From the Disks section, you can select ‘Add to Aggregate’ and then select any available spare disk.


[screenshot: osm_modify_trad_step2]

Create Flexible volume – CLI

Creating a FlexVol from the CLI is the same process as earlier in the post.

vol create flexvol1 -s volume aggr0 1g
aggr show_space -h

[screenshot: cli_flexvol_create_step1]

Copy Flexible volume – CLI

To make a copy of a FlexVol, you can use both ‘ndmpcopy’ and ‘vol copy’.  The destination volume needs to be created beforehand.  We’re going to copy flexvol1 to flexvol2 using both ‘ndmpcopy’ and ‘vol copy’.

ndmpcopy /vol/flexvol1 /vol/flexvol2


vol restrict flexvol2
vol copy start flexvol1 flexvol2

[screenshot: cli_flexvol_copy_step2]

Modify Flexible volume – CLI

We’re going to change the size of ‘flexvol1’ to 2GB, and set the space guarantee to file.
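
Both changes can be made from the CLI (a sketch, assuming 7-mode syntax, where the guarantee is set through ‘vol options’):

vol size flexvol1 2g
vol options flexvol1 guarantee file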

[screenshot: cli_flexvol_modify_step1]

Move Flexible volume – CLI

We’re going to move ‘flexvol1’ from aggr0 to aggr1.  To move a FlexVol, the space guarantee needs to be set to ‘volume’ or ‘none’.

vol move start flexvol1 aggr1

[screenshot: cli_flexvol_move_step1]

One thing I noticed when I tried to do a ‘vol move’: if the volume guarantee was set to ‘file’, you have to change it to ‘volume’ or ‘none’ before you can perform the move.  I included the output in the screenshot above.
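
If you hit that error, the workaround looks like this (a sketch, assuming 7-mode ‘vol options’ syntax):

vol options flexvol1 guarantee volume
vol move start flexvol1 aggr1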

Create Flexible volume – System Manager

We’re going to create a FlexVol called ‘osmflexvol’ in aggr0, size it to 1GB, disable the Snap Reserve and enable Thin Provisioning.  Start by opening System Manager and expanding the ONTAP filer, selecting Volumes and then clicking on Create.

[screenshot: osm_flexvol_create_step1]

Input the information and click on Create.  That’s it.

Copy Flexible volume – System Manager

You cannot copy a FlexVol from System Manager.

Modify Flexible volume – System Manager

We’re going to remove Thin Provisioning and change the security style from UNIX to NTFS.  Start by opening System Manager and expanding the ONTAP filer, selecting Volumes, selecting the ‘osmflexvol’ volume and pressing Edit.

[screenshot: osm_flexvol_modify_step1]

Change the items and click on Save and Close.

Move Flexible volume – System Manager

You cannot move a FlexVol to another aggregate from System Manager.


Replacing failed NetApp drive

I know this isn’t a study guide post; I promise to continue those as I get time.  This one is short and sweet, for future reference for myself and hopefully anyone else who needs it.

Each morning I wake up, and before I get started with my day I tend to check my email on my phone.  Typically it’s filled with event messages about jobs that ran overnight or the occasional disk space warning.  This time, however, I was greeted with a nice AutoSupport message to the tune of:


That sucks.  This filer resides at my other datacenter.  A quick login to the console and this is what I find:

FILER1> vol status -f

Broken disks

RAID Disk       Device   HA  SHELF BAY CHAN  Pool Type  RPM   Used (MB/blks)     Phys (MB/blks)
---------       ------   --  ----- --- ----  ---- ----  ---   --------------     --------------
not responding  0a.10.7  0a  10    7   SA:A  0    SAS   10000 560000/1146880000  572325/1172123568

Looks like a failed drive.  Double check and make sure that the spare has taken over and is reconstructing:

FILER1> sysconfig -r
data      0a.10.3 0a    10  3   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568 (reconstruction 8% completed)

Looks like the filer is doing everything it’s supposed to do just fine.  Since there’s nothing I can really do, I notify our sysadmin in that office, go ahead with my morning routine and head into the office.  The beautiful thing about AutoSupport is that it automatically creates a ticket with NetApp support, so I just wait for the phone call from the technician concerning my 4-hour-response replacement.

When I arrived at the office, our sysadmin tells me that he can’t locate the broken drive as it’s not blinking in the chassis.  This seems strange.

This is easily fixable.  From the CLI there’s an option to blink the LED on any drive in the array.  Since we know that 0a.10.7 is the failed drive, I go ahead and set the drive LED to blink for our sysadmin so he’s completely sure he’s replacing the correct drive.

FILER1> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by NetApp

FILER1*> blink_on 0a.10.7
<drive is now blinking and is then replaced by sysadmin>
FILER1*> Mon Apr  8 12:03:00 EDT [FILER1:monitor.globalStatus.ok:info]: The system's global status is normal.
FILER1*> blink_off 0a.10.7
FILER1*> priv set
FILER1> disk show -n

DISK     OWNER      POOL   SERIAL NUMBER   HOME
-------  ---------  -----  --------------  -------------
0a.10.7  Not Owned  NONE   PPWSDPRD

FILER1> disk assign 0a.10.7
Mon Apr  8 12:04:57 EDT [FILER1:diskown.changingOwner:info]: changing ownership for disk 0a.10.7 (S/N PPWSDPRD) from unowned (ID 4294967295) to FILER1 (ID XXXXXXXXXX)

And that takes care of that.  A pretty easy thing to fix, especially when you’re not on-site and have to direct someone remotely on which drive to change out.