NCDA Study Part 2.8 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section covers qtrees.  Qtrees are another logical segregation of a volume, used to hold further information within it.  Think of them as directories inside a volume with files in them.  They are integral components of products like SnapMirror and necessary components of SnapVault.  They can have quotas and different backup and security styles than their containing volume.  You can have up to 4,995 qtrees per volume.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities exist to do in both:

  • Create Qtree
  • Copy Qtree
  • Rename Qtree
  • Move Qtree
  • Change Qtree security
  • Delete Qtree

Create Qtree – CLI

Creating a qtree is a very simple process.  You first need a volume to put the qtree in.  From ONTAP1, we’ll use the test_vol Volume and add a qtree named test_qtree.

qtree create /vol/test_vol/test_qtree
qtree status

cli_qtree_create_step1

Create Qtree – System Manager

Creating a qtree from System Manager is easy as well but does require a few more steps.  Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees

osm_qtree_create_step1

Click on Create, name the qtree appropriately, and then click Browse to locate the test_vol volume.  Click OK and then Create again.  This creates a default type qtree in test_vol.

osm_qtree_create_step2

Copy Qtree – CLI

Copying a qtree is also pretty easy, but it is done in different ways.  Qtrees can’t natively be copied using a copy command; you can only copy them using either Qtree SnapMirror or NDMPCOPY.  Qtree SnapMirror is covered in a later topic, so we’ll just use NDMPCOPY.  NDMPCOPY requires downtime while the copy is performed, whereas Qtree SnapMirror can be used to sync the qtree and then cut over to it, making it the more elegant solution for qtrees with a large amount of data in them.

 ndmpcopy /vol/test_vol/test_qtree /vol/test_vol2/test_qtree
qtree status
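
One note on NDMPCOPY: it relies on the NDMP service, so if NDMP hasn’t been turned on yet you can check and enable it first.  A quick sketch, assuming the defaults:

ndmpd status
ndmpd on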

cli_qtree_copy_step1

Copy Qtree – System Manager

You cannot copy a qtree from within System Manager

Rename Qtree – CLI

Renaming a qtree is not that hard either.  However, you can only do it from advanced privilege mode in Data ONTAP, entered with priv set advanced.

priv set advanced
qtree rename /vol/test_vol/test_qtree /vol/test_vol/test_qtree_new
qtree status

cli_qtree_rename_step1

Rename Qtree – System Manager

You cannot rename a qtree from within System Manager

Move Qtree – CLI

Moving a qtree is no different from copying a qtree.  The same rules apply: you can only use Qtree SnapMirror or NDMPCOPY.  After you’re done, you simply delete the qtree at the old location.  This obviously means that you’ll need enough room to keep both copies on the filer until you can delete the old one.  There’s no reason to show the copy again, since that’s covered above.  Below we’ll see how to delete a qtree, which would complete a ‘move’; a rough sketch of the full sequence follows.
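
Assuming NDMPCOPY and the same paths as the copy example above, the whole thing would look roughly like this; the qtree delete requires advanced privilege:

ndmpcopy /vol/test_vol/test_qtree /vol/test_vol2/test_qtree
priv set advanced
qtree delete /vol/test_vol/test_qtree
priv set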

Move Qtree – System Manager

You cannot move a qtree from within System Manager

Change Qtree Security – CLI

By default, qtrees take on the security style of the volume they’re in.  The default volume security style is determined by the wafl.default_security_style option.  By default this is UNIX, which means all volumes created (and their subsequent qtrees) will use UNIX security.  This can be overridden to provide NTFS, UNIX or MIXED security on a qtree.  Below we’ll change the qtree security style to NTFS.

qtree status
qtree security /vol/test_vol/test_qtree ntfs
qtree status
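
As an aside, you can check the default security style that new volumes (and therefore their qtrees) inherit via the option mentioned above.  A quick sketch; the second command is optional and only changes the default for volumes created afterward:

options wafl.default_security_style
options wafl.default_security_style ntfs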

cli_qtree_security_step1

Change Qtree Security – System Manager

Changing the security of a qtree from within System Manager is one of the only things you can actually do to a qtree from System Manager once it’s created.  Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Since we changed it to NTFS before, we’ll change it to Mixed now.

osm_qtree_security_step1

Delete Qtree – CLI

Much the same as renaming a qtree, you can only delete a qtree from the CLI in advanced privilege mode.

priv set advanced
qtree delete /vol/test_vol/test_qtree

cli_qtree_delete_step1

Delete Qtree – System Manager

Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Qtrees.  Select the test_qtree and click on the Delete button.  Check the OK to delete the Qtree(s) option and click Delete.

osm_qtree_delete_step1


NCDA Study Part 2.7 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This next section focuses on LUN setup and configuration.  LUNs are the targets of block level storage protocols.  This includes both iSCSI and Fibre Channel.  When discussing LUNs in terms of Data ONTAP, they are really nothing more than a file that sits within a volume, or within a qtree within a volume.  However, this file is accessed completely differently than, say, an exported NFS volume.  It is mapped to an igroup, which grants another device access to the LUN, and the file behaves just like a disk drive.  This is a pretty dumbed down definition of a LUN.  I’m not discussing multipathing, which is something that you absolutely need.  More in-depth discussion will occur in the SAN section.  For now, this is all basic configuration of LUNs on a NetApp storage device.  NetApp recommends a 1:1 relationship between volumes and LUNs.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities to do in both exist.

  • Configure igroup for LUN connectivity/Add LUN to igroup
  • Create/Expand/Delete/Rename/Move LUN
  • Discuss Geometry issues with LUNs

Before we start any configuration, to perform the majority of these labs we’ll need to have an initiator and a target so we can see how a LUN shows up.  We’re going to do this with iSCSI since the simulator doesn’t have Fibre Channel capabilities and we don’t have an FC network either.  We’re going to use the following parts of the lab:

  • Student VM – added an extra NIC on the 192.168.175.x/24 network for iSCSI traffic.  (Typically iSCSI traffic would be on a separate VLAN all together, but this still does what we need it to do and also provides the necessary NIC that Windows Server 2008 R2 needs to run the MS iSCSI initiator over.)  I gave the second NIC the IP address of 192.168.175.20 so that we can use that as the initiator IP address on the igroup configuration.
  • CentOS VM – added an extra NIC on the 192.168.175.x/24 network for iSCSI traffic.  I gave the second NIC the IP address of 192.168.175.21 so that we can use that as the initiator IP address on the igroup configuration.
  • ONTAP1

To setup the iSCSI initiator within Windows Server 2008 R2, you need to perform the following steps:

Login to the VM and in the search bar type ‘iscsi’.  You’ll see two options.  You can select either.

windows_iscsi_step1

You’ll be shown a warning message about the iSCSI service not running.  Click Yes to start the service and bring up the initiator configuration menu.

windows_iscsi_step2

To continue we’ll need a bit of information from the NetApp, i.e. the target IQN of the device.  First confirm that the iSCSI service is running and then display the node name.  This is the IQN we need to verify we are connected to the correct device.

iscsi status
iscsi nodename

netapp_iscsi_check_step1

Now that we have ensured that iSCSI is running and we know the target name, we can finish the Windows configuration.  In the iSCSI configuration menu, on the Targets tab we can put in the IP address of the ONTAP1 filer, 192.168.175.10.  Press Quick Connect and we should be connected.  Verify the IQN is correct.  Click Done once verified.

windows_iscsi_step3

Since we haven’t configured any disks yet, we won’t see anything show up in the Devices tab.  We will, however, need the IQN from the initiator to configure the igroup on the NetApp.  This is under the Configuration tab under Initiator Name.  Typically this name is a combination of the initiator’s computer name and a randomly generated alphanumeric string at the end.  For simplicity I shortened it to just ‘iqn.1991-05.com.microsoft:student’.  This makes configuration a bit easier.  Make sure you reboot the device after changing the name.  In a production environment, this value has to be unique.  Typically there’s no reason it shouldn’t be, as each machine should be named differently, but it could happen.  So be careful changing the initiator’s name.

windows_iscsi_step4

Now we’re going to configure the iSCSI initiator for CentOS.  You can use this guide to help with the configuration.  Here’s exactly what I did from the terminal window in the CentOS VM:

# su root
# yum install iscsi-initiator-utils
# /etc/init.d/iscsi start
# ping 192.168.175.10
# iscsiadm -m discovery -t sendtargets -p 192.168.175.10
Starting iscsid:                                                                   [   OK   ]
192.168.175.10:3260,1000 iqn.1992-08.com.netapp:sn.4061490311
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:ebc90b7847e

Yahtzee!  It works and is connected.  Now that we have both initiators started and connected, we can go ahead with creating the igroup on the NetApp and the LUNs that we will associate to each device.
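
If the Linux initiator doesn’t log in to the target automatically after discovery, a manual login would look roughly like this (the target IQN comes from the discovery output above, and the session check just confirms the login):

# iscsiadm -m node -T iqn.1992-08.com.netapp:sn.4061490311 -p 192.168.175.10:3260 --login
# iscsiadm -m session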

Configure igroup for LUN connectivity – CLI

This is pretty easy to do.  We’re going to create two igroups, one for our Windows machine and one for our Linux machine, then verify that the groups are created.

igroup create -i -t windows STUDENT iqn.1991-05.com.microsoft:student
igroup create -i -t linux CENTOS iqn.1994-05.com.redhat:ebc90b7847e
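
To verify the groups were created (presumably what the screenshot below shows), something like this will do it:

igroup show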

netapp_igroup_step1

Add LUN to igroup – CLI

Confirm the names of the LUNs we’re going to add to the igroup

lun show

netapp_igroup_step2

We have two specific LUNs, each configured with the appropriate OS type and 3GB in size.  We can also see that they differ slightly from the other two LUNs in the display.  The top two do not show as ‘mapped’.  This means they are not added to any igroup, which is what we are about to do, and you’ll see the change.  Now we can add them to the igroups, then display the mappings to ensure they are correctly configured.

lun map /vol/STUDENT/STUDENT_lun STUDENT
lun map /vol/CENTOS/CENTOS_lun CENTOS
lun show -m

netapp_igroup_step3

Now we can rescan for changes on the Windows and CentOS machines to see the volumes show up for us.  Format them and use them.

Open the iSCSI Initiator Properties and on the Targets tab click on Devices.  This should show us our new LUN attached to the system.

netapp_igroup_step4

Now we just need to open up Disk Management and Rescan if necessary to see the volume show up.  Format it and we’re all set.

netapp_igroup_step5

We can also verify that we’re going across the second NIC that we put on the VM from the CLI.

iscsi session show -v

netapp_igroup_step6

All looks good.  Now we can do the Linux machine.  This is pretty simple: you just need to restart the iSCSI daemon and it will send a discovery and pull in the LUN.

su root
/etc/init.d/iscsi restart
fdisk -l

netapp_igroup_step7

That’s it.  You can see that now the 3GB volume is showing up in fdisk.

Configure igroup for LUN connectivity – System Manager

I’m not going to go through how to do it with Linux as it’s no different than doing it for Windows using System Manager.

Log into System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on LUNs.  In the LUNs section, there is an Initiator Groups tab.

osm_igroup_step1

Click on Create, name the group STUDENT, select the operating system Windows, and the type of iSCSI.

osm_igroup_step2

Click on the Initiators tab and then put in the IQN of the initiator for the STUDENT machine and click Create.

osm_igroup_step3

Add LUN to igroup – System Manager

Login to System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on LUNs.  In the LUNs section, there is a LUN Management tab.

osm_igroup_lun_step1

Click on the STUDENT_lun LUN and click on Edit.  There is an Initiator Groups tab in the Edit LUN dialog.  This will bring up all the initiator groups whose operating system type matches that of the LUN.  In other words, if the LUN is a Windows LUN and the igroup is Windows, you’ll see the Windows initiator groups.  Check the Map box for STUDENT and click Save and Close.

From here you can create a new igroup if you want, show all the groups and even assign a custom LUN ID for the LUN.  If you don’t select a LUN ID, ONTAP will do it for you.

osm_igroup_step4

That takes care of that.

Create LUN – CLI

Creating a LUN from the CLI can be done two different ways.  You can use ‘lun create’ or ‘lun setup’.  The ‘lun setup’ command takes you step by step through the process.  Alternatively, the ‘lun create’ process expects that you know the steps.

We’ll create a new LUN, 1GB in size, Windows 2008 capable, residing in /vol/test_vol3, space-reserved, attached to igroup STUDENT, with a LUN ID of 1, and named ‘setuplun’
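
For reference, the same LUN could presumably be created non-interactively with ‘lun create’ and then mapped by hand.  A rough sketch; space reservation is the default, so no extra flag is needed for that:

lun create -s 1g -t windows_2008 /vol/test_vol3/setuplun
lun map /vol/test_vol3/setuplun STUDENT 1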

The ‘lun setup’ process:

cli_lun_setup_step1

Verify:

Open up the STUDENT machine and rescan for the 1GB LUN.  That’s it.

cli_lun_setup_step2

Expand LUN – CLI

Expanding a LUN is a simple process.  You first need to ensure that the encapsulating volume has the appropriate room for the LUN to expand into.  After that, it’s a simple command and a rescan on the host to see the new size.  The command isn’t much different than the ‘vol size’ command.

df -m /vol/test_vol3
lun resize /vol/test_vol3/setuplun +1g

cli_lun_resize_step1

Now that we have expanded the LUN, we’ll rescan the STUDENT machine to see the change in drive size.

cli_lun_resize_step2

Delete LUN – CLI

Deleting a LUN is very simple.  You have to un-map it first before you can destroy it.  Similar to taking a volume offline before being able to destroy it.

lun show -m
lun unmap /vol/test_vol3/setuplun STUDENT
lun destroy /vol/test_vol3/setuplun
lun show

cli_lun_delete_step1

Verify the LUN is gone from the STUDENT machine by running a rescan.

cli_lun_delete_step2

Rename LUN – CLI

Renaming a LUN is done in a somewhat non-intuitive way, using the ‘lun move’ command.  We’re going to use that command and ‘move’ the LUN to the same volume in which it’s stored.  This effectively renames the LUN.  We’ll change the name to ‘setuplunrn’.

lun show -m /vol/test_vol3/setuplun
lun move /vol/test_vol3/setuplun /vol/test_vol3/setuplunrn
lun show /vol/test_vol3/setuplunrn
lun show -m

cli_lun_rename_step1

Since we can’t see LUN names in Windows, there’s no reason to rescan the STUDENT host.

Move LUN – CLI

Like I stated above, the move command is used for two different types of tasks.  LUN moves can only occur within the same volume.  However, you can move them between qtrees within the same volume.  If you want to move a LUN from volume to volume, you’ll need to do something like ‘lun clone’ and then split it out, removing the old LUN after you’ve verified the new one works properly.  That’s a deeper discussion further along.  We created a couple of new qtrees to move the LUN between, qtree1 and qtree2.  I already moved the LUN into qtree1.

lun show -m
lun move /vol/test_vol3/qtree1/setuplun /vol/test_vol3/qtree2/setuplun
lun show -m

cli_lun_move_step1

The key thing to realize here is that we moved this LUN into a completely different qtree without disruption.

Create LUN – System Manager

Creating a LUN in System Manager is a pretty easy process.  We’re going to use an existing volume, test_vol3, to create a LUN in and map it to STUDENT.  Start off by opening System Manager and navigating to Storage>LUNs

Click on Create and name the LUN ‘smlun’.  Make it for Windows and thin provision it.

osm_lun_create_step1

Select the volume /vol/test_vol3 from the menu and click OK.

osm_lun_create_step2

Select the STUDENT initiator group for mapping the LUN to.

osm_lun_create_step3

Verify the settings are accurate and click Next and Finish.

Verify the LUN shows up

osm_lun_create_step4

Verify the LUN shows up on the STUDENT machine by rescanning for storage.

osm_lun_create_step5

Expand LUN – System Manager

To expand a LUN start by navigating to Storage>LUNs and clicking on the LUN you want to expand and clicking Edit

osm_lun_expand_step1

Change the value to 2GB and click Save and Close.  Since we thin provisioned the LUN, you will see a couple of messages noting that we’ll need to rescan for the space to show up.  Verify the amount of disk space changed to 2GB on the STUDENT machine by rescanning and checking the size.

osm_lun_expand_step2

Delete LUN – System Manager

To delete a LUN start by navigating to Storage>LUNs and clicking on the LUN you want to delete and clicking Edit.  Click on the Initiator Groups tab at the top and de-select the STUDENT map selection.  Click on Save and Close when finished.  You should always un-map a LUN before deleting it to ensure that nothing is connected to it.

osm_lun_delete_step1

Offline the LUN by clicking on the Status button and selecting Offline.

osm_lun_delete_step2

Delete the LUN by clicking on the Delete button, checking the box to engage the Delete button, and clicking on Delete.

osm_lun_delete_step3

Rename LUN – System Manager

To rename a LUN start by navigating to Storage>LUNs and clicking on the LUN you want to rename and clicking Edit

osm_lun_rename_step1

Change the name by adding a ‘2’ to the end and clicking Save and Close.  Verify the name changed.

osm_lun_rename_step2

Move LUN – System Manager

You cannot move a LUN from System Manager

Discuss Geometry issues with LUNs

LUNs are not able to be expanded beyond what the geometry will allow.  Since not everyone remembers what the original size of a LUN was, especially if you find yourself expanding things all the time and have tons and tons of LUNs, you can find out how large you can grow the LUN in question by going into the diagnostic area of Data ONTAP.  Warning: going in here is not recommended outside a lab unless you know what you’re doing.  You can really screw something up.  You’ve been warned.

To access the diagnostic mode, you need to run the following:

priv set diag
lun geometry /vol/test_vol3/smlun

cli_lun_geometry_step1

From here you can see that the LUN can be grown to a maximum size of 502GB.  If you need to exceed that amount, you will have to create a new LUN and copy the data to it.

 

NCDA Study Part 2.6 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section covers volumes and how they pertain to Data ONTAP.  Data ONTAP supports three different volume types: root, traditional, and flexible volumes (FlexVols).  The root volume houses special folders and files used to manage the system.  Every system must have a root volume on it.  Traditional volumes are volumes that are closely tied to their aggregates.  What this means is that a traditional volume is the only volume that can sit in its aggregate.  It’s a 1:1 relationship; no other volumes can reside in that aggregate.  Flexible volumes, or FlexVols, relieve this restriction by allowing multiple FlexVols to reside in a single aggregate.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities to do in both exist.

  • Create 32bit/64bit
  • Show Attributes
  • Root
    • Create/Copy/Modify/Move
  • Traditional
    • Create/Copy/Modify/Move
  • Flexible
    • Create/Copy/Modify/Move

Create 32bit/64bit – CLI

Creating a volume is a very simple procedure.  Traditional volumes are always 32bit, but FlexVols can be either 32 or 64bit.  However, you can’t create a 32bit FlexVol in a 64bit aggregate; the only way to have a 32bit FlexVol within a 64bit aggregate is if the aggregate was upgraded to 64bit after the volume was created.  Even if you ‘vol move’ a 32bit volume to a 64bit aggregate, the move will upgrade it.  Below is just a simple ‘vol create’ command.  The output shows how you can see the different types: if you run the exact same commands on both aggregates, you get two different volumes, one 32bit and one 64bit.

aggr status
vol create testvol64 aggr0 1g
vol create testvol32 aggr32 1g
vol status

32_64_bit_volume_step1

As you can see, the type of volume created depends on the underlying aggregate.

Create 32bit/64bit – System Manager

This is also a pretty simple process.  I’m not going to go through creating both 32 and 64bit volumes in System Manager.  The process is exactly the same; you simply choose the appropriate aggregate.  This means knowing which aggregates are 32bit and which are 64bit if your naming schemes don’t denote that.  The reason is that you can’t tell what type the aggregates are from System Manager.  There’s no status or option that shows it.

Log in to System Manager and the Data ONTAP instance.  Expand the device, click on Storage and click on Volumes.

osm_volume_create_step1

Click on Create, name the volume appropriately, select the size and snapshot reserve as well as whether you want the volume thin provisioned.  The key part is choosing the right aggregate.  We’re going to create another 32bit volume so we want to select the ‘aggr32’ aggregate.  This will ensure that our volume is 32bit.

osm_volume_create_step2

When done, click on Create.  And now you’ll see the new volume in the status window.  That’s it.

osm_volume_create_step3

Show Attributes – CLI

There are several attributes that you can set for a volume.  They are:

  • Name
  • Size
  • Security style
  • CIFS Oplocks?
  • Language
  • Space Guarantees
  • Quotas
  • Snapshots and scheduling
  • Root volume?

The best place to see all the attributes is from the CLI.  System Manager doesn’t show them as well.  So I’m only going to do the CLI.  We’re going to create a volume with the following attributes:

  • Name – attributevol
  • Size – 1GB
  • Security style – UNIX
  • CIFS Oplocks? – enabled
  • Language – Italian
  • Space Guarantees – volume
  • Quotas – user, all, 50MB
  • Snapshots and scheduling – every 1 hour
  • Root volume? – no

Doing all of this will require several commands.  Not all of the attribute settings are available from the ‘vol create’ command.  To start, we can create a new volume called ‘attributevol’, set the language of the volume to Italian, set the space guarantee to ‘volume’ and set the size to 1GB.

vol create attributevol -l it -s volume aggr0 1g

This will do all of the above.  We can confirm this by running the following command:

vol status attributevol -v
vol lang attributevol

cli_attributes_step1

Now to change the security style to UNIX, enable CIFS oplocks, enable and set up a 50MB quota for all users, and create the snapshot schedule.  Changing the security style is not intuitive.  There’s no volume security command in Data ONTAP; to change volume security you have to use the qtree security command instead.  CIFS oplocks behave in the same manner.  Quotas are managed through the /etc/quotas file.  This file can be a pain to edit manually.  There are two good ways to do it.  One is through System Manager, which keeps very good track of it and is super easy.  The other way is to browse to the location on the filer and edit it with something like Notepad.  You can also use ‘wrfile -a’ if you’d like, although that can be dangerous.  Use with caution.

qtree security /vol/attributevol UNIX
options cifs.oplocks.enable on
qtree oplocks /vol/attributevol enable
snap sched -V attributevol 0 6 12

cli_attributes_step2

To enable quotas we first have to edit the /etc/quotas file.  Browse to \\192.168.175.10\c$ and this will bring up the ability to access the /etc folder.  From here we can open the quotas file and paste in our configuration for quotas.

 #Quota target type disk files thold sdisk sfile
#----------- ---- --- ----- ---- ----- -----
* user@/vol/attributevol 50M
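
As an aside, the same default user quota line could probably be appended from the console instead of over CIFS, using the ‘wrfile -a’ approach mentioned above (again, be careful with wrfile) and then double-checked with rdfile:

wrfile -a /etc/quotas * user@/vol/attributevol 50M
rdfile /etc/quotas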

cli_attributes_step3

Save the file and now we can enable quotas.  When you enable quotas in Data ONTAP, the first thing the filer does is read the /etc/quotas file.  If no entries exist, quotas will not be enabled.  Since we’ve already done this, turning them on for the volume is simple.

quota on /vol/attributevol 

cli_attributes_step4

Confirm by running the following command:

quota report -t

cli_attributes_step5

We confirm snapshot scheduling by running the following command:

snap sched -V attributevol

cli_attributes_step6

That takes care of that.  As you can see, it’s tough to see all the attributes for a volume in one place, but the commands are pretty simple to set up and use.
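
One last quota note: if you later adjust the limits for existing targets in /etc/quotas, you shouldn’t need to toggle quotas off and on; a ‘quota resize’ should simply re-read the file for that volume:

quota resize /vol/attributevol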

Create Root volume – CLI

Creating a root volume is no different than creating any other volume.  However, the root volume is special.  It houses all the configuration files that Data ONTAP runs with.  Only one root volume can be in use on a filer at a time.  In the current simulator, the root volume is 850MB.  This is not even close to the minimum it should be (refer to the Storage Management Guide for root volume minimums), but we’re going to create another one 1GB in size.

Create volume named ‘newroot’, set the space guarantee to volume and make the size 1GB.  All root volumes should have the space guarantee set to volume.

vol create newroot -s volume aggr0 1g
vol options newroot root
<confirm with y>

cli_root_create_step1You will receive a warning that you should copy over the contents from the current root to the new root volume.  If you don’t and you reboot the filer, it will be like you started from scratch.  Only after a reboot will this new volume be the new root volume.

Copy Root volume – CLI

Continuing after the creation of the new root volume, we want to copy the root.  You can do this copy for any volume and it works in exactly the same way.  We’re simply doing it with the root volume, much in the same way we did back when we built the lab and expanded the disks for our filers.  You can do this two different ways, vol copy or ndmpcopy.  I’m using ndmpcopy.  Make sure it’s enabled first.

ndmpd on
ndmpcopy /vol/vol0 /vol/newroot
<await completion>

cli_root_copy_step1

Now that the root volume has been copied, you can reboot the filer at will and it will boot from this newroot volume instead.  Once booted from newroot, we are free to destroy the old /vol/vol0 and, if desired, rename /vol/newroot to take its place.
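
Once the filer has been rebooted and is confirmed to be running from newroot, cleaning up the old root would presumably look something like this:

vol offline vol0
vol destroy vol0
<confirm with y>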

Modify Root volume – CLI

Realistically, the only modifications that one would do to the root volume would be increasing the size if needed.  This is very basic.  We’re going to increase the size of newroot from 1GB to 2GB.

vol size newroot +1g

cli_root_modify_step1

That’s it.  The response from Data ONTAP confirms that it increased.

Move Root volume – CLI

Volume moves are done between two aggregates within the same controller.  The only reason I could see for moving a root volume from one aggregate to another would be if a controller had two aggregates of different disk types and you wanted to move the Data ONTAP configuration files to less expensive disk.  There may be more reasons, but right now I can’t think of any.  Below are the steps to move a volume to another aggregate.  It’s not the newroot volume we just created, but the steps are exactly the same.

Now you see me here…

cli_root_move_step1

Now you see me there….

vol move start testvol32 aggr0
aggr show_space -h

cli_root_move_step2

You’ll notice in the event messages that came up during the migration that the move kicked off a 64bit upgrade of the 32bit volume, since we were moving from a 32bit to a 64bit aggregate.

Root volume – System Manager

You can’t create a root volume in System Manager.  You can create a new volume and then go to the CLI and make it the root volume, but System Manager doesn’t allow anything beyond creation.  The only changes you can make to a root volume are the security style, thin-provisioning, and resizing it.  I’m not going to go through modifying those attributes.  I’ve done it several times in the guide and frankly, it’s just too simple.

Create Traditional volume – CLI

As discussed before, traditional volumes are tightly integrated with their aggregate.  When you create a traditional volume, you do so using ‘aggr create’ command strings.  For this, we’re going to create a traditional volume called ‘tradvol’ with its own set of disks.  Traditional volumes can only be 32bit.  When a traditional volume is created, you won’t see the aggregate it’s actually in; you only see the volume itself.

aggr create tradvol -v -l en -T FCAL 4
aggr show_space -h
vol status tradvol

cli_trad_create_step1

As you can see, when you run a normal ‘aggr’ command, the traditional volume doesn’t show up as having an aggregate that it sits in.  You will notice in ‘vol status’ that it shows up as ‘trad’ rather than ‘flex’, and you can see how it differs from a normal aggregate and volume.

Copy Traditional volume – CLI

Copying a traditional volume is no different than copying a FlexVol or root volume.  However there are several restrictions.  You cannot use ‘ndmpcopy’ or ‘vol copy’ on a traditional volume if you’re trying to copy that volume to a FlexVol.  Traditional volumes can only be copied to another traditional volume.

I created a second traditional volume called ‘tradvolcopy’ to copy ‘tradvol’ over to.  You can’t use ‘ndmpcopy’ to perform the copy.  You have to use ‘vol copy’.  To use ‘vol copy’ you need to restrict the destination volume first.  Below is what happens when you try to ‘vol copy’ a traditional volume to a FlexVol.

vol restrict tradvolcopy
vol copy start tradvol tradvolcopy

cli_copy_trad_step1

Here’s what happens when you perform the copy from traditional to traditional volume:

vol restrict tradvolcopy
vol copy start tradvol tradvolcopy

cli_copy_trad_step2

Modify Traditional volume – CLI

The only real modifications you can do to a traditional volume are what you can do to aggregates.  You can add a RAID group, or add a disk to the existing RAID group to make it larger, but you can’t remove a RAID group and make it smaller.

aggr add tradvol -T FCAL 1

cli_modify_trad_step1Move Traditional volume – CLI

Traditional volumes can’t be moved either.

Traditional volume – System Manager

Traditional volumes show up in System Manager in the Aggregates section and in the Volumes section.  You can’t make any modifications to them, but you can verify that it is a traditional volume.  Open System Manager and expand the ONTAP device, select Storage, then Aggregates.  In this view you can select the ‘tradvol’ aggregate and in the Details section see the type is ‘Traditional Volume’.  From the Disks section, you can select ‘Add to Aggregate’ then select any spare disk that’s available.

osm_modify_trad_step1

osm_modify_trad_step2

Create Flexible volume – CLI

Creating a FlexVol from the CLI is the same process as earlier in the post.

vol create flexvol1 -s volume aggr0 1g
aggr show_space -h

cli_flexvol_create_step1

Copy Flexible volume – CLI

To make a copy of a FlexVol, you can use both ‘ndmpcopy’ and ‘vol copy’.  The destination volume needs to be created beforehand.  We’re going to copy flexvol1 to flexvol2 using both ‘ndmpcopy’ and ‘vol copy’.

ndmpcopy /vol/flexvol1 /vol/flexvol2

cli_flexvol_copy_step1

vol restrict flexvol2
vol copy start flexvol1 flexvol2

cli_flexvol_copy_step2

Modify Flexible volume – CLI

We’re going to change the size of ‘flexvol1’ to 2GB, and set the space guarantee to file.
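
The screenshot below captures the commands; they presumably amount to something like the following sketch:

vol size flexvol1 2g
vol options flexvol1 guarantee file
vol status flexvol1 -v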

cli_flexvol_modify_step1

Move Flexible volume – CLI

We’re going to move ‘flexvol1’ from aggr0 to aggr1.  To move a FlexVol, the guarantee needs to be set to ‘volume’ or ‘none’.

vol move start flexvol1 aggr1

cli_flexvol_move_step1

One thing I noticed when I tried to do a ‘vol move’ was that if the volume guarantee is set to ‘file’, you have to change it to ‘volume’ or ‘none’ before you can perform the move.  I included the output in the screenshot above.

Create Flexible volume – System Manager

We’re going to create a FlexVol called ‘osmflexvol’ in aggr0, size it to 1GB, disable the Snap Reserve and enable Thin Provisioning.  Start by opening System Manager and expanding the ONTAP filer, selecting Volumes and then clicking on Create.

osm_flexvol_create_step1

Input the information and click on Create.  That’s it.

Copy Flexible volume – System Manager

You cannot copy a FlexVol from System Manager.

Modify Flexible volume – System Manager

We’re going to remove Thin Provisioning and change the security style from UNIX to NTFS.  Start by opening System Manager and expanding the ONTAP filer, selecting Volumes, selecting the ‘osmflexvol1’ volume and pressing Edit.

osm_flexvol_modify_step1

Change the items and click on Save and Close.

Move Flexible volume – System Manager

You cannot move a FlexVol to another aggregate from System Manager

NCDA Study Part 2.5 – Data ONTAP: Configuration

Time to get back on that horse.  As promised another update to the study guide.

As always download the newest version of the blueprint breakdown from here.

There are four statuses of disks that can be seen within the Data ONTAP software and in a NetApp array.  Each disk that is added to the system will show up as a Spare to start.  From that point forward you can add the disk to a RAID Group, create a new RAID Group using that disk, or keep it around as a Spare for the entire array.  The nice thing about Spares is that they are universal on the array: they can be used to fill in for any failed disk of the same type.  You’ll still want to follow the best practices for Spares outlined in this document.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities to do in both exist.

  • Identify Data/Parity/Double-Parity/Spare
  • Disk Sanitization
  • Assign Ownership
    • Auto
    • Manual

Identify Data/Parity/Double-Parity/Spare – CLI

To take a look and see what disks are what type from within the CLI, it’s pretty simple.  You just need to run the following command:

sysconfig -r

cli_identify_disks_step1

I chopped it up a bit to show only the relevant areas, but you get the idea.  Easy stuff.

Identify Data/Parity/Double-Parity/Spare – System Manager

Open and connect to the Data ONTAP device, expand the device, Storage, and click on Disks.  This will show you the disks currently owned by the system.  From this screen we can see that the disk ‘State’ column shows what type the disk is to the system.

osm_identify_disks_step1

That’s really all there is to that from within System Manager.

Disk Sanitization – CLI

From the Data ONTAP 8.0 7-Mode Storage Management Guide, disk sanitization is basically the complete obliteration of data on the disk by randomly writing bit patterns to it.  This helps ensure that the data that was previously on the disk is impossible to recover.  There is a caveat to doing this on the simulator: unless the disk is an approved NetApp disk (running storage show disk displays NETAPP as the vendor), you cannot run this tool on the drive.  Also, to use these commands, you have to have the license installed.  I’ve read in various postings on the Internet that you cannot remove this license once installed.  You might wonder why you would ever want to remove the license.  Well, if the license is installed, someone could run this command on one of your drives inadvertently, destroying the drive’s data.  This is clearly not something we’d want to have happen.  So I’ve heard of people only installing it on a secondary non-production array to perform these sanitizing tasks.  This is probably a good practice if you have someone on staff who is curious and has more access than they should.  Obviously I shouldn’t have to say it, but be careful using these commands.

Since this is the simulator and I’m not going to run this command on a disk in one of my production arrays, I’ll only be posting the basic command to start the process:

sysconfig -r

We’re going to sanitize one of our spare drives since it’s not part of an aggregate.

disk sanitize start v5.27

cli_disk_sanitize_step1

You’ll be met with an error, but that’s OK since this is the simulator.  I couldn’t get it to actually sanitize a disk.  Either way, this is how you start the process.  The following commands will provide you with status or abort the process.  It’s not recommended to stop the process once it’s started, however.

disk sanitize status
disk sanitize abort

Disk Sanitization – System Manager

It is not possible to perform disk sanitization from System Manager.

Assign Ownership – Auto – CLI

The automatic assignment of disks is a default option that is enabled in Data ONTAP.  When you add more shelves you may want to disable this feature which I’ll show in the manual process below.  This prevents Data ONTAP from taking it upon itself to assign disks to a controller.  Removing ownership isn’t a pain, but if you have to remove say 500 of them, then you’ll hate life because you’ll be doing it through a script.  Well that’s at least how I had to do it.

The option to have Data ONTAP automatically assign ownership is through this option:

options disk.auto_assign <on/off>

I turned this option off initially and now I will turn it on so you can see from the CLI what it looks like when Data ONTAP takes ownership.

Verify that unassigned disks are available:

disk show -n

cli_disk_autoassign_step1

Now that we can see that there are 7 free disks, we’ll turn on the auto assign option and watch them be sucked in by Data ONTAP:

options disk.auto_assign on

cli_disk_autoassign_step2

That’s all there is to it.

Assign Ownership – Auto – System Manager

It is not possible to perform ownership assignment of disks from System Manager.

Assign Ownership – Manual – CLI

Manually assigning disks is a fairly simple process.  As a bonus lesson, I’m going to show how to remove ownership of disks.  This will reset the lab so I can manually assign them.  When we looked previously we saw the following disks as unassigned.

cli_disk_autoassign_step1

Make special note of the device names.  We’ll need those to remove ownership.

To remove ownership of disks you have to switch to advanced mode on the filer.  Once we’re in advanced mode we’ll list the names of the devices to remove ownership and confirm it.

options disk.auto_assign off
priv set advanced
disk remove_ownership v5.22 v5.24 v5.25 v5.26 v5.27 v5.28 v5.29
y
priv set

cli_disk_manual_assign_step1

That basically resets things back to normal.  Now we run the same commands.  Confirm that there are unowned disks:

disk show -n

cli_disk_manual_assign_step2

We’re going to assign only the devices v5.27, v5.28 and v5.29 to ONTAP2.  To do this we simply do the following:

disk assign v5.27 v5.28 v5.29 -o ONTAP2

cli_disk_manual_assign_step3

Verify that the disks are showing up as assigned:

sysconfig -r

cli_disk_manual_assign_step4

Assign Ownership – Manual – System Manager

It is not possible to perform ownership assignment of disks from System Manager.

 

NCDA Study Part 2.4 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section pertains to RAID groups within Data ONTAP.  RAID groups consist of the physical disks within the shelves of the NetApp system.   Data ONTAP supports three different types of RAID groups, but only two natively.  RAID4 and RAID-DP are the two natively supported RAID types and are the ones most commonly run into.  Data ONTAP is capable of supporting RAID 0, but that’s only on third party types of storage and the RAID level isn’t truly protected by Data ONTAP but by the underlying storage array.  Data ONTAP simply uses RAID0 for all the third party arrays.  While RAID0 provides no protection on the Data ONTAP side, resiliency can be achieved by the underlying storage controller.  For this section, we’ll be concentrating on RAID4 and RAID-DP and their configuration.

Since RAID groups are components of an aggregate and plex, and to create one you have to do it at the aggregate level, refer back to this post on how to create a standard RAID4 and RAID-DP RAID group.  In this post I’m going to dig a bit deeper and do some other labs.

***Remember that RAID Group sizes differ based on disk type and the model type of the NetApp device.  In our case below we’re not pushing the limits on RAID Group size maximums but it’s an important concept to remember.  You should always attempt to create equalized RAID Groups if you need more than one.  This helps with performance.  Also, I’m not taking into consideration the best practice in regard to hot spare drives.  You’ll notice a few error messages when you add disks and don’t take spares into consideration.  Here they can be ignored, in the real world you need to think about what you’re doing in that regard.  Here is a link to a good thread about those practices and how many you should use per disk type in your array (refer to Tip#2 although the entire article is great).  ***

We’re going to perform the following labs in both the CLI and System Manager where the capabilities to do in both exist.

  • Add RAID4 group to existing aggregate
  • Add RAID-DP group to existing aggregate
  • Expand RAID Group
  • Add Disks to RAID Group
    • Single Disk
    • Specific Disk

Add RAID4 group to existing aggregate – CLI

The first step is determining the free spare disks that are available to be added to the RAID group.  This can be done using two different commands:

aggr status -r
sysconfig -r

cli_raid4_rg_add_step1

We have 7 spare disks that we can use to create a new RAID group and add it to ‘newaggr’.  To do this we perform the following commands to create a new RAID group, add the remaining 7 disks, preview what the RAID groups will look like and then create it:

aggr add newaggr -g new -T FCAL 7
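
If you want to preview the layout before committing, the ‘-n’ flag should show what would happen without actually adding the disks.  A sketch, assuming the same disk selection:

aggr add newaggr -n -g new -T FCAL 7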

cli_raid4_rg_add_step2

Add RAID-DP group to existing aggregate – CLI

This process is exactly the same as above; the only difference is that RAID-DP will use two disks per group for parity and double-parity.

The first step is determining the free spare disks that are available to be added to the RAID group.  This can be done using two different commands:

aggr status -r
sysconfig -r

cli_raiddp_rg_add_step1

We have 7 spare disks that we can use to create a new RAID group and add it to ‘newaggr’.  To do this we perform the following commands to create a new RAID group, add the remaining 7 disks, preview what the RAID groups will look like and then create it:

aggr add newaggr -g new -T FCAL 7

cli_raiddp_rg_add_step2

Add RAID4/RAID-DP group to existing aggregate – System Manager

I combined both RAID4 and RAID-DP because you have no control over setting the RAID group type when adding a new group anyway.  If you add a new group, it will always be the same type as rg0, because all RAID Groups in an aggregate must be the same RAID type and size.  You can, however, have a RAID Group that is sized for 7 disks but composed of only 4 disks.  This is not optimized but is possible.  Either way, both are done in exactly the same way.  When adding new RAID groups and disks via System Manager, it will halt you when you try to use all the disks without leaving a hot spare.  In this case we can’t add all the disks and keep the RAID groups the same size, so we’ll just add 6 disks instead.  As long as we’re aware of the spares concern and the RAID group size concern, this isn’t an issue and still accomplishes what we’re trying to do for studying purposes.

There are actually two ways you can add a new RAID group and disk using System Manager.  The first way is really Data ONTAP doing it for you and the second way is really how you do it.  I’ll show both ways.

First way (automatic way) – This technically isn’t adding a RAID group in the sense that you’re actually performing the adding functionality, however it will add a RAID group for you.  The default behavior of adding disks to a RAID group is this; if a RAID group is sized larger than the amount of disks it is currently composed of, it will add any new disks to attempt to fill that group first.  If that group is already full, it will create another group automatically.  This is both good and bad because we want to try and keep our RAID group sizes as equal as possible.  You could end up with a group that is sized to 16 and a new group sized to 6 disks.  This could bring about performance problems.  So don’t use this first way if you are unsure of your RAID group size.  Once you add disks to a RAID group, you can’t take them out without destroying the aggregate first.  Be careful!  The ‘newaggr’ aggregate was built with a RAID group size of 7 disks.  Since it’s already composed of 7 disks, adding more disks to the aggregate will create a new RAID group.

Open and connect to the Data ONTAP device, expand the device, Storage, click on Disks

osm_rg_add_fw_step1

From this screen you will be able to see the disks and how they are marked by ‘State’.  We’re looking for disks marked ‘Spare’.  Using the CTRL key, highlight 6 spare disks and click ‘Add to Aggregate’.  You can’t add all 7 because you will get the hot spare warning.

osm_rg_add_fw_step2

Select the aggregate, ‘newaggr’ and click the ‘Add’ button.  This will add the disks.  You can verify this by clicking on the Aggregate option on the left, click on ‘newaggr’ and then selecting the ‘Disk Layout’ tab at the bottom.  You should now see two RAID groups, rg0 and rg1.  You will also notice the disk count is now 13.

osm_rg_add_fw_step3

Second way (the advanced way) – This is a more advanced and more customizable way to actually add a RAID group to an existing aggregate.

Open and connect to the Data ONTAP device, expand the device, Storage, Aggregates and click on ‘Add Disks’.

Click on the ‘Advanced’ option to expose the RAID Group configuration.  The key here is the RAID Group size and the ‘Add Disks To’ option.  In this shot the size is 7 and the option is to add disks to existing groups first, then Data ONTAP will create a new RAID Group if those are full.

osm_rg_add_rw_step1

Click on ‘Select Disks’ and add 6 disks.  Remember that System Manager will give you a hard time about spares so you can only add 6 for now.  It’s not a big deal and still completes what we’re trying to accomplish.

osm_rg_add_rw_step2

You’ll get the typical warning about the groups being unbalanced, but again you can ignore this message.

osm_rg_add_rw_step3

Clicking on ‘Add’ will commit the changes and add the disks.  Now we can verify that it worked.  You should see 13 disks in the aggregate and two RAID Groups in the disk layout tab.

osm_rg_add_rw_step4

You can only expand or re-size RAID Groups on a per-aggregate basis.  You cannot change the size of only one RAID Group in an aggregate unless the aggregate is composed of only one RAID Group.  If it’s composed of two groups, you’ll be changing the size of both groups because all RAID Groups must be the same size.

Expand RAID Group – CLI

I started with a two-group, 5-disks-per-group RAID Group setup and I’m going to expand the groups to 6 disks.  We can verify this by running the following command:

aggr status -r

cli_expand_rg_step1

As you can see, we have two 5-disk groups.  You won’t get a confirmation or any indication that the size actually changed unless you run both commands below; one actually performs the operation and the other confirms the change.  Now we’re going to expand to 6 disks with the following commands:

aggr options newaggr raidsize 6
aggr status

cli_expand_rg_step2

Add Disks – Single – CLI

To add a disk to a RAID Group you must first take a look at the size of the group and how many disks are already assigned.  The following commands will verify the size of the group and show how many are there already.

aggr status
aggr status -r

cli_add_disks_rg_step1

I cut out unnecessary output from the commands.  We can now understand that we have two RAID groups, they are 6 disks in size and they only have 5 disks in each group.  If we want to add a single disk to rg0, we simply run the following commands to add and verify:

aggr add newaggr -g rg0 -T FCAL 1
aggr status -r

cli_add_disks_rg_step2

We can now see that the disk addition was completed and that rg0 now has 6 disks in its group and rg1 only has 5.

Add Disks – Specific – CLI

Now we’re going to add a specific spare disk from the spare pool to rg1 to bring it up to 6 disks in its group.  Looking at the following command we’re going to add spare disk v5.28 to rg1:

aggr status -r

cli_add_disks_rg_specific_step1

We can see that v5.28 is free in the spare list so we can add it by running the following command and verify:

aggr add newaggr -g rg1 -d v5.28
aggr status -r

cli_add_disks_rg_specific_step2

Add Disks – Single – System Manager

Open and connect to the Data ONTAP device, expand the device, Storage, Aggregates and click on ‘Add Disks’.  Click on the ‘Select Disks’ option and put in a single disk.

osm_add_disks_rg_step1

Click ‘OK’ and then ‘Add’ to commit the changes.  Verify the changes.

osm_add_disks_rg_step2

Add Disks – Specific – System Manager

To do a specific disk, you need to go into the Disks area that I talked about before.  While above I stated you could add a new RAID Group automatically, this area is more specifically for adding specific disks.

Open and connect to the Data ONTAP device, expand the device, Storage, click on Disks.  Select the disk from the list and then click on the ‘Add to Aggregate’ button.

osm_add_disks_rg_specific_step1

Select the name of the aggregate and verify all the information is correct.  Click on ‘Add’ and then verify the disk was properly added.

osm_add_disks_rg_specific_step2

NCDA Study Part 2.3 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

This section will cover plexes.  A plex is composed of RAID groups and is associated with an aggregate.  An aggregate will typically have one plex, plex0, and in the case of SyncMirror will have two plexes, plex0 and plex1.  In this case plex1 will be a mirrored copy of plex0.  For the blueprint’s concern, this section will cover just verifying via the CLI and System Manager where you can see the plex information in Data ONTAP.  Later, using SyncMirror you’ll be able to see both plex0 and plex1 in that configuration.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities to do in both exist.

  • Verify

Verify Plex – CLI

To look at the plex information, you have to look at the aggregate.  When you run the command below you will notice how the hierarchy works in regards to plexes, aggregates and RAID groups.  You can use either of these commands to get the same results.

aggr status -r
sysconfig -r

cli_plex_verify_step1

Looking at this output, you will see that the Plex for aggregate 0 is online, normal and active.  You will also notice that the output starts with the aggregate, then plex, then the RAID group.  This aggregate is aggr0, with plex0 inside, and composed of rg0.

Verify Plex – System Manager

Using System Manager to look at the plex is fairly straightforward.  Once you open and connect to the Data ONTAP device, expand the device, Storage, click on Aggregates and then click on the Disk Layout tab at the very bottom.  This will bring up the plex view.

cli_plex_verify_step2

NCDA Study Part 2.2 – Data ONTAP: Configuration

As always download the newest version of the blueprint breakdown from here.

The next section on the configuration overview covers aggregates.  As discussed, aggregates are composed of a plex, which is made up of RAID groups, and represent the combined performance of the RAID groups they’re composed of.  After initial configuration there will always be an aggr0 on the device.  This aggregate holds the root volume, /vol/vol0, and cannot be deleted.  You can, however, create another aggregate from the available disks.  Below are the steps for creating the various aggregate types.

We’re going to perform the following labs in both the CLI and System Manager where the capabilities to do in both exist.

  • Create 32/64bit aggregates – 1 RAID4 and 1 RAID-DP to show differences
  • Enable/disable Snapshots

Create 32bit Aggregate in RAID 4 – CLI

  • SSH to the device and confirm that there are free disks to create an aggregate from
aggr status -r

We have 14 FCAL 15000 RPM drives free to create the aggregate

cli_aggr32_create_step1
  • Create a 32bit aggregate using only 7 of the 14 free disks.  This command below will create a 32bit aggregate named ‘newaggr32’, in a RAID group consisting of 7 drives in RAID 4, using 7 of the FCAL disks we see as spares.
aggr create newaggr32 -t raid4 -r 7 -B 32 -T FCAL 7

cli_aggr32_create_step2

Our new aggregate is created and is brought online

  • Verify aggregate has the appropriate settings
aggr status

cli_aggr32_create_step3

Create 64bit Aggregate in RAID-DP – CLI

  • SSH to the device and confirm that there are free disks to create an aggregate from
aggr status -r

cli_aggr64_create_step1

  • Create a 64bit aggregate using only 7 of the 14 free disks.  This command below will create a 64bit aggregate named ‘newaggr64’, in a RAID group consisting of 7 drives in RAID-DP, using 7 of the FCAL disks we see as spares.
aggr create newaggr64 -t raid_dp -r 7 -B 64 -T FCAL 7

cli_aggr64_create_step2

  • Verify aggregate
aggr status

cli_aggr64_create_step3

Create Aggregate – System Manager

Creating an aggregate in System Manager is pretty straightforward.  I’m not going to create each type within the GUI, but you’ll see the options when we’re going through.  Once you open and connect to the Data ONTAP device expand the device, Storage and click on Aggregates

  • Click on Create and click Next

osm_aggr_create_step1

  • Name the aggregate, choose the RAID type and the block format then click Next

osm_aggr_create_step2

  • Choose the controller you want to create it on.  This only applies if you have dual controllers in the array.  Choose the disk group and click Next.

osm_aggr_create_step3

  • Click on Select Disks… and type in the number of disks you want to use to create the RAID Group and click OK

osm_aggr_create_step4

  • Select the RAID Group Size and the number of disks in each group.  Remember, RAID Groups are composed of Parity and, in the case of RAID-DP, Double-Parity drives.  The more RAID Groups, the more parity drives.  Depending on the model of controller and disk type, RAID Groups can be different sizes.  In our case we only want one RAID Group because we’re not nearing the maximum size the RAID Group can be with only 7 FCAL drives.  This also limits wasting drives for Parity when we don’t need them.  Also, remember that RAID Groups can’t be shrunk once they’re created.  You can only add disks to them or grow the group size up to the maximum.  Pay special attention when creating the groups with lots of disks that you keep the RAID Group sizes very close to the same.  This will keep performance consistent.  System Manager will give you a warning if you have disparate RAID Group sizes.

osm_aggr_create_step5

  • This is the correct setting or the setting that we want to use.  Click Next to continue.

osm_aggr_create_step6

  • Verify that the settings are correct and click on Create.   Data ONTAP will zero the disks and create the aggregate.

osm_aggr_create_step7

  • The aggregate is now completed and online.

osm_aggr_create_step8

  • Verify that all looks good

osm_aggr_create_step9

Enable/Disable Snapshots – CLI

The next section covers enabling and disabling Snapshots on the aggregate.  Aggregate Snapshots can be used to revert the entire aggregate back to a point in time.  While this is very rarely done, with the exception of a dire emergency type of scenario, it is a configurable option.  You can’t enable or disable Snapshot options for the aggregate from within System Manager.  It can only be done via the CLI.

By default, when a new aggregate is created from the CLI or from System Manager, the Snap Reserve is 0% and the Snap schedule is 0 1 4@9,14,19.  This means 0 weekly and 1 nightly Snapshot are retained, plus 4 hourly Snapshots taken at 9am, 2pm and 7pm each day, and that Snapshots don’t have a dedicated area within the aggregate to be stored in; they will consume as much free space within the aggregate as they can get.  This is the default schedule and default Snap Reserve setting for Data ONTAP 8.x.  It can be modified according to whatever the needs are.
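
For reference, the aggregate Snap Reserve itself can be viewed, and changed if you ever need to, with ‘snap reserve -A’.  A quick sketch; the second command would set a 5% reserve and is purely an example:

snap reserve -A newaggr
snap reserve -A newaggr 5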

Snapshots are enabled by default when an aggregate is created; setting the ‘nosnap’ option to on disables them.  You will not get a confirmation when you use this command.  Also, it will not delete any existing Snapshots; you’ll have to delete those by hand.

aggr options newaggr nosnap on
snap delete -A newaggr nightly.0

cli_aggr_snap_step1

Deletes a Snapshot of the aggregate ‘newaggr’ named ‘nightly.0’.

You can effectively disable Snapshots either by setting the nosnap option as above or by turning off the Snap schedule altogether.

snap sched -A newaggr
snap sched -A newaggr 0 0 0
snap sched -A newaggr

cli_aggr_snap_step2

Here are the commands to show the default Snapshot schedule for all aggregates:

snap sched -A

As you can see the default schedule is 0 1 4@9,14,19.  You can change the schedule to something else by using the following command:

snap sched -A newaggr 1 1 1@4,12,20

cli_aggr_snap_step3

1 weekly Snapshot, 1 nightly Snapshot and 1 hourly Snapshot retained, with the hourly Snapshots taken at 4am, 12pm and 8pm.