NCDA Study Part 2.2 – Data ONTAP: Configuration

As always, download the newest version of the blueprint breakdown from here.

The next section of the configuration overview covers aggregates.  As discussed, an aggregate is composed of a plex, which in turn is made up of RAID groups, and delivers the combined performance of the RAID groups it contains.  After initial configuration there will always be an aggr0 on the device.  This aggregate holds the root volume, /vol/vol0, and cannot be deleted.  You can, however, create another aggregate from the available disks.  Below are the steps for creating the various aggregate types.

We’re going to perform the following labs in both the CLI and System Manager wherever both offer the capability.

  • Create 32-bit and 64-bit aggregates – one RAID 4 and one RAID-DP to show the differences
  • Enable/disable Snapshots

Create 32-bit Aggregate in RAID 4 – CLI

  • SSH to the device and confirm that there are free disks to create an aggregate from
aggr status -r

We have 14 FCAL 15000 RPM drives free to create the aggregate

cli_aggr32_create_step1
  • Create a 32-bit aggregate using only 7 of the 14 free disks.  The command below creates a 32-bit aggregate named ‘newaggr32’, with a single RAID group of 7 drives in RAID 4, using 7 of the FCAL disks we see as spares.
aggr create newaggr32 -t raid4 -r 7 -B 32 -T FCAL 7

cli_aggr32_create_step2

Our new aggregate is created and is brought online

  • Verify that the aggregate has the appropriate settings
aggr status

cli_aggr32_create_step3
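The aggr create command packs several single-letter flags.  As a quick reference, here’s a small sketch (plain Python, purely illustrative, not a NetApp tool) that assembles the same command string from named parameters:

```python
# Build a 7-Mode "aggr create" command string from its parameters.
# Flag meanings: -t RAID type, -r RAID group size, -B block format,
# -T disk type; the trailing number is the disk count.
def build_aggr_create(name, raid_type, raid_size, block_format, disk_type, ndisks):
    return (f"aggr create {name} -t {raid_type} -r {raid_size} "
            f"-B {block_format} -T {disk_type} {ndisks}")

cmd = build_aggr_create("newaggr32", "raid4", 7, 32, "FCAL", 7)
print(cmd)  # aggr create newaggr32 -t raid4 -r 7 -B 32 -T FCAL 7
```

The same helper with raid_dp and -B 64 produces the 64-bit variant used in the next lab.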

Create 64-bit Aggregate in RAID-DP – CLI

  • SSH to the device and confirm that there are free disks to create an aggregate from
aggr status -r

cli_aggr64_create_step1

  • Create a 64-bit aggregate using the remaining 7 free disks.  The command below creates a 64-bit aggregate named ‘newaggr64’, with a single RAID group of 7 drives in RAID-DP, using 7 of the FCAL disks we see as spares.
aggr create newaggr64 -t raid_dp -r 7 -B 64 -T FCAL 7

cli_aggr64_create_step2

  • Verify aggregate
aggr status

cli_aggr64_create_step3

Create Aggregate – System Manager

Creating an aggregate in System Manager is pretty straightforward.  I’m not going to create each type within the GUI, but you’ll see the options as we go through.  Once you open and connect to the Data ONTAP device, expand the device, then Storage, and click on Aggregates.

  • Click on Create and click Next

osm_aggr_create_step1

  • Name the aggregate, choose the RAID type and the block format then click Next

osm_aggr_create_step2

  • Choose the controller you want to create it on.  This only applies if you have dual controllers in the array.  Choose the disk group and click Next.

osm_aggr_create_step3

  • Click on Select Disks… and type in the number of disks you want to use to create the RAID Group and click OK

osm_aggr_create_step4

  • Select the RAID Group Size and the number of disks in each group.  Remember, RAID Groups include Parity drives, and in the case of RAID-DP, Double-Parity drives.  The more RAID Groups you create, the more drives are consumed by parity.  Depending on the model of controller and disk type, RAID Groups can have different maximum sizes.  In our case we only want one RAID Group, because with only 7 FCAL drives we’re nowhere near the maximum size a RAID Group can be.  This also avoids wasting drives on parity when we don’t need to.  Also remember that RAID Groups can’t be shrunk once they’re created; you can only add disks to them or grow the group size up to the maximum.  When creating groups with lots of disks, pay special attention to keeping the RAID Group sizes very close to the same, which keeps performance consistent.  System Manager will warn you if you have disparate RAID Group sizes.

osm_aggr_create_step5
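To make the parity overhead concrete, here’s a small sketch (plain Python, not a NetApp tool) that computes how many drives go to data versus parity for a given disk count, RAID type and RAID group size, assuming disks are split evenly into groups:

```python
import math

# Parity drives per RAID group: RAID 4 uses one, RAID-DP uses two.
PARITY_PER_GROUP = {"raid4": 1, "raid_dp": 2}

def parity_overhead(ndisks, raid_type, raid_group_size):
    """Return (groups, parity_disks, data_disks) for a simple even layout."""
    groups = math.ceil(ndisks / raid_group_size)
    parity = groups * PARITY_PER_GROUP[raid_type]
    return groups, parity, ndisks - parity

# Our lab: 7 disks in one RAID-DP group -> 2 parity, 5 data.
print(parity_overhead(7, "raid_dp", 7))    # (1, 2, 5)
# Same 7 disks in RAID 4 -> 1 parity, 6 data.
print(parity_overhead(7, "raid4", 7))      # (1, 1, 6)
# 28 disks split into two RAID-DP groups of 14 -> 4 parity drives total.
print(parity_overhead(28, "raid_dp", 14))  # (2, 4, 24)
```

This shows why fewer, larger RAID Groups waste fewer drives on parity, and why two unbalanced groups would leave performance uneven.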

  • This is the setting we want to use.  Click Next to continue.

osm_aggr_create_step6

  • Verify that the settings are correct and click on Create.  Data ONTAP will zero the disks and create the aggregate.

osm_aggr_create_step7

  • The aggregate is now completed and online.

osm_aggr_create_step8

  • Verify that all looks good

osm_aggr_create_step9

Enable/Disable Snapshots – CLI

The next section covers enabling and disabling Snapshots on the aggregate.  Aggregate Snapshots can be used to revert the entire aggregate back to a point in time.  While this is very rarely done outside of a dire-emergency scenario, it is a configurable option.  You can’t enable or disable Snapshot options for the aggregate from within System Manager; it can only be done via the CLI.

By default, when a new aggregate is created from either the CLI or System Manager, the Snap Reserve is 0% and the Snap schedule is 0 1 4@9,14,19.  This means 0 weekly Snapshots are kept, 1 nightly Snapshot is kept, and 4 hourly Snapshots are kept, taken at 9am, 2pm and 7pm each day.  A Snap Reserve of 0% means Snapshots have no dedicated area within the aggregate to store them; they consume as much free space within the aggregate as they can get.  These are the default schedule and Snap Reserve settings for Data ONTAP 8.x, and they can be modified according to whatever the needs are.
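The schedule string is compact, so a small sketch (plain Python, assuming the field layout is `weekly nightly hourly[@hours]`) helps decode it:

```python
def parse_snap_sched(sched):
    """Parse a 7-Mode snap schedule string like '0 1 4@9,14,19'."""
    weekly, nightly, hourly_field = sched.split()
    if "@" in hourly_field:
        hourly, hour_list = hourly_field.split("@")
        hours = [int(h) for h in hour_list.split(",")]
    else:
        hourly, hours = hourly_field, []
    return {
        "weekly": int(weekly),    # weekly Snapshots to keep
        "nightly": int(nightly),  # nightly Snapshots to keep
        "hourly": int(hourly),    # hourly Snapshots to keep
        "hours": hours,           # hours of day (24h) when hourly copies are taken
    }

print(parse_snap_sched("0 1 4@9,14,19"))
# {'weekly': 0, 'nightly': 1, 'hourly': 4, 'hours': [9, 14, 19]}
print(parse_snap_sched("0 0 0"))  # all zeros -> schedule effectively disabled
```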

Snapshots are enabled by default when an aggregate is created.  To disable them, set the nosnap option to on.  You will not get a confirmation when you use this command, and it will not delete any existing Snapshots either; you’ll have to delete those by hand.

aggr options newaggr nosnap on
snap delete -A newaggr nightly.0

cli_aggr_snap_step1

The second command deletes the Snapshot named ‘nightly.0’ from the aggregate ‘newaggr’.

You can effectively disable Snapshots either by setting the nosnap option as shown above or by turning off the Snap schedule altogether.

snap sched -A newaggr
snap sched -A newaggr 0 0 0
snap sched -A newaggr

cli_aggr_snap_step2

Here is the command to show the Snapshot schedule for all aggregates:

snap sched -A

As you can see, the default schedule is 0 1 4@9,14,19.  You can change the schedule to something else by using the following command:

snap sched -A newaggr 1 1 1@4,12,20

cli_aggr_snap_step3

This keeps 1 weekly Snapshot, 1 nightly Snapshot and 1 hourly Snapshot, with the hourly copies taken at 4am, 12pm and 8pm.
