Automating NetApp SnapMirror and PernixData FVP Write-Back Caching v1.0

I wanted to toss up a script I’ve been working on in my lab that automates transitioning SnapMirror volumes on a NetApp array from Write Back caching with PernixData’s FVP to Write Through, so you can properly take snapshots of the underlying volumes for replication purposes. This is just a v1.0 script and I’m sure I’ll modify it going forward, but I wanted to give people a place to start.

Assumptions made:

  • You’re accelerating entire datastores and not individual VMs.
  • The naming schemes of LUNs in vCenter and volumes on the NetApp filer closely match.

What you’ll need:

  • You’ll need the Data ONTAP PowerShell Toolkit 3.1 from NetApp. It’s community-driven, but you’ll still need a NetApp login to download it; signing up should be free. Here’s a link to it.
  • You’ll need to do some credential and password building first; the instructions are in the comments of the script.
  • You’ll need to be running FVP version 1.5.
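The credential building the script expects might look something like this. This is only a sketch using the Data ONTAP PowerShell Toolkit’s credential cache; “netapp01” is a placeholder controller name, and you should follow the instructions in the script’s comments for the exact setup.

```powershell
# One-time setup: cache credentials for the NetApp controller so the
# script can later connect non-interactively (e.g. from a scheduled task).
# "netapp01" is a placeholder controller name.
Import-Module DataONTAP

# Prompts once for credentials and stores them in the toolkit's cache
Add-NaCredential -Controller netapp01 -Credential (Get-Credential)

# Subsequent runs can then connect without a password prompt
Connect-NaController netapp01
```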

What this script does:

  • Pulls SnapMirror information from a NetApp controller, specifically source and destination information, filtered on ‘Idle’ status and ‘LagTimeTS’. The ‘LagTimeTS’ threshold is adjustable, so you can focus on SnapMirror relationships that follow a distinct schedule based on lag time and aren’t currently in a transferring state.
  • Takes the names of the volumes in question and passes them to the PernixData Management Server, transitioning the matching datastores from Write Back to Write Through and waiting an adjustable amount of time for the policy change to take effect and the cache to de-stage back to the array.
  • Performs a SnapMirror update on the same volumes originally pulled and waits an adjustable amount of time for the snapshots to complete.
  • Resets the datastores back to Write Back with 1 network peer (adjustable).
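The four steps above could be sketched roughly as follows. The Na* cmdlets are from the Data ONTAP PowerShell Toolkit; the Set-PrnxAccelerationPolicy calls and their parameters are stand-ins for whatever the FVP 1.5 PowerShell snap-in exposes on your management server, the datastore-to-volume name mapping is simplified, and the controller name and wait times are placeholders.

```powershell
# Rough sketch of the workflow, not the actual script.
Import-Module DataONTAP
Connect-NaController netapp01      # placeholder controller name

$maxLagSeconds = 4 * 3600          # adjustable lag-time filter
$destageWait   = 300               # seconds to wait for cache de-stage
$snapWait      = 600               # seconds to wait for the SnapMirror update

# 1. Find idle SnapMirror relationships inside the lag window
$mirrors = Get-NaSnapmirror | Where-Object {
    $_.Status -eq 'idle' -and $_.LagTimeTS.TotalSeconds -lt $maxLagSeconds
}

foreach ($m in $mirrors) {
    # Volume name is the part after the colon in "controller:volume";
    # this assumes the datastore name closely matches the volume name.
    $volume = ($m.SourceLocation -split ':')[-1]

    # 2. Drop the matching datastore to Write Through and let the cache de-stage
    Set-PrnxAccelerationPolicy -Name $volume -WriteThrough     # assumed cmdlet
    Start-Sleep -Seconds $destageWait

    # 3. Kick off the SnapMirror update and wait for the snapshots
    Invoke-NaSnapmirrorUpdate -Destination $m.DestinationLocation
    Start-Sleep -Seconds $snapWait

    # 4. Return the datastore to Write Back with one network peer
    Set-PrnxAccelerationPolicy -Name $volume -WriteBack -NumWBPeers 1  # assumed
}
```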

Comments and suggestions are always welcome. I’m always open to learning how to make it more efficient, and I’m sure there are several ways to tackle this.

You can download the script from here.


3 thoughts on “Automating NetApp SnapMirror and PernixData FVP Write-Back Caching v1.0”

  1. Pingback: Getting the PernixData FVP Acceleration Policy from the CLI | vWilmo

  2. How did the script go? Any new updates? We are about to demo FVP on a PowerEdge cluster using NetApp storage. Any advice?

    • I ended up not using it. My RPO/RTO isn’t 0, so the script doesn’t make much sense in my case after I thought about it more. If you want consistent snapshots, then sure, but if your RPO is 4 hours and you lose your data at 3.5, it’s not going to matter much what hadn’t made it to the backend storage anyway. Plus, FVP has a section in the interface that shows how much write-back data isn’t synced back to the storage backend. The more I monitored it, the more I found there really wasn’t a time when the backend wasn’t in sync with the write-back cache.

      As far as advice, I would run the software in its ‘monitoring mode’ first, look at the workloads that are having write problems, and try write-back on those first. Take baby steps with write-back and slowly expand it out to other systems. We eventually moved all primary workloads to write-back. As far as write-through, I just turned that on pretty much from the start; there’s not much that can go wrong in that mode. It’s a great product, and I can’t find much wrong with it at all. Hope it all works out. If you have any other questions, don’t hesitate to ask.
