One of my favorite sessions at EMC World this year was titled “VNX FAST VP – Optimizing Performance and Utilization of Virtual Pools” presented by Product Manager Susan Sharpe.
As a technical person I always worry about sitting through sessions which end up being too sales-focused, or where the presenter doesn't have enough technical knowledge to answer the "hard ones" come question time. This was not the case; it was evident that Susan had probably been around from the very beginning of FAST VP, and she answered every question thrown at her with ease.
I had hopes of taking my notes from this session and writing up a post about the changes to FAST VP but as you would expect Chad over at VirtualGeek was quick off the mark with this post covering the goodness to come with INYO.
Rather than post the same information, I decided to focus on the three points which relate directly to FAST VP and share why, as someone who designs and implements VNX storage solutions, these changes were much needed and welcomed with open arms.
Mixed RAID VP Pools
Chad started off this section by saying this was the #1 requested change, and I totally agree.
When you look at the best practice guide for VNX Block and File, the first thing that stands out in terms of disk configuration is that EMC recommends disks be added to a pool in a 4+1 RAID 5 configuration. However, when you add NL-SAS drives to the pool you get a warning message pop up: "EMC strongly recommends RAID 6 be used for NL-SAS drives 1TB or larger when used in a pool" ... or something along those lines.
So the problem here is that until INYO is released, you can't mix RAID types within a pool. This means that to follow best practice when adding NL-SAS drives larger than 1TB to a pool, you need to make everything RAID 6, including those very costly SSD disks. This of course means your storage efficiency goes out the window.
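To see what that efficiency hit looks like, here's a minimal sketch of the parity overhead. The 4+1 RAID 5 width comes from the best practice guide above; the 6+2 RAID 6 private RAID group width is my assumption for the pre-INYO pool layout, so check your own release notes:

```python
# Rough usable-capacity comparison (illustrative only).
# 4+1 RAID 5 is the documented pool best practice; 6+2 is an assumed
# pre-INYO RAID 6 private RAID group width -- verify for your OE release.

def usable_fraction(data_disks: int, parity_disks: int) -> float:
    """Fraction of raw capacity left after parity overhead."""
    return data_disks / (data_disks + parity_disks)

raid5_4p1 = usable_fraction(4, 1)   # 0.80 usable
raid6_6p2 = usable_fraction(6, 2)   # 0.75 usable

# Forcing a whole pool to RAID 6 costs the difference on every tier,
# including the expensive flash drives.
print(f"RAID 5 (4+1): {raid5_4p1:.0%} usable")
print(f"RAID 6 (6+2): {raid6_6p2:.0%} usable")
```

Five points of raw capacity per tier doesn't sound like much until you price it against SSDs.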
Why the warning? In my opinion it comes down to the rebuild times associated with large NL-SAS drives. While the chance of a double drive failure within a RAID group during a rebuild is very low, it is potentially much higher than with the smaller, faster SAS drives. Never say never, right?
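A back-of-the-envelope calculation shows why drive size matters here. The capacities and sustained rebuild rates below are illustrative assumptions, not EMC specs, but the ratio is the point: the bigger, slower drive leaves the RAID group exposed for far longer.

```python
# Illustrative rebuild-window estimate (assumed rates, not EMC figures).
# The longer the rebuild, the longer the window in which a second
# failure in the same RAID 5 group would lose data.

def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    """Hours to rewrite a full drive at a sustained rebuild rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB, as drives are sold
    return capacity_mb / rebuild_mb_per_s / 3600

# A small fast SAS drive vs a large NL-SAS drive:
print(f"300GB SAS  @ 100 MB/s: {rebuild_hours(0.3, 100):.1f} h")
print(f"2TB NL-SAS @  50 MB/s: {rebuild_hours(2.0, 50):.1f} h")
```

An exposure window measured in half-days rather than an hour or so is exactly the kind of risk the RAID 6 warning is there to cover.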
FAST VP Pool Automatic Rebalancing
As an EMC partner we have access to a really cool application which produces heat maps from CX/VNX NAR files and shows the utilization ratios of the private RAID groups within a pool (and much more). It was common to see one or more private RAID groups doing considerably more I/O than the others, and without a "rebalance" function this was difficult to remedy. (To be fair, this was typically seen on pools without FAST VP enabled.)
Now with INYO, adding drives will cause the pool to rebalance, and there may also be an automated/scheduled rebalance of the pool data across all drives. This means that when a customer's heat map shows RAID groups are over-utilized, you can throw some more disks at the pool and let the rebalance do its thing.
Higher Core Efficiency
A number of times over the last few years I've encountered customers moving from "competitor vendor X" to EMC storage who were used to much larger RAID groups. It was sometimes a tough pill to swallow when they expected to get "X TB" and got considerably less after configuring the pool with 4+1 RAID 5 (which to date is still best practice).
Susan and Chad both mention that EMC engineering looked at the stats from customer VNX workloads and decided that 4+1 was rather conservative, so to drive better storage efficiency they will open up support for the following parity configurations:
- 8+1 for RAID 5 (used with 10K/15K SAS or SSDs)
- 14+2 for RAID 6 (target is NL-SAS)
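The efficiency gain from the wider stripes is easy to quantify. A minimal sketch comparing the old and new widths (the 6+2 RAID 6 figure is my assumption for the previous pool layout; 4+1, 8+1 and 14+2 are from the session):

```python
# Usable capacity per parity configuration. 6+2 for the old RAID 6
# width is an assumption -- the rest are the configurations named above.

def usable_fraction(data_disks: int, parity_disks: int) -> float:
    """Fraction of raw capacity left after parity overhead."""
    return data_disks / (data_disks + parity_disks)

configs = {
    "RAID 5  4+1 (old default)": (4, 1),
    "RAID 5  8+1 (INYO)": (8, 1),
    "RAID 6  6+2 (assumed old)": (6, 2),
    "RAID 6 14+2 (INYO)": (14, 2),
}
for name, (d, p) in configs.items():
    print(f"{name}: {usable_fraction(d, p):.1%} usable")
```

Going from 4+1 to 8+1 takes you from 80% to roughly 89% usable, and 14+2 NL-SAS lands around 87.5% while keeping double parity. That's the "X TB" gap closing for those customers used to bigger RAID groups.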
If you've come from a unified (Celerra) background then 8+1 is nothing new, and I don't expect it to cause too much concern. These additional parity configurations simply make configuring a VNX that much more flexible and let us keep a larger range of people happy.
What you decide to use may depend on the number of DAEs you have, the expected workload, and whether you're lucky enough to also have FAST Cache enabled. The best piece of free advice I can give is "know your workload".
I'm really excited about these improvements and I think they are going to make a lot of people happy!