Archive for the ‘CLARIION’ Category

One of my favorite sessions at EMC World this year was “VNX FAST VP – Optimizing Performance and Utilization of Virtual Pools”, presented by Product Manager Susan Sharpe.

As a technical person I always worry about sitting through sessions that end up being too sales-focused, or that the presenter won’t have enough technical knowledge to answer the “hard ones” come question time. That was not the case here: it was evident that Susan has probably been around since the very beginning of FAST VP, and she answered every question thrown at her with ease.

I had hoped to take my notes from this session and write up a post about the changes to FAST VP, but as you would expect, Chad over at VirtualGeek was quick off the mark with this post covering the goodness to come in INYO.

Rather than repeat the same information, I decided to post on the three points which relate directly to FAST VP and share why, as someone who designs and implements VNX storage solutions, these changes were much needed and welcomed with open arms.

Mixed Raid VP Pools

Chad started off this section by saying this was the #1 requested change, and I totally agree.

When you look at the best practice guide for VNX Block and File, the first thing that stands out in terms of disk configuration is that EMC recommends disks be added to a pool in a 4+1 RAID 5 configuration. However, when you add NL-SAS drives to the pool a warning message pops up: “EMC strongly recommends RAID 6 be used for NL-SAS drives 1TB or larger when used in a pool”… something along those lines.

So the problem here is that until INYO is released, you can’t mix RAID types within a pool. This means that to follow best practice when adding NL-SAS drives larger than 1TB to a pool, you need to make everything RAID 6, including those very costly SSD disks, and your storage efficiency goes out the window.
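To put some rough numbers on that, here’s a quick back-of-envelope sketch in Python. The drive counts and capacities are made up, and I’m assuming the pre-INYO pool layouts of 4+1 for RAID 5 and 6+2 for RAID 6 (ignoring vault drives, hot spares and pool metadata overhead):

# Illustrative raw capacity per tier: drive_count * drive_size_tb (made-up numbers)
raw_tb = {"ssd": 10 * 0.2, "sas": 40 * 0.6, "nl_sas": 16 * 2.0}

def efficiency(data, parity):
    """Fraction of raw capacity left over after parity."""
    return data / (data + parity)

# What we'd like: RAID 5 on the performance tiers, RAID 6 only on NL-SAS
mixed = {"ssd": (4, 1), "sas": (4, 1), "nl_sas": (6, 2)}
# The pre-INYO workaround: RAID 6 everywhere, including the SSDs
all_r6 = {tier: (6, 2) for tier in raw_tb}

for name, layout in (("mixed RAID types", mixed), ("all RAID 6", all_r6)):
    usable = sum(raw_tb[t] * efficiency(*layout[t]) for t in raw_tb)
    print(f"{name:>16}: {usable:.1f} TB usable")

The absolute difference isn’t huge, but the capacity you give up is on the most expensive drives in the pool.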

Why the warning? In my opinion it comes down to the rebuild times associated with large NL-SAS drives. While a double drive failure within a RAID group during a rebuild is very unlikely, the odds are potentially a lot higher than with the smaller, faster SAS drives. Never say never, right?
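Just to illustrate the scale of the difference, here’s a rough sketch with completely made-up annual failure rates and rebuild windows; real numbers depend on the drive model, pool activity and rebuild priority:

HOURS_PER_YEAR = 24 * 365

def p_second_failure(surviving_drives, afr, rebuild_hours):
    """Rough chance that any surviving drive in the group fails before the
    rebuild finishes (treating drive failures as independent)."""
    p_one = afr * rebuild_hours / HOURS_PER_YEAR
    return 1 - (1 - p_one) ** surviving_drives

# 4+1 group of 2 TB NL-SAS, assuming a ~24 hour rebuild and ~2% AFR
print(f"NL-SAS: {p_second_failure(4, afr=0.02, rebuild_hours=24):.4%}")
# 4+1 group of 600 GB SAS, assuming a ~4 hour rebuild and ~1% AFR
print(f"SAS:    {p_second_failure(4, afr=0.01, rebuild_hours=4):.4%}")

Both numbers are tiny, but the NL-SAS case comes out roughly an order of magnitude worse, which is exactly the gap the RAID 6 recommendation is trying to cover.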

FAST VP Pool Automatic Rebalancing

As an EMC partner we have access to a really cool application which produces heat maps from CX/VNX NAR files and shows the utilization ratios for the private RAID groups within a pool (and much more). It was common to see one or more private RAID groups doing considerably more I/O than the others, and without the “rebalance” functionality it was difficult to remedy (to be fair, this was typically seen on pools without FAST VP enabled).

Now with INYO, adding drives will cause the pool to rebalance, and there may also be an automated/scheduled rebalance of the pool data across all drives. This means that when a customer’s heat map shows private RAID groups are over-utilized, you can throw some more disks at the pool and let the rebalance do its thing.
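As a trivial example of the sort of check the heat maps make easy, here’s a sketch that flags private RAID groups doing well above the pool average; the per-group IOPS figures (and the 1.5x threshold) are completely made up:

# Hypothetical per-private-RAID-group IOPS, as you might pull from a NAR-based report
group_iops = {"rg_0": 1800, "rg_1": 1750, "rg_2": 5200, "rg_3": 1600}

mean_iops = sum(group_iops.values()) / len(group_iops)

# Flag anything doing more than 1.5x the pool average
hot = {g: iops for g, iops in group_iops.items() if iops > 1.5 * mean_iops}

print(f"pool average : {mean_iops:.0f} IOPS per private RAID group")
print(f"over-utilized: {hot}")

Pre-INYO, spotting rg_2 like this usually meant manual remediation such as LUN migrations; post-INYO the plan is simply to add drives and let the rebalance spread the data.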

Higher Core Efficiency

A number of times over the last few years I’ve encountered customers who were moving from “competitor vendor X” to EMC storage and were used to much larger RAID groups. It was sometimes a tough pill to swallow when they expected to get “X TB” and got considerably less after configuring the pool with 4+1 RAID 5 (which to date is still best practice).

Susan and Chad both mention that EMC engineering looked at the stats from customer VNX workloads and decided that 4+1 was rather conservative, so in order to drive better storage efficiency they are opening up support for the following parity configurations:

  • 8+1 for RAID 5 (used with 10K/15K SAS or SSDs)
  • 14+2 for RAID 6 (target is NL-SAS)

If you’ve come from a unified (Celerra) background then 8+1 is nothing new and I don’t expect this to cause too much concern. Having these additional parity configurations just makes configuring a VNX that much more flexible and allows us to keep a larger range of people happy.
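The capacity difference is easy to quantify. Here’s a quick sketch comparing the parity overhead of the old and new private RAID group layouts (assuming 6+2 is the existing RAID 6 pool layout):

layouts = {
    "RAID 5 4+1  (current)": (4, 1),
    "RAID 5 8+1  (INYO)":    (8, 1),
    "RAID 6 6+2  (current)": (6, 2),
    "RAID 6 14+2 (INYO)":    (14, 2),
}

for name, (data, parity) in layouts.items():
    eff = data / (data + parity)
    print(f"{name}: {eff:5.1%} usable, {1 - eff:5.1%} parity overhead")

Going from 4+1 to 8+1 buys you roughly 9% more usable capacity from the same spindles, and 6+2 to 14+2 roughly 12.5%, which goes a long way toward closing the gap with the larger RAID groups people were used to elsewhere.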

What you decide to use may depend on the number of DAEs you have, the expected workload, and whether you’re lucky enough to also have FAST Cache enabled. The best piece of free advice I can give is “know your workload”.

I’m really excited about these improvements and I think this is really going to make a lot of people happy!


When it comes to zoning, everyone has their own way of doing things, but I thought I’d share a couple of the things I do to make my life easier down the road, and who knows, maybe yours too.

I’m going to concentrate mainly on the zoning of EMC arrays here, but to go broader, I think it’s worthwhile noting that “single initiator, single target” is definitely the way to go.

In almost every site I go into, I will find alias names set up for a CLARiiON as shown below.

cx4_spa0, or often I’ll see cx4_spa_port0

There is absolutely nothing wrong with this, and it gives you all the information you need such as the array type, storage processor and port which is connected. This works well when you have a single array, but people often get stuck for alias names when they add a second array of the same type. There are of course a number of ways you can go about differentiating the arrays, but few keep things tidy and consistent.

The convention I use will depend on how the existing aliases are configured, e.g.:

vnx_spa_port0 or vnx_spa0 or vnx_spa_port_0 (all of which are examples I have seen in the field). 

So what will I use? I will configure vnx_3ea13252_spa_0 or vnx_3ea13252_spa_port0, again depending on the existing configuration.

So where does the 3ea13252 come from? Well, every array you zone will have a different WWN, and that’s the point. I’ve created a diagram below to show a typical CLARiiON/VNX WWN which you would expect to see when you go looking on the array, or on the switch after connecting it.

As soon as you see the first three octets of 50:06:01 you know it’s an EMC CLARiiON/VNX array; the next octet identifies the storage processor port, and the last four octets are the unique array ID assigned to the array.
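To show how mechanical this is, here’s a small sketch that pulls the alias straight out of the WWPN; the exact SP/port decode table below is my own assumption for illustration, so verify it against your own array before trusting it:

def clariion_alias(wwpn: str, array_type: str = "vnx") -> str:
    """Build a zoning alias like vnx_3ea13252_spa_port0 from a CLARiiON/VNX port WWPN."""
    octets = wwpn.lower().split(":")
    if octets[:3] != ["50", "06", "01"]:
        raise ValueError("doesn't look like a CLARiiON/VNX port WWPN")

    sp_port = int(octets[3], 16)
    # Assumed mapping: 0x60-0x67 -> SP A ports 0-7, 0x68-0x6f -> SP B ports 0-7
    sp = "spa" if sp_port < 0x68 else "spb"
    port = sp_port - (0x60 if sp == "spa" else 0x68)

    array_id = "".join(octets[4:])      # the last four octets: the unique array ID
    return f"{array_type}_{array_id}_{sp}_port{port}"

print(clariion_alias("50:06:01:60:3e:a1:32:52"))   # vnx_3ea13252_spa_port0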

While my alias might not be the most attractively constructed alias you’ve ever seen, it does give me a very clear path for adding additional VNX arrays to the same switched network, as the array ID will never be the same.

vnx_3ea13252_spa_port0

vnx_1ca19253_spa_port0

If I come back later and need to add yet another array, or make zoning changes, it’s really clear and hard to get wrong. Most of all I like it because it’s tidy and consistent. Maybe it’s a touch on the obsessive-compulsive side, and one might argue that port descriptions can be used to get around this, but I have a golden rule of never trusting port descriptions as they are often out of date (which means wrong).

This is not EMC best practice; it’s just my own way of doing things, and it works well for me. 🙂

I’ve been working on a storage project over the last few months and I ran into a problem with MirrorView/S which turned out to be a bug in the CLARiiON FLARE code. I thought I’d write a quick post about it in case anyone comes across the same problem.

The Problem:

The symptoms I saw were as follows:

  • Enabling the first mirror of a LUN per SP showed no issues; the LUN replicated and showed a consistent state.
  • Enabling additional mirrors of LUNs on the same SP caused the initial mirror to resynchronize (but it would never complete).
  • Enabling additional mirrors caused the hosts’ average read/write queue times to shoot through the roof, causing huge performance problems.

The Fix:

The arrays I was working on were running FLARE version 04.30.000.5.004, which needed to be upgraded to 04.30.000.5.517. After the upgrade I initiated a sync on all mirrors and everything started working as expected.