Archive for the ‘Celerra’ Category

I’ve been working on a storage project over the last few months and ran into a problem with MirrorView/S which turned out to be a bug in the CLARiiON FLARE code. I thought I’d write a quick post about it in case anyone comes across the same problem.

The Problem:

The symptoms I saw were as follows:

  • Enabling the first mirror of a LUN per SP showed no issues; the LUN replicated and reported a consistent state.
  • Enabling additional mirrors of LUNs on the same SP caused the initial mirror to re-synchronize (but the sync would never complete).
  • Enabling additional mirrors also caused the hosts’ average read/write queue times to shoot through the roof, causing huge performance problems.

The Fix:

The arrays I was working on were running FLARE version 04.30.000.5.004, which needed to be upgraded to 04.30.000.5.517. After the upgrade I initiated a sync on all mirrors and everything started working as expected.
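For anyone wanting to double check which FLARE revision their array is running before and after the upgrade, the quickest way I know of is Navisphere Secure CLI. A minimal example, assuming naviseccli is installed on a management host and that <SP_A_IP>, <user> and <password> are placeholders for your environment:

naviseccli -h <SP_A_IP> -user <user> -password <password> -scope 0 getagent

The Revision field in the getagent output shows the FLARE release, e.g. 04.30.000.5.517.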


If you’ve ever configured CIFS shares on your Celerra (VSA included), you’ll know that when mapping to the shares from Windows 2000, XP or Server 2003 you get the default CIFS server comment displayed in Windows Explorer.

The comment displays as “EMC-SNAS:T<DART Version>”, for example “EMC-SNAS:T6.0.36.4”.

To stop this from happening, you can typically handle it in one of two ways.

#1. On an existing CIFS server: Run the following command as user nasadmin

server_cifs <vdm> -add compname=<cifs server>,domain=<domain>,interface=<interface name> -comment " "

You can see from the command I ran that I’m using a virtual data mover; if you’re not, replace <vdm> with <server_x>.

#2. When creating a new CIFS server: One of the benefits of creating a CIFS server from the command line is that you can add the switch -comment " " at creation time, which gets around this problem from the word go.
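For completeness, a rough sketch of what that looks like end to end, run as nasadmin on the control station. The VDM, comp name, domain, interface and admin account are all placeholders, so verify the syntax against your DART release:

server_cifs <vdm> -add compname=<cifs server>,domain=<domain>,interface=<interface name> -comment " "

server_cifs <vdm> -Join compname=<cifs server>,domain=<domain>,admin=<domain admin>

The first command creates the CIFS server with a blank comment from the word go, and the second joins it to the domain (you’ll be prompted for the admin account’s password).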

If you mapped drives prior to the change, they will still show the default CIFS comment because Windows caches it in the registry. I did a search for EMC-SNAS and found the cached entry in the registry.

Just delete the entry from the registry and at the next login the mapped drive will appear without the comment.
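If you’d rather not click through regedit looking for it, you can search the current user’s hive from a command prompt; the command below only searches for the string, it doesn’t change anything:

reg query HKCU /f "EMC-SNAS" /s

Any keys or values it reports are where the comment is being cached.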

In the last couple of months I’ve deployed two Celerra NS120s and a Celerra VG2 gateway, and with both I experienced the same problem preventing anyone from being able to log in to Unisphere post-install.

The symptoms on the NS120s were quite different to those on the gateway: the NS120s gave a blatant certificate error which told you the certificate was not yet valid and, in effect, asked you to come back 3 hours in the future, when the certificate would be valid and accepted.

The VG2 gateway install was completely different and manifested itself as what looked more like a browser compatibility issue. When launching Unisphere the pop-up window would open but the login prompt would never appear.

The NS120s were deployed in the afternoon and I didn’t need to log in immediately, so I packed up and left it for the next day. The next day I turned up and logged in without an issue, as the certificate was by then valid.

The VG2 gateway was more like a day out of sync, so I logged in, corrected the time on the control station and tried to regenerate the SSL certificates using the command /nas/sbin/nas_config -ssl, but unfortunately this did not work.
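For reference, what I tried was roughly the following, run as root on the control station (the date shown is just a placeholder):

date
date -s "16 DEC 2010 14:30:00"
/nas/sbin/nas_config -ssl

The first date command simply shows the current control station time, the second sets it, and nas_config -ssl regenerates the Control Station certificate. In my case this still didn’t get me into Unisphere, hence the digging below.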

I did some searching in PowerLink hoping to find something related and found the following knowledge base article:

emc257320 “After NAS 6.0 fresh installation, unable to log in to Unisphere”

While searching PowerLink I found another article, related to the CLARiiON but almost identical in nature. The CLARiiON fix was nicer, as it allowed you to put your clock forward, log in, generate a new certificate and then put the time/date back to the correct time.

emc247504 “Certificate date and time error prevents access to CX4 storage system after upgrade to Release 30.”

The problem seems to be present in both FLARE 30 and DART 6.0 and can show up during install or upgrade.

Hopefully anyone with the same problem will find this post before spending hours thinking it’s a browser or Java issue.

Hope this helps.

I was talking to a friend last week who asked me two provocative yet valid questions, and I thought it would be worth posting about because, by the end of our conversation, he had opened his mind to a few concepts he had not considered.

Question #1: “What’s all this Unified Storage Hooey about?”

Probably a good place to start, for people who are not overly familiar with the term “Unified Storage”, is to define it. In my own words, “Unified Storage” is about being able to present and manage storage using multiple protocols, e.g. block (FC), NFS, iSCSI and CIFS, from the same storage array.

It’s no secret that most of us (myself included) have adopted the “Virtualize Everything” mind-set, so I don’t need to explain where block (FC), NFS and iSCSI have a place in the virtualization world; they are all widely used and proven.

You might say that typically people tend to pick one protocol and stick with it, and you wouldn’t be wrong. But think about being able to tier storage by protocol, rather than the way tiering is normally thought of in the storage world, which tends to be centered around disk type, e.g. EFD, FC and SATA. If you look at the cost of FC switch ports and HBAs compared to Ethernet ports and adapters, it makes total sense that you might put tier 1 production systems on FC storage while placing your test and development systems on IP storage such as NFS or iSCSI.

Question #2: “What good is CIFS to me in a predominantly virtual world?”

When virtualization started taking off years ago we all went mad; it was all about server consolidation, and in most cases we virtualized anything that responded to a ping. What often happened was that people virtualized file servers and ended up with virtual machines with huge .vmdk files or RDMs, which generally did two things:

  • Used huge amounts of valuable storage
  • Caused huge bottlenecks and blew out backup windows

So I’ll address those two points and also try to cover some other key points which, in my mind, make putting file data on the Celerra a no-brainer.

De-Duplication: This feature is actually single instancing plus compression. People are always sceptical about the figures storage vendors put out in the market, as they are often not achievable, but I can tell you that I have personally seen 1 TB file systems reduce by 50%, and in some cases 60%, with Celerra De-Duplication enabled.
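If you like to poke at this from the control station rather than Unisphere, the deduplication utility I recall is fs_dedupe; treat the exact switches below as an assumption and check the man page for your DART release (the file system name is a placeholder):

fs_dedupe -info <fs_name>

fs_dedupe -modify <fs_name> -state on

The first reports the deduplication state and space savings for the file system, and the second turns deduplication on.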

NDMP Backup: File systems containing user data tend to be quite dense; this typically gives traditional backup systems grief, and a full backup window can end up being 4-5 days with large amounts of data. NDMP backups using EMC NetWorker can be configured to pull the data over the network or send it directly to a fibre-connected device (tape or VTL drive) for LAN-free backup. Celerra NVB (block-based backup) handles dense file systems really well; I’ve seen backup windows shrink from days to hours.

Snapshots: If you’ve ever been a backup administrator for a large organization you’ll know that on a bad week you can end up spending way too much time recovering user data. Celerra allows you to configure scheduled snapshots of file systems, which are then available via the Windows Previous Versions tab built into XP Service Pack 3 and later operating systems.

Archiving: How much of that user data has actually been accessed in the last 6 months? Introduce EMC’s File Management Appliance (Rainfinity) and you have the ability to move data from expensive FC disk to cheaper SATA storage, transparently to the users accessing it.

I could go on and on about the other features such as high availability, replication, file-level retention, file filtering, etc., but I think this post might end up being too much of a mammoth read, so I might pick some of these and elaborate on them in upcoming posts.

I’m not saying it’s a mistake to virtualize file servers; in certain cases there are advantages, e.g. being able to replicate and protect them using VMware Site Recovery Manager. But I am saying that the larger the amount of data, the more beneficial it is to consider moving it to the Celerra.

I’ve been deploying Celerras for a while now, and with technology changing as fast as it does you really need to be across the applied practice, best practice and NAS Matrix documents found in PowerLink to validate your storage design.

For the last couple of years the majority of the Celerras I’ve deployed have shipped with DART 5.6 code, and the backend CLARiiON FLARE version has typically been what we refer to as 28 or 29, and most recently FLARE 30.

One of the first things you learn when deploying Celerras is that you need to read the NAS Matrix guide to determine which RAID configurations are supported. Although you may be able to create certain RAID types with x number of disks on the backend CLARiiON, it doesn’t mean the Celerra will let you use them… this is typically where people sit for 20-30 minutes performing multiple “rescan storage” operations, hoping that storage will magically appear.

One of the biggest changes in FLARE 30 was the introduction of Storage Pools, which are quite different to the traditional “RAID Group” concept that has been around for donkey’s years. If you want to read some more in-depth information about Storage Pools, check out Matt Hensley’s post here.

What I couldn’t find in the latest NAS Matrix guide, or in any other documentation, was confirmation that I could present storage from a CLARiiON storage pool to the Celerra. So after carving a LUN from a storage pool made up of 10 FC disks, I presented it through to the Celerra and performed a rescan… rescan… and one more for good luck… rescan, but unfortunately no luck and no LUN.

At this stage I remembered that performing tasks from the command line on the control station often spits out more debug information, so I thought I’d give it a shot.
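From memory, the CLI equivalent of a rescan is roughly the two commands below, run as nasadmin on the control station; it’s the diskmark step that prints the useful detail:

server_devconfig ALL -create -scsi -all

nas_diskmark -mark -all

The first probes the back end for newly presented devices on all Data Movers, and the second attempts to mark them for Celerra use.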

And that confirmed it: “Fully provisioned CLARiiON pool device was not marked because it is not supported in this release”.

In the end the answer to this problem was thin provisioning. I created a thin provisioned LUN from the CLARiiON pool, added the LUN to the Celerra storage group, performed a rescan, and there was my LUN.
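If you prefer doing that part from naviseccli rather than Unisphere, a rough sketch would look something like the following; the pool name, capacity, storage group and LUN/HLU numbers are placeholders, and the exact switches may differ between FLARE 30 patch levels:

naviseccli -h <SP_A_IP> lun -create -type Thin -capacity 500 -sq gb -poolName "Pool 0" -name <lun name>

naviseccli -h <SP_A_IP> storagegroup -addhlu -gname <celerra storage group> -hlu <hlu> -alu <lun id>

The first creates the thin LUN from the storage pool and the second presents it to the Celerra’s storage group; after that it’s the same rescan as above.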

I also found the following Primus solution, emc241591, which outlines three conditions that will cause this error: fully provisioned pool LUNs was one of them, along with FAST auto-tiering and LUN compression being enabled.

Cause: Certain new CLARiiON FLARE Release 30 features are not supported by Celerra and the initial 6.0 NAS code release. CLARiiON LUNs containing the following features will not be diskmarked by the Celerra, resulting in diskmark failures similar to those described in the previous Symptom statements:

  • CLARiiON FAST Auto-Tiering is not supported on LUNs used by Celerra.
  • CLARiiON Fully Provisioned Pool-based LUNs (DLUs) are not supported on LUNs used by Celerra.
  • CLARiiON LUN Compression is not supported on LUNs used by Celerra.

 

DEAR EMC SANTA

Posted: December 16, 2010 in Celerra, Technical Wish List

It’s that time of the year again, and with Christmas just around the corner I thought over the next few days I would throw some ideas out there in case product management, product development or just people in the right places somehow stumble across my technical wishes.

Technical Wish #1

Tons of customers are using the Celerra for block or NAS-based storage in VMware Site Recovery Manager deployments. The majority of our customers have migrated file data onto the Celerra and replicate the Virtual Data Mover and its associated CIFS file systems.

Now, since most customers using NFS are likely to have the EMC Celerra Plug-in for VMware, it would make sense to either extend its functionality or create a new plug-in which would allow the VDM (Virtual Data Mover) and its associated file systems to be failed over from within the plug-in.

Currently you have to (a) perform this using Celerra Manager / Unisphere Manager, and (b) perform separate actions to fail over the VDM and the file systems. Having this integrated into the existing plug-in, or a new one, would be just downright sexy.
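For anyone wondering what those separate actions look like from the CLI today, it’s roughly the following; the session names are placeholders and I’d treat the exact syntax as something to verify against the Replicator documentation for your DART release:

nas_replicate -failover <vdm session name>

nas_replicate -failover <file system session name>

In other words, fail over the VDM replication session first and then each file system session, which is exactly the sort of multi-step work the plug-in could hide.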

 

Since Celerra NFS datastores are now supported by both Replication Manager and VMware Site Recovery Manager, I’ve been migrating Celerra customers off iSCSI to NFS, mainly because of the overhead needed to facilitate snapshots/replication of iSCSI LUNs.

In Replication Manager versions prior to 5.3 you needed a Linux host configured as the VMware proxy to allow snapshots of NFS datastores, but the 5.3 release now supports this on a Windows operating system.