NetWorker 7.6 SP2 with support for VMware VADP has finally become GA (Generally Available), and as with any new build, the first thing I do is download the documentation to see what's new or changed.

After downloading all available docs from Powerlink, I opened the document titled “Licensing Guide”, which you would think should cover VADP licensing, right? Wrong!

Next I looked at the “EMC NetWorker VMware Integration Guide”. I was hopeful this would contain the information I was after, but after reading the section “Licensing NetWorker support for VMware” I felt very much like I did when I saw the movie Mulholland Drive for the first time.

However, today I was glad to receive an email from a colleague who found the following post on the NetWorker Community site by Eric Carter at EMC, which helps clear up some of the misleading and confusing statements regarding VADP licensing in the 7.6 SP2 documentation.

Hopefully it won't be too long until we see updated documentation on Powerlink. (I'm also hoping the VADP licensing information gets added to the Licensing Guide document.)



Well, it's finally arrived: NetWorker 7.6 SP2 with support for VADP is now GA.

I’ve got a number of posts about this coming up, but for the moment, just a quick one to say it’s here, and to list some of the published features.

It's fair to say that existing NetWorker customers have been waiting for VADP support for a considerable time now, so it's great to finally see it arrive. However, without putting a damper on things, I would suggest customers do not rush off and upgrade without reading the release notes and considering that there are still a number of open escalations logged against this build for VADP issues.

VMware: vStorage API for Data Protection (VADP)

  • Integrated with vStorage API for Data Protection (VADP)
  • Change block tracking (CBT) support (file-based)
  • Single-step image-level backups & restores
  • Support for file level recovery from Windows VMs

Flexible Single Step VM Recovery

  • Recover to the original virtual machine, an existing virtual machine, or a new virtual machine
  • Recover the VM to the same/different vCenter, ESX Server, or ESX datastore

NetWorker VADP proxy options

  • Physical or virtual servers
  • Windows Server 2008 R2, Windows Server 2008 or Windows Server 2003

DD Boost or Avamar Grid deduplication backups

  • DD Boost on VADP Proxy with Dedicated NetWorker Storage Node
  • Support for Global Encryption and Compression Directives for NTFS image and file level backups
  • VMware VADP and In-Guest backups of the same UNIX/Linux based virtual machine

I've been working on a storage project over the last few months and ran into a problem with MirrorView/S which turned out to be a bug in the CLARiiON FLARE code. I thought I'd write a quick post about it in case anyone comes across the same problem.

The Problem :

The symptoms I saw were as follows;

  • Enabling the first mirror of a LUN per SP showed no issues; the LUN replicated and reported a consistent state.
  • Enabling additional mirrors of LUNs on the same SP caused the initial mirror to resynchronize (but it would never complete).
  • Enabling additional mirrors caused the hosts' average read/write queue times to shoot through the roof, causing huge performance problems.

The arrays I was working on were running a FLARE version affected by the bug, which needed to be upgraded. After the upgrade I initiated a sync on all mirrors and everything started working as expected.

When it comes to VMware integration, two of my favorite pieces of software now support the new EMC VNX/VNXe product range.

The first link below is to one of Chad’s latest posts covering the latest release of the VMware vCenter plugin.

The second link goes to PowerLink where you can read more on the new features and support in Replication Manager 5.3 SP2.

EMC vCenter Plugin update-VNX/VNXe support

EMC Replication Manager 5.3 SP2 VNX/VNXe support

If you've ever configured CIFS shares on your Celerra before (VSA included), you'll know that when mapping to the shares using Windows 2000, XP or Server 2003 you get the default CIFS server comment added in Windows Explorer.

The comment will display as “EMC-SNAS:T<Dart Version>”, shown below as “EMC-SNAS:T6.0.36.4”.

There are typically two ways to stop this from happening.

#1. On an existing CIFS server: Run the following command as user nasadmin

server_cifs <vdm> -add compname=<cifs server>,domain=<domain>,interface=<interface name> -comment " "

You can see from the command I ran that I'm using a virtual Data Mover; if you're not, then replace <vdm> with <server_x>.

#2. When creating a new CIFS server: One of the benefits of creating a CIFS server from the command line is that you can add the switch -comment " " at creation time, which avoids the problem from the word go.
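For reference, here's a rough sketch of what method #2 could look like end to end. The VDM name, computer name, domain and interface below are made-up placeholders, not values from any real environment, so substitute your own:

```shell
# Sketch only -- vdm01, fileserver01, example.local and cge0_int are
# hypothetical placeholders. Run as nasadmin on the Control Station.

# Create the CIFS server with an empty comment from the word go:
server_cifs vdm01 -add compname=fileserver01,domain=example.local,interface=cge0_int -comment " "

# Then check the server's configuration to confirm the comment is blank:
server_cifs vdm01
```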

If you mapped drives prior to the change, those mapped drives will still show the default CIFS comment because Windows caches it in the registry. I did a search of the registry for EMC-SNAS and found the cached entry.

Just delete the entry from the registry and at next login the mapped drive will appear without the comment.
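If you'd rather script the hunt than click through regedit, something like the following should locate the cached entry. This is a sketch only: `reg query /f /s` searches key names, value names and data recursively for the string, and the exact key path varies by machine, so the delete is left as a commented placeholder rather than a real path:

```shell
REM Run in a Command Prompt on the affected Windows client.
REM Search the current user's hive for the cached CIFS server comment:
reg query HKCU /f "EMC-SNAS" /s

REM Once the query reports where the cached comment lives, remove it, e.g.:
REM reg delete "HKCU\<key path reported by the query above>" /f
```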

In the last couple of months I've deployed two Celerra NS120s and a Celerra VG2 gateway, and with each I experienced the same problem preventing anyone from being able to log in to Unisphere post-install.

The symptoms on the NS120s were quite different to those on the gateway. The NS120s gave a blatant certificate error which let you know the certificate was not valid, essentially telling you to come back three hours in the future, at which point the certificate would be valid and accepted.

The VG2 gateway install was completely different and manifested itself in a way that looked more like a browser compatibility issue. When launching Unisphere the pop-up window would open, but the login prompt would never appear.

The NS120s were deployed in the afternoon and I didn't need to log in immediately, so I packed up and left it for the next day. The next day I turned up and logged in without an issue, as the certificate was now valid.

The VG2 gateway was more like a day out of sync, so I thought I'd log in, correct the time on the Control Station and try to re-generate the SSL certificates using the command /nas/sbin/nas_config -ssl, but unfortunately this did not work.

I did some searching in PowerLink hoping to find something related and found the following knowledge base article;

emc257320 “After NAS 6.0 fresh installation, unable to log in to Unisphere”

While searching PowerLink I found another article related to the CLARiiON, almost identical in nature. The fix for the CLARiiON was nicer, as it allowed you to put your clock forward, log in, generate a new certificate, and then put your time/date back to the correct time.

emc247504 “Certificate date and time error prevents access to CX4 storage system after upgrade to Release 30.”

The problem seems to be present in both FLARE 30 and DART 6.0 and can show up during install or upgrade.

Hopefully anyone with the same problem might find this post before spending hours thinking it’s a browser or java issue.

Hope this helps.

I was talking to a friend last week who asked me two provocative yet valid questions, and I thought it would be worth posting about, because by the end of our conversation he had opened his mind to a few concepts he had not considered.

Question #1   “What's all this Unified Storage hooey about?”

Probably a good place to start for people who are not overly familiar with the term “Unified Storage” is to define it. In my own words “Unified Storage” is about being able to present and manage storage using multiple protocols e.g. BLOCK (FC), NFS, iSCSI and CIFS from the same storage array.

It’s no secret that most of us (myself included) have adopted the mind-set “Virtualize Everything” and with that I don’t need to explain where BLOCK, NFS and iSCSI have a place in the virtualization world as they are all widely used and proven.

You might say that people typically tend to pick one protocol and stick with it, and you wouldn't be wrong. But think about being able to tier storage by protocol, rather than the way tiering is normally thought of in the storage world, which tends to be centered around disk type, e.g. EFD, FC, SATA. If you look at the cost of FC switch ports and HBAs compared to Ethernet ports and adapters, it makes total sense that you might consider putting tier 1 production systems on FC storage while placing your test and development systems on IP storage such as NFS or iSCSI.

Question #2   “What good is CIFS to me in a predominantly virtual world?”

When virtualization started taking off years ago we all went mad; it was all about server consolidation, and in most cases we virtualized anything that responded to a ping request. What often happened was that people virtualized file servers and ended up with virtual machines with huge .vmdks or RDMs, which generally did two things.

  • Used huge amounts of valuable storage
  • Caused huge bottlenecks and blew out backup windows

So I'll address those two points and also try to cover some other key points which, in my mind, make putting file data on the Celerra a no-brainer.

De-Duplication: This feature is actually single-instancing plus compression. People are always sceptical about the figures storage vendors put out in the market, as they are often not achievable, but I can tell you that I have personally seen 1TB file systems shrink by 50%, and in some cases 60%, with Celerra De-Duplication enabled.
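As a quick back-of-the-envelope check on what those percentages mean in real capacity, here is the arithmetic for a 1TB (1024GB) file system at the reduction rates mentioned above:

```shell
# Savings on a 1 TB file system at 50% and 60% reduction (sizes in GB).
fs_gb=1024
for pct in 50 60; do
  saved=$(( fs_gb * pct / 100 ))
  remaining=$(( fs_gb - saved ))
  echo "${pct}% reduction: saves ${saved} GB, leaving ${remaining} GB on disk"
done
```

At 60% that's over 600GB of tier 1 capacity handed back per terabyte of file data.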

NDMP Backup: File systems containing user data tend to be quite dense; typically this gives traditional backup systems grief, and a full backup window can end up being 4-5 days with large amounts of data. NDMP backups using EMC NetWorker can be configured to pull the data over the network or directly to a fibre-connected device (tape or VTL drive) for LAN-free backup. Celerra NVB (block based backup) handles dense file systems really well; I've seen backup windows shrink from days to hours.

Snapshots: If you've ever been a backup administrator for a large organization, you'll know that on a bad week you can end up spending way too much time recovering user data. Celerra allows you to configure scheduled snapshots of file systems, which are then available via the Windows Previous Versions tab built into XP Service Pack 3 and later operating systems.

Archiving: How much of that user data has actually been accessed in the last 6 months? Introduce EMC's File Management Appliance (Rainfinity) and you have the ability to move data from expensive FC disk to cheaper SATA storage, transparently to the users accessing it.

I could go on about other features such as High Availability, Replication, File Level Retention, File Filtering, etc., but I think this post might end up being too much of a mammoth read, so I might pick some of these and elaborate more in upcoming posts.

I'm not saying it's a mistake to virtualize file servers, because in certain cases there are advantages; being able to replicate and protect with VMware Site Recovery Manager is a good example of where it's advantageous. But I am saying that the larger the amount of data, the more beneficial it is to consider moving it to the Celerra.