Archive for the ‘Replication Manager’ Category

When it comes to VMware integration, two of my favorite pieces of software now support the new EMC VNX/VNXe product range.

The first link below is to one of Chad’s latest posts covering the latest release of the VMware vCenter plugin.

The second link goes to PowerLink where you can read more on the new features and support in Replication Manager 5.3 SP2.

EMC vCenter Plugin update-VNX/VNXe support

EMC Replication Manager 5.3 SP2 VNX/VNXe support

Since Celerra NFS datastores are now supported by both Replication Manager and VMware Site Recovery Manager, I’ve been migrating Celerra customers from iSCSI to NFS, mainly because of the overhead needed to facilitate snapshots and replication of iSCSI LUNs.

In versions prior to 5.3 you needed a Linux host configured as the VMware proxy to allow snapshots of NFS datastores, but the 5.3 release now supports this on a Windows operating system.
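As a side note, if you are trying a migration like this out in the lab, mounting a Celerra NFS export as a datastore can be done from the ESX service console with esxcfg-nas. This is just a rough sketch; the Data Mover IP, export path and datastore label below are placeholders for your own environment:

esxcfg-nas -a -o 192.168.1.50 -s /vmware_nfs_fs01 NFS-Datastore01   # add the Celerra NFS export as a datastore
esxcfg-nas -l                                                       # list NFS datastores to confirm the mount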

 

I’ve seen a number of times recently a scenario where a Replication Manager job starts to fail with an error indicating that the Celerra file system holding the iSCSI LUN is full.

The cause of the problem is that Replication Manager fails to clean up a failed snapshot; when the next job runs it fails because there is no longer sufficient space in the file system for a new snapshot to be created.
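If you think you’ve hit this, a quick way to confirm it is to check the file system from the Celerra Control Station before re-running the job. This is only a sketch; the file system name and Data Mover below are placeholders for your own setup:

nas_fs -size fs_iscsi_lun01   # show total, used and available space for the file system holding the LUN
server_df server_2            # show usage for all file systems mounted on the Data Mover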

Replication Manager 5.2.3 has resolved this issue as well as another I’ve noted below.

Here are the extracts from the release notes.

If you use Replication Manager to clone or replicate iSCSI LUNs using the Celerra loopback, you might also want to look at applying 5.2.3 to resolve the following issue.

 

There has always been a lot of debate in the VMware community about which IP storage protocol performs best, and to be honest I’ve never had the time to do any real comparisons on the EMC Celerra. Recently, though, I stumbled across a great post by Jason Boche comparing the performance of NFS and iSCSI storage using the Celerra NS120; you can read it here.

What you’ll find reading the latter part of Jason’s post is that once a few tweaks were made for NFS, the results were actually very similar. So if there is no clear winner on the day, how do we decide which storage protocol is best for your VMware environment?

The “Bigger Picture”

I can tell you that in the past I have always deployed Celerras using iSCSI, and I like to think this choice was made with the “Bigger Picture” in mind. If we go back in time and look at the VMware support matrices, you’ll notice that a lot of the add-ons such as VMware Consolidated Backup, Storage vMotion and Site Recovery Manager all supported iSCSI well before NFS was officially supported.

It was these considerations early on that led me down the iSCSI path, and of course later on iSCSI became something I was comfortable with, so naturally it became my protocol of choice.

Another consideration with the Celerra was integration with EMC’s Replication Manager, which could be used to provide application-consistent snapshots of Exchange, SQL, Oracle and VMFS datastores when iSCSI was used.

That was before, but how about now?

So a couple of years down the track, things have changed considerably: VMware Consolidated Backup, Storage vMotion, Site Recovery Manager 4.0 and Replication Manager 5.2.2 now all support VMware NFS datastores.

Ready to change to NFS yet?

Even with all these changes, I still was not ready to move away from iSCSI to NFS, because vSphere 4 brought major improvements to the VMware software iSCSI initiator, which allows multiple VMkernel ports to be bound to the iSCSI initiator to give the ESX host multiple paths to the storage array.
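For anyone who hasn’t set this up yet, the binding itself is done with esxcli on a vSphere 4 host once the VMkernel ports exist. A quick sketch, where vmk1, vmk2 and vmhba33 are placeholders for your own VMkernel ports and software iSCSI adapter:

esxcli swiscsi nic add -n vmk1 -d vmhba33   # bind the first VMkernel port to the software iSCSI initiator
esxcli swiscsi nic add -n vmk2 -d vmhba33   # bind a second VMkernel port for an additional path
esxcli swiscsi nic list -d vmhba33          # confirm both ports are bound to the adapter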

Shame on me

So earlier on in the post I talked about the “Bigger Picture”. Is the improvement to VMware’s software iSCSI initiator part of the bigger picture?

No, this is a nice-to-have technical feature, but really I needed to take a step back and think about what NFS means to the Celerra and what makes NFS appeal more than iSCSI. (Keep reading to find out.)

Where NFS trumps iSCSI on the Celerra

Replication Manager

When creating an iSCSI LUN you first create a file system; think of the file system as a container for the iSCSI LUN.

Without the need for snapshots, this file system only needs to be fractionally bigger than the iSCSI LUN to accommodate metadata, but as soon as you need to start performing snapshots of iSCSI LUNs, the requirements for additional file system overhead change completely.

Long story short: with a fully provisioned iSCSI LUN, the minimum file system space required to perform a snapshot of the LUN (not taking changed data into account) is twice the published LUN size. There are of course ways to reduce this overhead, and if you want to read more you can find one of my older posts about it here.

Replication Manager 5.2.2, as mentioned earlier, now supports snapshots of NFS datastores. The good news here is that the Celerra uses a totally different method for snapshots of NFS file systems (a dedicated SavVol rather than the file system itself), and allowing 20% overhead for snapshots is a realistic figure.
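To put some rough numbers on that difference, here is the back-of-the-envelope maths using the 2x and 20% figures above. This is only a sketch, and 500 GB is just an example size:

LUN_SIZE_GB=500
ISCSI_FS_GB=$((LUN_SIZE_GB * 2))                 # iSCSI: the file system needs roughly 2x the published LUN size to snapshot
NFS_TOTAL_GB=$((LUN_SIZE_GB + LUN_SIZE_GB / 5))  # NFS: the file system plus roughly 20% SavVol overhead
echo "iSCSI: ~${ISCSI_FS_GB} GB vs NFS: ~${NFS_TOTAL_GB} GB"   # prints iSCSI: ~1000 GB vs NFS: ~600 GB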

Site Recovery Manager

VMware Site Recovery Manager using Celerra Replicator also uses the Celerra snapshot functionality to replicate source iSCSI LUNs to a remote Celerra, so the same overhead requirements noted in the Replication Manager section apply here as well.

Site Recovery Manager 4 of course now supports NFS, so existing customers feeling the pain from the overhead needed to support iSCSI snapshots can at least migrate everything to NFS datastores and claim back a ton of that valuable capacity.

EMC Celerra

The Celerra was, I believe, the first storage platform to provide automated failback; this is done using a vCenter plugin, and the new version now supports NFS.

Celerra NFS vSphere Plugin

EMC recently released a new vSphere plugin which integrates with the Celerra; you really have to check out this YouTube video to see how cool some of the integrated features are when using NFS.

Deduplication

EMC Celerra has supported deduplication for some time now, and almost every recent release of DART has brought considerable improvements. The one that’s caught my eye is that the latest release, 5.6.48, is now optimized to work with VMDK files in an NFS datastore, reducing space consumption by up to 50 percent. The overhead of accessing a compressed VMDK is less than 10 percent. When using the Celerra NFS vSphere plugin, an administrator can select which virtual machines in the datastore to compress, and if the additional overhead is too much, the administrator can simply uncompress the virtual machine on the fly.

Summary

The point of this post is not to advocate NFS over iSCSI. My intention here is really just to show how important it is to take that step back and look at the overall solution before you rush ahead and choose a protocol which may not end up being the best choice for your environment.

As a consultant who implements systems, reviewing the two protocols was a good reminder to myself not to get too stuck in my ways. Things change!

I’ve had Replication Manager 5.2 integrated with VMware VI3, hanging off an EMC Celerra using iSCSI, for some time now, and ever since vSphere was released I’ve been meaning to test the functionality to make sure everything works and to see if there are any changes.

Having set this up with ESX 3.5 Update 4 hosts, I remembered one of the key steps is changing the advanced LVM.EnableResignature option to 1, which allows a snapshot of an existing LUN with a matching header to automatically resignature and be presented back to the host. If you want to read more about how this works, Chad Sakac has a really good post about it here.

Here is a screenshot showing this on an ESX 3.5 host.

[Screenshot: the LVM.EnableResignature advanced setting]
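The same setting can also be checked and changed from the ESX 3.5 service console with esxcfg-advcfg. A quick sketch of what I would run; double-check the option path against your own build:

esxcfg-advcfg -g /LVM/EnableResignature   # show the current value (0 by default)
esxcfg-advcfg -s 1 /LVM/EnableResignature # enable automatic resignaturing of snapshot LUNs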

 

The next step was to build myself a vSphere 4 host and integrate it into my existing lab setup. After building the host and searching through the advanced options I realised the LVM.EnableResignature option was not available, and after a quick Google it didn’t take long to find this post by Duncan at Yellow-Bricks.

After configuring the host in Replication Manager I performed a mount of an existing snapshot to my vsphere1 host. The task completed successfully, but I was unable to see the LUN on the host.

The image below shows the snapshot has been successfully mounted to the host.

[Screenshot: the snapshot mounted to the host in Replication Manager]

You can see that only the default datastore on local disk and the original Celerra LUN are shown.

[Screenshot: datastores view showing only the local datastore and the original Celerra LUN]


Next I went to Configuration >> Storage >> Add Storage >> Disk/LUN, and there it was.

[Screenshot: the Add Storage wizard showing the snapshot LUN]

After selecting the LUN and clicking Next, I was presented with three options, as shown in the screenshot below.

[Screenshot: the Add Storage wizard mount options for the snapshot LUN]

 

As Duncan pointed out in his post, yes, this can be done through the GUI, but it’s more fun from the command line. I’m also a firm believer in learning to do tasks from both the command line and the GUI.

So with that said, I’m now going to use vicfg-volume to resignature the LUN.

First let’s check the existing header ID using vicfg-volume.pl --server vsphere1 -l

[Screenshot: output of vicfg-volume.pl --server vsphere1 -l]

Now let’s resignature the LUN using vicfg-volume.pl --server vsphere1 -r <existing_header>

[Screenshot: output of the vicfg-volume.pl resignature command]

Next I check to see if the LUN is now visible, and there it is.

[Screenshot: the resignatured datastore now visible on the host]
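In case the screenshots are hard to read, the whole thing boils down to the two vSphere CLI commands below, and on a classic ESX 4 host you should be able to do the same locally with esxcfg-volume. The host name is the one from my lab and the header ID is whatever the list command returns for your snapshot, so substitute your own values:

vicfg-volume.pl --server vsphere1 -l                     # list LUN copies detected as snapshots, with their headers
vicfg-volume.pl --server vsphere1 -r <existing_header>   # resignature the copy so it mounts as a new datastore
esxcfg-volume -l                                         # service console equivalent of the list command
esxcfg-volume -r <existing_header>                       # service console equivalent of the resignature command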


As a last note to this post, I’ve not yet found anything in the release notes to confirm vSphere is supported with Replication Manager 5.2.2, so at the moment this is nothing more than an informational post. I’ll make sure to update it as soon as I’ve confirmed this is in fact supported.

Oh, and just in case you’re wondering why this change came about: with VI3, LVM.EnableResignature was an all-or-nothing setting, whereas with vSphere 4 you can resignature on a per-LUN basis, which is actually a good thing once you know about it.

If you’re a VMware shop you have to check out some of the demos online. Replication Manager is a brilliant product!