Archive for November, 2009

My colleague Preston de Guise and I recently performed beta testing for the latest build of EMC NetWorker. I can only speak for myself, but I was quite surprised by the lack of regressed bugs I found, and overall it's looking very good.

I was also hoping NetWorker 7.6 would show that the product development teams were still expanding on NetWorker's existing support for virtualization and listening to what partners and customers were asking for… I'm glad to say that 7.6 has a couple of new features which show exactly that.

Hypervisor Support

This was first introduced in 7.5 (I wanted to cover it here as there have been changes). NetWorker now allows an administrator to configure one or more Virtual Center servers, which allows NetWorker to display a map or table view of the VMware infrastructure.
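
If you prefer the command line to the GUI, these Virtual Center definitions are stored as resources on the backup server; assuming I have the resource type name right from the 7.5 documentation, you can list them with nsradmin like so:

nsradmin> print type: NSR hypervisor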

7.6 adds a nice feature which incorporates the VCB proxy server into the map; below is an example of this from my test lab.

From left to right we have the vSphere host, a virtual machine, a NetWorker group showing the virtual machines within it, and on the far right the VCB proxy server.

The great thing about this feature is that it gives you a detailed map showing which virtual machines are configured in NetWorker groups.

One of the greatest things about virtualization is that we no longer talk about days and weeks to provision a new system; now we talk about minutes, especially when deploying from a template. While this has made life a ton easier for us, I've seen many situations where people create virtual machines willy-nilly, put them into production and forget to notify the backup administrator of (a) the existence of the virtual machine and (b) its backup requirements.

This NetWorker feature makes it very easy for the backup administrator to report on virtual machines not configured in NetWorker groups.

New 7.6 VCB Functionality

Only a VCB image backup is required to perform image-based or file-based recovery: with 7.6, NetWorker can perform file-level recoveries of the file system inside the VMDK file from the FULLVM backup (Windows only). This means we no longer need additional groups or client instances with VCB file system savesets configured.
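
As a quick sanity check before attempting a file-level recovery, you can confirm a FULLVM image backup exists for the client by querying the media database with mminfo; the client name below is just a placeholder from my lab:

mminfo -avot -q "client=vm01.lab.local,name=FULLVM" -r "client,name,level,savetime,ssid"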

Directive: There is a new VCB directive which, when configured against a VCB client, instructs NetWorker to skip the backup of the Windows\System and Windows\System32 system folders.
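
For anyone curious, this is conceptually the same as a standard NetWorker directive along the following lines; this is a minimal sketch assuming a default C:\WINDOWS install path, not the literal contents of the directive EMC ships:

<< "C:\WINDOWS\System" >>
+skip: *
<< "C:\WINDOWS\System32" >>
+skip: *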

VCB Framework Settings: Traditionally all VCB framework settings were configured in the config.js file on the proxy server; in 7.6 this information is stored in the proxy server's "Application Information" client resource field. This helps make as much of the proxy configuration visible within NetWorker as possible.

The following example displays all the possible attribute values used for VCB configuration:

VCB_HOST=any.vc-or-esx.com
VCB_BACKUPROOT=G:\mnt
VCB_TRANSPORT_MODE=hotadd
VCB_VM_LOOKUP_METHOD=ipaddr
VCB_PREEXISTING_MOUNTPOINT=delete
VCB_PREEXISTING_SNAPSHOT=delete
VCB_MAX_RETRIES=2
VCB_MAX_BACKOFF_TIME=15
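
Because these now live in the client resource, you can also inspect them from the backup server with nsradmin rather than logging on to the proxy; a rough example, with the proxy hostname below being made up:

nsradmin> . type: NSR client; name: vcbproxy.lab.local
nsradmin> show application information
nsradmin> print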

Single Step Recovery: Although not new to 7.6 (it was introduced in NetWorker 7.5), this greatly simplifies the process of recovering a virtual machine. In prior versions you would need to recover the FULLVM saveset back to the proxy server and then use VMware Converter to import the machine back into Virtual Center. With single step recovery you just populate the fields as shown in the screenshot below and let NetWorker do all the work. The only prerequisite for this to function correctly is that VMware Converter 3.0 or above must be installed on the backup server.

Notifications

I've already mentioned NetWorker's ability to display map and table views of your virtual infrastructure to help identify unprotected virtual machines, but as of 7.5 (and also included in 7.6) there is a new notification which alerts you when new virtual machines are created in your environment but not configured in an active NetWorker group.
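
NetWorker notifications are normally just resources on the server, so if this one follows suit you should be able to find it (and repoint its action at email or a script) by listing them all and looking for the new virtual machine entry:

nsradmin> print type: NSR notification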

I'll be posting some additional information and screenshots from 7.6 testing over the next couple of days, so be sure to check back.

When creating iSCSI LUNs on a Celerra I typically have all file systems created beforehand and then use the command line to create the iSCSI LUNs in each file system.

A couple of weeks ago I decided for some reason to use the "Create new iSCSI LUN" wizard and noticed something that was not present in older versions of DART.

Here's a screenshot showing the option to virtually provision the LUN.

This seems to have come in with DART version 5.6.45, as it is not present in the latest version of the Celerra Simulator, which runs 5.6.43.

For those of you who are interested in how to create LUNs from the Control Station, here are a couple of examples.

Create a virtually provisioned 100GB iSCSI LUN with LUN ID 5, configured on iSCSI target "ProdTarget" inside file system "ProdVMFS":

server_iscsi server_2 -lun -number 5 -create ProdTarget -size 102400M -fs ProdVMFS -vp yes

If you're creating read-only destination LUNs for iSCSI replication, we use the same command but add a switch to configure the LUN as read-only:

server_iscsi server_2 -lun -number 5 -create ProdTarget -size 102400M -fs ProdVMFS -vp yes -readonly yes
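
To double-check the result (size, read-only flag and whether virtual provisioning took effect), you can list the LUNs on the data mover; from memory the syntax is along these lines:

server_iscsi server_2 -lun -list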

I was doing some work on a couple of EMC Celerras this week and found an issue which I expect is a bug in the DART code. I managed to work around it using the command line, so I thought I would post in case anyone else comes across the same issue.

Initially I had fully provisioned LUNs on both Celerras, but in order to conserve some file system space I decided to tear everything down and recreate the LUNs on the DR Celerra using virtual provisioning; this greatly reduces the amount of file system space needed to allow replication of iSCSI LUNs to occur.

Here is the error I got while running through the replication wizard.

The error here is basically saying "I can't find any suitable iSCSI LUNs to replicate to", which is sometimes the case when you forget to do one of the following two things:

1. Ensure source and target iSCSI LUNs are identical in size.

2. Ensure the target LUNs have been configured as read-only LUNs.

I knew it was neither of these two things, and after a couple of minutes of thinking about it, the only thing that had changed was that the LUNs at DR were now virtually provisioned.

After a little more thought I decided to try creating the replication task from the Control Station, and sure enough the command completed successfully and the LUN replicated without any issues.

Here's the syntax I used:

nas_replicate -create LUN7 -source -lun 7 -target iqn.1992-05.com.emc:ckm000850009000000-1 -destination -lun 7 -target iqn.1992-05.com.emc:ck2000800009760000-2 -interconnect SRMReplication -source_interface ip=192.168.0.10 -destination_interface ip=192.168.5.10
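
Once that returns, the new session should be visible from the Control Station; something like the following will list it and show its state, LUN7 being the session name used above:

nas_replicate -list
nas_replicate -info LUN7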

Also, if you are replicating between two Celerras, make sure that both are running the same version of DART and that you're running at least 5.6.45.

*****UPDATE*******

This issue is resolved in DART 5.6.45.

VMware vCenter Linked Mode enables a common inventory view across multiple instances of vCenter Server.

Over the last year I've deployed a number of VI3 systems with Site Recovery Manager in the mix, and one of the things that always annoyed me was having to manage each site separately. Now with vCenter Linked Mode both sites can be managed from a single view (including all Site Recovery Manager functionality).

Something I found common on the forums was VMware customers using Virtual Center "Standard" edition at the production site but purchasing "Foundation" for the recovery site, as it was considerably cheaper.

Now where am I going with this? Let's take a look at one of the images I found in one of the VMware documents I had lying around.

Ahhhh … no Linked Mode with vCenter Foundation.

Not everyone will think this is a big deal… personally, having a single view for managing multiple vCenter sites is brilliant, and it's definitely worth considering going with at least Standard at the recovery site for this additional feature.