Archive for August, 2009

If you’ve been using EMC NetWorker to perform VCB exports then you’ll know that after the VCB export has been backed up from the holding tank to disk or tape, it is then removed from the VCB holding tank.

I had a request from a customer this week: “We want to be able to use NetWorker to perform VCB exports but configure it to leave the export on disk so we can then replicate it to our DR site.”

No problem I said, I’ll go and configure that in the config.js file… <looks>… <looks more>… hmmm, well, I have an option to remove existing snapshots from snapshot manager, an option to delete the mount point if it already exists, and an option to leave the snapshot in snapshot manager after the backup, but oh no, no option to leave the export in the holding tank.

After thinking about this for a while I figured that the deletion of the export was actually done by the NetWorker VCB interoperability module.

After reading down through the code, right at the end I found the cleanup section, as shown in the screenshot below.

CleanUp

The next thing I did was comment out all the lines after “Clean things up”, making sure to leave “return result”, and then saved the file.

I started the VCB group, the virtual machine exported out to the holding tank, NetWorker saved the export to the disk device, the group completed successfully, and the export was left in the holding tank as I wanted.

I’m still in the process of testing this to make sure it does not have any adverse effects, but from what I can see at the moment it’s all good. If you make this change you will need to make sure you set the option in the config.js file to delete existing mount points, otherwise you will have failures.

Also, if you upgrade the NetWorker interoperability module, it’s likely you will need to apply this change again.

Hope this helps.

My last post hopefully shed a little light on file system requirements for iSCSI luns on the EMC Celerra when replication or snapshots will be used. If you missed it and think it might be of interest, you can read it here.

First, let’s verify what the current value for sparseTWS is set to:

[nasadmin@EMCNAS ~]$ server_param server_2 -f nbs -i sparseTws
server_2 :
name                    = sparseTws
facility_name           = nbs
default_value           = 0
current_value           = 0
configured_value        =
user_action             = none
change_effective        = immediate
range                   = (0,1)
description             = Enables sparse TWS support
server_2 : done

 

You can see the default and current values are both set to zero, which means this feature is currently not in use.

Now let’s enable support for sparseTWS:

[nasadmin@EMCNAS ~]$ server_param server_2 -f nbs -m sparseTws -v 1
server_2 :
name                    = sparseTws
facility_name           = nbs
default_value           = 0
current_value           = 1
configured_value        = 1
user_action             = none
change_effective        = immediate
range                   = (0,1)
description             = Enables sparse TWS support
[nasadmin@EMCNAS ~]$

To verify this has been enabled correctly you can also browse with Celerra Manager to the Data Mover parameters tab; if correctly enabled you should see an entry called sparseTWS with a value of 1.

What uses Temporary Writable Snapshots?

SRM is a good example: every time you hit that “TEST” button, SRM, via the SRA (Storage Replication Adapter), creates a Temporary Writable Snapshot of the replicated lun and presents it to the ESX host configured at the recovery site.

Replication Manager is another good example of an application that takes advantage of the Celerra Temporary Writable Snapshots. I’ve configured our Replication Manager server to snapshot our production VMware iSCSI luns, which I then present to another ESX host which we refer to as the mount host. Once I’ve finished testing or backing up the data I can then unmount and remove the TWS.

By configuring the sparseTWS option with fully provisioned luns, we reduce the file system space needed by the size of the published lun.

I’ve deployed a number of Celerras into customer sites over the last year, and although the feature-packed Celerra can present storage using CIFS and NFS, the majority of systems have been deployed using iSCSI for VMware environments.

Now if you’re not planning to replicate or snapshot your iSCSI luns, then these considerations do not apply; the additional file system space needed is only a fraction of the lun size, to store metadata.

If you’re familiar with the Celerra then you’ll know that each iSCSI lun sits inside a Celerra file system. While it is possible to have multiple iSCSI luns inside a single file system, it’s not a good idea, especially if you plan to snapshot your iSCSI luns. So with that said, the actual sizing considerations are around the size of the Celerra file system, which will be used to store: A. the lun, B. the snapshots, and C. the temporary writable snapshots.

The size of the file system needed to host each iSCSI lun really depends on how you choose to deploy your iSCSI luns, so let’s look at each of these options and begin by defining a general equation.

 File System minimum = Size of Lun + Snapshots + Size of TWS (Temporary Writable Snapshot)

Here is a table showing the variables we will use later on during the sizing examples.

L – size of the published lun
A – actual data on the lun (the initial snapshot allocation covers a 100 percent change of this)
N – number of snapshots
C – changed data between successive snapshots
M – number of promoted (mounted) snapshots
T – data written to a virtually provisioned TWS

Fully Provisioned iSCSI Lun with Fully Provisioned TWS

thick

Celerra allocates enough space for a 100 percent data change (A) during the initial snapshot as a precaution. When the next snapshot is created, additional space is allocated equivalent to the amount of changed data since the first snapshot.

The host then writes an additional 2GB of data, which now brings the total to 12GB.

The snapshot is then promoted and presented to another host, which writes an additional 2GB of data in the space allocated for the TWS. Once the host is finished with the snapshot, the TWS is unmounted and removed. As shown in the image, the total file system space needed was 22GB.

The following equation can be used here;

FS min = L + [A + ((N-1)*C)] + [M*L]
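To make the arithmetic concrete, here is a quick Python check of this equation. The variable values below are my own assumption, reverse-engineered to reproduce the 22GB total in the example above (a 10GB published lun, 2GB of changed data, one snapshot, one promoted TWS); they are not from the original table.

```python
# Assumed example values in GB (chosen to match the 22GB example above)
L = 10  # size of the published lun
A = 2   # initial snapshot allocation (100% of the changed data)
N = 1   # number of snapshots
C = 2   # changed data between successive snapshots
M = 1   # number of promoted (mounted) snapshots

# Fully provisioned lun with fully provisioned TWS:
# each promoted TWS is allocated the full published lun size (L)
fs_min = L + (A + (N - 1) * C) + (M * L)
print(f"FS min = {fs_min}GB")  # FS min = 22GB
```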

 

Fully Provisioned Lun with Virtually Provisioned TWS

thick_thinktws

Now this configuration gives us the protection of a fully provisioned lun with a virtually provisioned TWS.

Once again, Celerra allocates enough space for a 100 percent data change (A) during the initial snapshot as a precaution. When the next snapshot is created, additional space is allocated equivalent to the amount of changed data since the first snapshot.

Because the TWS is virtually provisioned, we reduce the total file system needed to just 14GB.

The following equation can be used here;

FS min = L + [A + ((N-1)*C)] + [M*T]
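As a sanity check, here is this equation in Python. The values are my assumption, chosen to reproduce the 14GB total above; the only change from the fully provisioned case is that each promoted TWS now consumes only the data actually written to it (T) rather than the full lun size.

```python
# Assumed example values in GB (chosen to match the 14GB total above)
L = 10  # size of the published lun
A = 2   # initial snapshot allocation (100% of the changed data)
N = 1   # number of snapshots
C = 2   # changed data between successive snapshots
M = 1   # number of promoted (mounted) snapshots
T = 2   # data actually written to the virtually provisioned TWS

# Fully provisioned lun with virtually provisioned TWS
fs_min = L + (A + (N - 1) * C) + (M * T)
print(f"FS min = {fs_min}GB")  # FS min = 14GB
```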

 

Virtually Provisioned Lun with Virtually Provisioned TWS

thin

Here Celerra takes full advantage of virtual provisioning: the file system no longer requires the full published lun size (L), only the actual data (A).

The snapshot space does not require any initial allocation of (A), and the mounted snapshots require only the space for the changed data. The file system space needed to support this is just 6GB.

Here the following equation can be used;

FS min = A + [N*C] + [M*T]
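And the fully virtual case in Python; again, the values below are my assumption, chosen to reproduce the 6GB total above. Note there is no L term at all here: the file system only ever holds actual data.

```python
# Assumed example values in GB (chosen to match the 6GB total above)
A = 2   # actual data on the lun
N = 1   # number of snapshots
C = 2   # changed data between successive snapshots
M = 1   # number of promoted (mounted) snapshots
T = 2   # data actually written to the virtually provisioned TWS

# Virtually provisioned lun with virtually provisioned TWS:
# no published-lun (L) term -- only actual data consumes space
fs_min = A + (N * C) + (M * T)
print(f"FS min = {fs_min}GB")  # FS min = 6GB
```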

If you’re wondering how to enable the sparseTWS support on the Celerra, check out my next post which covers this.

Also thanks to EMC’s Chris Stacey for hooking me up with the images used in this post.

Also, if you have an EMC Powerlink account you can grab the sizing document from here.