Archive for February, 2009

Know your client IDs

Posted: February 20, 2009 in EMC Networker



I've come across a number of cases recently where a great deal of confusion has arisen from clients being renamed, clients being recreated after having been deleted, and clients being copied from the NMC without anyone blanking out the client ID field to let Networker generate a unique "clientid".


For those of you who are new to Networker, this is what a client ID looks like: 8213d3d5-00000004-4547a485-4547a484-00740000-c0a8000a. At the end of the day it's just a big number that's used as a unique identifier.


Put it this way: no harm comes from knowing and tracking your client IDs.


Here is a simple way of achieving this.


Step 1. Create a text file called input.txt and paste the following into it:


sh name

sh client id

sh aliases

print type:nsr client


Step 2. Launch a cmd window and type the following command:

nsradmin -i input.txt | smtpmail -h smtpserver -s "Client IDs" -f sender recipient (obviously replacing smtpserver, sender and recipient with your own mail server and valid email addresses)



This launches nsradmin, uses the contents of the input file to determine what to print, and then pipes the output through smtpmail to send a notification to your email address.
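If you'd rather script the setup than create the file by hand, here's a minimal sketch. The smtpmail pipeline is only echoed (not executed), since nsradmin and smtpmail exist only on a Networker host, and smtpserver/sender/recipient are the same placeholders as in the command above.

```shell
# Write the nsradmin input file containing the four query lines from the post.
cat > input.txt <<'EOF'
sh name
sh client id
sh aliases
print type:nsr client
EOF

# Dry run: print the report/mail pipeline instead of executing it.
echo 'nsradmin -i input.txt | smtpmail -h smtpserver -s "Client IDs" -f sender recipient'
```

Once input.txt looks right, run the echoed command for real on the backup server.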


You could set this up as a Windows scheduled task that runs once a week, and with a rule created in your email application you can automatically file the reports as they come in. Of course, check them periodically, as scheduled tasks are prone to failure when passwords change.


Something that's been lacking from the Networker Management Console pre-7.5 was the ability to create VCB clients via the configuration wizard; well, 7.5 now has this built in!

I've taken some screenshots, and below I'll take you through the configuration wizard to show just how easy it is.

Step 1. Open Networker Management Console, click on the Configuration tab from the top menu, highlight Clients in the left-hand panel, and then launch the configuration wizard as shown in the screenshot below.



Step 2. Enter the hostname of the virtual machine, tick the "Virtual client" check box, and select Next.
Step 3. Next, select the option to configure a VCB type backup.
Step 4. Here, enter the VCB proxy server hostname and select the backup type. I plan to cover the different options in another post, but for this demo we will select "Image" for a FULLVM export.
Step 5. Next we select the browse policy, retention policy and schedule, and have the option to add a comment if required.
Step 6. Here we have the option to add the client to an existing group or create a new group. If you create a new group, you get the option to configure a start time and to enable auto start on the group.
Step 7. Each VCB proxy server that is not the backup server is considered a storage node. Here we have the option to configure which storage node the data gets sent to. Keep in mind that if you change this value, all client instances are affected and all data (including traditional agent backups) gets sent to the configured storage node.
Note: There is also the option to set the recovery storage node. I really like this option being here, as it's typically not something people think about.
Step 8. And that's it; a summary page displays the configuration for this client instance.

Celerra storage rescan error

Posted: February 18, 2009 in Celerra

Today I was finishing off some of the storage tasks I had set myself when I came across the same problem I noticed on the NS20 I installed a couple of months ago, only the error was different, so I thought I'd write up a post in case anyone else hits it.

Here’s the scenario……

You create a RAID group, e.g. 4+1 RAID 5, in Navisphere, then carve that up into 4 x equal-size LUNs. That's it, job well done.

Now, before you go any further, you (or higher powers above) decide that you don't want 4 x LUNs but instead just 1 x big LUN. So you go back into Navisphere, remove the LUNs from the storage group, unbind the LUNs and create everything over again, this time creating 1 x big LUN.

Now go back into Celerra Manager and perform a rescan… ERROR! Hmmm, OK, now what?

So the actual error in Celerra Manager is:


Rescan Failed

Make sure the system is not in fail over state, if the system is failed over,execute the restore actions to bring it to ok state

Message ID: 13692698625

The actual reason for the error, I believe, is that the Celerra NASDB keeps unique hex identifiers for the old LUNs, and the new ones don't match. The easiest way I've found to clean this up is to delete the stale entries using nas_disk.

If you're not familiar with the Control Station and NAS commands, then I recommend logging a case with EMC support.

1. Log in to the CS as root or nasadmin and su to root.

2. Run nas_disk -l. The screenshot below shows what I had configured originally; look at d9 to d12, which are all 375566 MB. (These are the original 4 x LUNs I created.)


3. Now run the following command for each LUN, d9 through d12:

nas_disk -d d9
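If you have more than a handful of stale entries, a quick loop saves some typing. This is only a dry-run sketch that prints each command; remove the echo on the Control Station once you've confirmed d9 to d12 really are the stale disk entries.

```shell
# Dry run: print the nas_disk delete command for each stale disk entry.
# Drop the "echo" to actually delete the entries from the NASDB.
for d in d9 d10 d11 d12; do
    echo nas_disk -d "$d"
done
```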

4. Now go back into Celerra Manager and perform another storage scan.

You should now get a successful rescan with status OK.


Now if you run nas_disk -l again, you should see the one big sucker (d15).


As noted above, I actually had the same problem with the NS20, but the error was totally different and even more obscure; it had something to do with "peer traffic". At least the error now is less cryptic. I suspect the change in error description has something to do with the new 5.6.40 code on the NS120.

I hope this helps someone.

I was thinking just the other day that I would do a post on browse and retention of savesets, but Preston beat me to it with a great post covering how to change the browse and retention times on savesets. Soooooooo I thought I would do something along the same lines but slightly different.

Just the other day I had a call from a customer who had data on his staging disk that was going to expire due to a short browse and retention policy, but he needed to keep the data on the disk to perform a recovery once the replacement server was built. (Yip, the server was a total loss.)

We have to do two things here: stop the savesets from being staged by disabling the staging policy, and use nsrmm to extend the browse/retention times to ensure the savesets don't get removed from the disk (nsrim -X does this by default every 24 hours).

So here's how you go about it!

First, run an mminfo command to identify all the savesets on the disk/volume.

mminfo -avot -xc/ -r "ssid,cloneid" -q volume=vcbstagedisk.001

The important things to note here are:

A. The option to report the cloneid as well as the ssid. This is something you should always do, as it's very possible you have clones of the savesets on other volumes; this ensures we change the values only for the savesets located on the staging disk.
B. The -xc/ option, which separates the ssid and cloneid with a forward slash; this is used in our command later when specifying ssid/cloneid.

Once you have the command sussed, add > savesets.txt to the end to save the output to a text file. If you open the file you should see something like this.
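For illustration, using the ssid/cloneid pairs reported further down this post, the contents of savesets.txt would look something like:

```
1883000520/1228689096
1095095097/1229312824
2839988292/1229375556
```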


Then you can run a command from the Windows command prompt; an example is shown below:

for /F %i in (savesets.txt) do nsrmm -S %i -w browse -e retent

(You'll need to replace browse and retent with your chosen dates; I usually extend by one month. If you run this from inside a batch file, use %%i instead of %i.)

Once the command completes, you can check the results by running the following command, which now also reports each saveset's browse and retention dates.

mminfo -avot -r "ssid,cloneid,ssbrowse,ssretent" -q volume=vcbstagedisk.001

You should now see something like this.

ssid        cloneid     browse      retent
1883000520  1228689096  8/12/2010   8/12/2010
1095095097  1229312824  15/12/2010  15/12/2010
2839988292  1229375556  16/12/2010  16/12/2010
3797515640  1230601592  30/12/2010  30/12/2010
3683778379  1234304843  11/02/2010  11/02/2010

For those of you with Linux/Unix backup servers, you can use something like:

for i in $(cat savesets.txt); do nsrmm -S $i -w browse -e retent; done
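Either way, it's worth previewing exactly what will run before letting nsrmm loose on the media database. Here's a minimal dry-run sketch against a made-up savesets.txt; the ssid/cloneid pairs come from the example output above, and the dates are placeholders you'd replace with your own browse/retention targets.

```shell
# Build a sample savesets.txt using ssid/cloneid pairs from the post.
cat > savesets.txt <<'EOF'
1883000520/1228689096
1095095097/1229312824
EOF

# Dry run: echo the nsrmm command for each ssid/cloneid pair.
# Remove the "echo" (and substitute real dates) to apply the changes.
while read -r id; do
    echo nsrmm -S "$id" -w 03/20/2009 -e 03/20/2009
done < savesets.txt
```

Quoting "$id" keeps the ssid/cloneid pair intact even if stray whitespace creeps into the file.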

7.5 Update enabler required

Posted: February 14, 2009 in EMC Networker

If you've been running a Networker 7.3.x or 7.4.x server and you're looking to move to the latest 7.5 build, don't forget to request a 7.5 update enabler before the upgrade, as you'll find everything stops working without it.

The actual enabler doesn't cost anything; I believe it's just EMC's way of tracking the number of customers migrating to the newer build. (This was also the case when customers moved from 7.1.x or 7.2.x to 7.3.)

This was just released a couple of days ago. I've had a quick look through the release notes and I can't wait to give the P2V of a Linux machine a go.

What's new, I hear you ask…

  1. Physical to virtual machine conversion support for Linux (RHEL, SUSE and Ubuntu) as source
  2. Physical to virtual machine conversion support for Windows Server 2008 as source
  3. Hot cloning improvements to clone any incremental changes to physical machine during the P2V conversion process
  4. Support for converting new third-party image formats including Parallels Desktop virtual machines, newer versions of Symantec, Acronis, and StorageCraft
  5. Workflow automation enhancements to include automatic source shutdown, automatic start-up of the destination virtual machine as well as shutting down one or more services at the source and starting up selected services at the destination
  6. Target disk selection and the ability to specify how the volumes are laid out in the new destination virtual machine
  7. Destination virtual machine configuration, including CPU, memory, and disk controller type

Here are the supported operating systems

  • Windows 2000 SP4
  • Windows XP Professional (32 bit and 64 bit)
  • Windows 2003 (32 bit and 64 bit)
  • Windows Vista (32 bit and 64 bit)
  • Windows Server 2008 (32 bit and 64 bit)
  • Red Hat Enterprise Linux 2.1 (32 bit)
  • Red Hat Enterprise Linux 3.0 (32 bit and 64 bit)
  • Red Hat Enterprise Linux 4.0 (32 bit and 64 bit)
  • Red Hat Enterprise Linux 5.0 (32 bit and 64 bit)
  • Red Hat Linux Advanced Server 2.1 (32 bit)
  • SUSE Linux Enterprise Server 8
  • SUSE Linux Enterprise Server 9 (32 bit and 64 bit)
  • SUSE Linux Enterprise Server 10 (32 bit and 64 bit)
  • Ubuntu 5.x
  • Ubuntu 6.x
  • Ubuntu 7.x (32 bit and 64 bit)

New Networker 7.5 features

Posted: February 11, 2009 in EMC Networker, VMware

Now, I think I'll update this post every couple of days with a quick outline of each new feature that Networker 7.5 has introduced. My main area of interest is virtualization, so I'll start with the new "Virtualization" tab in the Networker configuration menu.

  • Virtualization



Configuration is easy: right click… Enable auto discovery… configure the identity of the VC server… specify the username and password of an account with local administrator privileges on the VC server… check the "Enable" box and select OK. That's it, you're up and running; now right click on the Virtualization tab again and select "Run auto discovery".

You should now see your data center, cluster, ESX servers and virtual machines; the mapping is really good and makes it easy to determine which VM is on which ESX server.

You can right click on any of the VMs and use the client configuration wizard to configure traditional backups or VCB backups, which in previous versions of Networker had to be done manually.

Now, the coolest thing I've seen so far with the virtualization integration is a new notification called "New Virtual Machine". This notification can be configured to send an email alert to a single user or a distribution group; the alert gets sent every time the auto discovery task runs (by default every 24 hours, unless user-initiated).

NetWorker hypervisor: (notice) Found 1 new Virtual Machine(s) on ‘VirtualCenterServer’: NewVMTest,

While these features are warmly welcomed, I still feel the functionality is rather limited; I suspect what we see here is just the beginning of Networker's integration into the virtual world.