Archive for the ‘Tips and Tricks’ Category

After writing up this post on FAST VP INYO Goodness last night, I went to delete the notes I had taken on the iPad and realised I'd forgotten one very cool new feature which may have slipped under most people's radar.

When creating a LUN from a FAST VP-enabled pool in the current version of FLARE, you have the following options to select from.

  • Auto-Tier
  • Highest Available Tier
  • Lowest Available Tier

These, of course, are Auto-Tier policies; selecting one determines which algorithm is used to distribute data through promotions and demotions of 1GB slices between storage tiers.

At LUN creation time I refer to these as "Initial Data Placement" policies, a term I've taken from one of the VNX best practice documents found on PowerLink. Each policy directly impacts which storage tier the data is first allocated to.

The Highest and Lowest options are self-explanatory; Auto-Tier, unless I'm mistaken, uses an algorithm to distribute the data over all available storage tiers, which in my opinion increases the risk of performance issues before the pool has had sufficient time to warm up.

When you create a LUN, you'll find that Auto-Tier is actually the default selection. I always change this to Highest Available Tier to ensure that data starts off on the highest-performing disk, and once the migration of data is complete I switch the policy to Auto-Tier to let FAST work its magic.
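For anyone who'd rather script that workflow than click through Unisphere, a rough sketch with naviseccli is shown below. I'm assuming the -initialTier and -tieringPolicy switches from my reading of the CLI reference, and the pool name and LUN number are placeholders, so verify the syntax against your FLARE release.

# create the LUN with the data initially placed on the highest tier
naviseccli -h <sp_ip> lun -create -poolName Pool_0 -capacity 100 -sq gb -l 42 -initialTier highestAvailable

# once the migration is complete, hand the LUN back to FAST
naviseccli -h <sp_ip> lun -modify -l 42 -tieringPolicy autoTier -o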

But now… INYO introduces a new policy:

  • Start High, Then Auto-Tier

The introduction of this policy effectively means I no longer have to remember to do this manually. While some might think this is a non-event in terms of new features, to me it's a good example of how FAST VP is evolving based on feedback from partners and customers… and that I like.

Performance is all about the locality of data. If you're migrating data from an older storage array to a VNX, the last thing you want is for people to complain about performance. Although Auto-Tier is the default option when creating a LUN, Highest Available Tier is the policy recommended in the best practice documentation.


If you’ve ever configured CIFS shares on your Celerra before (VSA included) you’ll know when mapping to the shares using Windows 2000, XP or Server 2003 you get the default CIFS server comment added in windows explorer.

The comment will display as "EMC-SNAS:T<Dart Version>", shown below as "EMC-SNAS:T6.0.36.4".

You can typically handle this in one of two ways.

#1. On an existing CIFS server: run the following command as user nasadmin:

server_cifs <vdm> -add compname=<cifs server>,domain=<domain>,interface=<interface name> -comment " "

You can see from the command that I'm using a Virtual Data Mover; if you're not, then replace <vdm> with <server_x>.

#2. When creating a new CIFS server: one of the benefits of creating a CIFS server from the command line is that you can add the switch -comment " " at creation time, which gets around this problem from the word go.
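For illustration, a creation command with the switch in place might look like the following; the compname, domain and interface values are placeholders for your environment.

server_cifs <vdm> -add compname=fileserver01,domain=corp.local,interface=cge0 -comment " "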

If you’ve mapped drives prior to the change then mapped drives will still show the default CIFS comment because windows caches it in the registry. I did a search for EMC-SNAS and found it in the following location (shown below).

Just delete the entry from the registry and at next login the mapped drive will appear without the comment.
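If you'd rather not hunt through regedit by hand, a recursive search from a command prompt will turn up the cached string; this is just a sketch, and the exact key it lands under will vary between machines and user profiles.

reg query HKCU /s /f "EMC-SNAS"

Once you know the key, deleting the value with reg delete achieves the same result as removing it in regedit.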

In the last couple of months I've deployed two Celerra NS120s and a Celerra VG2 gateway, and with all of them I experienced the same problem preventing anyone from being able to log in to Unisphere post-install.

The symptoms with the NS120s were quite different to those of the gateway. The NS120s gave a blatant certificate error which let you know the certificate was not valid, and basically stated to come back three hours in the future, at which point the certificate would be valid and accepted.

The VG2 gateway install was completely different and manifested itself as what looked more like a browser compatibility issue. When launching Unisphere, the pop-up window would launch but the login prompt would not appear.

The NS120’s were deployed in the afternoon and I didn’t have the need to login immediately so I packed up and left it for the next day. The next day I turned up and logged in without an issue as the certificate was now valid.

The VG2 gateway was more like a day out of sync, so I thought I'd log in, correct the time on the Control Station and try to regenerate the SSL certificates using the command /nas/sbin/nas_config -ssl, but this unfortunately did not work.
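For reference, the sequence I attempted on the Control Station looked roughly like this, run as root; the date value is just an example.

date -s "2012-06-14 09:00:00"
/nas/sbin/nas_config -ssl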

I did some searching in PowerLink hoping to find something related and found the following knowledge base article:

emc257320 “After NAS 6.0 fresh installation, unable to log in to Unisphere”

While searching PowerLink I found another article related to the CLARiiON, almost identical in nature. The fix for the CLARiiON was nicer, as it allowed you to put your clock forward, log in, generate a new certificate and then put your time/date back to the correct time.

emc247504 “Certificate date and time error prevents access to CX4 storage system after upgrade to Release 30.”

The problem seems to be present in both FLARE 30 and DART 6.0 and can show up during install or upgrade.

Hopefully anyone with the same problem might find this post before spending hours thinking it's a browser or Java issue.

Hope this helps.

I’ve been deploying Celerra’s for a while now and with technology changing as fast as it does, you really need to be across the applied practice, best practice and NAS Matrix documents found in PowerLink to validate your storage design.

For the last couple of years the majority of the Celerras I've deployed have shipped with DART 5.6 code, and the backend CLARiiON FLARE version has typically been what we refer to as 28, 29 and most recently FLARE 30.

One of the first things you learn when deploying Celerras is that you need to read the NAS Matrix guide to determine which RAID configurations are supported, because although you may be able to create certain RAID types with a given number of disks on the backend CLARiiON, it doesn't mean the Celerra will let you use them… this is typically where people sit for 20-30 minutes performing multiple "rescan storage" operations, hoping that storage will magically appear.

One of the biggest changes in FLARE 30 was the introduction of Storage Pools, which are quite different to the traditional "RAID Group" concept that has been around for donkey's years. If you want to read more in-depth information about Storage Pools, then check out Matt Hensley's post here.

What I couldn't find in the latest NAS Matrix guide, or in any other documentation, was confirmation that I could present storage from a CLARiiON Storage Pool to the Celerra. So after carving a LUN from a storage pool made up of 10 FC disks, I presented it through to the Celerra and performed a rescan… rescan… and one more for good luck… rescan. Unfortunately, no luck and no LUN.

At this stage I remembered that performing tasks from the command line on the Control Station often spits out more debug information, so I thought I'd give it a shot.
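If you want to try the same thing, the command below is the rough equivalent of a Unisphere rescan run from the Control Station; server_2 is an example Data Mover name, so check the syntax against your DART release.

server_devconfig server_2 -create -scsi -all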

And that confirmed it: "Fully provisioned CLARiiON pool device was not marked because it is not supported in this release".

The answer to this problem in the end was thin provisioning. I created a thin provisioned LUN from the CLARiiON pool, added the LUN to the Celerra storage group and performed a rescan, and there was my LUN.
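If you prefer the CLI, creating the thin LUN looks something like this; I'm going from the FLARE 30 naviseccli syntax as I understand it, and the pool name, capacity and LUN number are placeholders.

naviseccli -h <sp_ip> lun -create -type Thin -capacity 500 -sq gb -poolName Pool_0 -l 16 -name celerra_lun_16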

In the end I found the following Primus solution, EMC241591, which outlines three conditions that will cause this error; fully provisioned LUNs was one of them, along with Auto-Tiering and Compression being enabled.

Cause: Certain new CLARiiON FLARE Release 30 features are not supported by Celerra and the initial 6.0 NAS code release. CLARiiON LUNs containing the following features will not be diskmarked by the Celerra, resulting in diskmark failures similar to those described in the previous Symptom statements:

  • CLARiiON FAST Auto-Tiering is not supported on LUNs used by Celerra.
  • CLARiiON Fully Provisioned Pool-based LUNs (DLUs) are not supported on LUNs used by Celerra.
  • CLARiiON LUN Compression is not supported on LUNs used by Celerra.


I recently needed to log into a CLARiiON for which the admin password was unknown. I did a search on PowerLink and found an article which was fairly vague, so I thought I'd quickly post the steps needed to get this sorted.

You will need the service cable which comes with your CLARiiON; it's a serial to micro-DB9 cable which connects your workstation or laptop to the management port on the Storage Processor.

My first challenge was that my laptop did not have a serial port, so I had to go out and purchase a USB to Serial cable.

My second challenge was that the instructions on PowerLink talked about Dial Up Networking, which is something I remembered from the dark ages (pre Windows XP). I mucked around for about 30-40 minutes with Windows 7 trying to get a PPP connection set up, but in the end it was just easier to power on an XP virtual machine and pass the USB to Serial adapter through to it.

  1. Pass the USB to Serial adapter through to the virtual machine
  2. Go to Control Panel and select Network Connections
  3. Select the option to create a new connection
  4. Select "Set up an advanced connection"
  5. Select "Connect directly to another computer"
  6. Select "Guest"
  7. Name the connection
  8. Select the device
  9. Check the box to add a shortcut on the desktop and select Finish

Now launch the connection using the shortcut and edit its properties; you need to change the speed to 115200, otherwise the connection will fail. Then enter the username and password (see below for details) and connect.

Once a connection has been established, launch your web browser and connect to the following URL: http://192.168.1.1/setup

Now what you need to do is select the option "Destroy Security and Domain information". Before you do this, just note that it does exactly what it says it's going to do.

While you're waiting for the management service to restart, you can disconnect the service cable and patch your laptop back into the network, then browse to the Storage Processor as you normally would. When Navisphere loads you'll be notified that global security is not enabled and prompted to supply a new password for the admin account.

<Username and Password>

I'm not going to give out the username and password for the PPP connection, as the purpose of this post is to demonstrate the procedure.

If you do need to perform these steps, then I suspect you will be able to find the support article I found, which lists the three passwords that have typically been used over the years, or you could raise a service request with EMC support; I'm sure if you give them a valid CLARiiON serial number they will give you the passwords.

 *Update*

If you look at the comments below you'll see Scott Lowe reminded me there is also a service LAN port on each SP which can be used to access the <ipaddress>/setup page.

I’ve used the LAN port a number of times before (typically to change Storage Processor IP’s) as it allows me to connect my laptop directly rather than having to plug my laptop in the customer corporate network, but what I wanted to test was if you could login using the same username/password which you use for the PPP connection.

After testing this, it seems that in order to connect to the <ipaddress>/setup page via the LAN service port you actually need to know the admin password (or that of another global account with admin privileges), which means at this stage you'll be stopped in your lost-password tracks. I'm pretty sure now the only way to reset the password is using the PPP connection over the serial cable.

If you’re wondering what IP Address you should assign to your laptop, and what address to substitute <ipaddress> for, then check out Dan’s post here which covers exactly what you need to know.



I've come across the same problem at a number of customer sites recently, so I thought this might be something worth posting about.

The problem typically comes about as customers who've traditionally had vCenter running on a 32-bit Operating System move to a 64-bit Operating System to support vCenter 4.1, which requires a 64-bit architecture.

The actual root cause of the problem is that the NetWorker 64-bit client does not have the necessary nsrvim binary, which is responsible for communicating with vCenter.

The good news is the fix is actually very simple: within the properties of the vCenter server you've configured in the NetWorker Management Console, just change the command host to any other NetWorker system running a 32-bit version of the client (7.5 and above).

Typically the command host and the end point field have the same server configured, but as you can see from the screenshot below, I've just changed this to one of my test clients named "Client1".


Save the changes and away it goes again. Hope this helps.

Here’s the scenario,

  1. Using the Avamar management console, Add a Virtual Machine to the “Default Virtual Machine Group”
  2. Perform a Backup of the Virtual Machine
  3. Delete Virtual Machine using VI Client
  4. Perform a Virtual Machine Recovery back into Virtual Infrastructure
  5. Power Virtual Machine On
  6. Perform a Backup of the recovered Virtual Machine

I manually initiated a backup and saw the error shown below (a scheduled backup resulted in the error "NO VM").

Because my NFS file systems are replicated, I decided to perform a recovery back to a dedicated non-replicated NFS datastore. At power-on, Virtual Center detects that the Virtual Machine configuration file has moved and prompts with the following:

This virtual machine might have been moved or copied.
In order to configure certain management and networking features, VMware ESX needs to know if this virtual machine was moved or copied.

If you don’t know, answer “I copied it”.
  • Cancel
  • I moved it
  • I copied it

I wasn't able to find anything on PowerLink about this, but I believe this is the root cause of the problem. I'm always moving, copying and recovering Virtual Machines in the lab, and I've become accustomed to selecting "I copied it", which generates a new UUID, and before now I have never had any problems.

I suspect that when you add the Virtual Machine to Avamar it uses the UUID to track the VM, as it's common for VMs to be renamed in vCenter. What I should have done is select "I moved it", which results in the UUID being kept.
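If you want to verify this yourself, the BIOS UUID lives in the Virtual Machine's .vmx file, so you can compare it before and after answering the prompt. A quick way to check from the ESX console is shown below; the value is made up for illustration.

grep uuid.bios <vmname>.vmx
uuid.bios = "56 4d 9a 12 34 56 78 9a-bc de f0 12 34 56 78 9a"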

What I did to get around this was retire the Virtual Machine (choosing to keep the backup data based on retention) and then re-add the Virtual Machine.

Backups were now successful.

I still need to do some more testing to confirm my suspicions around this being UUID related, but I would like to think that if this is the case, future versions of Avamar will let you update the configuration for situations where the Virtual Machine name stays the same but the UUID changes.

I’ll update this post once I’ve confirmed.