Today I was finishing off some of the storage tasks I had set myself when I came across the same problem I noticed on the NS20 I installed a couple of months ago, only the error was different, so I thought I'd write up a post in case anyone comes across the same problem.
Here’s the scenario……
You create a RAID group, e.g. a 4+1 RAID 5, in Navisphere, then carve that up into 4 equal-size LUNs. That's it, job well done.
Now, before you go any further, you (or the higher powers above) decide you don't want 4 LUNs after all, but just 1 big LUN instead. So you go back into Navisphere, remove the LUNs from the storage group, unbind the LUNs and create everything over again, this time as 1 big LUN.
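If you prefer the CLI to the Navisphere GUI, the same remove/unbind/rebind can be done with naviseccli. This is only a rough sketch from memory, the storage group name, HLU/ALU numbers, RAID group ID and capacity below are all placeholders, and the exact flags can vary between FLARE releases, so check them against naviseccli help before running anything:

# remove each of the 4 old LUNs from the storage group
naviseccli -h <SP_IP> storagegroup -removehlu -gname <SG_NAME> -hlu <HLU>
# unbind each of the old LUNs
naviseccli -h <SP_IP> unbind <ALU> -o
# bind one big RAID 5 LUN back onto the same RAID group
naviseccli -h <SP_IP> bind r5 <ALU> -rg <RG_ID> -cap <SIZE_GB> -sq gb
# add the new LUN back into the storage group
naviseccli -h <SP_IP> storagegroup -addhlu -gname <SG_NAME> -hlu <HLU> -alu <ALU>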
Now go back into Celerra Manager and perform a rescan… ERROR! Hmmm, OK, now what?
So the actual error in Celerra Manager is:
Make sure the system is not in fail over state, if the system is failed over,execute the restore actions to bring it to ok state
Message ID: 13692698625
I believe the actual reason for the error is that the Celerra NASDB still holds unique hex identifiers for the old LUNs, and the new ones don't match. The easiest way I've found to clean this up is to delete the stale entries using nas_disk.
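A quick way to spot the stale entries before you start deleting anything is to check the inuse flag in the nas_disk listing. The awk one-liner below assumes the second column of the output is that flag, and that the old LUNs had no file systems built on them, so they show as not in use:

# list every disk the NASDB knows about
nas_disk -l
# show only the entries that are no longer in use
nas_disk -l | awk '$2 == "n"'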
If you're not familiar with the Control Station and NAS commands, then I recommend logging a case with EMC support instead.
1. Log in to the CS as root, or as nasadmin and then su to root.
2. Run nas_disk -l. The screenshot below shows what I had configured originally; look at d9 to d12, which are all 375566 MB. (These are the original 4 LUNs I created.)
3. Now run the following command for each LUN, d9 through to d12 (there's a consolidated sketch after this list):
nas_disk -d d9
4. Now go back into Celerra Manager and perform another storage scan.
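Pulling steps 1 to 3 together, the cleanup from the Control Station shell looks roughly like this. The d9 to d12 names are from my setup, yours will differ, so confirm them in the nas_disk -l output first, and if nas_disk prompts for confirmation, answer accordingly:

su -                                               # become root after logging in as nasadmin
nas_disk -l                                        # confirm which d# entries are the stale 375566 MB LUNs
for d in d9 d10 d11 d12; do nas_disk -d $d; done   # delete each stale entry in turn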
You should now get a successful rescan with a status of OK.
Now if you run nas_disk -l again, you should see the one big sucker (d15).
As noted above, I actually had the same problem with the NS20, but the error was totally different and even more obscure; it had something to do with "peer traffic". At least the error now is less cryptic. I suspect the change in error description is something to do with the new 5.6.40 code on the NS120.
I hope this helps someone.