Celerra storage rescan error

Posted: February 18, 2009 in Celerra

Today I was finishing off some of the storage tasks I had set myself when I came across the same problem I'd noticed on the NS20 I installed a couple of months ago. Only this time the error was different, so I thought I'd write up a post in case anyone else comes across it.

Here’s the scenario…

You create a RAID group in Navisphere, e.g. a 4+1 RAID 5, then carve it up into 4 equal-sized LUNs. That's it, job done.

Now, before you go any further, you (or the powers above) decide you don't want 4 LUNs after all, but just 1 big LUN instead. So you go back into Navisphere, remove the LUNs from the storage group, unbind them, and create everything over again, this time as a single big LUN.

Now go back into Celerra Manager and perform a rescan… ERROR! Hmm, OK, now what?

So the actual error in Celerra Manager is:

Error:

Rescan Failed

Make sure the system is not in fail over state, if the system is failed over, execute the restore actions to bring it to ok state

Message ID: 13692698625
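Since the error mentions failover, it's worth confirming the Data Movers really are healthy before you start deleting anything. A quick sanity check from the Control Station is getreason (a hedged sketch; the path is from memory, and the guard just reports what it would run if you're not on an actual Celerra):

```shell
# Verify Data Mover state before cleaning up disk entries (sketch).
# getreason prints a code per slot; 5 means the Data Mover is up ("contacted").
if [ -x /nas/sbin/getreason ]; then
    /nas/sbin/getreason
else
    echo "not on a Celerra Control Station; would run: /nas/sbin/getreason"
fi
```

If both Data Movers report healthy, the failover hint in the error is a red herring and the real culprit is the stale disk entries below.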

The actual reason for the error, I believe, is that the Celerra NASDB holds unique hex identifiers for the old LUNs, and the new LUNs don't match them. The easiest way I've found to clean this up is to delete the stale entries using nas_disk.

If you're not familiar with the Control Station and NAS commands, then I recommend logging a case with EMC support.

1. Log in to the Control Station as root or nasadmin and su to root.

2. Run nas_disk -l. The screenshot below shows what I had configured originally; look at d9 to d12, which are all 375566 MB (these are the original 4 LUNs I created).

[Screenshot: nas_disk -l output showing disks d9 to d12]

3. Now run the following command for each LUN, d9 through d12:

nas_disk -d d9

4. Now go back into Celerra Manager and perform another storage scan.
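Step 3 above can be scripted so you don't have to type the delete four times. A minimal sketch, with an echo guard as a dry run (drop the echo to actually run it on the Control Station):

```shell
# Remove the stale NASDB disk entries d9 through d12 (step 3 above).
# 'echo' makes this a dry run; remove it on a real Control Station.
for disk in d9 d10 d11 d12; do
    echo nas_disk -d "$disk"
done
```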

You should now get a successful rescan with status OK.

[Screenshot: Celerra Manager rescan completing with status OK]

Now, if you run nas_disk -l again, you should see the one big sucker (d15).

[Screenshot: nas_disk -l output showing the single new disk, d15]

As noted above, I actually had the same problem with the NS20, but the error was totally different and even more obscure; it had something to do with “peer traffic”. At least the error now is less cryptic. I suspect the change in error description has something to do with the new 5.6.40 code on the NS120.

I hope this helps someone.

Comments
  1. Eric Gray says:

    Great post. Finding this saved me a bunch of time today.

  2. John T says:

    Just to let you know, this is STILL relevant, helped me out a bunch today 🙂 Cheers!

  3. Jimmy says:

    Awesome. This worked great, thx!

  4. Dustin says:

    Wow, many thanks. Your scenario is exactly like mine: first they want 2 biggish disks… ok, done. Now they want one really big disk… ok, rescan error.

    I’ll remember to delete any volumes instead of just ripping them out of navisphere next time.

  5. NoneYA says:

    Yes it did! Thanks a bunch.

  6. TheOX says:

    Thanks a lot Brian, this just saved me a lot of time!
