Archive for March, 2010

EMC Celerra FAQ’s

Posted: March 31, 2010 in Celerra

Over the last couple of months I've had a fair number of questions from readers wanting to know more about the EMC Celerra. A few common questions keep coming up, so I thought it would be worth doing a post to try to answer some of the FAQs.

Does the Celerra support EFD (solid state) disks?

Absolutely!

The following RAID configurations are supported (a quick usable-capacity sketch follows the list):

  • 8+1 RAID 5
  • 4+1 RAID 5
  • 2+2 RAID 1/0
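If the m+n notation is new to you, it just means m data disks plus n protection disks. Here's a quick back-of-the-envelope sketch of raw versus usable capacity for each layout; the 200GB drive size is just a number I've picked for illustration, not a statement about what EMC ships.

```python
# Hypothetical example: usable capacity for each supported EFD RAID layout.
# The 200 GB drive size is an assumption for illustration only.
DRIVE_GB = 200

configs = [
    # (name, data disks, protection disks)
    ("8+1 RAID 5", 8, 1),
    ("4+1 RAID 5", 4, 1),
    ("2+2 RAID 1/0", 2, 2),
]

for name, data, protection in configs:
    raw = (data + protection) * DRIVE_GB
    usable = data * DRIVE_GB
    overhead = protection / (data + protection) * 100
    print(f"{name}: raw {raw} GB, usable {usable} GB, "
          f"protection overhead {overhead:.0f}%")
```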

Is the Celerra really just a Clariion with NAS functionality?

The Celerra does indeed use the Clariion for the storage layer, which of course is one of the reasons it's such a beast. The NAS functionality comes via FC-connected datamovers (blades), which can be seen in the image below. Note that each datamover is connected to both storage processors; everything about the Celerra screams high availability.

What protocols does the Celerra support?

The Celerra is EMC’s unified storage platform. What's unified storage all about, you ask? Well, simply put, it means the Celerra can provision storage using any of the following protocols: CIFS, NFS and iSCSI, as well as FC.

The image below shows the Celerra presenting storage to an ESX host using all three VMware-supported storage protocols (NFS, iSCSI and FC).
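If you haven't presented Celerra storage to ESX before, here's a rough sketch of what the NFS piece looks like end to end: export a file system from the datamover, then add it on the ESX host as an NFS datastore. The mover name, export path, IPs and datastore name below are all made up for the example, so treat it as a shape-of-the-commands sketch rather than a procedure.

```python
# Rough sketch: mounting a Celerra NFS export as an ESX datastore.
# All names and addresses below (server_2, /vmware_nfs1, 10.0.0.x, etc.)
# are made-up example values, not values from the post.

celerra_export = "/vmware_nfs1"      # file system exported from the datamover
datamover_ip = "10.0.0.50"           # IP of the datamover network interface
esx_vmkernel_ip = "10.0.0.60"        # ESX host VMkernel IP allowed to mount it
datastore_name = "nfs_datastore1"    # name the datastore will get on the ESX host

# On the Celerra control station: export the file system to the ESX host
# (root and read/write access for the ESX VMkernel IP).
print(f"server_export server_2 -Protocol nfs "
      f"-option root={esx_vmkernel_ip},rw={esx_vmkernel_ip} {celerra_export}")

# On the ESX host (service console): add the export as an NFS datastore.
print(f"esxcfg-nas -a -o {datamover_ip} -s {celerra_export} {datastore_name}")
```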

You may have noticed that CIFS was in the list of protocols, and you might be wondering why you would use this functionality within a virtual infrastructure. Keep an eye out for upcoming posts, as I'll be covering this.

How many network interfaces does the datamover have?

The first image shows the datamover with four copper Ethernet ports; all ports are 10/100/1000.

The second image below shows the newer 10Gb model, which became available at the end of 2009. The datamover has two copper 10/100/1000 ports and two optical 10Gb ports.

When it comes to networking there are typically two things people ask about: performance and high availability.

If required, the ports can be aggregated together to create a single logical device using either EtherChannel or LACP. The key thing to remember here is that this does not automatically increase bandwidth. I was planning on going into this in much more detail, but I remembered Jason Nash has already done a great post on fault-tolerant Celerra networking, so rather than repeat the same information, check out his post here.
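For the curious, creating a trunk on the datamover looks roughly like the sketch below: build a virtual LACP device from two physical ports, then put an interface on it. The port names, trunk name and addressing are assumptions on my part; check the Celerra docs (or Jason's post) before trying anything like this for real.

```python
# Rough sketch: aggregating two datamover ports into one LACP trunk, then
# creating a network interface on the trunk. Device names (cge0/cge1),
# the trunk name and the IP addressing are assumptions for illustration.

mover = "server_2"
trunk = "trk0"
ports = "cge0,cge1"

commands = [
    # Create the virtual trunk device from the two physical ports using LACP.
    f'server_sysconfig {mover} -virtual -name {trunk} '
    f'-create trk -option "device={ports} protocol=lacp"',

    # Put an IP interface on the new trunk device.
    f"server_ifconfig {mover} -create -Device {trunk} -name {trunk}_int "
    f"-protocol IP 10.0.0.50 255.255.255.0 10.0.0.255",
]

for cmd in commands:
    print(cmd)   # run these from the control station
```

Again, the trunk buys you fault tolerance and spreads multiple client streams across links; a single stream still only uses one link.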

Can I have two active datamovers?

In higher-end models with additional datamovers you may decide to have two or more active datamovers with another configured in standby mode. For models which only support two datamovers, while it is technically possible… don’t do it!

A Celerra with only 2 datamovers (blades) should always be configured active/standby.
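For reference, the standby pairing is a one-liner from the control station. Assuming server_2 is the active datamover and server_3 the standby (the usual default names, but an assumption as far as this post goes):

```python
# Rough sketch: configuring server_3 as the standby for the active
# datamover server_2, with automatic failover. Mover names are the
# typical defaults but are assumptions here.

active = "server_2"
standby = "server_3"

# Define the standby relationship with an automatic failover policy.
print(f"server_standby {active} -create mover={standby} -policy auto")

# Check which datamover is serving and which is standing by.
print("nas_server -list")
```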

Is iSCSI off the datamover true block?

No. Each iSCSI LUN is actually configured within a file system, so here the Celerra uses emulation. If you want true native block you’ll need to order additional iSCSI modules for the storage processors. The image below shows additional iSCSI modules in slots A1, A2 and B1, B2.
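To make the "LUN inside a file system" point concrete, the steps on the control station look roughly like the sketch below: create and mount a file system, create an iSCSI target on the datamover, then carve the LUN out of that file system. Every name, size and address here is an invented example, and option syntax can vary between DART versions, so check the man pages rather than copying this.

```python
# Rough sketch: an iSCSI LUN on the Celerra lives inside a file system.
# Names, pool, sizes and the portal IP are all made-up example values,
# and option syntax can differ between DART releases.

fs_name = "iscsi_fs1"
mover = "server_2"
target_alias = "esx_tgt1"

commands = [
    # 1. Create the file system that will hold the LUN.
    f"nas_fs -name {fs_name} -create size=100G pool=clar_r5_performance",

    # 2. Mount it on the datamover.
    f"server_mount {mover} {fs_name} /{fs_name}",

    # 3. Create an iSCSI target on the datamover (portal on the data interface).
    f"server_iscsi {mover} -target -alias {target_alias} -create 1:np=10.0.0.50",

    # 4. Create the LUN as a file within the file system (size in MB here).
    f"server_iscsi {mover} -lun -number 0 -create {target_alias} -size 51200 -fs {fs_name}",
]

for cmd in commands:
    print(cmd)
```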

What's the difference between the Integrated, FC-enabled and Gateway models?

NS120 Integrated –

The image below shows the storage processor for an integrated unit. The first module is the management module, which is common across integrated and FC models. Ports 2 and 3 on each storage processor are used to connect the datamovers (blades), port 1 is disabled (on the NS120), and port 0 connects the storage processor to the LCC controller on the disk enclosure.

An integrated system essentially means the backend Clariion is dedicated to the datamovers; you don’t have spare FC front-end ports to allow additional hosts to be connected. All management is done using Celerra Manager.

NS120 FC Enabled –

Looking at the image below, the first thing you’ll notice is that we now have an additional FC module in each storage processor. The additional ports can be used either to directly connect hosts, or to connect to a switched storage area network.

An FC-enabled system allows you to present storage from the Clariion to the Celerra datamovers, as well as to other FC-connected hosts. Management is done using Celerra Manager and Navisphere Manager.

Gateway –

At this time I don’t have an image showing the Celerra gateway, but essentially the gateway consists of the datamover and control station components. The datamovers are connected to an existing Clariion or Symmetrix storage array.

Unboxing Our Celerra NS120

Posted: March 15, 2010 in Celerra

I get a ton of questions about the EMC Celerra, so when our new NS120 turned up last week I thought it would be a good opportunity to take some photos and do a post giving people less familiar with EMC kit a little insight as to why the Celerra is such a beast.

I actually finished this post earlier tonight, but after reading over it I decided it was a bit all over the place, so I'm going to break it down into two separate posts.

This first post will be a few photos showing the new hardware; the second post to follow will contain a lot more information about the Celerra Unified Storage platform.

Here's the Celerra in the travel rack

I know it’s not pretty here, but once the bezels go on it looks like something futuristic and sexy, much like Lucy Lawless in the TV series Battlestar Galactica.

Rear View

Datamover (one of two)

Storage Processors

Control Station

Power Supply A (right side) and Standby Power Supply B (left side)

Storage Rack

Boxes of software and cables

Here’s a photo of the boxes which get shipped with the Celerra. Inside these bigger boxes are smaller boxes containing bezels, cables and EMC Software.

 

ESX4i Still No Boot From SAN Support

Posted: March 10, 2010 in VMware

Over the years I have always recommended the full ESX install over the ESXi build for a number of reasons; here are a couple of the better technical ones:

  • Scripted Installs
  • Boot from SAN Support
  • Web Access
  • Jumbo frames supported on VMkernel interfaces (see the sketch after this list)
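On that last point, enabling jumbo frames on a VMkernel interface in vSphere 4 was a CLI job from the service console, something along these lines. The vSwitch, port group and addressing below are made-up values for illustration only.

```python
# Rough sketch: enabling jumbo frames on a vSwitch and creating a VMkernel
# interface with a 9000-byte MTU on classic ESX 4. The vSwitch/port group
# names and IP addressing are assumptions for illustration.

vswitch = "vSwitch1"
portgroup = "IP_Storage"
vmk_ip, vmk_mask = "10.0.0.60", "255.255.255.0"

commands = [
    # Raise the MTU on the vSwitch itself.
    f"esxcfg-vswitch -m 9000 {vswitch}",

    # Add a port group for the storage VMkernel interface.
    f"esxcfg-vswitch -A {portgroup} {vswitch}",

    # Create the VMkernel interface with a jumbo MTU (set at creation time).
    f"esxcfg-vmknic -a -i {vmk_ip} -n {vmk_mask} -m 9000 {portgroup}",
]

for cmd in commands:
    print(cmd)   # run from the ESX service console
```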

We have all known for some time now that VMware plans on converging the two ESX architectures, so recently I've been making an effort to start adapting, because it's not a matter of if but when this will happen (very soon, I'm guessing).

Just recently a customer with existing hardware wanted to reprovision existing blades for a vSphere implementation. Each blade had a single internal SAS drive, which led me to recommend either adding a second drive to each blade to allow for a RAID 1 mirror, or considering boot from SAN.

The only problem was that I found the following excerpt in the latest vSphere 4 Update 1 documentation, which shows boot from SAN as being experimental for ESXi.

The customer in the end decided to add an additional drive to each blade, but I was surprised to see this was still not supported.

I’m hoping this changes in the near future.

For some time now I’ve been trying to find more information about VMware’s new vStorage API, and while digging around I stumbled across this great VMware document called “Designing Backup Solutions for VMware vSphere”.

The document, in its own words, has been written to “introduce code developers to the procedures necessary to create backup and restore software for virtual machines”.

I know what you're thinking… “You're not a developer!” …No, not even close, but the document did answer a ton of questions I had about what functionality I might be able to expect from EMC's upcoming release, which is going to support the vStorage API.

If I get time this week I may do a post listing which functionality in particular I would like to see built into NetWorker.

The image below links to the VMware document… enjoy!