Celerra gets 10 Gigabit Native iSCSI plus much more (Part 1)

Posted: January 9, 2010 in Celerra, VMware

For those of you who are not familiar with Celerra, it is EMC’s multi-protocol unified storage platform which does it all: iSCSI, NFS, CIFS, as well as FC SAN, all bundled with Celerra Replicator for built-in replication of iSCSI LUNs, file systems and Virtual Data Movers. (Brilliant for VMware, Site Recovery Manager and VMware View.)

Just recently Chad from virtualgeek left a message on one of my posts saying a new Celerra Simulator had been released, so off I went to take a look and sure enough it’s based on the newly released 5.6.47 version of DART. After a bit of searching around on PowerLink I found an excellent document, which I’ve used for the information below to cover not all of the changes in this release but the ones I think are most beneficial…. oh, and I’ll be breaking it up into two posts so it doesn’t blow out into a monster.


With DART version 5.6.47 comes new support for hardware, here are some of the highlights.

  • 8Gb FC FLEX I/O Module support in the NS-960 and NS-G8 blade, enabling full 8Gb X-Blade to Storage connectivity
  • FLEX I/O Module Upgrades for NS-960 and NS-G8 X-Blades
  • Native array (AUX) support for 10Gb/s iSCSI – support for configuring back-end dual-port 10GbE iSCSI Flex I/O Modules for use with MirrorView or as host connect. With the NS-120 only one 10Gb/s iSCSI I/O module can be configured, and for the NS-480 and NS-960 a maximum of two 10Gb/s iSCSI I/O modules are supported
  • New flash drive configuration options. Flash drives now support additional RAID configurations: RAID 1/0 and RAID 6. In addition, flash drive configurations at install are optimized for improved performance. Flash drives on Celerra now allow the same configuration options as CLARiiON.
  • 2TB 5400 rpm SATA drives will be offered and orderable in Direct Express
  • 200GB flash drives are offered and orderable in Direct Express as of September 2009
  • 600GB 15K rpm FC and SAS drives will be offered and orderable in Direct Express
  • 2.5” 146GB and 300GB 10k rpm SAS drives for NX4 will be offered and orderable in Direct Express


Celerra Deduplication was introduced in DART 5.6.45. It was a welcome addition to an already impressive feature set, but it did have a few limitations and annoyances which needed to be overcome. I’ve covered some of these below.

1.  NDMP backup of a Deduplicated file system resulted in the files being uncompressed (and rehydrated) in memory before being saved to tape meaning the dedup benefits did not extend to the backup system/media.

This behaviour has changed in DART 5.6.47: the backup now occurs against the compressed objects, meaning dedup does in fact extend to the backup system/media.

2.  As noted in the parameter list below, the maximum file size supported for deduplication in previous DART versions was 200MB.

As of DART 5.6.47 this limit has been increased to 8TB, which I suspect covers almost any file you’re likely to place on a Celerra file system.

3.  In previous versions of DART, the parameters shown below could only be changed at the data mover level, meaning they applied to all dedup-enabled file systems. Any changes also needed to be made from the command line on the Control Station.

As of DART 5.6.47 you can apply different parameters to different file systems, and it can all be done using the Celerra Manager web interface. I’m also really glad to see the default values are now much more aggressive.

Here is a list of the default parameters in previous DART versions and the new defaults which automatically apply when an upgrade to 5.6.47 occurs.

  • Access time – Default value moves to 8 days from 30 days
  • Modification time – Default value moves to 8 days from 60 days
  • Maximum size – Default value moves to 8TB from today’s 200 MB
  • Minimum size – default value of file size stays at 24KB
  • Case sensitive – whether to use case-sensitive comparison on data for NFS. Default is case-insensitive comparisons to match CIFS
  • Comparison method – allow selection of method for deduplication (disabled, SHA-1, byte by byte comparison)
  • Minimum scan interval – The minimum number of days between scans for a file system. Default value remains 7 days
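To make the default thresholds above concrete, here is an illustrative Python sketch of the eligibility check a scan would apply under the 5.6.47 defaults. This is my own sketch for clarity, not EMC’s implementation; all names and the exact comparison semantics are assumptions.

```python
from dataclasses import dataclass

# Default deduplication thresholds in DART 5.6.47 (per the list above)
ACCESS_AGE_DAYS = 8          # file must be untouched for at least 8 days
MODIFY_AGE_DAYS = 8          # file must be unmodified for at least 8 days
MAX_SIZE_BYTES = 8 * 2**40   # 8TB maximum file size (up from 200MB)
MIN_SIZE_BYTES = 24 * 2**10  # 24KB minimum file size (unchanged)

@dataclass
class FileStats:
    size_bytes: int
    days_since_access: int
    days_since_modify: int

def eligible_for_dedup(f: FileStats) -> bool:
    """Return True if a file meets the default 5.6.47 dedup criteria."""
    return (MIN_SIZE_BYTES <= f.size_bytes <= MAX_SIZE_BYTES
            and f.days_since_access >= ACCESS_AGE_DAYS
            and f.days_since_modify >= MODIFY_AGE_DAYS)
```

For example, a 1MB file untouched for 10 days qualifies, while a 10KB file of any age is skipped for being under the 24KB minimum.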


Actually not specific to 5.6.47, but DART 5.5 and above supports configuring a Celerra as a replication target for IOMEGA IX4-200D devices. If you’re wondering how this might be of benefit, consider a company with remote branches, each with an IOMEGA IX4-200D on site, all replicating back to the primary Celerra storage array.


What is native block support? – Native block support is the ability to configure iSCSI connections on the back-end (CLARiiON array) of a Celerra Unified Storage platform for connectivity to hosts using the native CLARiiON block experience.

Previously, iSCSI support on the Celerra Unified Storage platform was available only through Celerra file emulation of iSCSI LUNs. This file-based iSCSI implementation continues to be an option for customers today. Native iSCSI supports all CLARiiON functionality, including local and remote replication, through Navisphere. It also supports PowerPath and RecoverPoint.

What are the key differences between file-based iSCSI and native iSCSI for VMware implementations? –

Native iSCSI

  • Identical management model for iSCSI and FC LUNs (with Navisphere)
  • Virtualization aware Navisphere
  • Storage Viewer vCenter Plug-in
  • Navisphere Quality of Service (NQM) and VMware DRS affinity
  • RecoverPoint VMware SRM integration
  • MirrorView VMware SRM integration
  • General CLARiiON Functionality: Dynamic LUN Migration / FAST, 30 second SP failover

File-based iSCSI

  • 1000 read/write snaps of an iSCSI LUN
  • Replicator VMware SRM integration with automated failback (ability to replicate both file and block storage with a single solution)
  • Advanced IP networking features: FSN, Trunking, and EtherChannel for improved network availability and bandwidth.

What technical use cases does native iSCSI apply to?

  • Tier 1 iSCSI requirements
    – High performance (no iSCSI emulation on a file)
    – Fast failover (block-style failover vs. NAS-style failover)
    – Consistency groups (required for managing protection scenarios for hosts with multiple LUNs that must be kept in synchronization)
    – Quality of service support (Navisphere Quality of Service Manager – NQM)
    – Large LUN sizes (>2TB)

  • A need to replicate all block data (FC and iSCSI) via RecoverPoint (RecoverPoint is not supported with Celerra file-based iSCSI)
  • Ability to snap an FC LUN and make the snap available to an iSCSI connected host.
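As a rough decision aid, the criteria above boil down to a simple check: if any native-only requirement is present, go native; otherwise file-based iSCSI remains a fine choice. A hypothetical sketch (my own requirement names, not an EMC tool):

```python
# Requirements that point to native (CLARiiON back-end) iSCSI rather than
# file-based iSCSI on the data mover, drawn from the use cases above.
# The requirement keys are my own illustrative names.
NATIVE_ONLY = {
    "consistency_groups",  # multi-LUN write-order consistency
    "recoverpoint",        # RecoverPoint replication of all block data
    "nqm_qos",             # Navisphere Quality of Service Manager
    "lun_over_2tb",        # LUN sizes beyond 2TB
    "fc_snap_to_iscsi",    # present an FC LUN snap to an iSCSI host
}

def recommend_iscsi_flavor(requirements: set) -> str:
    """Suggest 'native' if any native-only requirement is present."""
    return "native" if requirements & NATIVE_ONLY else "file-based"
```

So a shop needing LUNs over 2TB gets steered to native iSCSI, while one whose main driver is Replicator SRM integration stays file-based.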

The addition of native iSCSI support is an excellent move from EMC, but one thing I want to emphasize: don’t think for one minute that file-based iSCSI off the data mover is not enterprise ready. I’ve implemented small, medium and large VMware environments using file-based iSCSI and performance has never been an issue, and yes, this includes applications such as Exchange, Microsoft SQL Server, Oracle, and SAP.

At the end of the day, when you implement any iSCSI solution you need to go back to basics and look at your networking and storage configuration before you worry about the emulation overhead of file-based iSCSI.
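One of those networking basics is frame size: enabling jumbo frames cuts per-frame header overhead on the iSCSI path. A back-of-the-envelope calculation (counting only Ethernet, IPv4 and TCP headers, and ignoring iSCSI PDU headers, preamble and inter-frame gap, so the real numbers differ slightly):

```python
# Fixed per-frame header costs, in bytes (simplified model)
ETH_HEADER = 18  # Ethernet header + FCS
IP_HEADER = 20   # IPv4 header without options
TCP_HEADER = 20  # TCP header without options

def wire_efficiency(mtu: int) -> float:
    """Fraction of each Ethernet frame that carries actual payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    return payload / (mtu + ETH_HEADER)

# Standard 1500-byte MTU: ~96.2% efficient
# Jumbo 9000-byte MTU:    ~99.4% efficient
```

The gap looks small on paper, but combined with fewer frames (and so fewer interrupts) per megabyte, it is one of the first things worth checking before blaming the storage side.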

Keep an eye out for Part 2 coming up.

  1. Jason Boche says:

    Good post – looking forward to part 2. I’m trying to get up to speed on EMC Celerra.


  2. joe kelly says:

    Nice post Brian! Looks like its going to be a great year for us partners and more importantly our customers. Look forward to part 2!

  3. Mauro Ayala says:

    Thanks, I like your blog, now I am building muy VM lab and I am going to use the Celerra Virtual Appliance.

  4. Fred says:

    Nice blog, will come back here soon.


  5. […] the changes to EMC’s Celerra unified storage system, if you missed part 1 you can view this here and for everyone else as promised here is part […]
