Archive for January, 2010

iPad defines portable computing

Posted: January 29, 2010 in Uncategorized

Colleague Preston de Guise has written a great post over at The NetWorker Blog about the release of the Apple iPad.

You won’t find pictures, specifications or a price in this post; instead it covers why he believes this little bad boy will deliver (in his words) an “über-portable experience”.

You can check it out here.


EMC…. Sharpen Up!!

Posted: January 27, 2010 in EMC Networker, VMware

For those of you who follow my blog, you’ll know I am very pro-EMC; the majority of my posts are VMware/EMC related.

Something that’s been bugging me for some time now is NetWorker’s lack of support for the vSphere vStorage APIs. Searching around, I’ve read other competitors boasting support for the vStorage APIs from “Day One” of vSphere being released.

Note that I did not say “EMC’s” lack of support, because Avamar of course supports the vStorage APIs for backing up virtual machines.

When it comes to storage and VMware integration, there is no doubt in my mind EMC are leading the field; take a look at some of the awesome VMware/EMC webcasts that are coming up, listed here on Chad’s site.

So what about NetWorker? Who decided NetWorker didn’t need vStorage support yet? Was this an oversight by Product Development? Was this a deliberate decision to better position Avamar?

NetWorker is an enterprise backup product which scales far beyond its competitors, yet those same competitors already support the new APIs.

I find this very frustrating!!

Below I’ve embedded a TV commercial from YouTube, made here in NZ, with the catch line “Sharpen Up”, which I think is quite fitting.

Given the number of people talking in the forums about how Site Recovery Manager 1.x did not support automated failback, I was a little surprised to find this functionality also missing from 4.0.

Well, the good news is that if you have an EMC Celerra, the new failback plug-in now supports SRM 4.0 and allows a user to automatically fail back data to the primary site; the plug-in completes both the storage and the VMware failback tasks.

What’s new?

Well, other than supporting vCenter 4.0, the new plug-in also supports failback of NFS datastores (the previous version supported iSCSI only).

What does the plug-in do?

The plug-in is a supplemental software package for VMware Site Recovery Manager (SRM). It allows users to automatically fail back virtual machines and their associated datastores to the primary site after implementing and executing disaster recovery through SRM for Celerra storage systems running Celerra Replicator V2 and Celerra SnapSure.

The plug-in does the following:

  • Provides the ability to input login information (hostname/IP, username, and password) for two vCenters and two Celerras.
  • Cross-references replication sessions with vCenter datastores and virtual machines (VMs).
  • Provides the ability to select one or more failed-over Celerra replication sessions for failback.
  • Supports both iSCSI and NFS datastores.
  • Manipulates the vCenter at the primary site to rescan storage, unregister orphaned VMs, rename datastores, register failed-back VMs, reconfigure VMs, customize VMs, remove orphaned .vswp files for VMs, and power on failed-back VMs.
  • Manipulates the vCenter at the secondary site to power off the orphaned VM, unregister the VM, and rescan storage.
  • Identifies failed-over sessions created by EMC Replication Manager and directs the user as to how these sessions can be failed back.

For anyone interested, here are the release notes and also the plug-in download.

One of the more popular posts I’ve done over the last few months has been how to configure a standard vSwitch to allow multiple paths to the storage using the software iSCSI initiator in vSphere 4.0.

The only problem was that this could only be done with a standard vSwitch; trying the same configuration on a virtual distributed switch resulted in the error “Add nic failed in IMA” (VMware also confirmed this was a bug).

When vSphere 4 Update 1 was released I went straight to the release notes to see if this problem had been resolved, but found no mention of it. After a quick install I found the error no longer occurred, but unfortunately I did not have my Celerra simulator set up to test conclusively.

Good news: last night I set this up using the exact configuration used previously and found this problem has been resolved in Update 1.
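
For reference, binding the extra VMkernel ports to the software iSCSI initiator is still a command-line job in vSphere 4. Here’s a minimal sketch of the commands I used; the vmk numbers and the vmhba name are from my lab, so check yours first.

  # List the VMkernel ports to confirm their names
  esxcfg-vmknic -l

  # Bind each iSCSI VMkernel port to the software iSCSI adapter
  # (vmk1/vmk2 and vmhba33 are from my lab - yours will likely differ)
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33

  # Confirm both ports are now bound
  esxcli swiscsi nic list -d vmhba33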

Here are some screenshots showing everything working as expected using the dvSwitch.

Virtual Distributed Switch

In this screenshot you can see I have multiple port groups configured, each with its own VMkernel port.

 

Device View

Here the device view shows the 2GB LUN presented from the Celerra. You can also see that 2 paths are now being reported.

 

Path View

This screenshot was taken after I changed the path selection policy to “Round Robin”, which is no longer considered experimental (as it was with ESX 3.x). I now have 2 active paths.
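
If you prefer the console to the GUI, the same policy change can be made with esxcli. A quick sketch, assuming you substitute your own LUN’s naa identifier (the one below is made up):

  # List devices and their current path selection policy
  esxcli nmp device list

  # Switch the Celerra LUN to Round Robin (device ID is a placeholder)
  esxcli nmp device setpolicy --device naa.60060160abcd1234 --psp VMW_PSP_RR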

 

I got a few emails over the last month asking if I had tested this yet, so I hope this helps. If you’re looking for the new vSphere iSCSI configuration guide, you can find it here.

Also, here is a great white paper on migration and configuration of virtual distributed switches.

I recently posted about some of the changes to EMC’s Celerra unified storage system. If you missed part 1 you can view it here, and for everyone else, as promised, here is part 2.

As previously noted in part 1, all information was found on PowerLink; if you’re an EMC customer and you haven’t already done so, I highly recommend getting an account.

Celerra Manager IE8 Support

Celerra Manager has been a bit behind the times with regard to support for Internet Explorer, so it’s great to finally see IE8 supported.

Fast Mount

I actually plan to do a post shortly on Celerra Data Mover (now called X-Blade) failover and cover some of the considerations when you’re setting up your Celerra in production, so keep an eye out for this in the near future.

What is Fast Mount?

The improvements in DART V5.6.47 will reduce the file system mount time significantly for a typical configuration. The file system mount function is a major part of the reboot and failover processing time. Configurations with larger numbers of file systems will benefit even more.

Manual Deduplication management API and CIFS compression 

What manual options are now available for using Celerra deduplication?

With DART V5.6.47, an API has been introduced to enable external management applications to invoke the deduplication process for specific files. Currently, the CIFS file-level compression functionality in Windows Explorer is the only interface that uses this manual function.

How is the CIFS compression functionality enabled?

In Windows Explorer, if a user right-clicks a file, specifies Properties and selects “Advanced”, there is an option to “compress contents to save disk space”. If the file is on a share hosted by a Celerra running 5.6.47 with deduplication enabled on the file system, selecting this option invokes the deduplication process for that file.
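
If you wanted to script this rather than click through Explorer, the same compression attribute can in theory be set from the Windows command line with compact.exe. I haven’t tested this against a Celerra share myself, so treat the sketch below (with a made-up server, share and file name) as an assumption:

  REM Set the compression attribute on a file on a Celerra CIFS share
  REM (\\celerra01\share\data.mdb is a placeholder path)
  compact /c "\\celerra01\share\data.mdb"

  REM Check the result
  compact "\\celerra01\share\data.mdb"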

Replication Enhancement “Out of Family” support

Reading over the information I found on PowerLink, it suggests earlier versions of DART supported replication (using Celerra Replicator) between systems on the same major DART release, e.g. 5.5 or 5.6. In reality, though, when you contact the Celerra support team, I can say from experience they want to see identical versions of DART running on both Celerras.

Now with “Out of Family” support, EMC will support phased upgrades of systems on different major releases, e.g. 5.5 and 5.6. This allows for upgrades in environments where multiple systems are involved in replication and it is not feasible to upgrade all platforms at the same time.

It’s important to note this functionality is supported from 5.6.47 and above only.

Windows 2008 R2 iSCSI support

As you would expect, this did in fact work in earlier releases but was previously unsupported; I’m glad to see Windows 2008 R2 is now fully supported.

SMB2 Support

The SMB2 protocol is now fully supported with 5.6.47, which is great news for Windows 7 and Windows 2008 users, as both operating systems utilise the new protocol. It’s also worth noting that you can run SMB2 exclusively or run both SMB1 and SMB2 together.

As noted above with the iSCSI support, SMB2 with Windows 7 and Windows 2008 worked as expected in earlier DART versions, but it’s great to see EMC getting these things “supported”, as nothing’s worse than sitting in front of a customer and having to say “it works, but it’s not supported”.
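
DART parameters are normally toggled from the Control Station with server_param. I haven’t dug up the exact parameter name for SMB2 in the 5.6.47 docs yet, so the facility and parameter names below are my assumption; check the Parameters Guide before running anything:

  # Enable SMB2 on Data Mover server_2
  # ("cifs" facility and "smb2" parameter name are assumptions - verify
  #  against the DART 5.6.47 Parameters Guide)
  server_param server_2 -facility cifs -modify smb2 -value 1

  # Review the current setting
  server_param server_2 -facility cifs -info smb2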

Vote Now for VMware Bloggers

Posted: January 11, 2010 in VMware

It’s that time again: you can vote here for your favorite VMware bloggers. It will be very interesting to see who takes the top spots this time and how much the list changes, but regardless of who rates #1, everyone on that list does a fantastic job, so if you have a spare moment take the time to make your vote count.

When I sat down to write this post it was a chance for me to look back over the last year. To be honest, when I started blogging it was more a place for me to store useful bits and pieces of information for self-reference than anything else, but somewhere along the way people started tuning in.

I’ve been very surprised by the number of hits, the majority of which come via people using search engines. Every now and then I keep an eye on hits per day/month, but for a long time I hadn’t looked to see which posts were the top rating.

And after a quick check, here are my top 5 posts.

vSphere 4 with Software iSCSI and 2 Paths

Enabling Active Directory Authentication with ESX 3.5

Securing your ESX Service Console

Disk Alignment

NetWorker 7.5.1 new VCB recovery feature

If you’ve found any of my posts useful and you would like to throw a vote my way, you can scroll right to the bottom, where you can enter a blog name into the blank field (I’m not currently listed).

For those of you who are not familiar with Celerra, this is EMC’s multi-protocol unified storage platform which does it all: iSCSI, NFS and CIFS, as well as FC SAN, all bundled with Celerra Replicator for built-in replication of iSCSI LUNs, file systems and Virtual Data Movers (brilliant for VMware, Site Recovery Manager and VMware View).

Just recently Chad from virtualgeek left a message on one of my posts saying a new Celerra Simulator had been released, so off I went to take a look, and sure enough it’s based on the newly released 5.6.47 version of DART. After doing a bit of searching around on PowerLink I found an excellent document, which I’ve used for the information below to cover not all of the changes in this release, but the ones I think are most beneficial…. oh, and I’ll be breaking it up into two posts so it doesn’t blow out into a monster.

HARDWARE

With DART version 5.6.47 comes new support for hardware; here are some of the highlights.

  • 8Gb FC FLEX I/O Module support in the NS-960 and NS-G8 blade, enabling full 8Gb X-Blade to Storage connectivity
  • FLEX I/O Module Upgrades for NS-960 and NS-G8 X-Blades
  • Native array (AUX) support for 10Gb/s iSCSI – support for configuring back-end dual-port 10GbE iSCSI Flex I/O Modules for use with MirrorView or as host connect. With the NS-120 only one 10Gb/s iSCSI I/O module can be configured, and for the NS-480 and NS-960 a maximum of two 10Gb/s iSCSI I/O modules are supported
  • Flash drive new configuration options. Flash drives now support new RAID configurations: RAID-1/0 and RAID-6. In addition, flash drive configurations at install are optimized for improved performance. Flash drives on Celerra now allow the same configuration options as CLARiiON.
  • 2TB 5400 rpm SATA will be offered and orderable in Direct Express
  • 200GB flash drives are offered and orderable in Direct Express as of September 2009
  • 600GB 15K rpm FC and SAS drives will be offered and orderable in Direct Express
  • 2.5” 146GB and 300GB 10k rpm SAS drives for NX4 will be offered and orderable in Direct Express

CELERRA DEDUPLICATION

Celerra Deduplication was introduced in DART 5.6.45 and was a welcome addition to an already impressive feature set, but it did have a few limitations and annoyances which needed to be overcome. I’ve covered some of these below.

1.  NDMP backup of a deduplicated file system resulted in the files being uncompressed (and rehydrated) in memory before being saved to tape, meaning the dedup benefits did not extend to the backup system/media.

This behaviour has changed in DART 5.6.47: the backup occurs against the compressed objects, meaning dedup now does in fact extend to the backup system/media.

2.  As noted in the list below, the maximum file size supported in previous DART versions was 200MB.

As of DART 5.6.47 this limit has been increased to 8TB, which I suspect now covers almost any file you’re likely to place on the Celerra file system.

3.  In previous versions of DART, the parameters shown below could only be changed at the Data Mover level, meaning the parameters applied to all dedup-enabled file systems. Any changes also needed to be made using the command line from the Control Station.

As of DART 5.6.47 you can apply different parameters to different file systems, and this can all be done using the Celerra Manager web interface. I’m also really glad to see the default values are now much more aggressive.

Here is a list of the default parameters in previous DART versions and the new defaults which automatically apply when an upgrade to 5.6.47 occurs (see the command-line sketch after the list).

  • Access time – Default value moves to 8 days from 30 days
  • Modification time – Default value moves to 8 days from 60 days
  • Maximum size – Default value moves to 8TB from today’s 200 MB
  • Minimum size – default value of file size stays at 24KB
  • Case sensitive – whether to use case-sensitive comparison on data for NFS. The default is case-insensitive comparison, to match CIFS.
  • Comparison method – allow selection of method for deduplication (disabled, SHA-1, byte by byte comparison)
  • Minimum scan interval – The minimum number of days between scans for a file system. Default value remains 7 days
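
For anyone who still prefers the command line, per-file-system dedup settings are managed with the fs_dedupe command on the Control Station. The flag names below are my best guess from memory rather than something I’ve verified against the 5.6.47 man page, so check before you run it:

  # Show the current deduplication settings for a file system
  fs_dedupe -info myfs01

  # Tighten the age thresholds for this one file system only
  # (flag names are assumptions - confirm with the fs_dedupe usage output)
  fs_dedupe -modify myfs01 -access_time 8 -modification_time 8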

REPLICATION TARGET SUPPORT FROM IOMEGA IX4-200D

This one is actually not specific to 5.6.47: with DART 5.5 and above comes support for a Celerra to be configured as a replication target for Iomega IX4-200D devices. If you’re wondering how this might be of benefit, consider a company with remote branches, each with an Iomega IX4-200D on site, all able to replicate back to the primary Celerra storage array.

NATIVE iSCSI BLOCK SUPPORT

What is block support? Native block support is the ability to configure iSCSI connections on the back-end (CLARiiON array) of a Celerra Unified Storage platform for connectivity to hosts using a native CLARiiON block experience.

Previously, iSCSI support on the Celerra Unified Storage platform was available only via Celerra file emulation of iSCSI LUNs. This file-based iSCSI implementation continues to be an option for customers today. Native iSCSI supports all CLARiiON functionality, including local and remote replication, through Navisphere. It also supports PowerPath and RecoverPoint.
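
From the host side, a native iSCSI LUN looks like any other CLARiiON block target. As a rough illustration using the standard Linux open-iscsi tools (the SP IP address and target IQN below are placeholders for your own environment):

  # Discover the iSCSI targets presented by the CLARiiON storage processor
  iscsiadm -m discovery -t sendtargets -p 10.1.1.50:3260

  # Log in to the discovered target (IQN below is a placeholder)
  iscsiadm -m node -T iqn.1992-04.com.emc:cx.example.a0 -p 10.1.1.50:3260 --login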

What are the key differences between file-based iSCSI and native iSCSI for VMware implementations?

Native iSCSI

  • Identical management model for iSCSI and FC LUNs (with Navisphere)
  • Virtualization aware Navisphere
  • Storage Viewer vCenter Plug-in
  • Navisphere Quality of Service (NQM) and VMware DRS affinity
  • RecoverPoint VMware SRM integration
  • MirrorView VMware SRM integration
  • General CLARiiON Functionality: Dynamic LUN Migration / FAST, 30 second SP failover

File-based iSCSI

  • 1000 read/write snaps of an iSCSI LUN
  • Replicator VMware SRM integration with automated failback (ability to replicate both file and block storage with a single solution)
  • Advanced IP networking features: FSN, Trunking, and EtherChannel for improved network availability and bandwidth.

What technical use cases does native iSCSI apply to?

  • Tier 1 iSCSI requirements

– High performance (no iSCSI emulation on a file)
– Fast failover (block-style failover vs. NAS-style failover)
– Consistency groups (required for managing protection scenarios for hosts with multiple LUNs that must be kept in synchronization)
– Quality of service support (Navisphere Quality of Service Manager – NQM)
– Large LUN sizes (>2TB)

  • A need to replicate all block data (FC and iSCSI) via RecoverPoint (RecoverPoint is not supported with Celerra file-based iSCSI)
  • Ability to snap an FC LUN and make the snap available to an iSCSI connected host.

The addition of native iSCSI support is an excellent move from EMC, but one thing I want to emphasize: don’t think for one minute that file-based iSCSI off the Data Mover is not enterprise ready. I’ve implemented small, medium and large VMware environments using file-based iSCSI and performance has never been an issue, and yes, this includes applications such as Exchange, Microsoft SQL Server, Oracle and SAP.

At the end of the day, when you implement any iSCSI solution you need to go back to basics and look at the networking and storage configuration before you worry about the emulation overhead of file-based iSCSI.

Keep an eye out for Part 2 coming up.