By far the most popular post I've written over the last few months has been the one where I showed how the iSCSI initiator in vSphere 4 could be configured to provide multiple paths to each LUN, and how the "Round Robin" path selection policy could then be used to load balance across those paths (up to 8 paths are supported).
One of the few things that disappointed me about the initial vSphere release was that this configuration could only be applied to a standard vSwitch; when trying to associate the VMkernel ports on a dVSwitch with the iSCSI initiator, the following error occurred: "Add Nic Failed in IMA".
Over the last couple of days I've been itching to test this out, and last night I finally got the chance. I also wanted to perform these tasks using the vSphere CLI, as opposed to last time, when I ran esxcli on the service console.
I'll skip the part where I created the dVSwitch, but the same concepts apply: two NICs, two port groups. In my test lab, vmk1 and vmk2 were the VMkernel ports associated with the port groups, as shown in the screenshot below.
Associate vmk1 and vmk2 with vmhba33
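As a rough sketch, the binding can be done from a management workstation with the vSphere CLI's remote esxcli; the host name `esx01.lab.local` is an assumption from my lab, so substitute your own host and credentials, and check that your software iSCSI adapter really is vmhba33:

```shell
# Bind both VMkernel ports to the software iSCSI adapter (vmhba33 here).
# -n / --nic is the VMkernel interface, -d / --adapter is the iSCSI adapter.
esxcli --server esx01.lab.local --username root swiscsi nic add -n vmk1 -d vmhba33
esxcli --server esx01.lab.local --username root swiscsi nic add -n vmk2 -d vmhba33
```

These are the same `swiscsi nic add` operations I previously ran locally on the service console, just invoked remotely through the vSphere CLI.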
Now let's check to make sure that both vmk1 and vmk2 are indeed associated with vmhba33.
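A quick way to verify is to list the VMkernel NICs bound to the adapter (again assuming my lab host name of `esx01.lab.local`); both vmk1 and vmk2 should show up in the output:

```shell
# List the VMkernel interfaces currently bound to the software iSCSI adapter.
esxcli --server esx01.lab.local --username root swiscsi nic list -d vmhba33
```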
At the time of writing I don't have an iSCSI system I can easily point my vSphere host at to confirm this works, but the fact that I can now run the commands without error is promising.
I'll set up a Celerra Simulator over the next day or so and confirm everything works as expected.
I'd be interested in hearing from anyone who's also got this up and running.
To wrap up, here are a couple of errors you might come across and how to fix them.
"Get Hba Oid Failed" when trying to add vmk ports to the iSCSI initiator: in my case this was because the iSCSI initiator was disabled.
If you're using vSphere 4 Update 1 and you're still getting "Add Nic Failed in IMA", it's likely because you have not configured the failover order of the physical NICs correctly: each VMkernel port group used for iSCSI binding must have exactly one active uplink, with the remaining uplinks set to Unused.