
May 03, 2011


Excellent blog. We are currently trying to implement NPIV in our environment: POWER 770 with NPIV on FCoE adapters, Cisco MDS 9000 switches, and EMC CX4-400 storage. To further complicate things, we have a Cisco DWDM link across our data centres, with matching hardware at the other site. I have been in a long, hard battle with our SAN admins (who are also our VMware admins) over the benefits of NPIV. Their "Windows", button-pushing mentality is causing me grief because of the LPM requirement to manually zone the second ("ghost") WWPN, and because of the question of how many paths they should zone to each EMC SP. They also claim that NPIV is very complicated in terms of all the zoning for each LPAR (4 VFC adapters, with 2 WWPNs each). They have even tried to convince me to just zone one VFC adapter per VIOS and not zone for LPM..hahahah..
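For what it's worth, both WWPNs of each virtual FC client adapter (the active one and the second, "ghost" one that LPM uses) can be read from the HMC command line, so nobody has to hunt for them. A sketch, assuming an HMC session; the managed system and LPAR names are placeholders:

```shell
# List the virtual FC adapters from the partition profile; each
# adapter entry carries its WWPN pair, and BOTH belong in the zone.
# "MY-POWER770" and "mylpar01" are example names - substitute yours.
lssyscfg -r prof -m MY-POWER770 --filter "lpar_names=mylpar01" \
    -F name,virtual_fc_adapters
```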

Anyway, regardless of this battle, I love that NPIV frees me from managing the VIOS disk mappings. We are currently trying to test LPM with NPIV without any luck; it looks like a zoning issue.

Just wanted to know your thoughts on how many VFC adapters we should have on the LPARs. If dual VIOS each have 2 HBAs, with each HBA on a separate fabric, then to provide redundancy and reduce complexity for the SAN team, would it be better to have only 2 VFC adapters per LPAR, one to each VIOS on different fabrics? What is the "IBM best practice" on this?


IMHO, 4 VFC adapters for production LPARs and 2 for non-production LPARs. But it will depend on how many FC adapters you have in each VIOS and on your I/O performance and availability requirements.

I think you've chosen a sensible approach to your configuration. Looks fine to me.

What are the basic steps when migrating from vSCSI to NPIV?

- Remove the vSCSI mappings / vhost adapters
- Create the virtual Fibre Channel (server) adapters on the VIOS
- Create the virtual FC client adapters on the LPAR
- Locate the WWPNs for the destination client Fibre Channel adapters from the HMC, and remap the SAN storage that was originally mapped to the source partition to the WWPNs of the destination partition.
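On the VIOS side, the middle steps above boil down to a handful of commands. A rough sketch from the padmin shell; the adapter and device names are examples only, so check yours with lsdev/lsnports first:

```shell
# On the VIOS (padmin), after the virtual FC server adapter exists:
lsnports                              # confirm the physical port is NPIV-capable
rmvdev -vtd vtscsi0                   # remove the old vSCSI device mapping
vfcmap -vadapter vfchost0 -fcp fcs0   # map the virtual FC adapter to a port
lsmap -all -npiv                      # verify the NPIV mapping and login state
```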

We have implemented NPIV.

We choose (enforce) our own unique WWPNs (something like AB:AB:AB:...); we do not use the HMC/SDMC allocation capability, so we have predictable WWPNs that we can use for zoning (the zoning is preconfigured; we do not script the switches).

Every VIOS client simply has a number associated with it that determines which WWPNs to use. VIOS client creation, WWPN mapping, and LUN association are scripted from a config file.
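A minimal sketch of such a predictable scheme: derive the WWPN pair for a client from its number, with a site-chosen prefix echoing the AB:AB:AB example above. The prefix and byte layout here are invented for illustration; custom WWPNs still have to be entered into the partition profile on the HMC.

```shell
# Derive a predictable 16-hex-digit WWPN pair from a client number.
# "ABABAB0000" is a placeholder site prefix; the last byte (00/01)
# distinguishes the two WWPNs of one virtual FC client adapter.
wwpn_pair() {
  n=$1
  for i in 0 1; do
    printf 'ABABAB0000%04X%02X\n' "$n" "$i"
  done
}

wwpn_pair 7
# -> ABABAB0000000700
#    ABABAB0000000701
```

Because the WWPNs are a pure function of the client number, the fabric zones can be built before the client even exists.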

One advantage of NPIV is that if multipathing software is used on non-NPIV systems, the same software is used in the VIOS NPIV environment, making systems uniform (and offering better testing and integration capabilities).

Bernard D.

You don't need to map both WWPNs in the VIOS, but you do need to include both of them in the proper zone on the fabric to allow Live Partition Mobility.
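On a Cisco MDS fabric like the one described above, that means two pwwn members per client adapter in the zone. A sketch with made-up WWPNs, VSAN, and zone names; the ab:ab:ab pair echoes the earlier commenter's naming scheme, and the 50:06:01:... member stands in for a storage SP front-end port:

```shell
zone name lpar01_fabA vsan 10
  member pwwn ab:ab:ab:00:00:00:07:00
  member pwwn ab:ab:ab:00:00:00:07:01   ! second WWPN, used during LPM
  member pwwn 50:06:01:60:00:00:00:01   ! storage SP port (placeholder)
zoneset name fabA_zs vsan 10
  member lpar01_fabA
zoneset activate name fabA_zs vsan 10
```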

Attempting to move a vSCSI client on one server to NPIV on another server. I followed all the aforementioned steps and initially moved my rootvg with just a single WWPN connection. But when booting I get a 0554 error, and my bootlist shows vscsi0.
If I try to change the bootlist I get "0514-221 bootlist: Unable to make boot device list for hdisk0." Any ideas?

With regard to Jan's NPIV problem: we are unable to use NPIV when the LUN is a rootvg. If someone has found a solution to this problem, please post.
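Not a guaranteed fix, but the 0554 / 0514-221 symptoms above are what you often see when the boot image and firmware boot list still point at the old vSCSI disk. One recovery path, sketched with example device names: boot the LPAR into maintenance mode from media or NIM, access the rootvg, then rebuild the boot image on the NPIV-presented disk.

```shell
# In maintenance mode, with the rootvg accessed (hdisk0 is an example;
# confirm which disk is the NPIV one with lspv and lsdev -Cc disk):
bosboot -ad /dev/hdisk0      # recreate the boot image on the new disk
bootlist -m normal hdisk0    # point the firmware boot list at it
```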

Great article, as usual.

What's your opinion on sharing the same VFC adapters for both disk and tape in a small environment?

We have redundant VIOS, each with only one FC HBA, so we can't really dedicate an HBA to tape.


I like the advice you can find here:

"A few of the issues seen when trying to share the same HBA port for tape and disk I/O are:

1. Nature of the traffic: disk generally does random I/O of short duration, giving rise to higher IOPS, while tape does sequential I/O for higher throughput.
2. Tape and disk devices require incompatible HBA settings for reliable operation and optimal performance characteristics.
3. Tape drivers do tape resets, which result in SAN switch port re-logins, etc."
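Point 2 above is the one that bites in practice on AIX: the fscsi devices in front of tape are commonly tuned for fast failover, and with NPIV those same settings are usually recommended anyway. A sketch with an example device name; verify against your storage and tape vendors' guidance:

```shell
# Typical NPIV-era fscsi settings (also what tape generally wants);
# fscsi0 is an example - list your devices with: lsdev -Cc adapter
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P
# -P stages the change until the adapter is next reconfigured/rebooted
```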
