Posts tagged: NPIV

What is NPIV (N_Port ID Virtualization) and NPV (N_Port Virtualization)?


First, though, I need to cover some basics. This is unnecessary for those of you who are Fibre Channel experts, but for the rest of the world it might be useful:

  • N_Port: An N_Port is an end node port on the Fibre Channel fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage array.
  • F_Port: An F_Port is a port on a Fibre Channel switch that is connected to an N_Port. So, the port into which a server’s HBA or a storage array’s target port is connected is an F_Port.
  • E_Port: An E_Port is a port on a Fibre Channel switch that is connected to another Fibre Channel switch. The connection between two E_Ports forms an Inter-Switch Link (ISL).

There are other types of ports as well—NL_Port, FL_Port, G_Port, TE_Port—but for the purposes of this discussion these three will get us started. With these definitions in mind, I’ll start by discussing N_Port ID Virtualization (NPIV).

N_Port ID Virtualization (NPIV)

Normally, an N_Port would have a single N_Port_ID associated with it; this N_Port_ID is a 24-bit address assigned by the Fibre Channel switch during the FLOGI process. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it.
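To make that 24-bit address concrete: it is conventionally read as three bytes, Domain.Area.Port. A quick illustrative breakdown (the specific values here are made up):

N_Port_ID 0x010A00
   Domain : 0x01  (identifies the switch that assigned the address)
   Area   : 0x0A  (typically identifies the switch port)
   Port   : 0x00  (distinguishes individual logins on that port)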

What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. After the normal FLOGI process, an NPIV-enabled physical N_Port can subsequently issue additional commands to register more WWPNs and receive more N_Port_IDs (one for each WWPN). The Fibre Channel switch must also support NPIV, as the F_Port on the other end of the link would “see” multiple WWPNs and multiple N_Port_IDs coming from the host and must know how to handle this behavior.
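On a Cisco MDS switch, for example, you can see this in the FLOGI database: multiple FCIDs and WWPNs appear on the same physical interface. A hedged sketch (the interface, VSAN, FCIDs, and WWPNs are illustrative):

switch# show flogi database
INTERFACE   VSAN   FCID       PORT NAME                 NODE NAME
fc1/1       10     0x010a00   10:00:00:00:c9:aa:bb:01   20:00:00:00:c9:aa:bb:01
fc1/1       10     0x010a01   c0:50:76:00:11:22:00:02   c0:50:76:00:11:22:00:03
fc1/1       10     0x010a02   c0:50:76:00:11:22:00:04   c0:50:76:00:11:22:00:05

The first entry is the physical N_Port's own FLOGI; the next two are NPIV logins (sent as FDISC rather than FLOGI) sharing the same F_Port.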

Once all the applicable WWPNs have been registered, each of these WWPNs can be used for SAN zoning or LUN presentation. There is no distinction between the physical WWPN and the virtual WWPNs; they all behave in exactly the same fashion and you can use them in exactly the same ways.
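For example, on a Brocade fabric you would zone a virtual WWPN exactly as you would a physical one. A minimal sketch using standard Brocade FOS zoning commands (the alias, zone, and config names, the WWPN, and the pre-existing array1_port0 alias are all hypothetical):

admin> alicreate "vm1_vhba", "c0:50:76:00:11:22:00:02"
admin> zonecreate "z_vm1_array1", "vm1_vhba; array1_port0"
admin> cfgadd "prod_cfg", "z_vm1_array1"
admin> cfgenable "prod_cfg"

Nothing in the zoning configuration knows or cares that vm1_vhba is an NPIV-registered virtual WWPN rather than a physical one.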

So why might this functionality be useful? Consider a virtualized environment, where you would like to be able to present a LUN via Fibre Channel to a specific virtual machine only:

  • Without NPIV, it’s not possible because the N_Port on the physical host would have only a single WWPN (and N_Port_ID). Any LUNs would have to be zoned and presented to this single WWPN. Because all VMs would be sharing the same WWPN on the one single physical N_Port, any LUNs zoned to this WWPN would be visible to all VMs on that host because all VMs are using the same physical N_Port, same WWPN, and same N_Port_ID.
  • With NPIV, the physical N_Port can register additional WWPNs (and N_Port_IDs). Each VM can have its own WWPN. When you build SAN zones and present LUNs using the VM-specific WWPN, then the LUNs will only be visible to that VM and not to any other VMs.

Virtualization is not the only use case for NPIV, although it is certainly one of the easiest to understand.

As an aside, it’s interesting to me that VMotion works and is supported with NPIV as long as the RDMs and all associated VMDKs are in the same datastore. Looking at how the physical N_Port has the additional WWPNs and N_Port_IDs associated with it, you’d think that VMotion wouldn’t work. I wonder: does the HBA on the destination ESX/ESXi host have to “re-register” the WWPNs and N_Port_IDs on that physical N_Port as part of the VMotion process?

Now that I’ve discussed NPIV, I’d like to turn the discussion to N_Port Virtualization (NPV).

N_Port Virtualization

While NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches in order to scale the size of your fabric. There is therefore an inherent conflict between trying to reduce the overall number of switches in order to keep the domain ID count low while also needing to add switches in order to have a sufficiently high port count. NPV is intended to help address this problem.

NPV introduces a new type of Fibre Channel port, the NP_Port. The NP_Port connects to an F_Port and acts as a proxy for other N_Ports on the NPV-enabled switch. Essentially, the NP_Port “looks” like an NPIV-enabled host to the F_Port on the other end. An NPV-enabled switch will register additional WWPNs (and receive additional N_Port_IDs) via NPIV on behalf of the N_Ports connected to it. The physical N_Ports don’t have any knowledge this is occurring and don’t need any support for it; it’s all handled by the NPV-enabled switch.

Obviously, this means that the upstream Fibre Channel switch must support NPIV, since the NP_Port “looks” and “acts” like an NPIV-enabled host to the upstream F_Port. Additionally, because the NPV-enabled switch now looks like an end host, it no longer needs a domain ID to participate in the Fibre Channel fabric. Using NPV, you can add switches and ports to your fabric without adding domain IDs.
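On Cisco gear, for instance, this pairing is explicit in the configuration. A hedged sketch using Cisco NX-OS syntax (note that enabling NPV mode erases the edge switch's configuration and forces a reboot, so plan ahead):

! On the upstream (core) switch: accept multiple logins per F_Port
core-switch(config)# feature npiv

! On the edge switch: switch into NPV mode; it no longer consumes a domain ID
edge-switch(config)# feature npv

! Afterwards, verify the NP uplinks from the edge switch
edge-switch# show npv status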

So why is this functionality useful? There is the immediate benefit of being able to scale your Fibre Channel fabric without having to add domain IDs, yes, but in what sorts of environments might this be particularly useful? Consider a blade server environment, like an HP c7000 chassis, where there are Fibre Channel switches in the back of the chassis. By using NPV on these switches, you can add them to your fabric without having to assign a domain ID to each and every one of them.

Here’s another example. Consider an environment where you are mixing different types of Fibre Channel switches and are concerned about interoperability. As long as there is NPIV support, you can enable NPV on one set of switches. The NPV-enabled switches will then act like NPIV-enabled hosts, and you won’t have to worry about connecting E_Ports and creating ISLs between different brands of Fibre Channel switches.


An example: Installing a new AIX LPAR using NPIV and a Storwize V7000


Summary:

 

1. Create an AIX LPAR.
2. Create the server Virtual Fibre Channel Adapters on the VIOS.
3. On the VIOS, identify the new vfchost devices.
4. Create the client Virtual Fibre Channel Adapters.
5. Save the new configuration of the VIOS.
6. Map the vfchost devices to physical adapters.
7. Identify the WWPNs assigned to the new LPAR.
8. Configure NIM to install AIX on it.
9. Start the LPAR for the first time.
10. Check that the vfchost status is logged_in.
11. Create the zones on the SAN switch.
12. Reset the LPAR.
13. Define the disks.
14. The installation starts.

 

Detailed steps:

 

1. On the HMC, create an AIX LPAR as usual. For the LHEA, select the IVE card and choose a free port (one not used by another LPAR).

HMC
> Systems Management
> Servers
> POWER Server
> Configuration
> Create Logical Partition
> AIX or Linux
> Complete the wizard steps

 

Note: It is advisable to define in advance the resources that will be used to create the LPAR (CPU, memory, adapters, etc.).

2. On the HMC, go to vionodo1h, then Dynamic Logical Partitioning, and create a Virtual Fibre Channel Adapter with an unused identifier.

 

HMC
> vionodo1h
> Dynamic Logical Partitioning
> Virtual Adapter
> Actions
> Create Virtual Adapter
> Fibre Channel Adapter
vionodo1h: server adapter ID 4, for client nimsuma with client adapter ID 3
vionodo2h: server adapter ID 4, for client nimsuma with client adapter ID 4
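If you prefer the HMC command line to the GUI, the same server adapter can be added via DLPAR with chhwres. A sketch, assuming the managed system is named POWER1 (adjust the system, partition, and slot values to your environment):

hscroot@hmc:~> chhwres -r virtualio --rsubtype fc -m POWER1 -o a \
   -p vionodo1h -s 4 \
   -a "adapter_type=server,remote_lpar_name=nimsuma,remote_slot_num=3"

Repeat on vionodo2h with remote_slot_num=4.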

3. Then run cfgdev on each VIOS. If you run lsdev before and after, you should see a new vfchost device.

 

vionodo1h:
$ lsdev | grep ^vfc
vfchost0         Available   Virtual FC Server Adapter
vfchost1         Available   Virtual FC Server Adapter
vfchost2         Available   Virtual FC Server Adapter
vfchost3         Available   Virtual FC Server Adapter
vfchost4         Available   Virtual FC Server Adapter
vfchost6         Available   Virtual FC Server Adapter
vfchost7         Available   Virtual FC Server Adapter
vfchost8         Available   Virtual FC Server Adapter
$ cfgdev

$ lsdev | grep ^vfc
vfchost0         Available   Virtual FC Server Adapter
vfchost1         Available   Virtual FC Server Adapter
vfchost2         Available   Virtual FC Server Adapter
vfchost3         Available   Virtual FC Server Adapter
vfchost4         Available   Virtual FC Server Adapter
vfchost5         Available   Virtual FC Server Adapter  <-- new
vfchost6         Available   Virtual FC Server Adapter
vfchost7         Available   Virtual FC Server Adapter
vfchost8         Available   Virtual FC Server Adapter

vionodo2h:
$ lsdev | grep ^vfc
vfchost0         Available   Virtual FC Server Adapter
vfchost1         Available   Virtual FC Server Adapter
vfchost3         Available   Virtual FC Server Adapter
vfchost4         Available   Virtual FC Server Adapter
vfchost6         Available   Virtual FC Server Adapter
vfchost7         Available   Virtual FC Server Adapter
vfchost9         Available   Virtual FC Server Adapter
$ cfgdev

$ lsdev | grep ^vfc
vfchost0         Available   Virtual FC Server Adapter
vfchost1         Available   Virtual FC Server Adapter
vfchost2         Available   Virtual FC Server Adapter  <-- new
vfchost3         Available   Virtual FC Server Adapter
vfchost4         Available   Virtual FC Server Adapter
vfchost6         Available   Virtual FC Server Adapter
vfchost7         Available   Virtual FC Server Adapter
vfchost9         Available   Virtual FC Server Adapter
4. In the profile of the LPAR created in step 1, create the client Virtual Fibre Channel Adapters, pairing each one with the VIOS and identifier from step 2. Mark them as required for LPAR startup.

 

HMC
> nimsuma
> Configuration
> Manage Profiles
> Default
> Virtual Adapters
> Actions
> Create Virtual Adapter
> Fibre Channel Adapter

Adapter #3: VIO vionodo1h, server adapter ID 4.
Adapter #4: VIO vionodo2h, server adapter ID 4.
> OK
> Close

 

5. Save the current configuration of each VIO server as the Default profile, so it is preserved if a VIOS restarts.

HMC
> vionodo1h
> Configuration
> Save Current Configuration
> Accept

HMC
> vionodo2h
> Configuration
> Save Current Configuration
> Accept

6. On the VIOS, create the mappings with the vfcmap command, alternating fcs0 and fcs1.

 

vionodo1h:

$ vfcmap -vadapter vfchost5 -fcp fcs1
$ lsmap -npiv -vadapter vfchost5
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost5      U8233.E8B.065864P-V1-C4                 3

Status:NOT_LOGGED_IN
FC name:fcs1                    FC loc code:U78A0.001.DNWK388-P1-C1-T2
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name:                VFC client DRC:

vionodo2h:

$ vfcmap -vadapter vfchost2 -fcp fcs0
$ lsmap -npiv -vadapter vfchost2
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost2      U8233.E8B.065864P-V2-C4                 3

Status:NOT_LOGGED_IN
FC name:fcs0                    FC loc code:U78A0.001.DNWK388-P1-C3-T1
Ports logged in:0
Flags:4<NOT_LOGGED>
VFC client name:                VFC client DRC:

7. Obtain the WWPNs corresponding to the LPAR created (these are what will be given to the Storage team):

HMC
> Select the LPAR from the first step
> Configuration
> Manage Profiles
> Choose Default
> Virtual Adapters
> Client Fibre Channel vionodo1h
> Actions
> Properties
> WWPNs: c050760376c40064

 

7.1 Repeat the same procedure for vionodo2h.
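Alternatively, the WWPNs can be read from the HMC command line in one shot. A sketch, again assuming the managed system is named POWER1:

hscroot@hmc:~> lssyscfg -r prof -m POWER1 --filter "lpar_names=nimsuma" \
   -F virtual_fc_adapters

The virtual_fc_adapters attribute includes the pair of WWPNs generated for each client adapter.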

 

8. Configure NIM to install AIX on the new LPAR.

# vi /etc/hosts    -> add the name of the LPAR from step 1 and an available IP.
10.1.4.253       nimsuma

# smitty nim
> Perform NIM Administration Tasks
> Manage Machines
> Define a Machine

# smitty nim
> Perform NIM Administration Tasks
> Define a Resource
> spot

# smitty nim_bosinst
> Select the client we previously defined.
> Select Installation Type: “spot - Install a SPOT copy”
> Select the SPOT to use for the installation    
> Set “ACCEPT new license agreements?” to yes
> Set “Initiate reboot and installation now?” to no
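The same bos_inst allocation can be done directly with the nim command instead of smitty. A sketch, assuming a hypothetical SPOT resource named spot_71 (and the client nimsuma defined above):

# nim -o bos_inst -a source=spot -a spot=spot_71 \
      -a accept_licenses=yes -a boot_client=no nimsuma

Here boot_client=no matches the “Initiate reboot and installation now? no” choice in the smitty panel: the LPAR will be booted manually from SMS in the next step.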

 

9. Start the LPAR in SMS mode and configure the network with the same IP that is in /etc/hosts on the NIM server, in order to boot from it.

A. Select 2 for setup remote IPL.
B. Select 1 for first ethernet.
C. Select 1 for IPV4.
D. Select 1 for bootp.
E. Select 1 for IP parameters.
1.  client: 10.1.4.253
2.  server: 10.1.4.254
3.  Gateway: 10.1.4.11
4.  Subnet: 255.255.255.0
F. Hit ESC.

10. On the VIOS, run lsmap -npiv -vadapter vfchostX to verify that the status is now LOGGED_IN.
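A sketch of what the relevant lsmap fields should look like once the client adapter has logged in (the values here are illustrative):

$ lsmap -npiv -vadapter vfchost5
...
Status:LOGGED_IN
FC name:fcs1                    FC loc code:U78A0.001.DNWK388-P1-C1-T2
Ports logged in:1
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0            VFC client DRC:U8233.E8B.065864P-V3-C3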

11. The Storage team should now modify the SAN zoning to integrate the new LPAR.

In the Fabric
Aliases are created:
nimsuma_hba1
c0:50:76:03:76:c4:00:66
nimsuma_hba2
c0:50:76:03:76:c4:00:64

Zones are created:
v7000nimsumahba1
nimsuma_hba1; stg_v70001; stg_v70002; stg_v70003; stg_v70004

v7000nimsumahba2
nimsuma_hba2; stg_v70001; stg_v70002; stg_v70003; stg_v70004
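On a Brocade-based fabric, those definitions map to commands roughly like the following (a sketch: stg_v70001 through stg_v70004 are assumed to be pre-existing aliases for the V7000 ports, and the zoning config name prod_cfg is hypothetical):

admin> alicreate "nimsuma_hba1", "c0:50:76:03:76:c4:00:66"
admin> alicreate "nimsuma_hba2", "c0:50:76:03:76:c4:00:64"
admin> zonecreate "v7000nimsumahba1", "nimsuma_hba1; stg_v70001; stg_v70002; stg_v70003; stg_v70004"
admin> zonecreate "v7000nimsumahba2", "nimsuma_hba2; stg_v70001; stg_v70002; stg_v70003; stg_v70004"
admin> cfgadd "prod_cfg", "v7000nimsumahba1; v7000nimsumahba2"
admin> cfgenable "prod_cfg"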

 

12. Reset LPAR.

 

13. At this point the LPAR’s WWPNs are visible from the storage enclosure. The Storage team proceeds to perform the necessary definitions on the storage unit.

 

In V7000
The host is created:
nimsuma 
c0:50:76:03:76:c4:00:66
c0:50:76:03:76:c4:00:64

Volumes are created:
nimsuma_rootvg
20 GB - NL_SAS
nimsuma_datos
50 GB - NL_SAS
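On the V7000 CLI, the equivalent definitions look roughly like this (a sketch: the pool name nlsas_pool and the I/O group are assumptions):

svctask mkhost -name nimsuma -fcwwpn C050760376C40066:C050760376C40064
svctask mkvdisk -mdiskgrp nlsas_pool -iogrp 0 -size 20 -unit gb -name nimsuma_rootvg
svctask mkvdisk -mdiskgrp nlsas_pool -iogrp 0 -size 50 -unit gb -name nimsuma_datos
svctask mkvdiskhostmap -host nimsuma nimsuma_rootvg
svctask mkvdiskhostmap -host nimsuma nimsuma_datos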

14. The installation starts.

 

A. Select 1 for boot device.
B. Select 6 for network.
C. Select 1 for bootp.
D. Select 1 for first ethernet.
E. Select 2 for normal boot mode.
F. Select 1 for yes, I want to exit; the tftp boot should now start up.
G. After around 30,000 packets the console prompt should appear as follows:
 
Select 1 for English during install.

 

Notes:

The LPAR created in this example is called nimsuma. To implement NPIV in a POWER environment it is necessary to use VIO Server; in this case our (redundant) VIOS are called vionodo1h and vionodo2h. The storage used was an IBM Storwize V7000, on which we created two LUNs: one for the installation, called nimsuma_rootvg, and one for data, called nimsuma_datos.

 


Logging NPIV WWPN into SAN switch before AIX installation


Question:

How can I log the NPIV WWPNs into the SAN switch before installing AIX on an LPAR that boots from an NPIV LUN?

 

Answer:

 

There are a few methods you can use:

 

Method 1:

Allocate the NIM lpp_source and SPOT resources to the new LPAR on the NIM server.

Start up the new LPAR and boot it from the network.

When it reaches the installation menu, the WWPNs of the virtual adapters will be logged in to the SAN switch.

 

Method 2:

 

Boot the LPAR and press 8 to get into firmware mode.

> ioinfo

Select 6 for FC Adapter.
Then select the HBA you want and run the device list; this will perform the login and return any devices found.

Note: You can only log in one virtual adapter at a time.

Method 3:

chnportlogin and lsnportlogin – log in Virtual Fibre Channel adapters for NPIV client LPARs

There are two new HMC (V7.7.3.0) commands that can force a client Virtual Fibre Channel adapter to log in to a SAN. This should make life easier for AIX and SAN administrators: they will no longer need to install AIX just to get new VFC adapters to log in to the SAN (although there was already an unsupported method for doing this; see the links below), and the SAN admins will no longer need to “blind” zone the WWPNs.

 There was some indication of this in the latest VIOS FP readme:

 

https://www-304.ibm.com/support/docview.wss?rs=0&uid=isg400000693

 

  • Enabled SAN login from VIOS

And there was this:

IZ95569: ENABLE SAN LOGIN FROM VIOS FOR IMPROVED USABILITY

https://www-304.ibm.com/support/docview.wss?uid=isg1IZ95569

 

Problem summary

Customer will be able to instruct the VIOS to login to the SAN for a given WWPN on a given virtual adapter to allow the customer to see the WWPN in SAN management tools and easing the task of configuring new NPIV client partitions.

Problem conclusion

Add commands to the VIOS manager interface to allow the HMC to login, logout and query virtual adapters.

 

And the manpage of these commands:

 

chnportlogin:  http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7edm/chnportlogin.html

lsnportlogin:   http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7edm/lsnportlogin.html
The new lsnportlogin and chnportlogin commands on the V7.7.3.0 HMC provide the ability to utilize the new function in the VIOS. From the Readme:

Added the chnportlogin and lsnportlogin commands.

 

1.  The chnportlogin command allows you to perform N_Port login and logout operations for virtual fibre channel client adapters that are configured in a partition or a partition profile. Use this command to help you in zoning WWPNs on a Storage Area Network (SAN). A login operation activates all inactive WWPNs, including the second WWPN in the pair assigned to each virtual fibre channel client adapter; this is particularly useful for Logical Partition Migration. A logout operation deactivates all WWPNs not in use. A successful login of a virtual fibre channel client adapter requires that the corresponding virtual fibre channel server adapter exists and is mapped.
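For example, to activate all of an LPAR's inactive WWPNs before installing AIX (a sketch reusing the managed-system and partition names from the lsnportlogin output below):

hscroot@HMC1:~> chnportlogin -o login -m 770-frame-1 -p nim1

Once zoning and LUN mapping are done, the WWPNs can be deactivated again with -o logout.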

 

2.  The lsnportlogin command also allows you to list WWPN login status information for virtual fibre channel client adapters configured in partitions or partition profiles.

 

 

Here’s an example of using the lsnportlogin command on one of my systems:

hscroot@HMC1:~> lsnportlogin -m 770-frame-1 --filter "profile_names=normal"

lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=32,wwpn=c060760405f0000c,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=32,wwpn=c060760405f0000d,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=33,wwpn=c060760405f0000e,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=33,wwpn=c060760405f0000f,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=30,wwpn=c060760405f00000,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=30,wwpn=c060760405f00001,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=31,wwpn=c060760405f00002,wwpn_status=0
lpar_name=nim1,lpar_id=3,profile_name=normal,slot_num=31,wwpn=c060760405f00003,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=32,wwpn=c060760405f00004,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=32,wwpn=c060760405f00005,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=33,wwpn=c060760405f00006,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=33,wwpn=c060760405f00007,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=30,wwpn=c060760405f00008,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=30,wwpn=c060760405f00009,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=31,wwpn=c060760405f0000a,wwpn_status=0
lpar_name=aix01adm,lpar_id=4,profile_name=normal,slot_num=31,wwpn=c060760405f0000b,wwpn_status=0

 

 

Descriptions of selected command attributes:

wwpn_status

The WWPN status.  Possible values are:

0 – WWPN is not activated

1 – WWPN is activated

2 – WWPN status is unknown
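Putting the two commands together: after a chnportlogin login operation, a quick way to confirm that everything activated is to look for any WWPNs still reporting status 0 (a sketch reusing the names above; lpar_names is assumed to be a valid filter here, as it is for other HMC list commands):

hscroot@HMC1:~> lsnportlogin -m 770-frame-1 --filter "lpar_names=nim1" | grep wwpn_status=0

No output means every WWPN on nim1 is logged in.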
