Posts tagged: PowerVM

AIX PowerVM Concepts in short


PowerVM concepts

 

CPU entitlement

One of PowerVM's unique features is the ability to give a VM an exact slice of CPU time. In PowerVM, this slice is called a CPU entitlement. Implicit in this concept is that an entitlement can be for less than one CPU. A VM with a fractional entitlement is termed a micropartition. One question this article will answer is whether micropartitions scale linearly; that is, is there any overhead in running with fractional CPUs?
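
As a quick check, the entitlement an AIX VM is running with can be seen from inside the VM with lparstat; the output below is abbreviated and the values are only illustrative:

# lparstat -i
...
Type                 : Shared-SMT
Mode                 : Uncapped
Entitled Capacity    : 0.25
Online Virtual CPUs  : 2
...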

Virtual CPUs

In addition to specifying the CPU entitlement, a VM must also specify the number of vCPUs it will use. These vCPUs are what the VM operating system sees, regardless of the underlying CPU entitlement.

A VM can have up to ten times the number of vCPUs as its entitlement, rounded down, and vCPUs must be specified as whole numbers. For example, a VM with an entitlement of 1 can have up to 10 vCPUs, while a VM with a 0.25 entitlement can have at most 2 vCPUs. In addition, a VM must have at least 1 vCPU for each whole-number entitlement; it is not possible, for example, to have 2 entitlements and only 1 vCPU.
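
For illustration, on the HMC command line an entitlement of 0.25 with 2 vCPUs might be set in a partition profile with something like the following; the managed system, partition, and profile names are hypothetical, and the profile is assumed to already define valid minimum and maximum bounds:

chsyscfg -r prof -m Server-8204-E8A-SN0123456 -i "name=default,lpar_name=lpar01,desired_proc_units=0.25,desired_procs=2"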

The interesting question here is whether a multithreaded workload benefits from having PowerVM or the operating system manage threads. PowerVM manages a multithreaded workload by scheduling additional vCPUs for a VM. The AIX operating system handles multiple threads using context switching between its available CPU threads.
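
How many CPU threads AIX sees per vCPU depends on the SMT setting. On a reasonably recent AIX level, running smtctl with no arguments reports whether SMT is enabled and how many hardware threads each virtual processor presents, and bindprocessor -q lists the logical CPUs AIX can schedule on:

# smtctl
# bindprocessor -q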

Capped compared to uncapped

Another feature of PowerVM is that it allows VMs to use more CPU processing time if needed. If one VM is not using its CPU for a given scheduling interval, other VMs are able to use this CPU time to do more work. To use additional processor cycles, a VM must be in uncapped mode, which means it can consume up to its vCPUs' worth of processing: a VM with a 0.25 entitlement and 2 vCPUs in uncapped mode could use up to 2 CPUs' worth of processing, while the same VM with only 1 vCPU could use at most 1 CPU. In capped mode, the same VM would never use more than its 0.25 entitlement, regardless of its vCPU count. A separate set of benchmarks was run to measure how VMs with 2 vCPUs scale with different entitlements and uncapped settings.
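
This is easy to observe with lparstat: for an uncapped VM, the physical processors consumed (physc) can exceed the entitlement, so %entc rises above 100. The sample below is illustrative for a VM with a 0.25 entitlement and 2 vCPUs under load:

# lparstat 1 3
%user  %sys  %wait  %idle physc %entc  lbusy  vcsw phint
 82.0  10.0    0.0    8.0  0.46 184.0   60.0  1342     2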

The extra CPU cycles used by uncapped virtual machines can come either from other shared-CPU VMs or from dedicated-CPU VMs that allow processor sharing. For dedicated-CPU VMs, this is called donating mode.


PowerVM: Migrating a Logical Volume Backed Virtual Disk to a Different Volume Group


Question

How can I migrate an existing logical volume backed virtual disk to another volume group on my VIO server?

Answer

Note: The logical volume must be closed and not in use.

Run the following commands as the padmin user only.

 

Step 1. Unvirtualize the existing logical volume backed disk that is presented through your current vhost adapter:

rmvdev -vtd vtd_name
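
If the virtual target device (VTD) name is not known, it can be found first with lsmap; vhost0 below is an example adapter name, and the output lists each VTD together with its backing logical volume:

lsmap -vadapter vhost0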

 

Step 2. Copy the current logical volume to the destination volume group. A system-generated name will be given to the new logical volume:

cplv -vg desti_vg_name source_lv_name
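
For example, with a hypothetical source logical volume client1_lv and destination volume group datavg2, and mirroring the syntax above:

cplv -vg datavg2 client1_lv

The command reports the name of the new logical volume it creates (for example lv00); that name is what gets mapped out in step 3.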

 

Step 3. Revirtualize the new logical volume back out through vhost#

mkvdev -vdev lv_name -vadapter vhost# -dev vtd_name
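
Continuing the hypothetical example, the copied logical volume lv00 is mapped back out through vhost0 under a chosen VTD name, and the result can be verified with lsmap:

mkvdev -vdev lv00 -vadapter vhost0 -dev client1_vtd
lsmap -vadapter vhost0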

 

See command man pages for optional flags


NPIV FAQs for PowerVM Virtual I/O Server Environment


Question

This document covers the most frequently asked questions for NPIV in a Virtual I/O Server (VIOS) environment. This applies to VIOS version 2.1 and above.

Answer

  1. What is NPIV?
  2. What are the minimum requirements for NPIV?
  3. What are the supported Fibre Channel adapters for NPIV in a POWER6 System p server?
  4. What are the supported Fibre Channel adapters for the IBM BladeCenter H?
  5. Can I have dual NPIV capable Fibre Channel adapters on the same VIOS in different zones/fabrics?
  6. Is it possible to SAN boot the VIOS using the same Fibre Channel adapter being used to service NPIV traffic?
  7. What are the NPIV limitations?
  8. What is the meaning of lsnports output?
  9. Can a virtual Fibre Channel Server adapter be dynamically moved from one physical NPIV (HBA) adapter to another?

1. What is NPIV?

N_Port ID Virtualization (NPIV) is a standardized method for virtualizing a physical Fibre Channel port. An NPIV-capable Fibre Channel HBA can have multiple N_Ports, each with a unique identity. NPIV, coupled with the Virtual I/O Server (VIOS) adapter sharing capabilities, allows a physical Fibre Channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER logical partitions (LPARs) to have virtual Fibre Channel HBAs, each with a dedicated world wide port name (WWPN). Each virtual Fibre Channel HBA has a unique SAN identity similar to that of a dedicated physical HBA.
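
On the VIOS, each client virtual Fibre Channel HBA corresponds to a vfchost# server adapter that is mapped to a physical NPIV-capable port. Assuming vfchost0 and fcs0 as example device names, the mapping is created and then listed like this:

vfcmap -vadapter vfchost0 -fcp fcs0
lsmap -all -npiv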

2. What are the minimum requirements for NPIV?

Hardware

  • A POWER6-based System p server with one of the following minimum system firmware levels:
    • EL340_039 for IBM Power 520 and 550
    • EM340_036 for IBM Power 560 and 570
  • OR one of the following JS blade types in a BladeCenter H (7989-BCH). Note: NPIV is only supported on the BladeCenter H.
    • JS12 (7998-60X)
    • JS22 (7998-61X)
    • JS23 (7778-23X)
    • JS43 (7778-23X + FC8446)
  • Minimum one supported Fibre Channel adapter (see FAQs 3 and 4)
  • NPIV-enabled SAN switch.
    Only the SAN switch which is attached to the Fibre Channel adapter in the VIOS needs to be NPIV-capable. Other switches in your SAN do not need to be NPIV-capable.

Switch levels

  • Brocade v6.1.0 or later
  • McData v9.7 or later
  • Cisco v3.2 (3) or later

Software

  • HMC 7.3.4 or later
  • Virtual I/O Server 2.1.0.10 (Fixpack 20.1) or later
  • AIX 5.3 TL 9 or later
  • AIX 6.1 TL 2 or later
  • IBM i 6.1 with 6.1.1 LIC or later

3. What are the supported Fibre Channel adapters for NPIV in a POWER6 System p server?

The 8 Gigabit Dual Port Fibre Channel adapter, feature code 5735, is the only supported adapter.

The minimum firmware requirement to enable NPIV for AIX on this adapter is 110305 (# lsmcode -d fcs#).

You can obtain this image from the Microcode downloads site: select Power under Product Group, then Firmware and HMC under Product.

4. What are the supported Fibre Channel adapters for the IBM BladeCenter H?

Fibre Channel Module                             Feature Code
-----------------------------------------------  ----------------
Emulex 8Gb Fibre Channel Expansion Card (CIOv)   8240
QLogic 8Gb Fibre Channel Expansion Card (CIOv)   8242 (dual port)
QLogic 8Gb Fibre Channel Expansion Card (CFFh)   8271

Note: NPIV is only supported on the BladeCenter H and the firmware level of the Fibre Channel adapter must support NPIV.

5. Can I have dual NPIV capable Fibre Channel adapters on the same VIOS in different zones/fabrics?

Yes.

6. Is it possible to SAN boot the VIOS using the same Fibre Channel adapter being used to service NPIV traffic?

With proper zoning, it is possible to SAN boot the VIOS with the same host bus adapter that is servicing the NPIV traffic. The VIOS would need to be running before the NPIV-based traffic can serve the clients.
Use care with this configuration because even a simple error in zoning could disable a VIOS and result in loss of the services it provides.

7. What are the NPIV limitations?

  • NPIV support is only available with 8 Gb Fibre Channel adapters on POWER6 systems. The Fibre Channel switch must support NPIV, but does not need to be 8 Gb; the 8 Gb adapter can negotiate down to 2 Gb and 4 Gb.
  • Maximum number of virtual Fibre Channel (NPIV) adapters per physical Fibre Channel port: 64.
    This is the "tports" value in lsnports output, where tports is the total number of NPIV ports the physical Fibre Channel port can support. By default, it is set to the value of the ODM attribute max_npivs. If the maximum number of NPIVs has not been set (max_npivs=0), this number is the lesser of the number of NPIV ports that can be supported based on the number of target WWPNs and the default value of 64.
  • Maximum number of virtual Fibre Channel (NPIV) adapters per client: unlimited.
  • Maximum number of WWPNs supported by the physical Fibre Channel port: 2048.
  • Maximum number of WWPN "pairs": 32,000.
    Once deleted, WWPN pairs are not reused. If you run out of WWPNs, you must obtain an activation code that includes another prefix with another 32,000 pairs of WWPNs.
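
If the adapter exposes the max_npivs ODM attribute mentioned above, its current value can be checked from the padmin shell with lsdev; fcs0 is an example device name:

lsdev -dev fcs0 -attr max_npivs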

8. What is the meaning of lsnports output?

$ lsnports
name  physloc                     fabric tports aports swwpns awwpns
fcs3  U789D.001.DQDYKYW-P1-C6-T2       1     64     63   2048   2046
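
Reading the columns: name and physloc identify the physical Fibre Channel port and its location code; fabric indicates whether the attached switch port supports NPIV (1) or not (0); tports and aports are the total and currently available NPIV ports on that physical port; swwpns and awwpns are the total and currently available WWPNs it can support.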

9. Can a virtual Fibre Channel Server adapter be dynamically moved from one physical NPIV adapter to another?

It is possible to dynamically remap a vfchost adapter to another physical NPIV-capable Fibre Channel adapter (fcs#) using the vfcmap command. The ability to do so depends on the AIX client and VIO server levels. It requires:

  • Virtual I/O Server 2.1.2 or higher
  • AIX 5.3 TL 11 or AIX 6.1 TL 4 (or higher), with the corresponding APAR installed:
    • IZ33540 (5.3 TL 11)
    • IZ51404 (5.3 TL 12)
    • IZ33541 (6.1 TL 4)
    • IZ51405 (6.1 TL 5)

To verify whether the APAR is installed, run:

# instfix -ik IZ#####

In the following example, we remap vfchost0 from fcs0 to fcs1 (while the client is up and running):

    To unmap, run: vfcmap -vadapter vfchost0 -fcp
    To remap, run: vfcmap -vadapter vfchost0 -fcp fcs1
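
To confirm the new mapping afterwards, the NPIV mapping for the adapter can be listed (vfchost0 as in the example above):

lsmap -vadapter vfchost0 -npiv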

 
