
January 14, 2014

Comments

Nice idea, but what do you do when a VIOS has more than 10 adapters, which in my experience is the more common scenario? Doesn't the maximum number of virtual adapters on an LPAR impact the hypervisor's reserved memory?

We are using a similar approach, but it gets confusing when you use LPM, since LPAR IDs may change.

Indeed, just as above, I often face scenarios where there are more than 10 adapters per VIOS; in an extreme case I've had over 20 adapters from a single VIOS mapped to each client (LPAR). So the solution above would be rather tricky.

Personally I use odd adapter (slot) numbers for VIOS1 (LPAR_ID=2) and even ones for VIOS2 (LPAR_ID=3).

The full convention looks like this:
${CLIENT_LPAR_ID}${SLOT_NUMBER}

So, for example, for client LPAR ID 7 we would have:

VIOS1 + Adapter 1 == 71
VIOS2 + Adapter 2 == 72
VIOS1 + Adapter 3 == 73
.....
VIOS1 + Adapter 25 == 725

This setup does pose a problem, though, if you have a lot of LPARs (say 99) and at the same time use adapter IDs over 9, since the concatenation becomes ambiguous: LPAR 7 with adapter 25 and LPAR 72 with adapter 5 both yield slot 725. For me the combination of the two has not occurred yet.
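To make the ambiguity concrete, here is a trivial shell sketch (the slot function is hypothetical; the convention is plain string concatenation):

# Hypothetical helper for the ${CLIENT_LPAR_ID}${SLOT_NUMBER} convention
slot() { echo "${1}${2}"; }
slot 7 25   # LPAR 7,  adapter 25 -> 725
slot 72 5   # LPAR 72, adapter 5  -> 725 (the collision described above)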

I would also recommend using rendev to rename the vfchost adapters on the VIOS to suit your naming convention; it makes them much easier to read.
So from our example above, slot 725 == vfchost725.
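For example (a sketch; rendev runs as root on the VIOS, e.g. after oem_setup_env, and vfchost0 stands for whatever device currently occupies slot 725):

# Rename the vfchost device to match its slot number
rendev -l vfchost0 -n vfchost725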

For those interested, I am using the following scripts to create the VFC adapters from the HMC:

# VARS
FRAME="p7_something"
VIO_IDS="2 3"
LPAR_IDS="7 8 9"
ADAPTERS="1 2 3 4"

# Collect info: VFC adapter definitions from the VIOS profiles
for ID in ${VIO_IDS}; do
  lssyscfg -r prof -m ${FRAME} -F lpar_name,virtual_fc_adapters --filter lpar_ids=${ID} | sed 's/["]//g' | sed 's/,/\n/g' | sort -n
done

# ... and the currently active VFC adapters on the VIOS
for ID in ${VIO_IDS}; do
  lshwres -m ${FRAME} -r virtualio --rsubtype fc --level lpar --filter lpar_ids=${ID} \
    -F lpar_name,lpar_id,slot_num,adapter_type,state,is_required,remote_lpar_id,remote_lpar_name,remote_slot_num | sort -n
  printf -v char "%60s" ""; echo "${char// /#}"   # 60-character separator line
done

# Same for the client LPAR profiles
for ID in ${LPAR_IDS}; do
  lssyscfg -r prof -m ${FRAME} -F lpar_name,virtual_fc_adapters --filter lpar_ids=${ID} | sed 's/,"/\n/g' | sed 's/["]//g' | sort -n
done

# ... and the currently active VFC adapters on the clients
for ID in ${LPAR_IDS}; do
  lshwres -m ${FRAME} -r virtualio --rsubtype fc --level lpar --filter lpar_ids=${ID} \
    -F lpar_name,lpar_id,slot_num,adapter_type,state,is_required,remote_lpar_id,remote_lpar_name,remote_slot_num | sort -n
  printf -v char "%60s" ""; echo "${char// /#}"   # 60-character separator line
done

# Run the update on the profiles (NOTE: it only "echo"es the commands, so you get a last validation step!)
# VIOS profile
for VIO in ${VIO_IDS}; do
  VIO_NAME=$(lssyscfg -r lpar -m ${FRAME} -F name --filter lpar_ids=${VIO})
  echo " * Preparing VIO: ${VIO_NAME}"
  addVFC=""
  for ADPT in ${ADAPTERS}; do
    # odd adapters belong to VIOS1 (LPAR_ID=2, even), even adapters to VIOS2 (LPAR_ID=3, odd)
    if [[ $(expr ${VIO} % 2) -eq 0 ]] && [[ $(expr ${ADPT} % 2) -ne 0 ]]; then
      echo " + Preparing adapter: ${ADPT}"
      newVFC=$(for ID in ${LPAR_IDS}; do
        LPAR=$(lssyscfg -r lpar -F name -m ${FRAME} --filter lpar_ids=${ID})
        echo "${ID}${ADPT}/server/${ID}/${LPAR}/${ID}${ADPT}//0"
      done | sed ':a;N;$!ba;s/\n/,/g')   # join the lines with commas
      addVFC="${addVFC},${newVFC}"
    elif [[ $(expr ${VIO} % 2) -ne 0 ]] && [[ $(expr ${ADPT} % 2) -eq 0 ]]; then
      echo " + Preparing adapter: ${ADPT}"
      newVFC=$(for ID in ${LPAR_IDS}; do
        LPAR=$(lssyscfg -r lpar -F name -m ${FRAME} --filter lpar_ids=${ID})
        echo "${ID}${ADPT}/server/${ID}/${LPAR}/${ID}${ADPT}//0"
      done | sed ':a;N;$!ba;s/\n/,/g')
      addVFC="${addVFC},${newVFC}"
    fi
  done
  echo " * VIO: ${VIO_NAME} Profile is ready"
  # the trailing sed strips the leading comma from the adapter list
  echo "chsyscfg -r prof -m ${FRAME} -i 'name=Normal,lpar_id=${VIO},\"virtual_fc_adapters+=${addVFC}\"'" | sed 's/\+\=\,/\+\=/'
done

# LPAR profile (for me always called "Normal")
for ID in ${LPAR_IDS}; do
  addVFC=""
  newVFC=$(for A in ${ADAPTERS}; do
    # even adapter numbers go to VIOS2 (LPAR_ID=3), odd ones to VIOS1 (LPAR_ID=2)
    if [[ $(expr ${A} % 2) -eq 0 ]]; then
      VIO_ID=3
    else
      VIO_ID=2
    fi
    VIO_NAME=$(lssyscfg -r lpar -m ${FRAME} -F name --filter lpar_ids=${VIO_ID})
    echo "${ID}${A}/client/${VIO_ID}/${VIO_NAME}/${ID}${A}//0"
  done | sed ':a;N;$!ba;s/\n/,/g')
  addVFC="${addVFC},${newVFC}"
  echo "chsyscfg -r prof -m ${FRAME} -i 'name=Normal,lpar_id=${ID},\"virtual_fc_adapters+=${addVFC}\"'" | sed 's/\+\=\,/\+\=/'
done

# DLPAR update of the VIOS (adds the server adapters to the running VIOS)
for A in ${ADAPTERS}; do
  for ID in ${LPAR_IDS}; do
    if [[ $(expr ${A} % 2) -eq 0 ]]; then
      VIO_ID=3
    else
      VIO_ID=2
    fi
    echo "chhwres -m ${FRAME} -r virtualio --rsubtype fc -o a -s ${ID}${A} --id ${VIO_ID} -a \"adapter_type=server,remote_lpar_id=${ID},remote_slot_num=${ID}${A}\""
  done
done

# DLPAR update of the LPAR (adds the client adapters, reusing the WWPNs from the profile)
for A in ${ADAPTERS}; do
  for ID in ${LPAR_IDS}; do
    if [[ $(expr ${A} % 2) -eq 0 ]]; then
      VIO_ID=3
    else
      VIO_ID=2
    fi
    # pull the WWPN pair for this slot out of the profile definition
    WWPNS=$(lssyscfg -r prof -m ${FRAME} -F virtual_fc_adapters --filter lpar_ids=${ID} | \
      sed 's/"""/\n/g' | sed 's/"",""/\n/g' | grep "^${ID}${A}" | cut -d"/" -f6)
    echo "chhwres -m ${FRAME} -r virtualio --rsubtype fc -o a -s ${ID}${A} --id ${ID} -a \"adapter_type=client,remote_lpar_id=${VIO_ID},remote_slot_num=${ID}${A},wwpns=\\\"${WWPNS}\\\"\""
  done
done


All the variables and scripts should be assigned and executed directly on the HMC.
They really simplify my work, especially when I need to build many LPARs at the same time ;)

PS: The scripts might not be very readable here, so if anyone is interested, feel free to mail me.

I am so glad you decided on this topic, because just earlier I was thinking about the convention while doing an LPAR migration validation, and the first error that came up had to do with a virtual adapter number not being available on the destination...
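For reference, that validation step is a sketch like the following on the HMC (-o v validates without migrating; the frame and LPAR names here are placeholders):

# Validate an LPM move without performing it; slot conflicts on the target show up here
migrlpar -o v -m ${SRC_FRAME} -t ${DST_FRAME} -p ${LPAR_NAME}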

I use a similar convention, and it comes with the same pros and cons you suggested.

For starters, "Maximum Virtual Adapters" is set to 1000 to accommodate my whole range. Here I quote the HMC help page:

"The higher the maximum number is, the more memory the managed system reserves to manage the virtual adapters, so specify only the number of virtual adapters you are likely to use. You can change this value only when you create or change partition profiles".

Logic will suggest a couple of further pointers one can think about, which I am not going to go into for now (this post is long enough).

So my convention is as such (a small shell sketch after the examples below sums it up):
1. The first digit from the right I allocate to the virtual adapter number; odd numbers go to VIO1 and even numbers go to VIO2.
2. The third digit is allocated to the type of adapter: virtual SCSI is 1 and virtual FC is 2.
3. The second digit is allocated to the LPAR ID. When the LPAR ID is between 10 and 19, the third digit flips to the next odd or even number (vSCSI 1 becomes 3, vFC 2 becomes 4).
4. I keep the low numbers for virtual Ethernet.

So IF:
LPAR ID 7 (below 10) with 4 x vFC, 2 x vSCSI and 2 x vEthernet adapters,

THEN:
VIO1 adapters:
171, 271, 273

VIO2 adapters:
172, 272, 274

LPAR adapters:
2, 3 = Ethernet
171, 172 = vSCSI
271, 272, 273, 274 = vFC

ELSE IF:
LPAR ID 14 (10 or above),

THEN:
VIO1 adapters:
341, 441, 443

VIO2 adapters:
342, 442, 444

LPAR adapters:
2, 3 = Ethernet
341, 342 = vSCSI
441, 442, 443, 444 = vFC
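Here is the promised sketch of the digit rules (a hypothetical helper, just reproducing the convention; the scheme as described covers LPAR IDs up to 19):

# Hypothetical helper: <type digit><LPAR ID mod 10><adapter number>
slot() {
  local LPAR=$1 TYPE=$2 NUM=$3            # TYPE: 1=vSCSI, 2=vFC
  local T=${TYPE}
  [ ${LPAR} -ge 10 ] && T=$((TYPE + 2))   # LPAR IDs 10-19 flip to the next odd/even digit
  echo "${T}$((LPAR % 10))${NUM}"
}
slot 7 2 1    # -> 271 (LPAR 7,  vFC,   adapter 1, odd  -> VIO1)
slot 14 1 2   # -> 342 (LPAR 14, vSCSI, adapter 2, even -> VIO2)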

I have merely one question: can you share more detail about the possible concern regarding the reservation of memory, and why the HMC states "so specify only the number of virtual adapters you are likely to use"?

Thanks and Regards
Jaco Bezuidenhout
Cape Town
South Africa

This is very similar to a numbering scheme that I have been using for the last few years. However, I use a range versus trying to set up and maintain a numbering scheme on the host (aka VIOS or VMware). Virtualization on any platform is so dynamic that maintaining a numbering scheme on the host (which owns the hardware) is very difficult. Below are a few things to keep in mind on IBM Power systems.

Things to keep in mind:
1. If you are using NPIV, the physical port you map the virtual adapter to on the fiber card can change for a number of reasons:
a. The port becomes overutilized
b. LPM can change the port number
c. Installation of additional fiber cards
d. Repurposing of the port
e. Hardware upgrades

2. The VM ID can also change.

3. If for any reason you change something on the host, it may require you to change your total number of virtual slots to maintain the numbering scheme.

4. The total number of virtual slots can only be changed when a VM is down, limiting your flexibility to maintain a numbering scheme.

5. The higher the total number of virtual slots, the more memory is reserved in the hypervisor.
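On point 5, the reservation can be observed from the HMC; a sketch (assuming sys_firmware_mem reports the memory held by system firmware, which includes the hypervisor's virtual-slot structures):

# Compare before and after raising max_virtual_slots to see the firmware memory grow
lshwres -m ${FRAME} -r mem --level sys -F configurable_sys_mem,curr_avail_sys_mem,sys_firmware_mem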

As suggested by Viper6, I don't like the idea of wasting perfectly good memory just to "prop up" some arbitrary schema so that the sysadmin can keep track of what is plumbed where. And as Viper6 also pointed out, this arbitrary schema is thwarted the first time an LPAR is migrated to a different frame anyway. I submit that what is really needed is a VIOS utility that does a better job than "lsmap -all" of reporting these relationships in a compelling format. That's the direction I went. Once you have that sorted, any ol' available virtual slot number will do.
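A rough starting point for such a utility: the VIOS lsmap command can already emit one record per mapping in a parseable form (a sketch; the field selection here is illustrative):

# One colon-delimited line per NPIV mapping: vfchost name, slot location, client, status
lsmap -all -npiv -field name physloc clntname status -fmt :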

Noticed this bug, which may have an impact if high values are used for adapter slots:
http://www-01.ibm.com/support/docview.wss?uid=isg1fixinfo135645

Normally I keep a set of fixed slot numbers for client adapters (10 to 40), and they are fixed for all clients. The maximum number of virtual adapters for clients is set to 48.

I use the following convention.

On clients:
Slot numbers 11, 12, 13 for VNET (can go up to 20 if needed).

Assuming 12 VFC slots (2 for root, 2 for data, 2 for tape from each VIOS):

Slot numbers 21, 23, 25, ... 31 connect to VIOS1 (odd numbers connect to VIOS1).

Slot numbers 22, 24, 26, ... 32 connect to VIOS2 (even numbers connect to VIOS2).

On the VIOS:
LPAR ID + VFC number.
I usually try to keep unique LPAR IDs for the LPARs on each frame, keeping LPM in mind:
LPAR IDs 1-4 for the VIOS on all frames
LPAR IDs 5-50 for frame 1
LPAR IDs 60-100 for frame 2, etc.

Slot numbers on VIOS1 for LPAR 24 (VIOS slot => client slot):
241 => 21
242 => 23
243 => 25
244 => 27
245 => 29
246 => 31

Slot numbers on VIOS2 for LPAR 24 (VIOS slot => client slot):
241 => 22
242 => 24
243 => 26
244 => 28
245 => 30
246 => 32
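A quick sketch of that mapping as a shell function (hypothetical helper, just reproducing the tables above):

# VIOS slot = <LPAR ID><n>; client slot = 19+2n on VIOS1, 20+2n on VIOS2
map_slot() {
  local LPAR=$1 N=$2 VIOS=$3
  local CLIENT=$(( (VIOS == 1 ? 19 : 20) + 2 * N ))
  echo "VIOS${VIOS} slot ${LPAR}${N} => client slot ${CLIENT}"
}
map_slot 24 3 1   # VIOS1 slot 243 => client slot 25
map_slot 24 3 2   # VIOS2 slot 243 => client slot 26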

I use this numbering scheme:

http://earth2baz.net/2014/05/05/powervm-vadapters/

Personally I think it is irrelevant to include the physical adapter on the VIOS in the numbering scheme, as your client will usually only use two fabrics per VIOS; what they are mapped to is just a load-balancing act.

My scheme factors in LPM; however, Chris Gibson pointed out to me that best practice is to keep your virtual adapter count below 1000. As a result I have switched my scheme to 100. Unfortunately this is not suitable for large sites.

Personally I see this limitation as a design flaw in PowerVM slot mapping, namely reserving memory for unused slots.
