Blog
i Can

July 22, 2014

Navigator Search

Search and favorites are two of the new features added to Navigator in IBM i 7.2. You will find the search feature in the upper portion of the left-frame navigation area. The Favorites section is not far away – an expandable list near the top of the left-frame navigation area.

07222014 Navigator Search_DawnMay1

Navigator has a lot of new functionality with the 7.2 release (we can probably anticipate that some of these new features will eventually find their way back to the 7.1 release), but finding the task you want may be difficult if you are not sure where it’s located in the navigation tree. In addition, I know many of you are very comfortable with your green-screen interfaces and may not know the equivalent function in the GUI. The search feature helps with both of these scenarios.

For example, you may know that IBM provided support for PTF tasks in Navigator with the 7.2 release, but not where to find those tasks. If you type PTF in the search box, you will be shown the PTF tasks:

07222014 Navigator Search_DawnMay2

If you know exactly where to navigate in Navigator, you’d find these PTF tasks under Configuration and Service:

07222014 Navigator Search_DawnMay3

You may also know your green-screen command but not know the equivalent task in Navigator. You can type the command name in the search box and the results will provide a link to the GUI task. Let’s say you’re used to using Work with Active Jobs (WRKACTJOB) – simply type in the command and the search results will come up. Clicking on a result will take you directly to the task.

07222014 Navigator Search_DawnMay4

Now let’s assume you want to find all tasks associated with something – for example, all the job tasks. If you type job in the search box, you will get a list of the job tasks. In the case of jobs, the list is long, and you will find a “more…” link at the bottom of the list. If you click that “more…” link, a new tab will open in the Navigator browser with the complete list of tasks. This approach can be useful when you want to quickly see all the tasks available for the area you are interested in.

07222014 Navigator Search_DawnMay5

As with any search feature, you may need to play around with the search string to get the results you’re looking for.

After you have found the task(s) you have been searching for, Favorites can help you easily find them again. In next week’s blog, I’ll write more about favorites.

 

 

July 16, 2014

How to End Jobs That Are Now Held for Maximum CPU or Temporary Storage Usage

Quite some time ago, I wrote the blog IBM i 7.1: Jobs Exceeding Their CPU or Storage Limits are Now Held. Earlier this year, the question was asked: “Hi, What if we WANT the job to end if it exceeds the max CPU? Can this be specified in the class?” I try to respond to most of the questions that are asked and this blog article provides an answer to that comment.

As was written in the referenced blog, PTFs in 7.1 changed the behavior of the system when jobs exceed their maximum CPU or maximum temporary storage limits; rather than being ended, the jobs are held and the following messages are sent to QSYSOPR:

  • CPI112D – Job held by the system, CPUTIME limit exceeded
  • CPI112E – Job held by the system, MAXTMPSTG limit exceeded

Holding jobs when these limits are hit is also the behavior in the IBM i 7.2 release.

However, some users still want the jobs to be automatically ended when these conditions occur and not have to manually manage jobs that have exceeded their limits.

You can accomplish this by using your favorite message monitoring solution to automatically monitor for the messages that are sent when CPU or temporary storage limits have been reached. You can do this with Management Central message monitors or the message monitors added to Navigator in 7.2. There are also a variety of message monitoring solutions provided by vendors other than IBM.

Another way is to use watches, which would be my preferred method. Why? I simply love message watches for solutions such as this because they are extremely efficient in their implementation and relatively easy to set up. Read the blog Automate Monitoring with Watches if you are not familiar with them.

You can set up a watch to monitor for the above messages. When one of the messages is sent to QSYSOPR, your watch program runs and determines the action to take. To get the old behavior back, you simply end the job; your watch program is passed the message replacement text that identifies the affected job. Watch programs can be as sophisticated as you need – you can check for specific job names and selectively end those you know you always want to end, while other jobs remain held for manual management.
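The selective-ending logic described above can be sketched in a few lines. This is only an illustration of the decision a watch program might make – the job names, function names, and the assumption that the qualified job name has already been parsed out of the CPI112D/CPI112E replacement text are all hypothetical, not IBM-provided code.

```python
# Hypothetical sketch of the decision logic a watch exit program might apply
# once the affected job has been identified from the message replacement text.

ALWAYS_END = {"NIGHTBATCH", "ADHOCQRY"}  # illustrative job names we always end


def action_for_job(job_name: str) -> str:
    """Return the action the watch program should take for a held job."""
    if job_name in ALWAYS_END:
        return "ENDJOB"       # restore the old behavior: end the job
    return "LEAVE_HELD"       # leave it held for manual management


def endjob_command(number: str, user: str, name: str) -> str:
    # Build the CL command string the watch program could submit
    # (for example, via QCMDEXC) for the qualified job name.
    return f"ENDJOB JOB({number}/{user}/{name}) OPTION(*IMMED)"
```

A real watch program would extract the job number, user, and name from the message data before applying a check like this.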

The IBM support article Jobs are Held Instead of ended CPI112D and CPI112E (How to revert back) has an example watch program along with instructions on how to set up the watch.

 

 

July 08, 2014

IBM i 7.2 Improved Temporary Storage Tracking (Part 3)

I’m continuing the series of blogs on improved temporary storage tracking in IBM i 7.2 – part 1 covered the changes made to provide improved tracking and part 2 covered the changes that help you understand temporary storage consumption at a system level. You may want to read those blogs if you have not done so already.

This week, I want to review the enhancements that allow you to better view and understand where the temporary storage is being used from a job-specific perspective. If you have discovered a problem with temporary storage consumption on your IBM i partition, you generally detect the issue at the system level but will need to identify the root cause. The problem is often caused by a job (or jobs) that is consuming the storage.

 

Active Jobs

Using Navigator for i to view Active Jobs (or active jobs by memory pool, or active jobs by subsystem), the temporary storage used column will be displayed by default. Simply click on the column heading (twice) to sort the table by the jobs that are using the most temporary storage. (Yes, I’ve already asked the Navigator development team to add a way to select the sort order with just one click….)

Also, a reminder about using the Active Jobs interface on Navigator – if you want to see updated values displayed in the table, don’t forget to click the “refresh” button – the table is not dynamically updated. (Yes, I’ve already asked the Navigator development team to add dynamic updates….)

The screen capture below shows the Active Jobs display sorted by the temporary storage used column.

Dawn1

 

Work with Active Jobs (WRKACTJOB)

For those of you who prefer to use the green-screen interface to manage IBM i, you will appreciate the enhancement to WRKACTJOB that adds temporary storage as a column. Simply use WRKACTJOB and press F11 twice to display the new column. You can sort on the column in the usual way by placing your cursor on the column heading and pressing F16. In addition, if you prompt the WRKACTJOB command, there is a new value, *TMPSTG, on the sequence (SEQ) parameter that immediately displays the view sorted by the temporary storage column.

The screen capture below shows the WRKACTJOB display sorted by the temporary storage column.

Dawn2

 

Work with Job

There is also a minor change to a job’s run attributes. Prior to 7.2, you could work with a job and see the maximum amount of temporary storage allowed and the current amount used. 7.2 enhances this by also including the peak amount used by that particular job.

Dawn3

Interestingly, this support is only on the green screen WRKJOB (option 3) and not on the job properties GUI. (Yes, I’ve already asked the Navigator development team to add the peak temporary storage used to the GUI….)

As an aside, most of you should already know that you configure the maximum temporary storage allowed by a job on the class object. In 7.2, the MAXTMPSTG parameter on the class object is now in megabytes rather than kilobytes. Read the blog Jobs Exceeding their CPU or Storage Limits are now Held on why you should start setting the maximum temporary storage value.

 

Temporary Storage Details – buckets per job

In part 2, I wrote about how you can view the temporary storage details to find a table of all of the temporary storage buckets along with their size information. That article noted that there is a bucket for each active job; the temporary storage details view is another way to find the jobs that are using the temporary storage in addition to Navigator’s Active Jobs task.

Remember, you can use filters to subset what you see, so you could add a filter on the “Job Status” column to show only those entries that start with “*” – this is a simple way to filter out the global entries and see the temporary storage information only for jobs. You can then sort the bucket current size column to find the job(s) using the most temporary storage. The nice thing about this approach is that you can also find jobs that have ended but still have temporary storage allocated.
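The filter-then-sort approach above amounts to a simple two-step query. As a sketch, here is the same idea over an illustrative in-memory model of the buckets table – the column names and sample values are assumptions, not the actual Navigator data model.

```python
# Illustrative model of the temporary storage details table: each bucket is a
# dict, and job buckets are assumed to carry a job status beginning with "*".
buckets = [
    {"name": "*MACHINE",          "job_status": "",        "current_size_mb": 4096},
    {"name": "123456/QUSER/JOBA", "job_status": "*ACTIVE", "current_size_mb": 900},
    {"name": "123457/QUSER/JOBB", "job_status": "*ENDED",  "current_size_mb": 2500},
]

# Step 1: filter out the global entries (keep only job buckets).
job_buckets = [b for b in buckets if b["job_status"].startswith("*")]

# Step 2: sort by bucket current size, largest first.
job_buckets.sort(key=lambda b: b["current_size_mb"], reverse=True)

top = job_buckets[0]["name"]  # the job holding the most temporary storage
```

Note that the ended job surfaces at the top here even though it no longer appears in Active Jobs – exactly the case the text calls out.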

If you discover active jobs that are consuming an unexpectedly large amount of temporary storage, you can take proactive actions by holding or ending them. While you can see if ended jobs had temporary storage that was not released, you cannot take any actions upon those jobs. Diagnostics or debugging will be necessary to determine what the job was doing when it ended that prevented the system from freeing storage.

 

It looks like I will have at least two more blogs on related topics for temporary storage tracking in the 7.2 release. In future blogs, I will discuss the support added to Collection Services and how to set up notifications.

 

 

June 25, 2014

PowerVM Processor Virtualization 101 Part 2

Chris Francois completes the series with part 2 of PowerVM Processor Virtualization 101. Read PowerVM Processor Virtualization 101 part 1 if you missed it.

This is the second installment in a two-part blog series providing a basic introduction to PowerVM processor virtualization terminology and acronyms. The first part was heavy on terminology. This part will tie the terminology together and provide you with a better understanding of the trade-offs involved with PowerVM LPAR configuration as it pertains to processor virtualization. The IBM Redbooks publication, “IBM PowerVM Virtualization Introduction and Configuration” provides much greater detail. For an in-depth treatment of this material from an IBM i perspective, see “Under the Hood: POWER7 Logical Partitions.”

PowerVM implements serial sharing of physical processors, with entitled capacity commitments enforced through periodic intervals of time called “dispatch windows.” The sharing is serial because the physical processor is dispatched exclusively to one partition at a time, regardless of the processor's thread context (e.g., SMT4). Entitled capacity represents a claim on a fraction of physical processor dispatch time in the dispatch window. For example, assuming the dispatch window period is 10 milliseconds, 2.0 processor units entitled capacity is a claim on 20 milliseconds of physical processor dispatch time. These entitled capacity claims are “use it or lose it”; every dispatch window the LPAR's entitled capacity commitment is replenished without regard to history. For POWER8, PowerVM requires a minimum of 0.05 processor unit and a maximum of 1.0 processor unit per virtual processor. The total current entitled capacity of all shared processor LPARs cannot exceed the number of processors assigned to the physical shared processor pool, and the current number of dedicated processors cannot exceed the balance of licensed processors in the platform. This is a roundabout way of saying that while there can be more virtual processors than physical processors (up to 20 times), the entitled capacity can never be overcommitted.
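The arithmetic in the paragraph above is worth making concrete. The sketch below encodes the two rules stated in the text – entitled capacity as a claim on dispatch time per window, and the 0.05–1.0 processor units per virtual processor bound for POWER8 – with the 10 millisecond window period taken from the text’s example.

```python
# Entitled capacity is a claim on physical processor dispatch time per
# dispatch window; the text's example assumes a 10 ms window period.
DISPATCH_WINDOW_MS = 10.0


def entitled_ms_per_window(entitled_capacity: float) -> float:
    """Milliseconds of physical dispatch time claimed per dispatch window."""
    return entitled_capacity * DISPATCH_WINDOW_MS


def valid_splpar_config(entitled_capacity: float, vcpus: int) -> bool:
    # For POWER8, PowerVM requires between 0.05 and 1.0 processor units
    # per virtual processor.
    per_vcpu = entitled_capacity / vcpus
    return 0.05 <= per_vcpu <= 1.0
```

So 2.0 processor units is a claim on 20 ms of dispatch time per window, and 2.0 units spread over 4 VCPUs (0.5 per VCPU) is a valid configuration, while 0.1 units over 4 VCPUs (0.025 per VCPU) is not.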

The differences between a shared and a dedicated processor LPAR go beyond the fact that a dedicated processor LPAR has a fixed ratio (i.e., 1:1) of entitled capacity units to virtual processors. A shared processor LPAR can be configured for uncapped sharing mode, meaning that it is able to use excess shared processor pool capacity above and beyond its entitled capacity. For an uncapped LPAR, the uncapped weight offers some control over the relative distribution of excess shared processor pool capacity among competing uncapped LPARs. A dedicated processor LPAR can be configured for processor sharing, meaning that the operating system can choose to allow the LPAR’s idle virtual processor(s) to be temporarily donated to the physical shared processor pool. Oftentimes, this is an effective way to increase the excess shared pool capacity available to uncapped LPARs, and normally the performance impact on the donating LPAR is negligible, as the physical processor is returned to the donating LPAR upon demand.
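To illustrate how uncapped weight shapes the distribution of excess pool capacity, here is a deliberately simplified model: it assumes competing uncapped LPARs receive excess capacity strictly in proportion to their weights, which glosses over the real dispatcher’s moment-to-moment behavior. The LPAR names and weights are made up.

```python
# Simplified sketch: excess shared-pool capacity divided among competing
# uncapped LPARs in proportion to their uncapped weights.
def distribute_excess(excess_units: float, weights: dict) -> dict:
    """Return each LPAR's proportional share of the excess processor units."""
    total = sum(weights.values())
    return {lpar: excess_units * w / total for lpar, w in weights.items()}


# Two uncapped LPARs compete for 1.5 excess processor units; LPAR_A's
# weight is double LPAR_B's, so it receives double the excess capacity.
shares = distribute_excess(1.5, {"LPAR_A": 128, "LPAR_B": 64})
```

The key point the model captures is that weight is relative: doubling every LPAR’s weight changes nothing, only the ratios matter.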

The other major implementation differences mainly impact performance:

  • Resource Isolation – While POWER systems and PowerVM provide secure data isolation between LPARs, the physical nature of serially sharing physical processors can impact the effectiveness of processor caches and other hardware resources. For a dedicated processor LPAR, the operating system has greater control over processor sharing and the associated performance impacts.
  • Processor Affinity – The association between a virtual processor and its underlying physical processor is not set in stone, but for a dedicated processor LPAR, the associations are much more durable than for a shared processor LPAR. The VCPU of a shared processor LPAR may be dispatched to any physical processor of the shared processor pool, whereas the VCPU of a dedicated processor LPAR is generally dispatched to the same physical processor. Architecturally, virtual-to-physical processor associations can change at any time, but for a dedicated processor LPAR, they tend to remain constant during partition activation. Exceptions are processor DLPAR, Live Partition Mobility, and Dynamic Platform Optimizer operations. Software optimizations based on processor affinity are generally more effective for dedicated processor LPARs than for shared processor LPARs.
  • VCPU Latency – Shared processor LPARs can incur entitlement delays, which are the result of entitled capacity being exhausted during the dispatch window, and have a greater potential for VCPU dispatch delays, which are the result of oversubscription of the physical shared processor pool at any moment. Dedicated processor LPARs don’t experience entitlement delays, and their VCPU dispatch delays are generally negligible.
  • I/O Latency – Interrupts from I/O adapters assigned to shared processor LPARs are routed to any physical processor of the shared processor pool. Sometimes the interrupt can be handled directly, but sometimes it must be forwarded to a VCPU of its assigned LPAR. This forwarding can be a source of latency that does not occur for the interrupts of I/O adapters assigned to dedicated processor LPARs.

There you have it... PowerVM processor virtualization in a nutshell. PowerVM’s flexible, industrial-strength processor virtualization supports a range of options and features to maximize the utility of the Power Systems platform. For more in-depth coverage, the “Server virtualization with PowerVM” website is a comprehensive source for this and other PowerVM topics.

References

IBM i 7.2 and POWER8

PowerVM Processor Virtualization 101

Under the Hood: POWER7 Logical Partitions

IBM PowerVM Virtualization Introduction and Configuration

Live Partition Mobility 

Dynamic Platform Optimizer – Affinity and Beyond for IBM Power

 

 

June 19, 2014

PowerVM Processor Virtualization 101

I’d like to thank Chris Francois for writing this blog. Chris is one of the leading experts on IBM i running on the POWER processor. Chris has been a commercial OS kernel programmer for nearly 25 years, and is currently a lead developer of IBM i Licensed Internal Code. Chris joined IBM Rochester in 1994 during the AS/400’s migration to the processor family ultimately to be known as POWER.

In addition to authoring this blog, Chris is also the author of a recent article on IBM i developerWorks, IBM i 7.2 and POWER8.

 

This is the first installment in a two-part blog series providing a basic introduction to PowerVM processor virtualization terminology and acronyms. The first part will be heavy on terms without much explanation; the second part will be just the opposite. The IBM Redbooks publication, “IBM PowerVM Virtualization Introduction and Configuration” provides much greater detail. For an in-depth treatment of this material from an IBM i perspective, see “Under the Hood: POWER7 Logical Partitions.” Let’s get started...

The IBM i operating system executes in a system virtual machine (VM) implemented by the PowerVM hypervisor on a Power Systems server. PowerVM supports up to 1,000 concurrent instances of VMs, which are called logical partitions (LPARs). The hardware resources available to the LPAR are specified in the LPAR configuration, and include the number of virtual processor cores (VCPUs), MBs of mainstore, etc., as well as resource virtualization attributes and qualifiers. For instance, the LPAR may use shared processors, in which case the processing units value defines the fractional share of physical processor dispatch time to which the partition is entitled, or the partition may use dedicated processors, in which case the processing units value is implicitly the full share of physical processor dispatch time corresponding to the number of VCPUs configured. The term entitled capacity (EC) is sometimes used interchangeably with processing units, especially to avoid confusion with [virtual] processors.

There are a number of LPAR configuration parameters associated with processors. Some of them are fixed for the duration of the partition activation (i.e., partition IPL), and some may be changed while the partition is active. Static LPAR parameters include the minimum and maximum number of VCPUs, the processing mode (Shared or Dedicated), the processor sharing mode, and the Processor Compatibility Mode (PCM). A shared processor LPAR in uncapped sharing mode is often simply referred to as an uncapped partition, as its virtual processor dispatch time is not limited, or capped, by its entitled processor units. Dynamic LPAR parameters include the current number of VCPUs, the current processor units for a shared processor LPAR (SPLPAR), the weight for an uncapped partition, and the processor-sharing attribute for a dedicated processor LPAR. The term donation attribute is often used in place of processor sharing attribute to underscore that sharing is the result of a dedicated processor LPAR temporarily donating unused processors to the physical shared processor pool.
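The static and dynamic parameters described above can be summarized as a small data model. This is a hypothetical sketch for orientation only – the field names are illustrative and are not an HMC or PowerVM API – but it captures the rule that a dedicated processor LPAR’s entitlement is implicitly one full processor unit per virtual processor, while a shared processor LPAR’s entitlement is its configured processing units.

```python
from dataclasses import dataclass


@dataclass
class LparProcessorConfig:
    """Illustrative model of processor-related LPAR configuration."""
    mode: str                 # "shared" or "dedicated"; static for the activation
    min_vcpus: int            # static
    max_vcpus: int            # static
    current_vcpus: int        # dynamic (processor DLPAR)
    processing_units: float   # dynamic; EC for a shared processor LPAR
    uncapped: bool = False    # sharing mode; static
    weight: int = 128         # dynamic; only meaningful for an uncapped LPAR

    def entitled_capacity(self) -> float:
        # A dedicated processor LPAR's entitlement is implicitly the full
        # share of dispatch time for its current number of VCPUs.
        if self.mode == "dedicated":
            return float(self.current_vcpus)
        return self.processing_units
```

For example, a dedicated LPAR currently running 4 VCPUs has an entitled capacity of 4.0 regardless of any processing units setting, while an uncapped SPLPAR with 4 VCPUs and 2.0 processing units is entitled to 2.0.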

Many of the parameters for a dedicated processor LPAR are shown in Figure 1 below, which is the Processors properties tab of a partition profile as rendered by the HMC. Figure 2 illustrates the parameters for a shared processor LPAR.

06182014DawnMay1

Figure 1 – Dedicated Processor Sharing Mode Attributes

06182014DawnMay2

Figure 2 – Shared Processor Sharing Mode Attributes

Of all of these parameters, Processor Compatibility Mode is probably the least familiar to IBM i audiences. The short explanation is that PCM makes the virtualized processor appear as an instance of the indicated generation of POWER processor. So, for example, POWER8 mode supports SMT8 and implements Version 2.07 of the Power Instruction Set Architecture (ISA), POWER7 mode supports SMT4 and implements Power ISA 2.06, POWER6 mode supports SMT2 and implements Power ISA 2.03, and so forth. On a POWER8 server, PCM can be configured for POWER8, POWER7, POWER6 and POWER6+.

One of the benefits of the Technology Independent Machine Interface (TIMI) is that IBM i applications are generally insulated from the Power ISA version. As a result, PCM has not been relevant for most IBM i users. Now that IBM i supports Live Partition Mobility between POWER7 and POWER8 Power Systems servers, PCM will become more important, because it must be set to a mode supported on both the source and target systems.
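The PCM facts above reduce to a small lookup table, and the Live Partition Mobility constraint is simply a set intersection: the mode must be supported on both the source and target systems. The table below only encodes the mappings stated in the text; the helper function is an illustrative sketch, not an actual HMC check.

```python
# PCM-to-capability mapping as stated in the text.
PCM_CAPABILITIES = {
    "POWER8": {"smt": 8, "isa": "2.07"},
    "POWER7": {"smt": 4, "isa": "2.06"},
    "POWER6": {"smt": 2, "isa": "2.03"},
}


def lpm_compatible_modes(source_modes, target_modes):
    """Modes usable for Live Partition Mobility: supported on both systems."""
    return sorted(set(source_modes) & set(target_modes))
```

For instance, a partition moving between a POWER8 source and a POWER7 target would need to run in POWER7 (or older) mode, since that is the newest mode both systems support.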

In the next part, we’ll look at how these partition configuration parameters interrelate, and the flexibility that PowerVM processor virtualization can offer in support of scale-up and consolidation roles, on the same platform, at the same time.

 

References

Under the Hood: POWER7 Logical Partitions

IBM PowerVM Virtualization Introduction and Configuration

Live Partition Mobility

IBM i 7.2 and POWER8