April 15, 2014

Dynamic Statement Cache: Sharing Prepared Statements

An SQL statement can be executed by an application with either static or dynamic text. Each type of statement execution has its own benefits. The biggest reason to use static SQL is to avoid the overhead of preparing a statement for execution.

Over the past decade, the use of dynamic SQL has grown dramatically, as IBM continues to improve its performance. Much of this improvement has come through providing DBAs with alternatives to having to prepare a dynamic statement each time one is executed. The first innovation in this area was the introduction of the dynamic statement cache, a pool of memory in which DB2 saves the control structures for prepared SQL statements so they can be shared among different threads (executions of a program). To enable the dynamic statement cache, YES must be specified for the CACHEDYN subsystem parameter.

When a statement is prepared, DB2 searches for it in the dynamic statement cache and, if it finds a match, reuses the existing control structures. This works great for applications that code predicates with host variables rather than literals. The problem with literals is that potentially every execution of a statement is unique, so the cache fills up with many statements, none of which will ever be reused. To help in this circumstance, IBM has introduced a new PREPARE parameter: CONCENTRATE STATEMENTS WITH LITERALS.
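
For illustration, the same query coded with a host variable (a hypothetical COBOL-style name here) produces identical statement text on every execution, so each PREPARE after the first can be satisfied from the cache:

            SELECT COL1, COL2 FROM T1 WHERE COL2 = :HV-COL2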

When using CONCENTRATE STATEMENTS WITH LITERALS, the statement is stored in the cache with each literal replaced by "&". For example:

            SELECT COL1, COL2 FROM T1 WHERE COL2 = 'ABC'

is stored in the cache as:

            SELECT COL1, COL2 FROM T1 WHERE COL2 = &

When the statement is prepared, DB2 first searches the dynamic statement cache for the statement as coded, with the literal "ABC". When it isn't found, DB2 searches the cache for the statement containing the replacement character, &. Once it's found, the control structures are reused. This process doesn't occur if the statement is also coded with the "?" parameter marker; for example, this statement cannot take advantage of CONCENTRATE STATEMENTS WITH LITERALS:

            SELECT COL1, COL2 FROM T1 WHERE COL2 = 'ABC' AND COL1 > ?
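
Literal concentration is requested through the PREPARE attribute string. Here's a minimal sketch, where the host variable :STMTTXT holds the statement text and :ATTRVAR holds the attribute string 'CONCENTRATE STATEMENTS WITH LITERALS' (both host-variable names are made up):

            EXEC SQL PREPARE S1 ATTRIBUTES :ATTRVAR FROM :STMTTXT;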

The avoidance of preparing a statement can provide a tremendous CPU savings. See the DB2 10 Performance Management guide for details about dynamic statement cache, including possible reasons why DB2 won't share certain identical statements in the statement cache.


April 08, 2014

DB2 Subsystem Tuning Tips

Many, many years ago, I was a DB2 systems programmer. As part of my training, I was sent to an IBM DB2 systems tuning course. That type of education cost several thousand dollars back then, but with the knowledge I'd gained, my employer easily recouped its investment in the form of reduced CPU consumption on our system.

One basic thing I learned was the dispatching priorities at which the different DB2 address spaces need to run. I find it interesting that the current DB2 installation guide doesn't address these settings. (Hint: IRLM should always be the highest.)

Fortunately, the recently released IBM Redbook, "Subsystem and Transaction Monitoring and Tuning with DB2 11 for z/OS," covers many of the same topics I learned in the classroom back when -- even DB2 subsystem address spaces. In section 3.2 IBM recommends that IRLM run at SYSSTC to ensure it's at a higher priority than the other address spaces (MSTR, DBM1 and DIST).

If you haven't worked with traces, Section 2 ("DB2 traces") lists different trace types. Something to keep in mind about traces that I didn't find in this Redbook is that starting some traces creates more overhead than starting others. The most costly trace to start is Class 7 Lock Details, so only use this trace in an extreme situation. (And yes, this is another thing I learned in my old systems tuning course.)
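
For reference, a detailed lock trace is started with a DB2 command along these lines; treat it as a sketch and weigh the overhead before issuing it on a busy system:

            -START TRACE(PERFM) CLASS(7)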

But back to the Redbook. Section 2.3.5 (“Loading IFCID field descriptions into a table”) covers the steps involved in creating tables that contain IFCID records as well as field descriptions. (Note: For more, see what I wrote about this topic some years back.)

Here are just a few of the chapters that focus on monitoring the subsystem:

            Chapter 4  : System address space CPU time

            Chapter 5  : EDM Pools

            Chapter 6  : Data Set Open and Close

            Chapter 7  : Log Activity

            Chapter 8  : IRLM, Locking, and Latching

            Chapter 9  : The zIIP engine and DB2

            Chapter 10: Buffer Pools and Group Buffer Pools

            Chapter 11: DDF Activity

            Chapter 12: Workfiles, RID, and Sort Pools

            Chapter 13: Virtual and Real Storage

This Redbook is worth your time even if you aren't a DB2 systems programmer. For instance, DB2 developers could use this information to help analyze and improve application performance. I also appreciate that the publication notes DB2 version numbers and offers version-specific performance tips. This material gives readers an idea of how various changes will impact their systems regardless of the release they're on.

If you've read this Redbook, please offer your impressions in comments.

April 01, 2014

DB2 and Hadoop: An Unlikely Combo

I recently came across a great SHARE presentation that examines the relationship between DB2 for z/OS and Hadoop. "Big Data Strategies with DB2 for z/OS" provides a nice overview of Big Data with Hadoop before going into some technical benefits of using built-in DB2 user-defined functions (UDFs) delivered with InfoSphere BigInsights to process Hadoop data from DB2 for z/OS applications.

Here's a sample of the SQL to read Hadoop data:

            SELECT TX.*
            FROM TABLE ( HDFS_READ(' ') )
                 AS TX (SELLER VARCHAR(20),
                        YEAR INT,
                        TOTAL DECIMAL(15,2));


The capability to enrich operational data with big data sources such as Twitter and Facebook provides powerful benefits to existing applications running DB2 for z/OS, and it will be interesting to see where IBM takes this relationship over the next year or two. I can see some customers bringing the data from Hadoop into DB2 for z/OS and accessing it from DB2 accelerators, while others will choose to store DB2 for z/OS data in Hadoop and conduct local analysis on existing big data. Providing a facility to manage the movement of data and controls around the governance of data being accessed on these different systems will, I believe, be key in the ongoing development of Hadoop support on DB2 for z/OS.

March 25, 2014

Plan Program Authorization

A DB2 plan is used to set runtime parameters and security when executing applications on a local DB2 system. A plan specifies run-time options as well as one or more collections of program “packages” that the plan can execute.

The list of packages that a plan can use is defined in the package list parameter at the time the plan is bound. To avoid having to rebind the plan every time it needs to be authorized to execute a new package, a collection name can be used with a wildcard package name to provide dynamic access to any package in the collection. For example, you may allow employees to view payroll information but restrict them from updating it. They would use the READPAY plan, which uses the collection READONLY_PAYINFO with a program package called PAY001. The syntax for the BIND is:

            BIND PLAN(READPAY) PKLIST(READONLY_PAYINFO.PAY001)



To bind this plan so program PAY002 can be added to the READONLY_PAYINFO collection without the need to rebind it with the new package name, use this syntax:


            BIND PLAN(READPAY) PKLIST(READONLY_PAYINFO.*)



Now, as packages are added to the collection READONLY_PAYINFO, they are available to the plan READPAY.

Of course, if a package is inadvertently added to a collection, any authorized user could then execute the plan, which could lead to unauthorized uses of the application package. To protect against these types of exposures, a new bind and rebind plan option was introduced with DB2 11. This option serves essentially as a double check that's designed to ensure, as IBM puts it, "a valid program is authorized to execute the plan."

As is often the case when new features are introduced, plan program authorization isn't tightly integrated with DB2; therefore, some manual steps must be implemented. First, you must create the DB2 table, SYSIBM.DSNPROGAUTH. The DB2 sample library SDSNSAMP(DSNTIJSG) contains both DDL to create table SYSIBM.DSNPROGAUTH and sample DML to INSERT a row into the table.

The row inserted into SYSIBM.DSNPROGAUTH contains the information to authorize a plan to be used by a given program. To activate plan program authorization, you must BIND or REBIND a plan using the new parameter, PROGAUTH. Once enabled (the system default is DISABLE), DB2 examines table SYSIBM.DSNPROGAUTH for a row with the matching combination of plan name and program name.
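
As a sketch, enabling the check for the plan from the earlier example would look like this (the same parameter is available on BIND):

            REBIND PLAN(READPAY) PROGAUTH(ENABLE)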

Several new reason codes are available to help determine why program authorization may fail. Here's an example:

00F3003C The program authorization is enabled by specifying PROGAUTH option when the plan was bound. The MDC value for the program does not match the program authorized to use the plan. The request to allocate a plan to the program name is denied.

If the program name exists in the SYSIBM.DSNPROGAUTH table, then correct the PROGMDCVAL and PROGMDCPAD values for the program. Otherwise, the program name must be added to the SYSIBM.DSNPROGAUTH table.

To find a list of plans that have enabled program authorization, query the catalog table SYSIBM.SYSPLAN (the PROGAUTH column shows the setting for each plan):

      SELECT CAST(NAME AS CHAR(8)) AS PLANNAME,
             CAST(QUALIFIER AS CHAR(10)) AS QUALIFIER,
             RELBOUND, PROGAUTH, PLENTRIES
      FROM SYSIBM.SYSPLAN
      WITH UR


Do you ever need to lock down individual packages so they can only be used by specific plans? Personally, I've never required this level of security, and I'd worry that it could be a maintenance nightmare. If DB2 customers are really demanding this feature, I'd expect IBM to soon add the capability to insert into the DSNPROGAUTH table through a grant statement or some form of external security package. What are your thoughts on this?

March 18, 2014

Static SQL Support a Key Feature of DB2 Accelerators V4.1

Last fall IBM announced DB2 Analytics Accelerator (DB2 Accelerators) for z/OS Version 4.1. In contrast to previous releases, which supported a limited SQL syntax (and only for dynamic SQL), V4.1 enables a wide array of queries with static SQL support. I've written frequently about DB2 Accelerators, and from what I've seen on recent customer visits, I believe that the enhancements in V4.1 are contributing to increased adoption of this product.

Here's a quick list of other new features:

·         Support for EBCDIC and Unicode in the same DB2 system and accelerator.

·         Support for multi-row FETCH

·         Improved scalability through automated query routing that uses workload balancing information to evenly distribute the workload across the attached accelerators.

·         Improved high availability through automated workload balancing failover support. If, for any reason, one accelerator is unavailable, queries are automatically rerouted to available accelerators.

·         Increased number of replication sources per accelerator: It's now possible to replicate from up to 10 different subsystems into a shared accelerator.

·         Better performance for incremental update through Instrumentation Facility Interface (IFI) filtering, which is designed to lower MIPS utilization and improve performance because fewer log records need to be processed by the capture component.

·         Richer monitoring: The accelerator now provides expanded information for monitoring applications with the IFI. The IFI makes this data available to software vendors who can leverage this broader information to better inform customers about the usage, execution and concurrency of queries as they are processed in the accelerator.

·         Significant enhancements to the High Performance Storage Saver (HPSS) solution:

o    Support to update historical read-only records through a stored procedure;

o    Support to restore individual partitions without requiring any outages;

o    The capability to enforce read-only state on a partition used by HPSS; and,

o    Backup image maintenance can be simplified by marking HPSS partitions as "undeletable."

·         Simplified maintenance and lower cost of operation. V4.1 now offers an automated NZKit Install that facilitates the fully automatic installation and activation of Netezza kits, with no firmware or operating system updates required.

·         Support for DB2 for z/OS version 11.

If you’re using this technology, please share some of your experiences, pros and cons, in comments.