I am not an expert on open source, but I know that running Linux on a System z server is a powerful idea. Once you have a Linux OS running on your System z platform, the door is open to additional possibilities, including traditional proprietary and open source licensed software running interesting and useful workloads. Linux is certainly a different kind of OS from z/OS. It is good to have it running on the centralized System z server so you can run workloads and applications native to Linux. It is as simple as that.
There are many Linux distributions in the marketplace, for example those that are rooted in Debian GNU/Linux, which is a distribution that emphasizes free software. However, the two that are supported by IBM on System z are distributions by Red Hat and SUSE. The specifics regarding releases can be found here.
The Red Hat and SUSE Linux distributions run in coordination with different virtualization techniques on System z. Processor Resource/Systems Manager (PR/SM) is almost always used, though often not at a very granular level. PR/SM is integrated with all System z servers and allows separating a server into LPARs. Each LPAR is assigned a portion of the available physical resources, which can be shared across LPARs or dedicated to a particular LPAR.
Many IT departments use z/VM for their granular Linux virtualization, often running dozens of instances. There is a lot of flexibility under z/VM, so this is often the best place to do workload-level virtualization. It is remarkable to run all these different platforms for workloads on the same server hardware at the same time in a combination of LPARs and z/VM guest instances.
Consider that by running Linux you have the opportunity to run workloads like collaboration, messaging, and business intelligence and data warehousing that are supported by traditional licensing approaches and support. You can also experiment with software from open source projects like Apache and Eclipse. I’ll write about this in the next post.
In my last post, I wrote about management and how it complements monitoring. In this post, I want to write about products and how they enhance your productivity.
There was a time when there were few monitoring and management tools available, so IT specialists wrote their own. Many early NetView users wrote their own automation CLISTs and linked to them from the message table. In the early days, IBM had a number of sources for samples like IBM Redbooks and SolutionPacs.
As NetView matured as an automation solution, the manuals and libraries provided more complete solutions that mainly required administration by a systems programmer. This approach was improved on through the use of program products that were much more multifaceted than the previously supplied samples. At this time, REXX replaced CLISTs for programming, making more flexible solutions possible. When did you first use REXX?
Today, there are a variety of powerful monitoring and management products. There are a few that I would like to single out for z/OS. With a considerable focus on automation, there are many strong features to be found in Tivoli NetView for z/OS. It is both a system and network management tool. The Tivoli OMEGAMON for z/OS Management Suite, as well as the other products in the OMEGAMON family, have a strong following with good reason, as the products have proven themselves in the marketplace for many years.
Since cloud has become important in the marketplace, IBM has made available its Cloud Management Suite for System z. The suite is used to automate provisioning and monitoring of critical workloads and other functions needed to ensure the availability of a production private cloud on Linux for System z. Are you running a cloud service on System z?
In my last post, I wrote about the tools and tactics used to implement an end-to-end approach. Specifically, I wrote about monitoring address spaces, agents, message handling and log files. In this post I will discuss management and how it complements monitoring.
Monitoring, whether proactive or passive, is useful in detecting situations that can result in serious problems. If you don’t monitor, then you don’t have the opportunity to get ahead of problems before they get severe. When you monitor, you can get humans involved quickly. One technique used by some IT departments is to automatically open a problem record for any situation that is anticipated to be serious enough to require the intervention of a support specialist. This forces intervention but, if not done carefully, can result in hundreds of problem records. The creation of the problem record and its handling by a specialist is a kind of management action.
Perhaps a better management action is automating the response to the situation detected by the monitoring. There are hundreds of examples of this that have been implemented by IT departments. The simplest example is starting an address space that should be running after monitoring detects that the address space is down. This is a z/OS example. This can be done in Windows when a log file adapter handles a message indicating that a task has terminated. The automated response is to restart the task and send an alert to the support specialist who is on duty. Are you doing this as well?
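The restart-and-alert pattern just described can be sketched in a few lines of Python. This is only an illustration: the task name and the checker, starter and notifier callables are hypothetical stand-ins for whatever your monitoring and alerting tooling actually provides.

```python
# Minimal sketch of the "restart a down task" pattern described above.
# The task name and the callables are hypothetical stand-ins for real
# monitoring hooks, not any particular product's API.

def ensure_running(task, is_running, start, notify):
    """If the task is down, restart it and alert the on-duty specialist."""
    if is_running(task):
        return "already running"
    start(task)                               # automated corrective action
    notify(f"{task} was down and has been restarted")
    return "restarted"

if __name__ == "__main__":
    alerts = []
    status = ensure_running(
        "PAYROLL",                            # hypothetical task name
        is_running=lambda t: False,           # simulate a down task
        start=lambda t: None,                 # stand-in for the real start command
        notify=alerts.append,                 # stand-in for paging/alerting
    )
    print(status)                             # restarted
    print(alerts[0])
```

In practice the checker and starter would wrap your platform's facilities (a display command on z/OS, a service query on Windows), but the control flow is the same.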
A lot of the management actions that are taken are done by support specialists. These are situations that are often detected by monitoring but are too challenging to automate. A variety of tools are used for these actions, and often the basic facilities of the system are used. System management specialists for the OS often log on to a system and use the command line interface. Network management personnel use the toolset provided with the network devices they manage. Middleware and application support personnel typically use the tools provided with the software or application. In this area, the choices are more limited, but usually the vendor-supplied management tools are more than sufficient. What middleware tools do you use?
In my last post, I started to discuss end-to-end monitoring and management. End-to-end management is about visibility to all aspects of an application including OS, network, middleware, database and the application itself. This post is about approaches that are taken to implement end-to-end management.
The common way to set up proactive monitoring is to install software that does it, thereby leveraging the best practices supplied with the software. This approach is common for all operating system environments, although the techniques used vary considerably. On z/OS servers, monitoring software is installed and runs in address spaces. This software uses z/OS interfaces to gather data often with a particular attention to the messaging interfaces and the Subsystem Interface.
Monitoring software running in other environments like UNIX and Windows systems usually relies on software agents installed on each server or server image. Even though agent technology is mature, some IT departments resist installing agents for reasons of cost and concerns about reliability. Some put their focus on agentless technologies. Some simply don’t use software to monitor their systems.
It is generally understood that software monitoring has to be implemented as part of a systematic approach that involves not just the software but people and procedures. What good is a message from the monitoring system if no one is watching or when they see the message they don’t know what to do? Do you work with messages from monitoring software?
Software-based monitoring with address spaces or agents is considered a proactive approach. There is a passive technique as well, and both active and passive are typically implemented at the same time as part of one common tactic. The basis for the passive approach is to look for certain error messages and when they appear take a corrective action or notify a system specialist.
In the z/OS environment, these messages come through the message processing facility and are trapped by software like NetView in its message table. In other OS environments, the messages appear in log files. The messages need to be identified by a software program—often called a log file adapter—so they can be handled. There are some common log files and many individual log files on each system, so handling messages is best done with the best practices that come with monitoring software. Do you have experience with handling log files? Here is some background on log file adapters that you might find interesting.
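The core of a log file adapter can be sketched quite simply: scan each log line against a set of message patterns and hand any match to a corrective action or alert. The patterns, message texts and action names below are invented for illustration, not taken from any product.

```python
import re

# Hedged sketch of what a log file adapter does: match log lines against
# known message patterns and map each match to an action. The patterns
# and actions here are illustrative, not from any real product.

PATTERNS = [
    (re.compile(r"task (\w+) terminated"), "restart"),
    (re.compile(r"disk .* full"), "alert"),
]

def handle_line(line):
    """Return the action for the first matching pattern, or None."""
    for pattern, action in PATTERNS:
        if pattern.search(line):
            return action
    return None

log = [
    "08:15:02 task BILLING terminated unexpectedly",
    "08:15:03 heartbeat ok",
    "08:16:40 disk /var full",
]
print([handle_line(line) for line in log])    # ['restart', None, 'alert']
```

A real adapter also tails the file as it grows and remembers its position across restarts, but the pattern-to-action mapping is the heart of the technique, much like NetView's message table on z/OS.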
This is the first of a series of posts on end-to-end monitoring and management. End-to-end management is about visibility to all aspects of an application including OS, network, middleware, database and the application itself. This visibility makes management effective, otherwise you are constrained when problems occur. History has shown this quest to be a tough challenge.
Every application runs on a system with an OS like z/OS or Linux or Windows. Many applications have components running on more than one OS at the same time—e.g., the database running on z/OS and the application running on Linux. Needless to say, the application system is dependent on that OS being available and reliably providing services.
That is not the only dependency. Applications also depend on network services. Today's applications, more than ever, rely on the network to connect components of the application, as this is a common architectural characteristic of distributed applications. Distributing the types of work over multiple images like database, application and web became a way to handle the scaling of workloads, but it introduced additional points of failure.
Since the mid-1990s, applications themselves have been the focus of their own management discipline called application management. Application management came about because the complexity of client/server computing required special tools to monitor and manage these applications and their middleware-supported infrastructure.
These three areas—systems management, network management and application management—have matured and grown together, providing considerable depth in each area.
Despite their maturity, there are challenges. Many organizations have realized that the practitioners supporting each of the three areas must work together to provide a level of human integration. This is a challenge that some organizations have yet to fully address. Have you seen this challenge in your work experience?
Another challenge area is the software toolset. Different tools can have different consoles, message formats and conventions and, without integration at the software level, effectiveness can be hampered. It is possible to get by without one integrated toolset, but it is not ideal. This shortcoming can be addressed with software that provides at least a rudimentary level of integration. Do you have any experience integrating tools?
In my previous post, I wrote that COBOL is a language that was designed to meet business needs by making use of the automation supplied through computer programs and systems. It is so useful that you can take motivated people with no computer training and turn them into COBOL programmers. This is as true today as it was 30 years ago. What is it about COBOL that gives it this timeless characteristic?
COBOL is successful because it has the language capabilities to allow different programmers to solve a common problem in different ways. This capability is not unique to COBOL but it is nevertheless impressive. A good example of this is the challenge of date validation. Many COBOL programs exist that, when called with a date, indicate if the date is valid or not. Simple enough, right?
It is a given that values for month, day and year must be numeric. Looking around the web, I found a COBOL utility that checked the date provided to make sure it was greater than 1 and less than 99999999; however, this is a less than perfect solution, as it would find June 31st to be valid, which it is not. The same approach could be achieved with one statement—IF DATE-GIVEN IS NUMERIC—but it would have the same imperfection.
Solutions with higher integrity generally set up the date with symbolic values then check for conditions in the logic. Consider this:
WORKING-STORAGE SECTION.
01  DATE-GIVEN.
    05  INPUT-MONTH         PIC 9(2).
        88  MONTH-IS-VALID  VALUE 1 THRU 12.
    05  INPUT-DAY           PIC 9(2).
        88  DAY-IS-VALID    VALUE 1 THRU 31.
    05  INPUT-YEAR          PIC 9(4).
        88  YEAR-IS-VALID   VALUE 1950 THRU 2050.
…
PROCEDURE DIVISION.
    IF MONTH-IS-VALID
        IF DAY-IS-VALID
            IF YEAR-IS-VALID
                …
On the surface, this seems like a better approach—it is certainly somewhat elegant on paper. However, it still accepts certain invalid month and day combinations, and the challenge of leap year is not addressed by the logic. This is an interesting problem to solve, and I have hinted at some of the ways to approach a solution in COBOL. Do you have an elegant solution in your toolkit?
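For illustration, here is a sketch in Python (rather than COBOL) of the fuller validation: the 1950–2050 window mirrors the 88-level range checks above, and a month-length table plus the leap-year rule close the gaps just noted. The function and table names are my own invention.

```python
# A sketch of the full validation the COBOL fragment leaves open:
# range checks plus month lengths and the leap-year rule.

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_leap(year):
    # Divisible by 4, except century years not divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def date_is_valid(year, month, day):
    if not (1950 <= year <= 2050 and 1 <= month <= 12):
        return False
    limit = DAYS_IN_MONTH[month - 1]
    if month == 2 and is_leap(year):
        limit = 29
    return 1 <= day <= limit

print(date_is_valid(2016, 6, 31))   # False: June has only 30 days
print(date_is_valid(2016, 2, 29))   # True: 2016 is a leap year
print(date_is_valid(2015, 2, 29))   # False: 2015 is not a leap year
```

The same structure translates directly back to COBOL: the month-length table becomes a table in WORKING-STORAGE, and the leap-year test becomes a DIVIDE with a remainder check.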
I wrote my first date validation routine in 1980 and it became a standard utility for the company for which I worked. More than three decades later, a new generation of programmers is still solving the problem with new programs that reflect their different ways of approaching the problem. COBOL is there with the needed power and flexibility to handle both simple and more abstract and elegant programming solutions.
In my previous post, I wrote that COBOL has grown and thrived for 50 years. I also wrote that in the past, COBOL did not get significant attention in universities because the business nature of COBOL did not match the math and science focus of computer science departments. Nevertheless, COBOL is a programming language with powerful capabilities and straightforward effectiveness.
What does this mean? COBOL is a language that was designed to meet business needs by making use of the automation supplied through computer programs and systems. It’s so useful that you can take motivated people with no computer training and turn them into COBOL programmers. After they become proficient at COBOL they can move into other areas of systems analysis by designing application systems, writing test cases and user documentation, etc.
Why can you do this with COBOL? COBOL requires that you describe all the data with which you plan to work. This is straightforward. If you are working with data from a file, you lay out the fields in the record in the order that they appear in the actual data, and then you can work with them in the logic portion of the program. For other data like report layouts or work data fields, you do the same—lay out the fields. This data description, when done well, is elegant and descriptive. Of course, when done thoughtlessly, it can create confusion for programmers who come along later to maintain the program.
Once the data to be used is defined, you supply the statements that work with the data. This too is straightforward. If you want to compute the sum of two data items, you use the COBOL ADD statement. If you want to create a report, you move data elements to the report layout fields, then WRITE the report record to the report file that will be printed. Basically, that is it—define the data, then work with it using the statements that are supplied with the language.
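As a loose analogy in Python, the define-the-data-then-work-with-it pattern might look like the sketch below. The record fields and report layout are invented for illustration; the point is only the shape of the program, not any real application.

```python
from dataclasses import dataclass

# Loose Python analogy for the COBOL pattern described above: first
# describe the data, then work on it with simple statements. Field
# names and the report layout are invented for illustration.

@dataclass
class SalesRecord:          # like a record layout in the DATA DIVISION
    region: str
    amount: int

def report_line(rec, total):
    # like MOVEing fields into a report layout before WRITE:
    # region left-justified in 10, amounts right-justified in 8
    return f"{rec.region:<10}{rec.amount:>8}{total:>8}"

records = [SalesRecord("EAST", 120), SalesRecord("WEST", 75)]
total = 0
for rec in records:
    total = total + rec.amount          # like the COBOL ADD statement
    print(report_line(rec, total))      # like WRITE to the report file
```

In COBOL the layout would be a set of PIC clauses rather than a format string, but the separation of data description from procedure is the same idea.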
I know from experience that you can train regular people to become COBOL programmers because that is exactly what happened to me in 1978. I know you can teach regular business people to write COBOL because I taught hundreds of business students in the 1980s to write basic COBOL programs. The university where I worked had the idea that future business leaders should have the experience of writing business programs so they would have more than a casual understanding of IT and programming. We spent many hours together debugging programs, so I know that they found it a humbling but useful experience.
The COBOL Programming Guide is a good place to look to get the main idea about COBOL or to get a refresher. It is an easy-to-read and well-organized resource for COBOL.
In my previous post, I continued a discussion about Basic Assembler Language (BAL) that I had begun the week before. "BAL is still useful" is a funny way of putting it, because in certain contexts BAL is indispensable. Consider this: Everyone who makes a living as a programmer should write in the BAL of their favorite machine so they can grow from the lessons that this experience teaches them. So what about COBOL, how is it doing?
COBOL has grown and thrived for 50 years. Today, many companies individually have thousands of COBOL programs that contain millions of lines of code. COBOL is still a practical language for building complex applications. Also, there are many existing applications written in COBOL that need to be maintained, enhanced and extended. How do we know this?
Look at the IT job sites and do a search. One site I checked had 582 open jobs for mainframe COBOL programmers. What is really interesting is the scope of these jobs, as it is much more than just writing code. The job description often includes the need for skills that provide application solutions to fulfill business requirements using sound, repeatable, systematic and quantifiable industry best practices.
Methodologies are mentioned with day-to-day activities involving the creation of application design, work estimation, modifying/updating existing mainframe software and/or developing new programs as defined in a detailed system design. In addition, the developer will also be responsible for writing, executing and evaluating test cases as well as completing all required system documentation.
This is useful and interesting work. There are many aspects of this work that could develop into a specialization like system design or testing activities. Writing code is in the news today with many initiatives introducing young people to the task. These mainframe COBOL jobs are much more than that—they are whole-brain endeavors involving thinking and doing along with communicating the work that has been done. This is an amazing opportunity for young people to step up, sustain and innovate.
In the past, COBOL did not get significant attention in universities. COBOL did not get the focus that other languages like C and C++ received because the business nature of COBOL did not match the math and science focus of computer science departments. In my next post, I will discuss COBOL as a programming language with powerful capabilities and straightforward effectiveness.
In my previous post, I began to discuss one of the most wonderful and challenging elements of z/OS—its basic assembler language.
I wrote that the designers who created the architecture of the mainframe always had the programmer in mind. I used the BALR instruction as an example of this mindfulness because BALR allows the programmer to control the structure of the program. With BALR, you save the address of the next instruction in a register and branch to a routine, which can then use that saved address to return to where you left off. In a way, this is like a PERFORM statement in COBOL. This makes it possible for the programmer to better organize the program and make it more efficient.
Even though assembler is a long-standing programming language, there are still many different uses for it today. Assembler is used when a program will be executed frequently and efficiency really matters. Assembler programs typically get the job done using fewer instructions as compared to high-level languages, thus efficiency results from executing fewer instructions. With assembler there is also minimal overhead. Sometimes, assembler is used when dictated by the circumstances like an exit routine where the use of assembler is predetermined. An example of this is any of the many JES2 exits explained here.
Using assembler doesn’t always mean that you will be operating at a low-level mode. CICS has a command-level interface that is available in assembler, so you can enjoy the flexibility of assembler while at the same time taking advantage of CICS commands like EXEC CICS LINK and XCTL. IMS also supports application programming in assembler language. You can find details here.
When you have a choice, you might choose to write assembler for the simple joy of it. There is a directness to assembler that is refreshing. It is easy to debug because the relationship between what you wrote and what is executing is so close. You recognize your work and can fix problems quickly.
Finally, take a look at this amazing book called z/Architecture Principles of Operation (SA22-7832-09). It is not a programming book, but it explains what you are doing in your assembler programs. It explains exactly what the computer is doing with the operations that you program. I have come back to this book time and again for more than 30 years, and I still find it a marvelous explanation of the stunning and interesting System z computer architecture.