Architecture Matters has moved to http://www.ibmsystemsmag.com/Blogs/Architecture-Matters/. Please update your bookmark. It’s the same great content but in a new location.
The following blog entry was written by David Bruce, an IBM marketing manager responsible for Power enterprise systems. He works with Power Systems marketing and development teams to help identify and communicate the capabilities and benefits of IBM Power enterprise systems to clients.
In my previous blog post, “Power Systems Delivers the Balance Today’s Businesses Need,” I talked about how IBM Power Systems is a fit-for-purpose architecture with a balanced design that adapts to your business needs instead of making the business adapt to the technology.
In this post I want to further explore the idea of technology adapting to the business by examining how Power Systems architecture adapts for systems of record and systems of engagement.
You might first be asking yourself, “What are systems of record and systems of engagement?” and “How does this apply to scale-up or scale-out applications?” Let me briefly describe these terms. Systems of record are the transactional back-end systems that run the business in an orderly way, such as inventory, order management and payroll. Systems of engagement are the customer-facing mobile, social and collaborative applications through which customers, partners and employees interact with the business.
Many authors suggest that businesses are shifting from systems of record to systems of engagement. Is that really true? What company wants its inventory records, warehouse control systems or payroll driven through social or collaborative applications? And, conversely, when you’re using a smartphone to check the availability of an item you want from a retailer, should you have to understand the warehouse structure and item lead times, or traverse the order management system to see how many orders are ahead of you?
The reality is that we need both, and we’re not favoring one over the other. We need systems of record to make sure our businesses run in an orderly and predictable way. We need systems of engagement to mask the complexities of business processes and engage customers in a way that appeals to them. These systems are not perfect substitutes for one another.
As systems of engagement become business critical, they need some of the attributes of systems of record. They need to be secure, available and they need to integrate with back-end systems of record. Customers expect more today. What good is a system of engagement that runs as an island without connection to enterprise data and other systems of engagement?
To make these systems work for your customer and for your business, you need a flexible architecture. One approach is to select a platform for every type of application or work that you need. This approach can lead to having dozens of different management systems with many layers of software whose sole purpose is to connect two systems together. Another approach, likely a better one, is to select an architecture that adapts to the requirements of both back-end systems and customer engagement systems.
Power Systems architecture fits that description. It has long been known for its scale-up characteristics: reliable, secure and always available, with high performance and excellent data management. Now with the announcement of the new POWER8 scale-out servers, the Power Systems platform adds community innovation, a design for big data, and an open, collaborative approach to that list of characteristics. These are ideal attributes for meeting the needs of both systems of record and systems of engagement. And with a scale-out lineup designed for mobile, social, analytics and cloud work, businesses have the advantage of choosing a single architecture with flexible operating systems and flexible scale-out and scale-up deployment modes that can integrate across the enterprise. IDC’s recent paper, "Innovations to IBM Power Systems for the Virtualization, Multitenancy, and Cloud Demands of the 3rd Platform," explores this topic further.
Think about how that could work. Your ERP and CRM systems are running on an enterprise Power server as a system of record. You need a social, customer-facing system to offer mobile access through the cloud and to integrate real-time analysis of social and company data into a customer interaction (transaction). Slide in a new POWER8 scale-out server for the system of engagement. POWER8 technology gives you the ability to analyze the data in real time and provide suggestions or input to the customer during the engagement. You have confined your mobile and cloud applications to the scale-out server, offering an additional layer of isolation from your ERP and CRM systems. PowerSC is watching to make sure your virtualized environment is secure and compliant. PowerHA is managing your systems of engagement to fail over to your enterprise Power server if necessary. Want to do that on your existing system? Take advantage of a Power IFL as the system of engagement and it works the same way. What other architecture offers this level of flexibility and integration, and can be managed with a single set of platform skills?
How is your business ensuring that these systems of engagement for mobile and social applications are secure, reliable and integrated across the enterprise? I would love to hear your stories on how the Power Systems platform is helping you in today’s mobile and social environments.
This blog post was written by David Bruce, an IBM category marketing manager responsible for enterprise systems (Enterprise Power Systems and zEnterprise). He works with Power Systems and System z marketing and development teams to help identify and communicate the capabilities and benefits of IBM enterprise systems to clients.
Our world is changing. Big data, cloud computing, mobile and social media technologies are all transforming the landscape of how businesses run their applications. The exploding number of smartphones, phablets and other mobile devices is reshaping how businesses interact with their customers, partners and employees. The rise of these technologies is creating exponential data growth as well as the need to store and manage this data. We generate 500 million DVDs of data daily—and 80 percent of it is unstructured. There will be 1 trillion connected devices by 2015 and when those users contact a business through social media, they will expect a response within five minutes.
So how do we deal with all this? Balance. Heraclitus described the concept of balance as “the road up and down is one and the same.” Our IT infrastructure requires balance to support a whole new class of applications optimized for these new consumption models. It requires a balanced design to manage the new mix and volume of data. And, most importantly, it requires the balance to adapt with our business—to be the single road used in traveling different routes. Choosing the right architecture for your systems needs to be based on your business processes and the applications you need to run.
Fit for Purpose Architecture
It’s important to choose the type of deployment that works for your business processes as opposed to force-fitting your processes on a single type of deployment. The Power Systems platform offers scale-up and scale-out capabilities while still enabling you to run a single OS, use a single set of skills and have the simplicity of a single architecture. The Power architecture offers you the ability to align technology to your business needs as opposed to forcing your business needs to fit the technology.
Built with the first processor that is truly designed for big data, Power Systems servers provide flexibility and choice for organizations of all sizes to turn massive volumes of raw data into actionable business insights.
Three key differences suggest the Power Systems architecture is the ideal choice for today’s businesses and business applications:
- Processor (cache on processor chip and cache near the processor chip)
- I/O bus designed to bring data in
- Storage with I/O bandwidth designed to move data in and out
And with the Coherent Accelerator Processor Interface (CAPI), the platform offers new capabilities for optimizing a system for a specific application mix. A balanced design and the ability to adapt are critical for analytics and crucial for successfully running hundreds of diverse applications at one time.
The choice you make in your IT infrastructure can make a big difference, not just in delivering solutions easily and cost-effectively today, but also in building a balanced and optimized infrastructure to carry your organization into the future. For more details on the new first-generation systems IBM announced this week, read our press release or watch our Open Innovation to Put Data to Work webcast.
How are you handling your business systems and their related IT infrastructure in your enterprise? I would love to hear your comments on how Power Systems architecture is helping to run your applications today and in the future.
This blog entry was written by John Easton, IBM Distinguished Engineer, Advanced Analytics Infrastructures.
You'll all doubtless have seen the statistics: that the volume of data we're producing as a species is growing at some (insert superlative here) rate, and that (insert large percentage here) of this data is going to be unstructured. This is usually followed by statements to the effect that if we can harness this data in some way, the world is going to be a better place. I'll leave it to you to determine just what “better” might actually mean in practice. Such is the big data mantra, yet many organizations seem to struggle to make their first steps into this grim and scary unstructured world. But why?
Let's start by shooting some holes in an accepted belief. People will frequently tell you that an audio or video file is unstructured; however, this is not strictly true. Implicitly, something like an MP3 file has to have structure in the form of headers, ID3 tags, etc., that allow an MP3 player to do something useful with it. Where the “unstructuredness” (if indeed such a word exists) comes in is that the audio content of the MP3 file is not defined by this structure. The only way to find out what the MP3 file contains is to play the content; only then can we analyze what that content is telling us. If you are not using a toolkit that provides the capability to do this, you'll need to write a program to process this binary data. And let's be honest, if you've not done this before, this is hard.
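To make that hidden structure concrete, here is a minimal Python sketch that reads an ID3v1 tag. When present, the tag occupies the final 128 bytes of an MP3 file and begins with the ASCII marker "TAG", followed by fixed-width fields; the audio frames themselves remain opaque until decoded.

```python
def read_id3v1(path):
    """Return the ID3v1 metadata fields, or None if no tag is present."""
    with open(path, "rb") as f:
        f.seek(-128, 2)          # the tag is the last 128 bytes of the file
        tag = f.read(128)
    if tag[:3] != b"TAG":
        return None              # no ID3v1 tag on this file
    return {
        "title":  tag[3:33].rstrip(b"\x00 ").decode("latin-1"),
        "artist": tag[33:63].rstrip(b"\x00 ").decode("latin-1"),
        "album":  tag[63:93].rstrip(b"\x00 ").decode("latin-1"),
    }
```

A dozen lines recover the structured part of the file; analyzing the audio content itself is the hard problem the post describes.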
So how else might you be able to start taking advantage of this mountain of unstructured data that’s easier to get going with? How about logfiles? These can be a source of great insight and--because they are typically stored as plain text--they are much easier to work with than binary data. But just because it's easier doesn't mean that there isn't real business value to be gained.
Consider a telecommunications company IBM has worked with. Using analytics on logfile data, the company identified particular combinations of communications hardware and firmware that give rise to poor performance, and it has used these insights to proactively fix its customers' systems. Doing this before many end users even realized they had a problem has resulted in higher customer satisfaction and hence less churn in the company’s customer base. The telecommunications provider has also been able to use network logfile data to identify individuals performing illegal activities on its network and to work with the relevant law enforcement authorities to have these individuals dealt with appropriately.
In both cases, what the organization does is build a model of what “normal” behavior is: What are the correct ranges for performance? What does a legal user look like? Once these normal behaviors are understood, graphically displaying all the data allows the outliers--those that are “not normal”--to be found relatively easily, and with them, real business value.
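A rough Python sketch of that approach follows. The logfile format (timestamp, host, response time in milliseconds) and the three-sigma threshold are hypothetical choices for illustration: model “normal” as the mean and standard deviation of a measured value, then flag the lines that fall far outside it.

```python
from statistics import mean, stdev

def find_outliers(lines, threshold=3.0):
    """Return log lines whose response time lies more than `threshold`
    standard deviations from the mean of all observations."""
    records = []
    for line in lines:
        timestamp, host, ms = line.split()   # hypothetical 3-field format
        records.append((line, float(ms)))
    values = [ms for _, ms in records]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []                            # everything identical: no outliers
    return [line for line, ms in records if abs(ms - mu) / sigma > threshold]
```

Real deployments would use richer models than a single mean and deviation, but the shape of the solution--learn normal, surface abnormal--is the same.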
Successful analytics projects need to deliver tangible business benefits. In my experience, many organizations try to take on the big challenges too early. Starting small and starting easy isn’t an admission of failure, but rather opens the business' eyes to what might be possible. So, for your first foray into the world of unstructured data, why not think about using all those logfiles you've been storing away for a rainy day?
More on IBM Power Systems
For more information on how to get started with unstructured data, download the solution brief. For updates on Power Systems for analytics, please follow our venues on Facebook, LinkedIn and Twitter. And, for the latest on how Power Systems servers are constantly evolving to help you break through the physical and virtual boundaries of data, be sure to register for our upcoming webcast Open Innovation to Put Data to Work on April 28.
John Easton is an IBM Distinguished Engineer, who is internationally known for his work helping commercial clients exploit large-scale distributed computing infrastructures, particularly those utilizing new and emerging technologies. He is currently leading work on next-generation systems infrastructures to support big data and complex analytical workloads. He has worked with clients in a range of industries with a particular focus on banks and financial markets firms. Over his time at IBM, John has led initiatives around hybrid systems, computational acceleration, cloud and grid computing, energy efficiency and mission-critical systems. He is a member of the IBM Academy of Technology and a Fellow of both the Institute for Engineering and Technology and the British Computer Society.
By Ron Schmerbauch, the technical leader of the SAP on IBM i team
What do we know about new and shiny things? Whether it is the latest smartphone or the latest DB platform, we hear a lot of marketing hype based on narrow, specialized tests and grandiose promises. They are usually expensive. Conversion may be required, and it might not be painless. They may still be fragile and untested in the real world. In short, we may find out that the hype is not quite all it was made out to be. This is why enterprise IT architects pay special attention to integration and service levels, and why veteran platforms continue to persist despite any niche newcomers.
Big Data and Analytics currently see a lot of this hype, with in-memory databases being the latest rage, especially in the SAP market. Architects need to be mindful that although Big Data and Analytics are important, they are only one part of a complete SAP solution. It is crucial to choose a solid architecture as a foundation for an SAP solution across the board–something time-proven with a track record of reliability in both OLTP and OLAP workloads.
Perhaps this explains how SAP running on IBM i and DB2 for IBM i is now entering its 20th year of development. Yes, in case you didn’t know, IBM i runs SAP applications on Power Systems very well and has for quite a long time. Other databases have entered the SAP market since the 1990s, including a couple from SAP itself, yet DB2 on IBM i is still current with all of the latest SAP NetWeaver releases.
Not only are SAP NetWeaver applications supported on IBM i, IBM i excels in both performance and TCO when it comes to SAP applications.
For example, DB2 for IBM i is giving SAP’s shiny new HANA DB platform a run for its money in the latest SAP benchmark, BW-EML. Considering this workload was specifically architected to showcase SAP HANA against traditional databases by putting a premium on complex ad-hoc queries with random selection criteria, IBM i shows extremely well on the official SAP scoreboard for the SAP BW-EML benchmark, setting the throughput record with the most Ad-Hoc Navigation Steps per hour.
These SAP benchmark results are spectacular when one considers that IBM i is running both DB2 for i and the SAP application server together within one box and within just one partition, saving on administration overhead – not to mention floor space, cooling and electrical power. IBM i was architected to support multiple business applications and multiple users at once. It’s quite typical for IBM i clients to leverage IBM i subsystems to run multiple SAP components in a single partition, and to take advantage of PowerVM to support multiple partitions on a single footprint. These architectural advantages of POWER and IBM i together provide an amazing TCO story for the SAP landscape as this study explains.
The native Single Level Storage (SLS) concept on IBM i is another helpful architectural feature, given that SAP databases tend to grow in size at about 20-50 percent annually. Although SLS includes many other sophisticated capabilities, it delivers outstanding benefits for workloads of this type. IBM i automatically benefits from larger memory capacity whenever physical memory is added to the system, just as an in-memory DB platform would. But whenever necessary, the IBM i SLS memory space expands seamlessly to disk, ensuring that database size is not limited by physical memory. With SSD and flash storage becoming more common, disk speeds get ever closer to memory speeds, and the benefit of SLS is further amplified.
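A loose analogy for this single-address-space idea, sketched in Python with a memory-mapped file: the process addresses the whole range as if it were memory, and the operating system pages data between RAM and disk transparently. This is only an illustration of the concept, not how IBM i implements SLS, which applies the idea system-wide at the OS level.

```python
import mmap, os, tempfile

fd, path = tempfile.mkstemp()
size = 64 * 1024 * 1024          # 64 MB; the range could exceed physical RAM
os.ftruncate(fd, size)           # back the address range with disk storage

with mmap.mmap(fd, size) as mm:
    mm[0:5] = b"hello"           # looks like an ordinary memory write...
    mm[size - 5:size] = b"world" # ...anywhere in the mapped range
    head, tail = mm[0:5], mm[size - 5:size]

os.close(fd)
os.remove(path)
```

The program never issues an explicit read or write to disk; the paging happens beneath it, which is the property the post attributes to SLS.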
It is disappointing to see that a “modern” database like SAP HANA is perpetuating the x86 server sprawl problem, where “just buy another one” seems to be the answer to just about any question. HANA must dedicate x86 resources just for its DB work, forcing the SAP application server to run elsewhere. It also must be sized to make sure the in-memory database never simply stops because the size of the database exceeds the capacity of the physical memory on the system. Is this really progress? Sprawl, oversizing? That’s just the first system; consider how much more is required for any sense of High Availability and imagine the impact to ROI.
In the benchmark configuration and the study above, we’ve shown that IBM i is able to manage and share resources in an architecturally elegant way to provide both great performance and TCO. These features enable clients running SAP on IBM i to spend time focusing on business challenges instead of worrying about when and where they need to plug in the next x86 server to keep their infrastructure from toppling. I’d call that real business value, worthy of some hype.
By Frank Rodegeb, IT Management Consultant in IBM's High Availability Center of Competency
As part of our discussion on Architecture Matters we will have a series on high availability (HA)–what it is and how to go about achieving it. I plan to discuss a number of myths, inhibitors, fundamental concepts and other considerations regarding HA and availability management (AM). I look forward to sharing our experience and examining different approaches together.
Let me first introduce myself. I have been with IBM for 47 years with the last 23 years in an IT management consulting role specializing in HA and service management. I am currently a member of IBM's High Availability Center of Competence where I facilitate HA assessment workshops and provide service management expert support for clients worldwide. I've held a variety of positions within services, technical sales and operating system development, primarily in a customer facing, problem solving role. I have developed and continue to maintain and teach an AM seminar to IBM consultants and customers.
Information technology plays an integral role in corporate strategy, and IT service availability is critical. With IT services directly facing customers, outages and other service disruptions can tarnish the company image and be very costly to the business. Highly available IT services are a business essential and can be a competitive advantage. However, IT often faces a number of availability and service quality challenges in meeting these business needs. If any of these challenges sound familiar, you may find these discussions useful.
In this initial discussion I'll start with definitions, because there are many perceptions of what is meant by HA. Some people think HA is defined by some number of nines expressing the percentage of uptime (e.g., four nines, or 99.99 percent). One of our competitors suggests four nines is HA today. I find it difficult to put a number behind the definition, since the number changes over time as business requirements increase and technology continues to improve. And, frankly, if a service is down during any business-critical period, it really doesn't matter how many nines there are. In my mind, HA is a concept. I like the definition developed by a group of SHARE members: the attribute of a system to provide service during defined periods, at acceptable or agreed-upon levels, while masking unplanned outages from end users and customers.
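The "nines" shorthand does translate into concrete downtime budgets, which is worth seeing in numbers. A quick Python sketch shows that even four nines still permits roughly 52 minutes of outage per year--outage that could all land in a business-critical window, which is exactly the point above.

```python
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(nines):
    """Minutes of downtime per year permitted at the given number of nines.
    E.g., nines=4 means 99.99 percent availability."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print("%d nines: %9.2f minutes of downtime per year"
          % (n, downtime_minutes_per_year(n)))
```

Two nines allows about 3.7 days of downtime a year; five nines allows about five minutes. The arithmetic is simple, but as the post argues, the number alone says nothing about *when* those minutes occur.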
Let's examine some of the key terms to help us better understand what this definition is really telling us. First, what is meant by service? In this context, service is what IT is charged with providing to business users and their customers–it's IT's mission to manage the information needs of the business and provide information services. To provide service to business users, a system must include all components necessary to collect, store, analyze and distribute the enterprise's information, along with the information entrusted to it by customers. This means a system is made up of not only the components we traditionally think of as infrastructure, such as processors, storage and system software, but also applications, networks, data and even people. I interpret the phrase “masks unplanned outages from end users” to mean that unplanned component failures should not impact end users. I would, therefore, say HA is the attribute of a system to provide service by isolating unplanned component failures from the business users.
So, I feel it's important to recognize that any discussion about HA is really about service availability. Certainly, that's where I intend to focus this series of posts, along with discussing any questions and comments you may have. While components may have availability characteristics, I do not plan to talk about availability in the context of components. Rather, when looking at components we should be thinking about the factors that can impact service availability--reliability, recoverability and serviceability.
Continuous availability is defined as a combination of HA and continuous operations: the attribute of a system to deliver non-disruptive service to the end user seven days a week, 24 hours a day, across several time zones (there are no planned or unplanned outages). What this means to me is that a continuously available system must provide the capability to remove and isolate any component at any time, for whatever reason–whether for planned maintenance or unplanned failures–while maintaining service to the business without disruption. Obviously, this requires redundancy of every component at every layer, but it requires a whole lot more.
Do you have concerns that prevent your company from delivering highly available IT services? Over the next several months we'll discuss several HA- and AM-related topics to address many of the challenges noted above. We'll explore some of the underlying causes that must be addressed before these concerns and challenges can be overcome. We'll discuss good technology and management practices for HA, seven steps to IT process improvements and more. I'd also like to hear your suggestions on topics of interest to you.
Which of the factors affecting how users perceive availability (reliability, recoverability and scope of impact) can have the most impact on overall availability? In the next HA article we'll discuss where we can focus our time to achieve the most benefit in improving overall availability.
By David Bruce, Enterprise Systems Category Marketing Manager, IBM
I have a cracked master bath sink. What does that have to do with enterprise IT architecture, you ask? When the epiphany occurred, the same question surfaced in my head. As I pondered the various solutions for my sink, it occurred to me that the process I was using had much in common with making IT infrastructure choices. They connect at the fit-for-purpose junction.
Apparently, the original homeowner’s requirements for the sink were that it should hold water, drain water and have a nice appearance. My requirements are a bit different. I expect it to do those things and tolerate water that is 140-150 °F without cracking. That never happens in IT does it? Market conditions cause a requirements change after implementation.
ANSI Z124 series standards call for vanities to handle a range of 50-150 °F, but the manufacturer of my vanity states that water must be kept below the scald level (110-120 °F) to avoid cracking. So my cracked “sink” has many of the attributes of a sink, but it is not fit-for-purpose, as it fails to meet the standards for its designated role. Had the original owners chosen quartz (industrial strength, tolerating temperatures from sub-freezing to several hundred °F) or a sink mounted to a countertop instead of a one-piece unit, I wouldn’t be planning a master bath remodeling project today. The failure here–if there was one–was in not thinking about versatility–fit-for-purpose and fit-for-future-purpose. Requirements change all of the time–and not just with a new owner. Failing to anticipate change just creates more work down the road than handling it at the beginning would. Every DIYer reading this post has realized that at some time or another–usually on their way to the hardware store.
The same thing happens with our IT systems. We have a great order management application implemented; our call center personnel have all of the customer’s information and order information available to them when they are on the phone–and then the requirements change. Customers want to order products, check status, arrange shipping, return products, etc., all without talking to anyone. Great, we’ll just do a Web interface. However, they also want it on smartphones and tablets with a variety of operating systems, screen sizes, connection speeds and more. And they want it everywhere and without time restrictions and it should price check against their local store as well.
So much for the simple update.
Whether we’re trying to capture new customers through improved service or respond to a competitor, the result is the same–we need to modify the system. This takes us back to the original requirements and the original architecture choices. If we have infrastructure that can adapt to new requirements and easily expand to support the increased work, then meeting the new requirement will be straightforward. On the other hand, if versatility isn’t a feature of the existing architecture, then we will need to add on or replace. Neither is very attractive and both shift resources away from other projects.
Let’s jump back to the sink for a moment. Since I cannot update the vanity top with built in sinks to incorporate the new requirement of hot water, I have to replace it. Because I’m replacing it, it makes sense to build in some versatility and fit-for-future-purpose by switching to mounted sinks instead of a one-piece system. That change drives a new vanity top and cabinet base–one that no longer matches the tub surround. Why replace just a tub surround when you can update the tub as well? Why not modernize and future-proof a bunch of stuff?
Can you see where this goes? I just wanted hot water and to get it I am now going to get a new vanity top, sinks, cabinets, tub, tile, fixtures and floor. Wouldn’t it have been great if that first decision had included versatility–both fit-for-purpose and some level of fit-for-future-purpose? The same logic applies to our IT architecture. Does it offer support for today’s business objectives–and tomorrow’s? Was versatility built in or is everything an add-on or a replacement? What parts of your architecture most need to be fit-for-future-purpose? Are you facing issues today because of challenges like this? How are you tackling this in your business?
By David Bruce, Enterprise Systems Category Marketing Manager, IBM
Have you spent much time recently thinking about the increasing importance of technology and enterprise IT architecture? Wondered about the varying benefits of different ways to solve the same problem? We have been too. And we’re all not alone. In the 2012 IBM Global Chief Executive Officer Study 71 percent of CEOs identified technology as the most important external force impacting their organizations. This was the first time since the study began in 2005 that technology led the list.
Welcome to the first blog post for Architecture Matters. The intent of this blog is to discuss and help answer some of those questions about technology and architecture, and explore the benefits and the implications of different approaches. Should I choose private or public cloud for customer service? What’s the best way to integrate analytics for improved customer insights and a more personalized customer experience? What are the implications of bring your own device (BYOD) engagement models on data availability, security and privacy? We’ll explore these and other important questions through this blog.
You’ve likely read about enterprise systems and their architecture in guest posts on other blogs; perhaps you’ve even commented on a few of them. That broader interest, and the increasing importance of enterprise IT architecture to the new generation of applications deploying today, led a group of us to decide it was time to become a bit more structured and frequent in blogging about these topics.
With CEO and boardroom focus on technology, we want this blog to give some of our business consultants, enterprise IT architects, development lab personnel and other technical subject matter experts a way to share insight and experience with you as, together, we explore various ways of solving today’s business challenges with IT infrastructure. This blog will make sharing things faster as well: it shortens the communication pathway and gives you a direct connection to people who can help with the challenges you face.
As I mentioned, several of us will contribute to this blog. We’ll post some bios and ask everyone to offer a brief introduction as they begin to post. To get that process started, I’ll share a bit of my background with you. I’ve been with IBM for more than 30 years in a range of positions, including hardware service, product development, offering management, sales, management and marketing. All of those positions related to the server industry, with most of them focused squarely on business solutions, so I’m likely to write about any number of things, but mostly about the business aspects and benefits of IT. You can find me on LinkedIn and Twitter as well.
The other members of the blogging team also have varied backgrounds, often with a specialty area that is important to enterprise clients–system architecture, security, resiliency and availability, data and analytics, and virtualization and cloud amongst others–so you should expect a broad spectrum of topics that all relate to enterprise computing. Of course, there will be general communications as well–things we hope will interest everyone like education conferences, items of interest in other blogs, thoughts on strategy and the like.
We also want to hear from you. We’ll always have a list of things to talk about, but the enterprise computing and enterprise architecture questions you have are the most important so we would love to have you comment, ask questions, debate approaches and suggest anything else that brings the most value in your business.
Thanks for reading to the end of this first post! In my next post, I’ll blog about the common nature of home remodeling and enterprise IT architecture. Feel free to connect with us on Twitter using the hashtags #EntSys, #powersystems and #zEnterprise as well.