IT Trendz


Rob McNelly



$3 Million Dare Asks Data Crunchers to Fix Healthcare

Can an algorithm prevent unneeded hospital stays?

By Morgon Mae Schultz

A network of California doctors is issuing a $3 million dare asking data miners to fix healthcare. The Heritage Health Prize, which stands to be the largest-yet data-modeling competition, will challenge participants to write an algorithm identifying patients most at risk for unnecessary hospitalization—an economically draining component of U.S. healthcare woes. Ultimately, the algorithm will alert doctors to intervene before hospitalization with healthier, far cheaper preventive action.

The Problem

Hospitals are costly. Jonathan Gluck, senior executive with Heritage Provider Network, the competition’s sponsor, estimates that Americans spend between $30 billion and $40 billion annually on unnecessary hospitalizations. Unneeded admissions also put patients at risk for hospital-borne infections and divert resources from patients who really need them. More to the point, Heritage asserts, they’re symptomatic of a system that treats sickness rather than keeping people healthy.

“With all that’s going on with predictive modeling and data mining, the thought was, well, let’s see if we can kind of think outside the box and get new people involved in trying to solve these problems,” Gluck says. Some fear the algorithm could be used to avoid caring for costly patients, but Gluck stresses that Heritage is not an insurance company and, as a network of doctors, has no say in which patients it treats or doesn’t treat.

To understand why the network decided a huge data-mining prize was the best way to prevent unneeded hospital stays, Gluck says you need to know a little about its founder and CEO, Richard Merkin. Merkin is a medical doctor, a philanthropist for youth and medical charities, and a core contributor to the X Prize Foundation, which runs innovation competitions in space exploration, genomics, ecology and other fields. Merkin is genuinely excited to bring new minds to the healthcare table and believes data miners hold great potential, according to Gluck. The contest was Merkin’s idea, intended not only to yield a winning algorithm, but also to grab the attention of data miners globally and raise awareness of competitive innovation. “You could be the best company in the world and if you hired 20 really good minds to work on your problem, you’ve got 20 good minds. If you run a prize, and you’ve got 2,000 entries, I would assume you’ve got a better shot at success,” Gluck says.

Attracting Minds

The company is putting up substantial money, and serious data, to back up its hopes. The $3 million award is bigger than the Nobel Prize for medicine and the Netflix prize combined. (The former varies but has paid about $1.5 million each of the past 10 years. The latter, a famous data-mining competition, awarded $1 million.) It’s by far the biggest prize Kaggle, the data-contest administrator that will run the competition, has ever handled. The 11-month-old company accelerated a total infrastructure rebuild to accommodate it.

Competitions attract great minds in three ways, according to Kaggle founder and CEO Anthony Goldbloom: by offering interesting, real-world data to researchers usually stuck with lackluster fictional or public data; by posing meaningful social problems; and by dangling large prizes. And this contest, he says, ticks all three boxes. “Firstly, there’s a ginormous prize up for grabs. Secondly, the data set is fantastic. And thirdly, it’s a really, really, really significant, meaningful problem,” Goldbloom says. “I mean the Netflix prize attracted something like 50,000 registrations and it was to make sure people didn’t see movies that they disliked. This is obviously far more significant.”

Goldbloom says the Heritage data set alone could attract scores of participants even if it weren’t linked to a fortune. He cites a chess-rating competition that posed what data miners considered an interesting problem: 258 teams competed for the top prize, a DVD signed by a chess grandmaster. A $25,000 prize attached to a less interesting problem attracted far fewer entries.

The Heritage set includes real patient data on perhaps hundreds of thousands of members—doctors’ visits, test results, prescriptions and whether they’ve been filled—scrubbed of identifying details by a group of health-privacy experts. Kaggle will conceal a late portion of the 2005-2010 data and challenge participants to predict hospitalizations based on the earlier portion. When entrants upload their predictions, Goldbloom says, they’ll get real-time feedback on the site’s leader board. “It’s literally like making data science a sport. These people will know in real time exactly how they’re doing.”
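The evaluation protocol described above—train on the early years, predict the concealed final portion, get scored on upload—can be sketched in a few lines. Everything here (the member IDs, the hospitalization flags, the mean-squared-error metric and the naive historical-rate baseline) is an invented stand-in for illustration, not the actual Heritage data or scoring rule:

```python
# Toy claims history: member -> list of (visit_year, hospitalized_flag).
# Member IDs and values are hypothetical stand-ins for the anonymized data.
records = {
    "m1": [(2005, 0), (2006, 0), (2007, 1), (2008, 1), (2009, 1)],
    "m2": [(2005, 0), (2006, 1), (2007, 0), (2008, 0), (2009, 0)],
    "m3": [(2005, 1), (2006, 1), (2007, 1), (2008, 1), (2009, 1)],
}

CUTOFF = 2009  # the final year is concealed; entrants predict it from earlier years

def split(history):
    """Separate the visible training years from the hidden final year."""
    train = [(y, h) for y, h in history if y < CUTOFF]
    truth = [h for y, h in history if y == CUTOFF][0]
    return train, truth

def predict(train):
    """Naive baseline: predicted risk = historical hospitalization rate."""
    return sum(h for _, h in train) / len(train)

def leaderboard_score(records):
    """Mean squared error between predictions and the concealed final year."""
    errs = []
    for member, history in records.items():
        train, truth = split(history)
        errs.append((predict(train) - truth) ** 2)
    return sum(errs) / len(errs)

print(round(leaderboard_score(records), 4))  # prints 0.1042
```

A real entry would replace `predict` with a trained model; the point of the sketch is the temporal holdout, which prevents entrants from simply looking up the answers.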

Kaggle will launch the contest April 4 and expects it to run for about two years.

Who Will Win?

Goldbloom and Gluck say the $3 million purse could go anywhere. In the contest to design a new chess-rating system, an IBM researcher took first place—no big surprise. But in a competition to predict viral load in HIV patients, the winner had learned to data mine by watching lectures Stanford University had posted online. “This guy just became interested in it. Started watching these Stanford lectures, and learned enough,” Goldbloom says. And the computer scientist who won Kaggle’s first contest was a 25-year-old in Slovenia. The underdog factor is so important to competitions that Kaggle helps hosts whittle down data sets as much as possible so participants with few resources can run the problems on laptops or small systems.

“There’s a lot of low-hanging fruit that I suspect things like this could help to unleash,” Goldbloom says. “For example, you identify somebody as being a really high risk of hospitalization and you get them into an exercise program or you have a nurse call them up every day to make sure they take their prescription. Well, that’s going to have a significant effect on quality of life and it doesn’t involve a $3 trillion drug trial. It’s low-tech but it can have as dramatic an impact as a new blockbuster drug.”

To participate in the contest, visit the Heritage Health Prize website.

Morgon Mae Schultz is a copy editor for MSP TechMedia.

Infrastructure-Management Checklist

Improve capabilities and evaluate service providers

By Joseph Gulla

Editor’s Note: In the March/April IBM Systems Magazine, IBMer Joseph Gulla outlined eight infrastructure-management challenges. Here, he provides a checklist to help identify these challenges in your environment.

Here is an eight-part checklist you can use to improve your own management capabilities and to evaluate the capabilities of potential outsourcing and managed-service providers.

  1. Detect and handle incidents and problems
    • Implement standard monitoring tools and settings
    • Use technology to automatically handle incidents, open problem records and assign priorities
    • Use tools to anticipate and correct problems before they occur
    • Provide automated support for system administrators
    • Assign support personnel based on the severity of the incident or problem
    • Use service-level agreements
    • Follow rigorous and documented problem-handling and management steps
    • Utilize a specialized service manager and a multidisciplinary team as needed
  2. Handle changes with minimal impact on availability
    • Implement team participation in regular change-management planning activities
    • Use an effective software tool that facilitates the process, including required artifacts, teamwork and approval
    • Prepare for change: plan steps and estimate the time required
    • Test changes prior to implementation and prepare back-out activities in the event of a failed change
    • Use a skilled change manager
  3. Prevent security problems
    • Use information-security controls such as ISO/IEC 27002:2005 to govern the overall security process
    • Implement security variables such as password length and update frequency early in the process
    • Perform security remediation as required for servers and other devices during setup
    • Look for security exposures during the ongoing support period and specify the frequency of these analyses
    • Report monthly on key security attributes and activities related to servers and other devices
  4. Effectively implement emerging or challenging technologies
    • Focus on skills support for virtual machines, logical partitions and related management software
    • Use change windows to make dynamic changes to production servers
    • Sustain skills in needed high-availability software
  5. Maintain server software and firmware
    • Proactively administer servers and use standard monitoring and management software
    • Manage server platform-support activities like patching and log-file maintenance
    • Perform server security administration for identity and access
    • Provide specific focus and support for virtualization
    • Provide high-availability software support, including periodic testing
  6. Employ useful reporting indicators
    • Post regular reports on a portal for easy access
    • Provide tools that focus on server resource management and generate performance and capacity reports
    • Set up a portal to enter problem records and change notifications
    • Provide links to other needed tools, portlets and services that support ongoing activities
  7. Supply the right tools
    • Availability management
    • Hardware-specific monitoring
    • Software monitoring of key computer resources like CPU and memory
    • Flexible notification handling
    • Performance and capacity management
    • Security management
    • Configuration management
    • System administration
    • Easy access to information
    • Automation
    • Standardized remote access
    • Problem and change management
  8. Rapidly deploy infrastructure and tools with ongoing management
    • Use a model or template project plan based on prebuilt components
    • Assign a project manager
    • Employ a delivery architect for technical support

Using this checklist as a guide, you can establish ongoing management procedures: define support activities with documented desk procedures, use a service-delivery manager, and assign security specialists, change managers and duty managers as needed.

Joseph Gulla is an executive IT specialist for IBM data-center services.

Making an Impact

Conference aims to change the way business and IT leaders work

By Marcia Harelik

Never before has System z played such a strategic role in IT. At Impact, April 10 to 15 in Las Vegas, you can engage in business and technical discussions around the importance of System z—a technology option that simply can’t be marginalized. Peers, analysts, business partners and IBM architecture experts will gather in one place for six days to consider zEnterprise System as the foundation of cloud computing, the System z server's place in the complex world of hybrid computing and the advantages of a standardized middleware infrastructure.

You’ll be able to sign up for a personal consultation on your specific environment and learn the best techniques to transform your core systems into flexible applications and services using SOA solutions for IBM CICS and IBM WebSphere Application Server. See, touch and test examples of IBM's continuous investment in all things IBM mainframe to help you improve business alignment for growth, cut costs and limit business risk.

Special System z Activities

The System z Software Solution Suite: Schedule one-on-one time with IBM System z experts on many topics, including:

  • Improve productivity when managing CICS environments
  • Threadsafe! How to know if your applications are CPU jackpots or snake eyes
  • Leveraging DataPower in your zEnterprise
  • Are you looking for trouble in your applications?

Hours are 11 a.m. to 6 p.m. Monday, April 11, and 8 a.m. to 6 p.m. Tuesday, April 12, through Thursday, April 14.

In addition, the System z Community Reception is scheduled for 7 p.m. Sunday, April 10 in the TAO Lounge.

System z Session Highlights

More than 65 sessions on System z include:

  • New workload. New strategy. New thinking on z: What a wonderful time to be an IT architect. WebSphere middleware and the zEnterprise System are giving organizations the freedom to compose an IT environment optimized for business innovation rather than maintenance, thrilling application owners rather than confounding them. IBM speakers include VP of Enterprise Platform Software Yvonne Perkins and System z VP and Business Line Executive Greg Lotko.
  • Customer Panel: Meet the Wizards of System z—Stories of Revolution, Consolidation and Victory. This panel discussion includes peers who assessed all of their technology options and chose to make System z the cornerstone of their environments, plus IBM architects and Distinguished Engineers. Questions will be welcome.
  • Twenty ways mainframe customers can cut costs. This session will focus on the concrete steps you can take to cut your mainframe workload costs: Some are technical—leveraging new technologies and licensing, for instance—while others are business oriented. The mainframe is already a cost-effective choice for many organizations, but there are always ways to improve. Customer examples and case studies will be used to demonstrate the scenarios and impacts of these suggestions.

Technology-specific session highlights include:

  • WebSphere MQ for z/OS: Introduction to problem determination and performance tuning
  • Getting Started with virtualization and WebSphere on System z Linux
  • Back to the Future: A mainframe reengineering journey at Credit Suisse
  • Batch to the Future: The road to adopting WebSphere compute grid batch
  • CICS and Business Rules: Perfect together—a customer's experience
  • Bring agility to System z with BPM and rules
  • Modernize business rules with WebSphere ILOG BRMS and Rational Asset Analyzer
  • Why WebSphere Application Server on the mainframe
  • A day in the life of a developer using enterprise-modernization solutions

You’ll also be able to network with business partners and see more options for enhancing and implementing your System z workload at the zZone in the Solution Showcase Center.

Sign Up Today

Impact is “z” place to discuss the operating system that took man to the moon in 1969 and runs the world's global economies today. Register online.

Marcia Harelik is a market manager for WebSphere on System z.


zEnterprise Adoption Grows

2010 MIPS sales set record growth, for good reason

By Mary Shacklett

IBM first announced its new enterprise-class computing platform, the zEnterprise System, in July 2010, and it was formally made available to customers in September. Customer adoption has been aggressive, with 450 systems totaling 1.5 million MIPS shipped by the fourth quarter of 2010—the highest MIPS growth rate in more than a decade. The industry has certainly noticed, so the natural question is, what’s fueling all of this activity?

IT Spending is Opening Up

One factor is the start of more aggressive IT spending after several years of recession, but just as important is the need to compete in a global economy that grows more competitive daily. At a recent conference with analysts, IBM said 60 percent of zEnterprise sales have been to major-market companies with locations in North America, Europe and Japan. Equally significant, the remaining 40 percent of zEnterprise sales are occurring in emerging markets in South America, Asia and Africa. No dominant industry stands out in this mix: zEnterprise sites span public- and private-sector organizations in a variety of verticals (e.g., finance, communications and healthcare).

Linux is a Major Area of IT Activity

Many of these organizations have spent the past five years greening their data centers and embarking on major virtualization efforts that have identified System z as a platform of choice for systems that formerly resided on separate physical servers. These systems are frequently Linux-based, as evidenced by one IBM survey reporting that 64 of 100 System z clients interviewed run Linux on their mainframes, with Linux now representing 19 percent of installed System z base capacity. Add in the increased capabilities for both Linux and native systems on the new zEnterprise, and migration to zEnterprise systems seems almost natural. “We’re seeing 20-30 percent improvement in productivity with zEnterprise even before we do significant system tuning,” says one payment processor. A second customer, in healthcare, echoes the results.

Fit for Purpose Workshops Aid Adoption

But of course, there’s more to it than that. Vendors can always be depended upon to introduce new hardware and software platforms, and to extol their virtues. The job remains for CIOs and senior technical staff to translate how these new capabilities will benefit their businesses, and to judge whether the benefit is significant enough to justify an investment they strongly feel will make a real difference.

“One of the free services we have and will continue to provide for zEnterprise is the delivery of free fit-for-purpose workshops to sites that can help them see how zEnterprise can specifically assist them with the performance of their application workloads,” said Doris Conti, STG marketing director, at a recent industry-analyst conference.

More than 300 customers took IBM up on its free zEnterprise workshops in 2010 alone. From this body of work, IBM identified four major workload areas in which a majority of corporate IT departments want to improve reliability and performance:

  • Transaction processing and database
  • Business applications
  • Web, collaboration and infrastructure
  • Analytics

The question is, for all of the companies participating, how many already had very defined plans of where they could best use zEnterprise and optimize the return on the investment to the business?

“Our experiences really varied across the board,” says Greg Lotko, IBM vice president and business line executive, System z. “Some sites had clear cut goals on what it was they wanted to do, and were ready to test applications as soon as we arrived. In other cases, we took time at the beginning to work through the site’s IT infrastructure and critical workloads, and to identify some test cases for zEnterprise where we all felt that zEnterprise would deliver an immediately beneficial impact to the business.”

A sampling of business workloads for zLinux where organizations felt that zEnterprise would deliver value includes:

  • Business connectors (e.g., WebSphere, MQSeries, DB2Connect, CICS Transaction Gateway, IMS Connect for Java)
  • Business-critical applications (e.g., Java)
  • Development of WebSphere and Java applications
  • WebSphere Application Server (WAS)
  • Email and collaboration (e.g., Domino, Web 2.0)
  • Network infrastructure (e.g., FTP, NFS, DNS, etc.; Comm Server and Communications Controller for Linux; and Communigate Pro for VoIP)
  • Data services (e.g., Cognos, Oracle, Informix, Information Builders WebFOCUS)
  • Applications running top-end disaster recovery
  • Virtualization and security services

These areas support customer practices and preferences for using Linux on System z in Web serving, systems management, online transaction processing (OLTP) and application development.

What to Expect in 2011

IBM plans to continue its fit-for-purpose workshops in 2011, and it says this detailed planning work in customer-specific IT environments has deepened its understanding of the computing scenarios and pain points corporate IT faces as it optimizes systems and improves quality of service for users. For the CIOs and senior technical staff charged with making this happen better and faster for their businesses, participating with a major vendor in benefit and fit-for-purpose discussions tailored to the site also delivers major paybacks. Even though IT wallets will loosen up in 2011, every cost justification will remain painstaking, and every budget discussion will entail detailed talks about technology acquisition, proof of concept, installation—and when the technology can begin delivering on the ROI promises that have been projected. Early results indicate that zEnterprise is making the grade.

Mary Shacklett is president and CEO of Transworld Data.

Inform Your Business

Cognos 10 BI on System z enhances analytics opportunities

By Rebecca Wormleighton

Editor’s Note: This article is based on an IBM whitepaper, “Better Business Outcomes With Business Analytics,” which is available online.

Everyone in an organization is responsible for contributing to better business outcomes: higher revenue, lower costs, reduced risk and accurate predictions. Currently, organizations must achieve these outcomes in an unforgiving economic environment—one that is more volatile, less certain and more complex than in years past. Events, threats and opportunities emerge more quickly and with less predictability; additionally, they’re converging and influencing one another to create entirely unique situations.

The effects of this new reality include more cost-conscious and informed consumers who want more value for their money, creating a need for more rapid decision cycles, better response to changing market dynamics and greater profitability growth—all while limiting capital and operating expenditures.

Analytics is the Answer

In this new environment, it’s still possible to thrive. The key is making more informed, fact-based decisions about strategy, resources and tactics, and deciding where to focus time and energy. The way to better decisions is through better business insights. The way to better business insights is business analytics.

Business analytics helps enterprises obtain actionable insights into numerous aspects of business performance, such as current results, customer trends, competitive threats or market opportunities. It also provides standardization, service delivery and automation that can increase the efficiency and effectiveness of core business processes.

With the appropriate insight, employees at every level of an organization can make better decisions and take more effective actions. Organizations that deploy business analytics also effect longer-lasting improvements in their decision-making culture: they become analytics-driven. In achieving this state, they can draw, share and act on business insights to overcome long-standing, sometimes entrenched barriers to better business outcomes.

IBM Leads the Industry

With acquisitions and organic growth, IBM has created a powerful, innovative and effective business-analytics portfolio. Over the past five years, IBM has:

  • Invested more than $14 billion (including the acquisitions of Cognos and SPSS) in software to build the industry’s most robust business-analytics portfolio.
  • Created the Business Analytics and Optimization service line and staffed it with more than 7,000 dedicated consultants who help companies realize their business-optimization objectives faster, with less risk and at a lower cost.
  • Opened eight analytics Centers of Excellence around the world to help clients uncover insights hidden in their data.
  • Announced IBM Cognos 10 in October: the first in a series of innovations that will change how organizations make decisions, allocate resources, and predict and plan the future.

Intelligence Unleashed

IBM Cognos 10 revolutionizes how organizations use business intelligence (BI) by freeing people to think, connecting people and infusing insights into everything people do.

    Think Freely  Cognos 10 delivers a revolutionary user experience that supports the way users think, rather than forcing them to react to software processes, through a limitless BI workspace with greater power, intuitive navigation and a cleaner look.

    Connect With Others  Cognos 10 includes built-in collaboration and social-networking functions to fuel the exchange of ideas and knowledge that naturally occurs in the decision-making process, but can be trapped in meeting notes, manual processes, e-mails and people’s notebooks. Users can form communities, capture annotations and opinions, and share insights with others, facilitating decision excellence and building a corporate memory. Cognos 10 harnesses the collective intelligence of organizations to connect people and insights and gain alignment.

    Simply Act  Cognos 10 makes it possible to receive mobile BI, real-time BI and BI mashups, making BI a natural and essential part of everything users do. It provides interactive analytics to front-line workers and people on the road—extending the power of BI to more people and more communities than ever before.

    Proven Technology  Cognos 10 upgrades seamlessly and scales cost-effectively for the broadest of deployments. It provides business users and IT the freedom to see more, do more and make the smarter decisions that drive better business results.

Achieving Analytic Success

To meet demand for BI and business analytics, many IT organizations have implemented analytics capabilities in individual departments; in the past, there was little focus on developing common tools for the entire organization. Today, the painful effects of this decentralized approach can be felt:

  • Isolated projects are creating higher hidden costs as multiple support, administration and maintenance resources are dedicated to each project.
  • IT organizations have limited visibility of which individuals and groups are accessing what data.
  • These distinct projects can produce inconsistent or even contradictory results because they’re missing information or it was inadvertently duplicated.

Because business today is so dynamic, organizations can benefit from a business-analytics infrastructure that minimizes costs and complexity while also ensuring high performance and end user satisfaction. An enterprise-level solution that provides the right information, at the right time and in the right context to all users helps increase trust in business-analytics tools and information to ensure project success.

The combination of IBM Cognos 10 and SPSS Predictive Analytics with IBM System z eliminates the barriers to a successful business-analytics initiative with a single solution, on a single platform, that is capable of scaling to meet a wide range of business-user needs. This solution facilitates the sharing of complete and accurate business information faster and better with fewer resources and expense. Because it’s flexible, it can help companies meet the business challenges of today and evolving business needs for actionable insights that help to optimize business performance.

Rebecca Wormleighton is an IBM product marketing manager for Cognos software, focusing on synergies between Cognos BI software and IBM products.

Complete, Consistent, Timely, Relevant

IBM Business Analytics on System z scales easily to meet the needs of every decision-maker, with capabilities such as real-time monitoring, reporting, analysis, dashboards, and a robust set of predictive analytics on a single platform. It is an end-to-end business-analytics infrastructure for providing a more complete view of the business and greater access to data as it’s created.

Increased Satisfaction  With IBM Business Analytics solutions on System z, system performance and high availability are guaranteed with an enterprise service-level agreement (SLA), ensuring the solution is up and running when it’s needed. Faster query and response times help meet user expectations.

Reduced Cost and Complexity  Business Analytics on System z centralizes resources, reducing the complexity of providing business analytics so business units can shift their focus from system administration tasks to decision-making. This decreases the amount of hardware, software and facilities (power, floor space and so on) required to manage and maintain the infrastructure. In addition, enterprises can see a decrease in the costs associated with system administration and facilities by upwards of 50 percent over five years.

Rapid Deployment and Expansion  Business Analytics on System z reduces the time, resources and cost of implementing and expanding business analytics, making it easier for IT departments to provide new divisions, departments and users with business analytics quickly.

Ensured Security  The secure and reliable infrastructure System z provides ensures corporate security policies are followed and disaster-recovery plans are in place. It’s also ideal for service delivery, which can help IT maintain better control over business processes.

Learn why System z is ideal for delivering business analytics.


Empathetic Circuits

A Cambridge team’s computers sense emotions

By Morgon Mae Schultz

As computers become more efficient and powerful, technology permeates areas of life that may seem unlikely beneficiaries of small, fast processors—like human emotion. At the University of Cambridge computer laboratory, a team of researchers is addressing what it calls the necessity that computers become socially and emotionally intelligent, developing computers that can sense and react to users’ emotions. Possible applications range from improved automobile dashboards to aids for those unable to interpret emotional cues due to autism. The Cambridge lab’s graphics and interaction team, led by Professor Peter Robinson, has tested systems that can infer a user’s emotional state from facial expressions, gestures and voice as accurately as the top 6 percent of humans.

Reading Minds

The ability to discern what another person is feeling, known in psychology as mind reading, crosses cultural boundaries. Scientists agree on the facial expressions that reveal six basic human emotions (happy, sad, angry, afraid, disgusted and surprised) as well as hundreds of subtler mental states (such as tiredness, joy or uncertainty). Robinson’s team uses probabilistic machine learning to train computers to recognize visual cues such as head tilt, mouth pucker and eyebrow raise. Such a system could inform a car when its driver is bothered, upset, bored or drowsy.
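The flavor of probabilistic inference described above can be illustrated with a toy naive-Bayes model over facial cues. The mental states, cues and likelihood numbers below are invented for illustration; the actual Cambridge system is trained on labeled video data and handles far richer inputs:

```python
# P(cue observed | mental state): hypothetical likelihoods for illustration.
LIKELIHOOD = {
    "bored":      {"head_tilt": 0.2, "eyebrow_raise": 0.1, "mouth_pucker": 0.3},
    "interested": {"head_tilt": 0.6, "eyebrow_raise": 0.7, "mouth_pucker": 0.1},
    "confused":   {"head_tilt": 0.7, "eyebrow_raise": 0.4, "mouth_pucker": 0.6},
}
PRIOR = {"bored": 1 / 3, "interested": 1 / 3, "confused": 1 / 3}

def infer(observed_cues):
    """Naive-Bayes-style posterior over mental states given observed cues."""
    scores = {}
    for state, prior in PRIOR.items():
        p = prior
        for cue in observed_cues:
            # Unseen cues get a small default likelihood instead of zero.
            p *= LIKELIHOOD[state].get(cue, 0.05)
        scores[state] = p
    total = sum(scores.values())
    return {state: p / total for state, p in scores.items()}

posterior = infer(["head_tilt", "eyebrow_raise"])
print(max(posterior, key=posterior.get))  # prints: interested
```

A driver-monitoring dashboard of the sort mentioned above would feed a stream of detected cues through such a model and act when the posterior for "bored" or "drowsy" climbs past a threshold.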

The team has applied the same mind-reading capabilities to an “emotional hearing aid,” a portable device designed to translate facial expressions into emotions and suggest appropriate reactions so people on the autism spectrum can relate to those around them. MIT is pursuing emotional-social prostheses, and Robinson says his team continues to research interventions for autism-spectrum conditions.

The inference system is as accurate as most people—70 percent. “There is potential for improvement. In some sense computers already are better than individual people, but there will always be difficulties establishing the ground truth in a subjective assessment,” Robinson says.

On a larger physical scale but more intimately customized, a gesture-reading system lets users control music through emotional body postures, creating an interactive, real-time soundtrack. Because emotional expression in gestures varies widely among individuals, it’s harder for machines to read whole-body cues than facial expressions. The system must tune itself to each new user and, just like humans, it reads large, dramatic movements more easily than subtle everyday gestures.

A Robot Named Charles

Not content with computers being able to recognize our mental states, Robinson’s team is working on machines that can synthesize emotion—express feelings in a way that triggers humans’ natural understanding. The complexities of human-human interaction present huge challenges to designing an appropriate human-robot interaction. One pioneer is Charles, an animated android head that Hanson Robotics built for the Cambridge team. Aiming for a more satisfying robot experience, Charles has a plastic face (modeled after its mathematician namesake, Charles Babbage) that can smile, grimace, raise its eyebrows and otherwise express a range of emotions.

Robinson says his team is always seeking commercial channels that could bring its technologies to consumers, including a major car manufacturer that may implement the emotional-inference system. “We are always talking to companies about the possibilities for commercial exploitation. I guess that something will appear when they see a good business case.”

You can see people interacting with Cambridge mind-reading machines here.

This video contains footage of Charles.

Morgon Mae Schultz is a copy editor for MSP TechMedia.


Align IT With Business

By Natalie Boike

The upcoming SHARE conference in Anaheim, Calif., (Feb. 27–March 4) is aimed at helping IT workers better align their departments with business needs, all with an eye toward containing costs. “The thought behind the topic is to ensure we’re not losing sight of the relationship between IT and business,” explains SHARE President Janet Sun. “Over time IT has gotten a bad rap as being inwardly focused and using new technology without focus on the business. I don’t think that’s always true.”

Related subtopics include IBM’s zEnterprise System, virtualization, application technologies and architectures, and security and privacy. Each subtopic will relate back to the main theme. For example, Sun says virtualization sessions will outline not only how the technology can make better use of existing resources, but also how it can be a cost-effective approach to delivering new services.

“A lot of our sessions focus on user experiences and draw out best practices from real-world practitioners,” Sun says. “We believe our attendees will take information from SHARE and it will help them contain costs and better align with business objectives.”

Keynote Sessions

As usual, SHARE has lined up three keynote speakers who will discuss wide-ranging IT topics.

Anjul Bhambhri, IBM VP of Big Data Products, will discuss how the industry is changing to handle larger amounts of data and larger numbers of data queries. Sun says Bhambhri will cover some of the technologies used in the IBM Watson supercomputer, which is slated to perform in a “Jeopardy!” challenge airing on Feb. 14, 15 and 16.

Dayton Semerjian, general manager at CA Technologies, will discuss the next generation of mainframe management. Additionally, IBM VP of Social Business and Collaboration Solutions and Social Media Evangelist Sandy Carter will outline how attendees can better leverage social media.

Other Highlights

New to SHARE is a conference within a conference targeted to IT executives, called ExecuForum. Held Feb. 28–March 1, its discussions and roundtables are designed to deliver true best practices from executive-level peers who have applied or are about to apply strategies within their own environments. For example, topics scheduled for discussion include securing and managing mobile devices, and social media and its impact on IT, Sun explains.

SHARE will also continue its online virtual conference. Those who can’t physically travel to the event can access four days of quality streaming content and six months of archived on-demand access to recorded sessions. Sun says on-site attendees will still gain the most benefit. “Face-to-face networking and communication is one of the historic strengths of SHARE, but SHARE online is a good way to join the community,” she says.

For more information or to register, visit

Email Natalie at [email protected]


Growing Mastery

After six years, the Master the Mainframe contest is still drawing an increasing number of new users to the IBM mainframe. A record 3,537 students from more than 400 schools across the United States and Canada participated in the most recent contest; the winners of the three-part contest were announced Jan. 26.

Mike Todd, with the IBM System z Academic Initiative, says the increased participation is likely a result of the current generation of students who realize mainframes offer a solid career path. “If students didn’t see a future in mainframe computing, we wouldn’t be seeing the kind of growth that the contest has experienced: from 750 students in 2005 to 3,537 students in 2010,” he says.

Patricio Reynaga, West Texas A&M University, took first place. Another college student, Jay Thomas, Pace University, was awarded second place. A high school student, Calvin MacKenzie, Arkansas School for Mathematics, Sciences and the Arts, received third place. Other top winners and honorable mentions are listed online.

Sponsored by the IBM Academic Initiative System z program, the Master the Mainframe Contest invites students to gain hands-on experience with the System z platform. The contest welcomes students who have never logged on to a mainframe system before, guiding them through the basic tasks required to successfully navigate the system.

Three-Fold Competition

The contest is divided into three increasingly difficult parts, allowing students to decide how deep they want to go into the inner workings of the mainframe. In Part 1, students learn to navigate the user interface. In Part 2, the challenges get much more difficult; students debug Job Control Language (JCL) errors, learn to navigate UNIX on the mainframe, alter C programs to produce different output, manipulate security protocols and learn more advanced system navigation. All of the students who complete Part 2 earn certificates of completion from IBM and receive invitations to upload their resumes to the IBM Student Opportunity System (a resume database accessible to all IBM Business Partners and clients).

Todd says participants receive everything they need to log on to a mainframe system for the first time: screenshots, detailed instructions and a healthy dose of encouragement. “We understand it can be intimidating to tackle a brand new platform, and we do our best to ease students into the world of enterprise computing,” he adds. As students progress through the contest, they learn skills needed in future challenges.

In Part 3, students are faced with problems that have flummoxed systems programmers in the real world. IBM clients have also pitched in ideas for skills they’d like to see students acquire during the contest, such as more experience with VSAM data sets. “Students must bring tenacity, dedication and technical ability to succeed in the contest,” Todd says, “but they don’t need to have a background in mainframe-specific technologies.” In fact, past winners have started the contest with no mainframe experience at all.

“Students see that we’re teaching them real-world skills that are in demand with many of the world’s largest companies, and they’re drawn to the contest in part because it’s fun and they can win prizes, but also because they’re picking up skills that can give them an advantage in the job market,” he adds.

Adding Rational Developer for System z

This year, for the first time, students were invited to download and install Rational Developer for System z (RDz), an Eclipse-based integrated development environment (IDE), to complete the Part 3 challenges. RDz is a more familiar environment to most students than a traditional 3270 emulator, Todd says. Contest organizers, who had expected technical-support issues to arise, were surprised at how quickly students became productive with RDz. Since no support issues arose, RDz will likely be a component of future Master the Mainframe competitions.

Get Involved

The 2011 competition kicks off the Tuesday after Labor Day, with the contest beginning in early October. Students can keep up with the latest on the contest at, and also on the Facebook fan page.

Even if a contest isn’t running in your country (or if you’d like to practice for an upcoming one), students and educators have worldwide, no-cost mainframe access through the IBM Academic Initiative System z. Visit the website for details.

Email Natalie at [email protected].

Backing Up Cloud

I miss the good old days when I had maintenance windows that were long enough that I could bring my machine down to single user mode and back up the whole system. These backups contained all of the data that mattered to the company at the time. Twenty years ago, I could only back up my machine with reel-to-reel tape drives. I'd bring my machine down to single-user mode to perform the backup, and each tape backup would take 12 minutes. I remember this because we would set the time on a portable kitchen timer when we started each tape. When the timer went off, we'd head to the computer room to swap out the tape, and go to the console to “press G to continue the backup.” All of the important data lived on that one machine. We didn’t worry about distributed computing environments, as we weren’t running any at the time. Sure we had a few PCs scattered here and there, but they weren’t critical. The entire company and all of its data lived on that central machine, and users who sat in front of green-screen dumb terminals accessed it. There wasn’t any data that users stored locally; it was all stored on the machine in the computer room.

When I hear about cloud computing, this is still the kind of environment I picture: where people are logged into a central machine that exists in a computer room in the sky. I use several Web-based applications like Google Mail, where I know nothing about the servers nor where the applications run, and I don’t necessarily care about the hardware or operating systems the applications use. I log in, use the service and log out. I often find myself logging into the IBM virtual loaner program website, where I can utilize slices of IBM hardware for short periods of time for demonstrations, proofs of concept or education.

I’ve worked with companies that have cloud offerings, where I can very easily log in, spin up some resources on their servers and then spin them back down when I am finished with them. As long as my response time is acceptable, do I really care about the physical hardware these virtual instances run on?

I’ve also had customers who were unable to get resources to test hardware in their environment. Using the cloud, they were able to log on to a cloud provider, spin up some server resources, do the testing that they needed and spin the resources back down—all without waiting for their internal IT departments to acquire and configure hardware for them. This would also benefit users who have test hardware several generations behind what they’re using in production. Instead of using old hardware, they can use more modern machines in virtual environments as needed.

Consider This

There are benefits to cloud computing, but there may be a few things to contemplate when considering a leap from your own computing assets to those that you don’t control. I realize that these days we’re usually accessing cloud-based applications over the Internet instead of from a green screen directly attached to computers in the machine room, but concerns like privacy, security and availability need to be considered along with all the benefits that are touted with the cloud.

Backup and recovery is another consideration when deploying services to the cloud. How do we back up our data that lives in the cloud? Surely cloud providers offer snapshots and local backups, and maybe that’s good enough for what you’re doing. If you wanted to copy your data to machines that are under your control, would you use the network and some kind of continuous data protection in order to move data from the cloud to machines you own so that you have another copy of it? Or would that method of data protection defeat the purpose of having someone else handling infrastructure management?
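As a minimal sketch of what pulling that second copy down might look like, assuming the provider’s data is reachable as a mounted or synced directory (every path here is invented for illustration), a script can mirror the remote tree locally and verify each copy by checksum:

```python
# Hypothetical sketch: keep a local, verified copy of data that lives in the
# cloud. Assumes the provider exposes files as a mounted or synced directory
# (e.g. via rsync, SFTP or a FUSE mount); all paths are invented.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def mirror(source: Path, dest: Path) -> list:
    """Copy every file under source to dest, verify each copy against the
    original's checksum, and return the list of files copied."""
    copied = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        target = dest / src_file.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, target)  # copy contents plus timestamps
        if sha256(src_file) != sha256(target):
            raise IOError("checksum mismatch for %s" % src_file)
        copied.append(target)
    return copied
```

In practice you would schedule something like this (or simply `rsync --checksum`) and rotate the copies; the point is only that an independent, restorable copy exists outside the provider’s control.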

What happens if somewhere down the road you decide you want to get out of the cloud? Are there going to be issues with getting your data or OS images back under your control? Can you easily clone the systems back onto your own hardware or will you be looking at server reloads?

I have watched customers struggle with liberating their data from outsourcing companies and contracts. The companies that manage the machines have custom tools and scripts that they don’t want to hand over. They may have information about how the machines were configured that they don’t want to share. What’s your plan to get out of the cloud or move to another cloud provider if you find the one you are using isn’t for you? What do you do if the service you’re using goes down, or the company goes out of business, or they change the interface so much that you no longer like the way you use the tool? Will upgrades and outages happen on your timetable or on theirs? When you get used to accessing servers and applications from anywhere there’s a network connection, and then the provider has an outage, you want to be sure the provider offers information and status updates on when it expects to recover the systems.

I enjoyed reading a blog post from John Scalzi, who was trying an experiment where he would exclusively use Google Docs and a Google laptop computer to write a novel. Technical glitches began causing delays, and he eventually returned to working from his desktop, saying, “Until ‘the cloud’—and the services that run on them—can get out of your way and just do things like resident programs and applications can, it and they are going to continue to be second-place solutions for seriously getting work done.”

Return to Centralization

While there are definite advantages to the cloud-computing approach in some situations, I can’t help but think that the whole idea has a “Back to the Future” feel to it, where we take distributed computing resources and try to centralize them again, or worse yet, rebrand existing offerings as cloud offerings so we can say we’re on the cloud bandwagon. Certainly there are going to be applications and situations that will benefit from moving applications out of data centers. We just need to be sure to do our homework and educate ourselves before making the leap.

Rob McNelly is a System p Solutions Architect for Meridian IT Inc. and is a former administrator for IBM. Rob can be reached at [email protected].