A Cambridge team’s computers sense emotions
By Morgon Mae Schultz
As computers become more efficient and powerful, technology permeates areas of life that may seem unlikely beneficiaries of small, fast processors, such as human emotion. At the University of Cambridge Computer Laboratory, a team of researchers is addressing what it calls the necessity for computers to become socially and emotionally intelligent, developing machines that can sense and react to users' emotions. Possible applications range from improved automobile dashboards to aids for people who cannot interpret emotional cues because of autism. The Cambridge lab's graphics and interaction team, led by Professor Peter Robinson, has tested systems that can infer a user's emotional state from facial expressions, gestures and voice as accurately as the top 6 percent of humans.
Reading Minds
The ability to discern what another person is feeling, known in psychology as mind reading, crosses cultural boundaries. Scientists agree on the facial expressions that reveal six basic human emotions (happy, sad, angry, afraid, disgusted and surprised) as well as hundreds of subtler mental states (such as tiredness, joy or uncertainty). Robinson's team uses probabilistic machine learning to train computers to recognize visual cues such as head tilt, mouth pucker and eyebrow raise. Such a system could inform a car when its driver is bothered, upset, bored or drowsy.
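The article does not describe the team's implementation, but the general idea of probabilistic inference over facial cues can be sketched in a few lines of Python. The cue names, mental-state labels and probabilities below are invented placeholders for illustration, not the Cambridge model or its data.

```python
# Minimal sketch (not the Cambridge system): naive Bayes inference that maps
# observed facial cues to the most probable mental state. All cue names,
# state labels and probabilities are hypothetical placeholders.
from math import log

# P(cue is present | mental state) -- made-up "training" estimates
LIKELIHOODS = {
    "concentrating": {"head_tilt": 0.7, "mouth_pucker": 0.2, "eyebrow_raise": 0.1},
    "surprised":     {"head_tilt": 0.2, "mouth_pucker": 0.1, "eyebrow_raise": 0.9},
    "uncertain":     {"head_tilt": 0.6, "mouth_pucker": 0.5, "eyebrow_raise": 0.4},
}
PRIORS = {"concentrating": 0.4, "surprised": 0.2, "uncertain": 0.4}

def infer_state(observed_cues: set[str]) -> str:
    """Return the mental state with the highest posterior score."""
    best_state, best_score = None, float("-inf")
    for state, cue_probs in LIKELIHOODS.items():
        score = log(PRIORS[state])
        for cue, p in cue_probs.items():
            # Use P(cue present) or P(cue absent) depending on the observation.
            score += log(p if cue in observed_cues else 1.0 - p)
        if score > best_score:
            best_state, best_score = state, score
    return best_state

print(infer_state({"head_tilt", "eyebrow_raise"}))  # "uncertain" with these numbers
```

A real system would work with many more cues tracked over time and with probabilities learned from labeled video, but the principle is the same: combine uncertain visual evidence into a single best guess about the user's state.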
The team has applied the same mind-reading capabilities to an “emotional hearing aid,” a portable device designed to translate facial expressions into emotions and suggest appropriate reactions so people on the autism spectrum can relate to those around them. MIT is pursuing emotional-social prostheses, and Robinson says his team continues to research interventions for autism-spectrum conditions.
The inference system is as accurate as most people—70 percent. “There is potential for improvement. In some sense computers already are better than individual people, but there will always be difficulties establishing the ground truth in a subjective assessment,” Robinson says.
On a larger physical scale but more intimately customized, a gesture-reading system lets users control music through emotional body postures, creating an interactive, real-time soundtrack. Because emotional expression in gestures varies widely among individuals, it’s harder for machines to read whole-body cues than facial expressions. The system must tune itself to each new user and, just like humans, it reads large, dramatic movements more easily than subtle everyday gestures.
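As a rough illustration of what "tuning itself to each new user" could involve, the sketch below scores each movement against a personal baseline recorded during a short calibration phase. The single movement-energy feature and the numbers are assumptions for demonstration only, not the team's actual method.

```python
# Illustrative sketch only: score gestures relative to each user's own baseline,
# so a reserved user's big gesture and an animated user's big gesture compare fairly.
from statistics import mean, stdev

class GestureCalibrator:
    def __init__(self, calibration_samples: list[float]):
        # calibration_samples: e.g. overall movement energy measured while the
        # user behaves neutrally for a few seconds (placeholder feature).
        self.baseline = mean(calibration_samples)
        self.spread = stdev(calibration_samples) or 1.0

    def expressiveness(self, movement_energy: float) -> float:
        """How many of this user's own standard deviations above baseline a movement is."""
        return (movement_energy - self.baseline) / self.spread

calm_user = GestureCalibrator([0.8, 1.0, 0.9, 1.1, 0.95])
print(calm_user.expressiveness(2.5))  # large positive value: dramatic, easy to read
print(calm_user.expressiveness(1.0))  # near zero: subtle, harder to classify
```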
A Robot Named Charles
Not content with computers that can recognize our mental states, Robinson's team is also working on machines that can synthesize emotion, expressing feelings in a way that triggers humans' natural understanding. The complexities of human-human interaction present huge challenges to designing appropriate human-robot interaction. One pioneer is Charles, an animated android head that Hanson Robotics built for the Cambridge team. Aiming for a more satisfying robot experience, the team gave Charles a plastic face (modeled after its mathematician namesake, Charles Babbage) that can smile, grimace, raise its eyebrows and otherwise express a range of emotions.
Robinson says his team is always seeking commercial channels that could bring its technologies to consumers, including a major car manufacturer that may implement the emotional-inference system. “We are always talking to companies about the possibilities for commercial exploitation. I guess that something will appear when they see a good business case.”
Morgon Mae Schultz is a copy editor for MSP TechMedia.