Don't Look Now: Why Worry That Machines Can Read Your Emotions?

Can a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis the US Transportation Security Administration (TSA) put to the test in 2003, when it began trialing a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short.
While developing the program, the agency consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method for identifying minute facial expressions and mapping them onto corresponding emotions. This method was used to train "behavior detection officers" to scan faces for signs of deception.
But when the program was rolled out in 2007, it was beset with problems. Officers were flagging passengers for questioning more or less at random, and the small number of arrests that followed were on charges unrelated to terrorism. Even more worrying, the program was allegedly used to justify racial profiling.
Ekman tried to distance himself from Spot, claiming his method had been misapplied. Others, however, argued that the program's failure stemmed from the outdated scientific theory underpinning Ekman's method: namely, that emotions can be objectively deduced through analysis of the face.
In recent years, technology companies have begun using Ekman's method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion-detection systems will not only be better than humans at discerning true emotions through facial analysis, but that these algorithms will become attuned to our innermost feelings, vastly improving how we interact with our devices.
But many experts who study the science of emotion worry that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science.

Your face: a $20 billion industry

Monitors display a video feed being analyzed with facial recognition software at the headquarters of the artificial intelligence company Megvii in Beijing. Photo: New York Times
Emotion-detection technology requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms to analyze and interpret the emotional content of those facial features.
Typically, the second step employs a technique called supervised learning, a process by which an algorithm is trained to recognize things it has seen before. The basic idea is that if you show the algorithm thousands upon thousands of images of happy faces labeled "happy", then when it encounters a new picture of a happy face, it will again identify it as "happy".
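As a rough illustration of that second step, the sketch below trains a simple classifier on labeled examples and then asks it to label a face it has never seen. It is a minimal, hypothetical stand-in using scikit-learn and random placeholder data; real systems of this kind train deep networks on millions of human-labeled face images.

```python
# A minimal sketch of supervised learning for emotion labeling.
# The data here is random placeholder material, not real face images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 48 * 48))           # stand-in for flattened 48x48 face crops
y = rng.choice(["happy", "angry"], 1000)  # stand-in for human-assigned labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Fit a simple classifier on the labeled examples...
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then ask it to label a face it has never seen before.
print(clf.predict(X_test[:1]))  # e.g. ['happy']
```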
Rana el Kaliouby, then a graduate student, was one of the first people to experiment with this approach. In 2001, after moving from Egypt to Cambridge University to pursue a PhD in computer science, she found that she was spending more time with her computer than with other people. She figured that if she could teach the computer to recognize and react to her emotional state, her time away from family and friends would feel less lonely.
Kaliouby went on to devote the rest of her doctoral studies to this problem, eventually developing a device that helped children with Asperger syndrome read and respond to facial expressions. She called it the "emotional hearing aid".
In 2006, Kaliouby joined the Affective Computing lab at the Massachusetts Institute of Technology, where she teamed up with the lab's director, Rosalind Picard, to continue improving and refining the technology. In 2009, they co-founded a startup called Affectiva, the first business to market "artificial emotional intelligence".
At first, Affectiva sold its emotion-detection technology as a market-research product, offering real-time emotional reactions to ads and products. They landed clients such as Mars, Kellogg's, and CBS. Picard left Affectiva in 2013 to become involved in a different biometrics startup, but the business kept growing, as did the industry around it.
Amazon, Microsoft, and IBM now advertise "emotion analysis" as one of their facial recognition products, and a number of smaller firms, such as Kairos and Eyeris, have sprung up offering services similar to Affectiva's.
Beyond market research, emotion-detection technology is now being used to monitor and detect driver impairment, to test the user experience of video games, and to help doctors assess the wellbeing of patients.
Kaliouby, who has watched emotion detection grow from a research project into a $20 billion industry, feels confident that this growth will continue. She predicts a time in the not-too-distant future when this technology will be ubiquitous, integrated into all of our devices and able to "tap into our visceral, unconscious, moment-to-moment responses."

A database of 7.5 million faces from 87 countries

Visitors check their phones beneath a screen demonstrating facial recognition software during the Global Mobile Internet Conference (GMIC). Photo: Damir Sagolj / Reuters
As with most machine learning applications, the progress of emotion recognition depends on access to higher quality data.
According to Affectiva's website, the company holds the world's largest emotion data repository, with more than 7.5 million faces from 87 countries. Most of this footage comes from opt-in recordings of people as they watch TV or go about their daily commute.
These videos are sorted through by the 35 labelers working in Affectiva's Cairo office, who watch the footage and translate facial expressions into their corresponding emotions: if they see lowered brows, tightened lips, and bulging eyes, for instance, they attach the label "anger". This labeled dataset of human emotions is then used to train Affectiva's algorithm, which learns to associate scowling faces with anger and smiling faces with happiness.
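To make the idea concrete, here is a toy version of that kind of labeling rule, mapping combinations of facial "action units" (the coded muscle movements in Ekman and Friesen's system, described below) to emotion labels. The rule table is a simplified, hypothetical subset for illustration only; Affectiva's actual pipeline learns such associations statistically from its labeled data.

```python
# Toy mapping from facial action units (AUs) to emotion labels.
# AU numbers follow the Facial Action Coding System; the rule table
# is a simplified, hypothetical subset.
EMOTION_RULES = {
    frozenset({4, 5, 23}): "anger",        # brow lowerer, upper-lid raiser, lip tightener
    frozenset({6, 12}): "happiness",       # cheek raiser, lip-corner puller (a smile)
    frozenset({1, 2, 5, 26}): "surprise",  # raised brows, widened eyes, dropped jaw
}

def label_expression(active_aus: set) -> str:
    """Return the first emotion whose action units are all present."""
    for aus, emotion in EMOTION_RULES.items():
        if aus <= active_aus:
            return emotion
    return "neutral"

# Lowered brows (AU4), widened eyes (AU5), tightened lips (AU23) -> "anger"
print(label_expression({4, 5, 23}))
```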
This labeling method, which many in the emotion-detection industry consider the gold standard for measuring emotion, is derived from a system called the Emotion Facial Action Coding System (Emfacs), which Paul Ekman and Wallace V. Friesen developed in the 1980s.
The scientific roots of the system can be traced back to the 1960s, when Ekman and two colleagues hypothesized that there are six universal emotions – anger, disgust, fear, happiness, sadness, and surprise – that are hardwired into us and can be detected across all cultures by analyzing the muscle movements of the face.
To test this hypothesis, they showed photographs of faces to diverse population groups around the world and asked them to identify the emotions they saw. They found that, despite enormous cultural differences, people matched the same facial expressions to the same emotions: a face with lowered brows, tightened lips, and bulging eyes meant "anger" to a banker in the United States and to a semi-nomadic hunter in Papua New Guinea.
Over the next two decades, Ekman drew on these findings to develop his method for identifying facial movements and mapping them onto emotions. The underlying premise was that whenever a universal emotion was triggered in a person, the associated facial movement would automatically show on the face. Even if the person tried to mask the feeling, the true, instinctive emotion would "leak through" and could therefore be perceived by anyone who knew what to look for.
Throughout the second half of the 20th century, this theory – known as the classical theory of emotions – came to dominate the science of emotion. Ekman made his emotion-detection method proprietary and sold it as a training program to the CIA, the FBI, Customs and Border Protection, and the TSA. The idea that true emotions are readable on the face even seeped into popular culture, forming the basis of the TV show Lie to Me.
Nevertheless, many scientists and psychologists who study the nature of emotion have questioned the classical theory and Ekman's associated detection methods.

Recruiting firms are already using these techniques to screen whether a candidate is "a good hire or not". Photo: John Lund / Getty Images / Blend Images
In recent years, Lisa Feldman Barrett, a professor of psychology at Northeastern University, has voiced strong and persistent criticism.
Barrett first encountered the classical theory as a graduate student. Needing a method for measuring emotion objectively, she came across Ekman's methods. On reviewing the literature, however, she began to worry that the underlying research methodology was flawed. In particular, she thought that by supplying subjects with pre-selected emotion labels to match to the photographs, Ekman had inadvertently primed them to give particular answers.
She and a group of colleagues tested this hypothesis by re-running Ekman's tests without providing any labels, letting subjects freely describe the emotion they saw in each image. The correlation between specific facial expressions and specific emotions dropped.
Barrett has since developed her own theory of emotions, laid out in her book How Emotions Are Made: The Secret Life of the Brain. She argues that there are no universal emotions located in the brain and triggered by external stimuli. Rather, each experience of emotion is constructed out of more basic parts.
"They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment," she writes. "Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real – that is, hardly an illusion, but a product of human agreement."
Barrett explains that it makes no sense to talk of mapping facial expressions directly onto emotions across all cultures and contexts. While one person might scowl when angry, another might smile politely while plotting their enemy's downfall. For this reason, assessing emotion is best understood as a dynamic practice that involves automatic cognitive processes, person-to-person interaction, embodied experience, and cultural competence. "That sounds like a lot of work, and it is," she says. "Emotions are complicated."
Kaliouby agrees that emotions are complex, which is why she and her team at Affectiva are constantly trying to improve the richness and complexity of their data. Besides using video rather than still images to train their algorithms, they are experimenting with capturing more contextual data, such as voice, gait, and tiny changes in the face that take place beyond human perception. She is confident that better data will mean more accurate results. Some studies even claim that machines already outperform humans at detecting emotions.
For Barrett, however, the problem is not just the data but how the data is labeled. The labeling process that Affectiva and other emotion-detection companies use to train their algorithms can only identify what Barrett calls "emotional stereotypes" – like emojis, symbols that fit a well-known theme of emotion within our culture.
According to Meredith Whittaker, co-director of the New York University-based research institute AI Now, building machine learning applications on Ekman's outdated science is not just bad practice: it translates into real social harm.
"You're already seeing recruitment companies using these techniques to gauge whether a candidate is a good hire or not. You're also seeing experimental techniques being proposed in school environments to determine whether a student is engaged, bored, or angry in class," she says. "This information could be used in ways that stop people from getting jobs or shape how they are treated and assessed at school, and if the analysis isn't extremely accurate, that's tangible material harm."
Kaliouby says that she is aware of the ways emotion recognition can be misused and takes the ethics of her work seriously. "Having a dialogue with the public about how this all works, and where it should and should not be applied, is critical," she told me.
Having worn the hijab herself, Kaliouby is also aware of the importance of deliberately building diverse datasets. "We make sure that when we train these algorithms, the training data is diverse," she says. "We need representation of Caucasians, Asians, darker skin tones, even people wearing the hijab."
This is why Affectiva collects data from 87 countries. Through this process, they have noticed that emotional expression seems to take on different intensities and nuances in different countries. Brazilians, for example, use broad, long smiles to convey happiness, Kaliouby says, while in Japan a smile indicates not happiness but politeness.
Affectiva has accounted for this cultural nuance by adding another layer of analysis to its system, compiling what Kaliouby calls "ethnically based benchmarks" – codified assumptions about how emotions are expressed within different ethnic cultures.
But it is precisely this kind of algorithmic judgment, based on markers such as ethnicity, that most worries Whittaker about emotion recognition technology, suggesting as it does a future of automated physiognomy. Indeed, there are already companies claiming to predict how likely someone is to become a terrorist or a pedophile, as well as researchers claiming their algorithms can detect sexuality from the face alone.
Several recent studies have also shown that facial recognition technologies reproduce biases that disproportionately harm minority communities. One study published last December found that emotion-detection technology assigns more negative emotions to black people's faces than to white counterparts.
When I raised these concerns with Kaliouby, she told me that Affectiva's system does have an "ethnicity classifier", but that the company is not using it right now. Instead, it uses geography as a proxy for where someone is from. That means Brazilian smiles are compared against Brazilian smiles, and Japanese smiles against Japanese smiles.
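A minimal sketch of what such region-based benchmarking might look like is below: a raw expression score is compared against a per-country baseline rather than a single global one. The baseline values and the function itself are hypothetical illustrations, not Affectiva's actual model.

```python
# Hypothetical per-country baselines for smile intensity, illustrating
# the idea of scoring an expression against a regional benchmark.
REGION_BASELINES = {
    "BR": 0.62,  # broad, long smiles are common in the Brazilian data
    "JP": 0.45,  # smiles in the Japanese data often signal politeness
}

def benchmarked_score(smile_intensity: float, country: str) -> float:
    """Compare a raw smile score against the viewer's regional baseline."""
    baseline = REGION_BASELINES.get(country, 0.5)  # global fallback
    return smile_intensity - baseline

# The same raw smile reads as stronger against the Japanese baseline
# than against the Brazilian one.
print(benchmarked_score(0.6, "BR"))  # roughly -0.02: near the Brazilian norm
print(benchmarked_score(0.6, "JP"))  # roughly  0.15: well above the Japanese norm
```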
"What if there was a Japanese person in Brazil?" I asked. "Wouldn't the system think they were Brazilian, and miss the nuance of the politeness smile?"
“At this point,” she admitted, “the technology is not 100% foolproof.”
