[Netease Intelligence News, January 23] Every January, about 4,500 companies descend on Las Vegas for the Consumer Electronics Show (CES).
The 2019 show was much like those of years past: companies pitched their ideas with gusto, and the wildest products lit up Twitter. This year's trends included drones, voice-controlled home assistants, and 8K televisions. But the most striking exhibits were machines that claim to read human faces, recognizing a person's emotions and health from a single picture.
One machine interpreted a photo of 36-year-old technology editor Stan Horaczek as "charming, 30 years old, resembling Kwon Chi-long" (two out of three isn't bad). Another pegged him as "47 years old, 98% probability male," and both readouts were padded with plenty of emoji.
But some of the ideas could have a profound effect on daily life. Intel showed a development kit aimed at creating a wheelchair controlled by facial expressions (blink to turn left, pucker your lips to turn right), which could meaningfully improve mobility.
Veoneer, meanwhile, presented a facial-expression-recognition concept for self-driving cars: the system reads the driver's expression to judge whether they are drowsy or distracted on the road. Other exhibitors said they intend to automate parts of a doctor's diagnosis, peering into a user's face to infer their condition.
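Veoneer has not published the internals of its system, but a common heuristic in driver-monitoring research is the "eye aspect ratio" (EAR): the ratio of an eye's vertical opening to its width, computed from facial landmarks detected upstream by a vision model. A low EAR sustained across consecutive video frames suggests the eyes are closing. The sketch below assumes this generic approach; the function names, landmark layout, and threshold values are illustrative, not Veoneer's implementation.

```python
from math import dist

def eye_aspect_ratio(landmarks):
    """landmarks: six (x, y) points around one eye, ordered
    p1 (left corner), p2, p3 (upper lid), p4 (right corner),
    p5, p6 (lower lid). Returns vertical opening / eye width."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = dist(p2, p6) + dist(p3, p5)  # two eyelid gaps
    horizontal = dist(p1, p4)               # corner-to-corner width
    return vertical / (2.0 * horizontal)

def looks_drowsy(ear_history, threshold=0.2, min_frames=15):
    """Flag drowsiness if EAR stays below `threshold` for
    `min_frames` consecutive frames (values here are illustrative,
    not tuned for any real camera setup)."""
    run = 0
    for ear in ear_history:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False

# A wide-open eye yields a high EAR; a nearly closed eye, a low one.
open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
print(round(eye_aspect_ratio(open_eye), 2))  # 0.67
```

Real systems combine this kind of signal with head pose and gaze direction before deciding the driver is inattentive; a single-frame ratio alone is too noisy to act on.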
The products on display at CES may be new, but attempts to read faces go back to antiquity. According to Sarah Waldorf of the J. Paul Getty Trust, the Greek mathematician Pythagoras selected his students based on how gifted they looked. In the fifteenth century, the scarlet birthmark on the face of James II of Scotland was taken as an outward sign of his fiery temper. In colonial-era Europe, many scientists lent credibility to racist caricatures that linked human features to animal behavior.
Physiognomy, the belief that the lines of our faces reveal hidden information about us, has never really disappeared. Writing in The New York Times Magazine, Teju Cole argues that this belief is embedded in every portrait photograph: we read portraits as if we were reading the subject's inner life, we talk about strength and vulnerability, we praise a strong chin, we take a high forehead for intelligence. We slide all too easily from people's facial features to their personalities.
But what can two eyes, a mouth, and a nose really tell us?
Humans can't literally read faces, but we are remarkably good at interpreting other people's emotions.
Lisa Feldman Barrett, an expert in psychology and affective neuroscience at Northeastern University, is skeptical of the claim that technology can read your emotions from your face: recognizing a face is not the same as interpreting an expression, which depends on the specific situation.
Consider the frown, generally taken as a sign of displeasure. "You can see this in the movie Inside Out," Barrett said. "Anger looks the same in everyone's brain... It's a stereotype that people believe." The strong evidence, however, points elsewhere: people frown when they are angry, but only sometimes, and people often frown when they are not angry at all, which makes a frown a poor basis for judging anger.
This underscores the importance of context. We constantly analyze other people's body language and facial expressions, even their pitch and intonation. As we observe, we weigh what has just happened, what is happening now, and what might happen next. We even consult our own bodies, Barrett emphasizes, drawing on what we feel, see, and think. Some people are better at this than others, and certain factors sharpen your judgment in a given interaction: if you know someone well and have learned through long contact how their particular emotions tend to show, you are more likely to interpret their frown correctly.
But none of this is really reading someone's face. "It's actually a bad metaphor," Barrett said, because we don't find psychological meaning written in individual behaviors; we infer it largely from context. At best, you are collaborating with the other person's face, building something new out of the data (a curled lip) and your own preconceptions, a process that robots currently can neither observe nor understand.
These innate abilities help us judge and understand others and convey our own feelings. But they can also lead us astray. "The literature shows we tend to overestimate our face-reading ability," said Professor Brad Duchaine of Dartmouth. For example, people make strikingly consistent judgments about who looks trustworthy and who doesn't, but those judgments do not seem to predict actual trustworthiness.
Trying to glean personal health from people's faces is just as fraught.
Ian Stephen, a lecturer at Macquarie University in Sydney, Australia, works within an evolutionary framework to study how physiology shows up on our faces. He has found that face shape can predict factors such as BMI and blood pressure. His most interesting finding concerned not shape but color: study participants judged skin with yellower pigmentation as healthier. Stephen believes this tracks carotenoids (the orange pigments we get from eating plenty of fruits and vegetables) and oxygen-rich blood (the warm tones that cardiovascular problems diminish), two very real markers of health.
Most of these judgments happen subconsciously. In Pride and Prejudice, Elizabeth arrives glowing after a three-mile hike, and Mr. Darcy finds her all the more attractive for it. But Darcy did not consciously connect her attractiveness to oxygenated blood or reproductive fitness; he simply responded to what he saw. Superficial as that may seem, Jane Austen's romance points at a deeper truth: "Faces judged attractive are also judged healthy," Stephen said.
Many evolutionary biologists believe this conflation of physical health and perceived beauty is useful: it helps animals, at least in theory, choose mates and reproduce. But it is far from infallible. Beauty is diverse in many ways, and crucially, it is shaped by culture. Americans, for instance, prize thinness and denigrate fatness, yet thin people can be unhealthy and fat people can be perfectly healthy. These biases have real consequences: on appearance alone, fat people, women, and people of color face discrimination everywhere from the workplace to the emergency room.
In many people's eyes, beauty overrides everything else. According to a 2017 study in Nature, men's perceived health is predicted by facial averageness, symmetry, and skin yellowness, while women's perceived health is predicted by facial femininity.
A nineteenth-century book on faces depicting two emotions. Left: utter despair. Right: a mixture of anger and fear.
Some think machine face-reading could help free people from their own biases. Others fear it will entrench discrimination beyond anyone's control. In a recent study published in Nature Medicine, researchers affiliated with the company FDNA used artificial intelligence to identify genetic disorders from photographs of children's faces. The project, DeepGestalt, was trained on a dataset of 17,000 images and can recognize more than 200 syndromes.
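DeepGestalt itself is a deep convolutional network whose internals are not reproduced here, but the overall shape of the task, turning a face that has been reduced to a feature vector into a ranked list of candidate syndromes, can be sketched with a toy nearest-centroid classifier. The syndrome names, the two-dimensional feature space, and all numbers below are invented for illustration.

```python
from math import dist
from statistics import mean

def train_centroids(labeled_vectors):
    """labeled_vectors: list of (label, feature_vector) pairs.
    Returns one average ("centroid") vector per label."""
    by_label = {}
    for label, vec in labeled_vectors:
        by_label.setdefault(label, []).append(vec)
    return {label: tuple(mean(dim) for dim in zip(*vecs))
            for label, vecs in by_label.items()}

def rank_syndromes(centroids, vec):
    """Rank labels by distance from vec, nearest (most likely) first.
    A real system would emit calibrated probabilities instead."""
    return sorted(centroids, key=lambda label: dist(centroids[label], vec))

# Made-up training data: face photos already reduced to 2-D features
# by some upstream model (in DeepGestalt's case, a deep CNN).
training = [
    ("syndrome_A", (0.9, 0.1)), ("syndrome_A", (0.8, 0.2)),
    ("syndrome_B", (0.1, 0.9)), ("syndrome_B", (0.2, 0.8)),
]
centroids = train_centroids(training)
print(rank_syndromes(centroids, (0.85, 0.15)))  # syndrome_A ranked first
```

The clinical version of this idea returns the whole ranked list rather than a single answer, precisely because, as Stephen notes below, a doctor is still needed to interpret the result.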
Depending on where you stand on the technology, DeepGestalt may inspire hope or dread. Stephen said that although doctors are still needed to interpret the results, the machine could offer a safer, more convenient way to diagnose many diseases. But its use raises serious ethical problems. Information that used to be private may become easy to extract, Stephen said. Will companies start pulling photos from your Facebook profile, analyzing previously undetectable risks, and then denying you insurance or charging you more?
Similar facial-recognition algorithms raise privacy problems of their own. In 2017, GLAAD and other human-rights advocacy groups condemned an effort to build an AI "gaydar." One critic called the algorithm the equivalent of a 13-year-old bully.
The fear of people, or machines, that can read the thoughts and emotions we want to hide is a foundational strain of technological anxiety, crystallized in George Orwell's novel 1984. But in this field, imagination still runs far ahead of the technology.
Feldman Barrett believes that genuinely skilled emotion-reading machines are achievable, but the companies currently selling them do not seem able to build them. Computers are getting better and better at vision, that is, at detecting behavior. "Unfortunately, (programmers) think that detecting a behavior means detecting an emotion." For a real breakthrough, she said, it is not only the technology that has to change in certain ways, but the mindset and its presuppositions.