Nature self-driving car survey: when a crash is unavoidable, whom should the car hit?

[Netease Intelligent News, Nov. 3] According to foreign media reports, when a driverless car is about to crash on a busy street, whom should it prioritize sparing, and whom may it sacrifice? A study from the Massachusetts Institute of Technology shows that your answer depends on where you come from.

In 2016, researchers at the Massachusetts Institute of Technology's Media Lab launched an experiment. They wanted to know how people think driverless cars should behave, so they built a website where anyone could work through 13 different self-driving-car scenarios: in an unavoidable crash, should the car prioritize sparing the young? Should it favor sparing women? Should it spare the fit over the overweight? Or should it make no choice at all and simply not act?

Two years later, the researchers had analyzed 39.61 million decisions made by 2.3 million participants from 233 countries and territories. In a new study published in the journal Nature, they show that our sense of right and wrong about how machines should treat us is shaped by the economic and cultural norms of where we live. They found three broad geographical regions with distinct ethical outlooks: Western (including North American and European Christian countries), Eastern (including the Far East and Islamic countries) and Southern (including most of South America and countries with French influence). These large groups also contain subgroups, such as Scandinavia within the West and the Latin American countries within the South. As the study's interactive chart shows, Brazilians tend to protect passengers rather than pedestrians; Iranians are more likely to give pedestrians priority; and Australians are more likely than average to protect the fit over the overweight.
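
These regional groupings come from clustering countries by their aggregate answers. As a rough illustration of the idea only, not the paper's actual pipeline, the sketch below hierarchically clusters a handful of invented per-country preference vectors; all country names and numbers here are placeholders.

```python
# Minimal sketch of how country-level clusters like the study's
# Western / Eastern / Southern groups can emerge: hierarchical
# clustering of per-country preference vectors. All numbers are
# invented for illustration; the real study used aggregated
# Moral Machine responses per country.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["Sweden", "USA", "Japan", "Iran", "Brazil", "France"]
# Hypothetical preference vectors: [spare young, spare pedestrians,
# spare more lives], each as a deviation from the global mean.
prefs = np.array([
    [ 0.30,  0.10, 0.25],   # Sweden
    [ 0.35,  0.05, 0.30],   # USA
    [-0.20,  0.15, 0.10],   # Japan
    [-0.15,  0.30, 0.05],   # Iran
    [ 0.10, -0.25, 0.15],   # Brazil
    [ 0.05, -0.20, 0.20],   # France
])

# Ward linkage builds a tree of countries by preference similarity;
# cutting it into three flat clusters mirrors the study's three groups.
tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(f"{country}: cluster {label}")
```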

But the study also found that people worldwide tend to agree on three points: most want driverless cars to protect humans rather than animals, to give priority to the young over the elderly, and to save as many people as possible. These insights can provide the basis for international norms of machine ethics, which the researchers write will be needed when these life-threatening dilemmas arise. Those discussions should not be confined to engineers and policymakers; after all, these issues affect everyone.

Seen in that light, the Moral Machine study gives us, for the first time, a glimpse of millions of people's views on machine ethics.

Although most cultures share some general trends, the researchers found large differences between countries that emphasize individualism (mainly in the West) and those that emphasize collectivism. Western countries tend to want driverless cars to protect children rather than the elderly, while Eastern countries give more weight to the lives of the elderly: Cambodia, for example, falls far below the world average in its preference for sparing the young over the old. Similarly, people in Western countries tend to prefer saving as many people as possible, regardless of the composition of the group.

The researchers believe this divergence may be the biggest challenge in formulating global guidelines for how self-driving cars should behave. "The preferences for sparing more people and for sparing the young may be the most important for policymakers to consider," they wrote, "so the split between individualist and collectivist cultures may become an important obstacle to universal machine ethics."

A country's economy also shapes its ethical judgments about self-driving cars.

You can see this in the Moral Machine's interactive chart by comparing Sweden with Angola. The lower a country's Gini coefficient, the closer it is to income equality. Compared with the global average, respondents in the more equal Sweden (Gini coefficient 29.2) were unlikely to want driverless cars to give high-status people priority protection, while respondents in Angola (Gini coefficient 42.7) held that high-status people should be spared ahead of any other group.
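
For readers unfamiliar with the measure, a Gini coefficient can be computed directly from an income distribution. The sketch below uses the standard closed form over invented incomes; the article's figures (29.2, 42.7) are this 0-to-1 value multiplied by 100.

```python
# Minimal sketch: computing a Gini coefficient from an income sample.
# The income values below are invented for illustration only; real
# national figures (e.g. Sweden's 29.2) come from survey data and
# are expressed on a 0-100 scale (this function's output times 100).

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, values near 1 = extreme inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    # Standard closed form over incomes sorted ascending:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

if __name__ == "__main__":
    equal = [100, 100, 100, 100]     # everyone earns the same
    unequal = [10, 20, 50, 400]      # income concentrated at the top
    print(f"equal sample:   {gini(equal):.3f}")    # -> 0.000
    print(f"unequal sample: {gini(unequal):.3f}")  # -> 0.625
```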

In the Moral Machine study, people from countries with greater economic inequality also treated the rich and the poor less equally. "This relationship may be explained by the fact that people's moral preferences are regularly shaped by the inequality they encounter, or by broader egalitarian norms that both determine how much inequality a country is willing to tolerate at the societal level and influence how much inequality participants endorse in their Moral Machine judgments," the researchers write.

Likewise, a country's per capita gross domestic product (GDP) and the strength of its institutions correlate with the preference for sparing people who follow traffic rules. Conversely, people from poorer countries tend to be more tolerant of pedestrians who jaywalk.

Understanding the subtle differences between cultural groups is particularly important.

People share some basic consensus on machine ethics when it comes to age, number of lives, and human life, but it is particularly important to understand the subtler differences between cultural groups, which are often less obvious than the individualist-collectivist divide. Countries in the Southern cluster, for example, show a strong bias toward sparing women over men, and toward sparing the healthy over the unhealthy.

The researchers believe that makers of self-driving cars and the politicians who design decision systems and regulations need to take all these differences into account. "This is important: although public moral preferences are not necessarily the main determinant of ethical policy, how willing people are to buy self-driving cars and tolerate them on the road will depend on how acceptable the adopted ethical rules are," the researchers wrote.

Despite these cultural differences, the Moral Machine team still believes we need a global, inclusive conversation about the ethics we want machine decision-making to follow, especially given how rapidly the age of autonomous machines is approaching.

"In human history, we have never allowed a machine to decide in an instant, without real-time supervision, who should live and who should die," the researchers wrote. "We will face this problem soon, and it will not play out in a remote theater of military operations; it will play out in the most mundane part of our lives: everyday traffic. Before we allow our cars to make ethical decisions, we need a global conversation to express our preferences to the companies that design moral algorithms and to the policymakers who regulate them." (Selected from Fast Company; compiled by Netease Intelligent; participation: Lebanon)
