At a recent Vatican conference on technology and the digital age, the Pope urged Facebook executives, venture capitalists, and government regulators to be wary of the impact of AI and other technologies. If humanity's so-called technological progress becomes the enemy of the common good, he said, it will lead to an unfortunate regression to a form of barbarism in which the law of the strongest prevails.
This summer, Joy Buolamwini testified before Congress alongside Democratic Representative Alexandria Ocasio-Cortez about audits that found facial recognition technology works best for white men and worst for women of color.
What these two events have in common is a concern with power dynamics in AI ethics.
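Audits like the ones Buolamwini testified about make such disparities concrete by disaggregating error rates by demographic group. A minimal sketch of the idea, using entirely hypothetical records rather than any real benchmark data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records. The audit's headline finding is the gap
# between the best- and worst-served groups, not any single number.
sample = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
]
rates = error_rates_by_group(sample)
disparity = max(rates.values()) - min(rates.values())
```

On this toy sample the misclassification rate is 0% for one group and 50% for the other; real audits report the same kind of per-group breakdown computed over thousands of images.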
The AI ethics debate can be conducted without ever mentioning the word "power," but power often lurks beneath the topic. It is rarely the direct focus, yet it has to be. Like gravity, power exerts an invisible force in artificial intelligence, one that shapes every ethical consideration in the field.
Power affects which methods are applied to which use cases, which problems get prioritized, and whom tools, products, and services are built to serve.
It underlies the debate over how companies and countries should craft policy governing the use of the technology.
It is present in conversations about democratization, fairness, and responsible AI; in Google CEO Sundar Pichai inviting AI researchers into his office; in top machine learning practitioners being treated as modern philosopher-kings; and in Elon Musk and others warning of the terrifying impact AI could have on humanity in the coming decades.
It can leave consumers feeling that data protection is hopeless, or engineers knowing something is ethically wrong but finding no avenue of recourse.
More broadly, startups may treat ethics as a nice add-on rather than a must-have. Engineers racing to be first to market, or to ship by a deadline, may scoff at spending precious time on ethics. CEOs and politicians may pay lip service to ethics while ultimately offering little more than sympathetic signaling or ethics washing.
Artificial intelligence has been called one of the great human rights challenges of the 21st century. It is not just about doing the right thing or building the best possible AI systems; it is about who holds power and how AI affects the balance of all the forces it touches.
These power dynamics will define the future of business, society, government, the lives of individuals around the world, privacy, and even our future rights. Nearly every AI product manager likes to say things are just getting started, but in the age of AI, failing to address imbalanced power dynamics could have dangerous consequences.
The labor market and a new Gilded Age
Deep learning, cloud computing, processors such as GPUs, and the computing power needed to train neural networks faster have become cornerstones of the big technology companies and drive today's AI renaissance.
The fourth industrial revolution has arrived at a moment of historic income inequality, a new Gilded Age. Just as 19th-century railroad barons exploited farmers' need to get their crops to market, technology companies with proprietary datasets are using AI to further entrench their market dominance and monopoly positions.
When data is more valuable than oil, companies holding valuable data enjoy enormous advantages and are best positioned to cement their status as wealth or industry leaders. That certainly applies to big names like Apple, Facebook, Google, IBM, and Microsoft, but it applies to traditional businesses too.
At the same time, the pace of mergers and acquisitions by the tech giants has accelerated, further consolidating their power and reinforcing another trend: R&D is increasingly concentrated inside large corporations.
According to a recent report from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), the growth of AI could produce enormous social imbalances.
The potential financial upside of AI is so great, and the gap between the AI haves and have-nots so deep, that the global economic balance could be upset by a series of catastrophic shifts, the report argues. The HAI proposal calls on the U.S. government to invest $120 billion in education, research, and entrepreneurship over the next 10 years.
The proposal was coauthored by Dr. Fei-Fei Li, former chief AI scientist of Google Cloud. If directed properly, she said, the age of artificial intelligence could usher in an era of prosperity for all.
PwC estimates that by 2030, artificial intelligence will contribute $15.7 trillion to the global economy. But used irresponsibly, it could instead concentrate wealth and power in the hands of a small elite prepared for this new era, while much of the world's population falls into poverty and loses its sense of purpose.
Erik Brynjolfsson, director of MIT's Initiative on the Digital Economy, studies the impact of AI on the future of work. "If you look at the overall economy, you'll see a wave coming," Brynjolfsson said of the number of jobs machine learning could displace in the next few years. Machine intelligence can be used to redesign and augment tasks in the workplace, but it is most often used simply to replace work.
Analysis by the Brookings Institution, along with research by Brynjolfsson and Carnegie Mellon University's Tom Mitchell, shows that the impact of automation on employment will vary by city and state, and that instability and job loss will fall disproportionately on low-income households and people of color. According to a recent McKinsey report, automation is projected to displace African American men at the highest rate.
Meanwhile, median income in the United States has stagnated since 2000. Brynjolfsson calls this divergence between median income growth and productivity growth "the great decoupling."
For most of the 20th century these moved together: more production, more wealth, and higher productivity tracked closely with the prosperity of ordinary people. But recently they have diverged, he said. The pie keeps getting bigger, we keep creating more wealth, but that wealth is accumulating in the hands of a few.
Brynjolfsson notes that challenges such as the Defense Advanced Research Projects Agency's (DARPA) autonomous vehicle challenge and ImageNet pushed the state of the art in AI forward, but he argues that businesses and the AI community should now turn their attention to shared prosperity.
"A lot of people may be left behind. In fact, a lot of people already have been. That's why I think the most urgent challenge now isn't just better technology, though I fully support that, but creating shared prosperity," he said.
Tech giants and the road to power
Alongside the spread of AI runs another major trend: for the first time in U.S. history, the majority of people entering the labor force are people of color. According to the U.S. Census Bureau, by 2030 most large U.S. cities, and eventually the country as a whole, will no longer have a white majority.
These demographic shifts make the lack of diversity inside AI companies all the more glaring. Crucially, there is little racial and gender diversity among the people creating decision-making systems — what Kate Crawford of the AI Now Institute calls AI's "white guy problem."
Image source: Google's 2019 diversity report — gender and race statistics for its technical workforce
According to a 2018 analysis by Wired and Element AI, women accounted for only 18% of authors of research published at major AI conferences, and made up just 15% and 10% of research staff at Facebook and Google, respectively. Spokespeople for the two companies said Google and Facebook do not release diversity data for their AI research divisions.
A report released in April by the AI Now Institute detailed the stark cultural gap between the homogeneous engineering profession responsible for technical research and the extremely diverse populations on which AI systems are deployed. The organization calls this the AI accountability gap.
The report also acknowledges the hidden human labor behind AI systems, such as the tens of thousands of content moderators working for Facebook or YouTube, or the workers in Colombia who remotely pilot Kiwibot delivery robots around the University of California, Berkeley, in the San Francisco Bay Area.
Image source: Facebook's 2019 diversity data — Facebook technical workforce by race
The gap between those who develop and profit from AI and those most likely to suffer its negative consequences is growing, not shrinking, the report states, noting that the AI industry lacks government oversight and that power is concentrated in the hands of a few companies.
In a paper published in August, UCLA's Dr. Safiya Noble and Sarah Roberts documented the impact of the tech industry's lack of diversity. They argue that we are now witnessing the rise of digital technology as a system of power that hoards resources and can judge a person's worth by their racial identity, gender, or class.
American companies have been unable to self-regulate or innovate their way out of racism, even under federal law. "Among the modern digital technology elite, myths of meritocracy and intellectual superiority are used as racial and gender markers, disproportionately consolidating resources away from people of color, particularly African Americans, Latinx people, and Native Americans," the paper reads. Investment in the myth of meritocracy suppresses questions about racism and discrimination, even as the products of the digital elite are suffused with markers of race, class, and gender.
While there is plenty of talk about fixing the tech industry's diversity problem, progress across much of the industry has been incremental, and funding for Latinx and Black founders still lags far behind funding for white founders. To address this general lack of progress on diversity and inclusion initiatives, a pair of researchers have proposed that tech and AI companies adopt racial literacy.
One of them, Mutale Nkonde, is a coauthor of the Algorithmic Accountability Act, introduced in both houses of Congress earlier this year. The bill would require the Federal Trade Commission (FTC) to assess algorithmic bias and allow the agency to issue fines scaled to company size.
Nkonde also directs an AI policy organization and is a researcher at Harvard University's Berkman Klein Center for Internet and Society, where she is assessing how AI and misinformation could be used to target African Americans in the 2020 election. A Senate Intelligence Committee report released last October found that interference in the 2016 election on Facebook, Twitter, and Instagram targeted African Americans in particular.
Before that, she worked with a small team to advance the concept of racial literacy.
According to Nkonde and her coauthors, implicit bias training and diversity initiatives — championed by the same tech giants that publish annual diversity reports — have failed to produce a tech workforce that looks like its users. To make meaningful progress, they argue, companies should set aside vague aspirations and take concrete steps toward racial literacy.
"The ultimate goal of building racial literacy capacity in tech is to imagine a different world, one in which we can break out of old patterns," reads a paper laying out the framework. If race is not addressed in tech, new technologies will inevitably reproduce old divisions. But it does not have to be that way.
The coauthors hope racial literacy will become part of computer science curricula and of employee training at technology companies. Their approach draws on Howard Stevenson's racial literacy training for schools and includes implicit association tests to identify the stereotypes people hold.
Racial literacy aims to give people the training and emotional intelligence to navigate racially stressful situations in the workplace. That could allow computer scientists, designers, and machine learning engineers to talk openly about how a product or service might perpetuate structural racism or harm particular user groups.
The intent is to let people discuss a product's or service's potential problems in an open, non-confrontational way. In interviews with employees of midsize and large technology companies, the researchers found that at many of them, conversations about race are taboo.
"People want to pretend it doesn't matter, and that actually reinforces racist patterns and behavior," Nkonde said. It also means companies must be explicit about their values rather than trying to please everyone by never articulating them.
Nkonde believes racial literacy will only grow more important as companies like Alphabet build products critical to people's lives, such as health care services or facial recognition software sold to governments.
Another intended outcome of racial literacy training is a company culture that sees value in a diverse workforce. A study released last year by Boston Consulting Group found that more diverse organizations generate more revenue and greater innovation. But if hiring and retention data are any indication, Silicon Valley's tech giants don't seem to have internalized that lesson.
Guillaume Saint-Jacques, a senior software engineer at LinkedIn, argues that AI ethics is not only the right thing to do but also makes business sense. Bias, he believes, gets in the way of profit.
"If you have a lot of bias, you may only serve one population, which ultimately limits the growth of your user base. So from a business perspective, you want everyone to participate. … In the long run, it's actually a good business decision," he said.
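One common way to put a number on the concern Saint-Jacques raises is a selection-rate comparison such as the "four-fifths rule" heuristic from U.S. employment law, under which a group receiving a favorable outcome at less than 80% of the best-served group's rate is treated as evidence of adverse impact. A minimal sketch with hypothetical counts (the numbers here are illustrative, not anything LinkedIn uses):

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group receiving the favorable outcome."""
    return selected / total

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the best-served group's rate."""
    return group_rate / reference_rate

# Hypothetical outcomes: 50 of 100 reference-group members get the
# favorable outcome, versus 30 of 100 in another group.
reference = selection_rate(50, 100)
other = selection_rate(30, 100)
ratio = adverse_impact_ratio(other, reference)

# The four-fifths rule conventionally flags ratios below 0.8.
flagged = ratio < 0.8
```

Here the ratio is 0.6, well under the 0.8 threshold — exactly the "serving only one group" pattern that, in Saint-Jacques' framing, caps user-base growth.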
Personal autonomy and automation
Powerful companies exert their strength in different ways, but their business plans can bear directly on individuals.
Perhaps the best summary of this new power landscape comes from retired Harvard Business School professor Shoshana Zuboff in her book The Age of Surveillance Capitalism. The book details the emergence of a new form of capitalism that combines sensors such as cameras, smart home devices, and smartphones to feed data into AI systems that predict our lives (how we will behave as consumers, for example) in order to understand and shape our behavior at scale.
"Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as machine intelligence, and fabricated into prediction products that anticipate what you will do now, soon, and later," Zuboff writes.
She argues that this economic order was created by Google in Silicon Valley but has since been adopted by Amazon, Facebook, and Microsoft, as well as Chinese counterparts such as Baidu and Tencent.
Zuboff describes surveillance capitalism as an unprecedented form of power that few fully understand, adding that there is currently no effective means of collective or political action to confront it.
She warns that surveillance capitalism could do serious damage to human nature as markets transform into projects of total certainty. Left unchecked, Zuboff says, this relatively new market power can overthrow the people's sovereignty and threatens the ideas of imagination, will, promise, and future-building at the heart of Western liberal democracies.
These big companies have accumulated vast new knowledge from us, but not for us; they predict our futures for the benefit of others. As long as surveillance capitalism and its behavioral futures markets are allowed to thrive, ownership of the new means of behavioral modification eclipses ownership of the means of production as the source of capitalist wealth and power in the 21st century.
Zuboff argues that a major byproduct of surveillance capitalism is an overwhelming sense of helplessness — the shrug you see when people say nothing can stop big tech companies and their vast resources and wealth.
NSA whistleblower Edward Snowden appears to agree with Zuboff.
Businesses and governments increasingly use metadata collection to make decisions that affect human lives, from tracking users' activities via mobile devices to social credit scoring in China. Asked recently on NBC why people who have committed no crime should care about surveillance technology, Snowden answered by describing what happens to the activity records these systems generate.
"These activity records are constantly being created, shared, collected, and intercepted by companies and governments. Ultimately it means that as they sell these, as they trade these, as they build their businesses on the backs of these records, what they are selling is not information. They are selling us. They are selling our future. They are selling our past. They are selling our history, our identity, and ultimately, they are stealing our power and making our stories work for them," he said.
Ruha Benjamin, an associate professor at Princeton University and author of Race After Technology, is also concerned with agency, because whether people champion a vision of AI bringing doomsday or utopia, they are talking about ceding power to machines.
Whether technology is going to save us or kill us, either way we are giving up power, Benjamin said at a deep learning conference held at Kenyatta University in Nairobi, Kenya.
Individual power looks very different inside big companies. About a year ago, for example, more than 20,000 Google employees around the world walked out over a range of ethical issues. Among their grievances, organizers said, were the $90 million payout to Android founder Andy Rubin despite allegations of sexual harassment, the practice of forced arbitration, and Google's participation in the Pentagon's Project Maven.
Months earlier, thousands of Google employees had signed an open letter protesting the company's involvement in the AI project for drone object detection. Google subsequently promised to end its Maven contract in 2019 and released a set of AI principles, including a pledge not to build autonomous weapons.
In a similar spirit, Facebook employees called on CEO Mark Zuckerberg to fact-check or ban political ads, while Microsoft and GitHub employees demanded an end to their companies' contracts with ICE.
Challenging big tech takes courage and organizing — especially for those it employs — but these protests show that individuals can reclaim some power, even in the face of behemoths.
Government and society
Amid the AI renaissance, Elon Musk has become a contemporary Paul Revere, sounding warnings about killer robots and artificial general intelligence (AGI). When Russian President Vladimir Putin said that the nation which leads in AI will rule the world, Musk responded that he believed an AI arms race would lead to World War III.
Musk has joined more than 4,500 AI and robotics researchers in signing a Future of Life Institute open letter opposing lethal autonomous weapons. If or when nations deploy autonomous killer robots empowered to decide matters of human life and death, that may indeed become the ultimate expression of power.
Yet even as people like Musk fixate on hypotheticals, facial recognition is already in use in some cities — the Detroit Police Department, for example, is piloting real-time facial recognition. Meanwhile, the technology performs poorly on people of color and nonbinary people, and the results its algorithms return are believed to negatively affect the lives of millions of African Americans.
Terminator-style AGI scenarios like Skynet have not materialized, but militaries are already weighing the ethical applications of AI.
Artificial intelligence, power and civil society
As the fight over AI's role in policing online political speech continues, new issues keep emerging — such as the algorithmic bias that has led advocacy organizations to demand that tech giants and governments ban the use of algorithms in place of judges for pretrial bail assessments.
The Partnership on AI, created by AI researchers from companies including Apple, Facebook, and Google, connects organizations such as Amnesty International and Human Rights Watch with the world's largest AI companies. Executive director Terah Lyons says power sits at the heart of the ethics debate between NGOs and tech giants over how AI will affect society.
She sees power at work in the AI industry's lack of diversity, in affected communities' lack of influence over how systems and tools are built and deployed, and in the sway individuals hold within technology companies and institutions.
"Civil society organizations and nonprofits, which often operate on scant resources, face a real power and resource differential compared with these large, well-resourced technology companies. So equipping them more effectively is an important part of leveling the playing field for effective collaboration," she said.
Travel restrictions also affect AI researchers hoping to attend international conferences. Last year, roughly half of the attendees planning to travel to an AI workshop at NeurIPS in Montreal, Canada were denied visas by immigration officials, and applicants reported the same problem this year.
Such incidents have prompted the Partnership on AI to urge countries to offer special visas for travel to AI research conferences, as some parts of the world already do for medical professionals, athletes, and entrepreneurs.
Power relations between nations and the tech giants
Casper Klynge is Denmark's ambassador to Silicon Valley. Several countries maintain business and innovation offices in the San Francisco Bay Area, but Klynge is the first ambassador dispatched to Silicon Valley to represent a nation's diplomatic interests.
The Danish government sent him to engage companies such as Apple, Amazon, Google, and Facebook — which concentrate much of the world's AI talent — as one would engage global superpowers. Klynge believes more small nations should do the same so they can band together around common goals, and says his years of diplomatic work, including with NATO, taught him that building multilateral coalitions with other small countries is part of the job.
Monopolies are nothing new to governments, but Klynge says AI in areas like autonomous driving and search has changed the game, making these technology companies more consequential to national interests than many countries are — and creating demand for what Denmark calls techplomacy.
Klynge argues that the tech giants are warping the nature of international relations, creating a new reality in which countries must treat them like global superpowers.
"We can no longer regard them as neutral platforms that simply provide whatever people want to do. I think we have to treat them in a more mature and responsible way, which also means no longer being naive — being more balanced, and demanding that they take responsibility. My job is just a symptom of something more systematic we are trying to do: take a more balanced and realistic view of technology companies and of technology itself," he said.
What about the future?
Power is everywhere in the AI ethics debate, but that doesn't mean we must remain passive. There is another way, as the racial literacy project shows.
Ruha Benjamin calls for this when she says technology needs social imagination. Cathy O'Neil makes a similar point in her book Weapons of Math Destruction.
"Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something only humans can provide," she writes.
Distorted power structures can drain words like "democratization" of meaning, but putting AI in the hands of more people working on major problems could significantly change how AI is perceived and have a positive impact on human lives.
There are, of course, many examples of AI being used to improve human lives. MIT researchers developed an algorithm that gets children to school faster, saving Boston's school district $5 million a year in transportation costs. The New York Fire Department and New York University are using AI to improve emergency response times by identifying the most efficient routes to the scene — one of dozens of Google.org projects applying data-driven methods to build AI for social good. People are using AI to run more efficient greenhouses and increase crop yields, which may help stave off hunger in the decades ahead and feed the world as the global population grows toward 10 billion. The examples go on.
But technology that can predict the future, upend economic and social orders, put people in prison, or make decisions about our health always implies a power struggle beneath the surface of impressive technical progress.
That power dynamic is present in AI systems that perform worse for people of color in bail assessments, in health care decisions affecting millions, in homeless services, and in facial recognition.
It is present when AI experts in the EU urge nations to forgo mass surveillance and instead use AI to empower people, and in initiatives from Samsung and the United Nations to apply AI toward the Sustainable Development Goals.
It is present when Camilla Rygaard-Hjalsted, CEO of Digital Hub Denmark, says that ambitious climate goals help recruit AI talent, and that applying machine learning to climate change could be AI's great moonshot.
It is present in fledgling conversational AI programs built to support children of military families, detect when gang shootings may occur, or provide sexual health counseling to teenage girls in Pakistan.
It is present in open source projects like Masakhane, which is working on machine translation for the more than 2,000 languages spoken across Africa. The project currently has 60 contributors from all corners of the continent developing AI capable of preserving and translating these languages. According to the United Nations, Africa has the youngest population on Earth and will account for more than half of global population growth between now and 2050. Machine translation for African languages could be essential to bringing conversational AI, communication, and commerce online and into the real world there.
For the past three years, Kathleen Siminyu has helped run the Nairobi, Kenya chapter of Women in Machine Learning and Data Science. "I think language is a barrier. If we remove that barrier, many Africans will be able to participate in the digital economy and eventually the AI economy," she said. "So yes, as someone sitting here contributing to local languages, I think we have a responsibility to bring those who were left out of the digital age into the age of artificial intelligence."
Focus on only part of the AI ethics debate, and it is easy to conclude that making ethics part of the engineering and design process is mere political correctness or a corporate social responsibility checkbox — something that might even impede real progress.
It isn't. AI ethics means building models the best way possible, with human beings taken into account and kept in the loop. It is indispensable to the technologies and systems people will choose to run the world.
These power dynamics seem most daunting when we have no alternative vision of the future, no possibilities beyond a jobless planet under global surveillance marching toward World War III.
In charting the path to a better world, it is important to recognize these power dynamics, because just as AI itself can be a tool or a weapon, it can put individuals and societies at an advantage or a disadvantage. Startups, tech giants, and communities that want a better world have a responsibility to dream, and to share those dreams.
AI is changing society, and a privileged few cannot be the only ones who decide how that happens or what kind of world it builds.
Source: Lieyunwang. Responsible editor: Li Yipeng.