Algorithms prevail! Why are human beings and artificial intelligence becoming more and more alike?



Algorithms tell us how to think, and that is changing us. As computers learn to imitate us, are we starting to resemble them?

Silicon Valley increasingly predicts how people will respond to an e-mail, react to someone's Instagram photo, or qualify for government services. Soon, Google Assistant will be able to place real phone calls to book a haircut on your behalf.

From hospitals to schools to courts, we have brought algorithms almost everywhere. We are surrounded by automated systems. A few lines of code tell us what media to watch, whom to date, and even whom the judicial system should send to prison.

Is it right that we hand over so much decision-making power and control to these programs?

We are drawn to mathematical programs because they give quick, accurate answers to a range of complex problems. Machine learning systems have been applied in almost every field of modern society.

How do algorithms affect our daily lives? In a changing world, machines are quickly and impressively learning human behavior: what we like, what we hate, and what is supposedly best for us. We now live in a space dominated by predictive technology.

By analyzing and collating massive amounts of data, algorithms can deliver immediate, relevant results, and they have greatly changed our lives. For years we have let companies collect vast amounts of data about us so that they can offer us suggestions and decide what is best for us.
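The recommendation loop described above can be sketched very simply. The following toy code, with an invented catalog and tag scheme (not any company's actual system), scores unseen items by how much their tags overlap with a user's past behavior:

```python
from collections import Counter

def recommend(user_history, catalog, top_n=3):
    """Rank items by overlap with the tags of items the user has
    already engaged with: the simplest form of 'deciding what is
    best for us' from collected behavior."""
    # Count how often each tag appears in the user's past activity.
    tag_counts = Counter(tag for item in user_history for tag in catalog[item])
    # Score unseen items by summing the weights of their tags.
    scores = {
        item: sum(tag_counts[tag] for tag in tags)
        for item, tags in catalog.items()
        if item not in user_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical catalog: items mapped to descriptive tags.
catalog = {
    "cat video": {"pets", "funny"},
    "dog video": {"pets", "funny"},
    "news clip": {"politics"},
    "recipe":    {"cooking"},
}
print(recommend({"cat video"}, catalog))  # "dog video" ranks first
```

Even this trivial version shows the self-reinforcing pattern: what you clicked yesterday determines what you are shown tomorrow.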

Companies such as Google's parent Alphabet and Amazon have been feeding the data collected from us into their algorithms, instructing their AI to use that information to meet our needs and become more like us. But as we grow used to these conveniences, will the way we speak and act become more like a computer's?

"The algorithm itself is not fair, because the builder of the model defines success." (Cathy O'Neil, data scientist)

At the current pace of technological development, it is not hard to imagine a near future in which our behavior is guided or dominated by algorithms. In fact, that's already happening.

Last October, Google launched Smart Reply, a quick-response feature for its email service Gmail, to help users write replies quickly. Since then the feature has caused a storm online, with many critics saying its tailor-made suggestions are harmful and make people sound like machines. Some even think its responses may eventually shape how we communicate, or even change the norms of e-mail.
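The idea behind such features can be illustrated with a deliberately crude sketch. Google's actual system is a neural network trained on real correspondence; the version below just matches hypothetical trigger words to canned replies, which is enough to show why suggested responses all start to sound alike:

```python
# Hypothetical canned replies keyed by trigger words. This is NOT
# Google's Smart Reply model, only a toy stand-in for the concept.
CANNED = {
    "meeting": "Sounds good, see you there!",
    "thanks": "You're welcome!",
    "deadline": "I'll have it done by then.",
}

def suggest_replies(message, max_suggestions=3):
    """Return canned replies whose trigger word appears in the message."""
    words = message.lower().split()
    return [reply for trigger, reply in CANNED.items() if trigger in words][:max_suggestions]

print(suggest_replies("Thanks for joining the meeting"))
```

Every user who mentions a meeting gets the same cheerful sentence, which is exactly the homogenizing effect the critics describe.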

The main problem with algorithms is that as they grow large and complex, they begin to have a negative impact on our society and endanger democracy. As machine learning systems become more and more common across society, will algorithms rule the world and take over our thinking?

Now let's look at what Facebook does. As early as 2015, its redesigned News Feed filtered users' subscriptions into a personalized newspaper, showing them more of what they had previously liked, shared, and commented on.

The problem with personalization algorithms is that they trap users in filter bubbles or echo chambers. In real life, most people avoid views they find confusing, disgusting, incorrect, or hateful. Facebook's algorithms give users exactly what they want to see, so each user's feed becomes a unique world, a self-contained reality of its own.

Filter bubbles make public debate increasingly difficult, because from the system's point of view, information and misinformation look exactly the same. As Roger McNamee recently wrote in Time magazine, "On Facebook, facts are not absolute; they are a choice, initially left to users and their friends, but then amplified by algorithms to promote communication and user interaction."

Filter bubbles create the illusion that everyone believes what we believe and shares our habits. We already know that on Facebook, algorithms worsen the problem by amplifying polarization, ultimately undermining democracy. There is evidence that algorithms may have influenced the outcome of the British EU referendum and the 2016 U.S. presidential election.

"Facebook's algorithm promotes extreme messages over neutral ones, putting misinformation above information and conspiracy theories above facts." (Roger McNamee, Silicon Valley investor)

In a world awash with information around the clock, filtering it is a huge challenge for many people. Used properly, AI could enhance people's online experience and help them cope with the ever-growing load of content. But to work properly, algorithms need accurate data about what happens in the real world.

Companies and governments need to ensure that the data behind their algorithms is unbiased and accurate. Since nothing is perfect, naturally biased data has already made its way into many algorithms, endangering not only our online world but the real one.
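How biased data propagates into an algorithm's decisions can be shown in a few lines. This toy "model" (entirely hypothetical data, not any real system) simply learns the historical approval rate per group; if the history was discriminatory, the learned policy faithfully reproduces that discrimination:

```python
from collections import defaultdict

def train(records):
    """Learn the approval rate per group from historical decisions.
    The model has no notion of fairness: it only mirrors the data."""
    stats = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        stats[group][0] += approved
        stats[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in stats.items()}

# Hypothetical history: group B was approved far less often,
# for reasons unrelated to merit.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
model = train(history)
print(model)  # the learned policy inherits the historical bias
```

The learned rates (0.8 for A, 0.2 for B) look like an objective prediction, but they are just the old prejudice with a mathematical veneer, which is why auditing the input data matters as much as auditing the code.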

We must advocate for a stronger regulatory framework so that we do not descend into a technological wilderness.

We should also be very cautious about the power we give algorithms. People are increasingly concerned about algorithmic transparency, the ethical implications of the decisions algorithms make, and the social consequences for people's work and lives. For example, using AI in courts may amplify prejudice and discriminate against ethnic minorities, because it weighs risk factors such as the neighborhood people live in and its association with crime. These algorithms can make catastrophic systemic errors, sending innocent people to prison.
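The neighborhood problem mentioned above is a proxy-variable problem, and a toy score makes it concrete. In this sketch (invented weights and crime rates, not any real risk-assessment tool), two people with identical individual records receive different risk scores purely because of where they live:

```python
def risk_score(person, neighborhood_crime_rate):
    """Toy risk score mixing an individual factor with the crime
    rate of the person's neighborhood. The neighborhood term acts
    as a proxy for race and class, so identical individuals can be
    scored very differently."""
    individual = 0.5 * person["prior_offenses"]
    contextual = 2.0 * neighborhood_crime_rate[person["neighborhood"]]
    return individual + contextual

rates = {"downtown": 0.8, "suburb": 0.1}  # hypothetical crime rates
person_a = {"prior_offenses": 0, "neighborhood": "downtown"}
person_b = {"prior_offenses": 0, "neighborhood": "suburb"}

print(risk_score(person_a, rates), risk_score(person_b, rates))
```

Neither person has any prior offenses, yet the downtown resident scores far higher, which is precisely the systemic error the paragraph above warns about.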

Are we in danger of losing humanity?

In his book "Click Here to Kill Everybody," security expert Bruce Schneier writes, "If we let computers think for us, and the underlying input data is bad, then they will think badly, and we may never notice."

Hannah Fry, a mathematician at University College London, takes us into a world where computers operate freely. In her new book "Hello World: Being Human in the Age of Algorithms," she argues that as citizens we should pay more attention to the people behind the keyboard, the ones who write the algorithms.

"We don't have to create a world in which machines tell us what to do or how to think, although we may very well end up in such a world," she says. Throughout the book she asks repeatedly: "Are we in danger of losing our humanity?"

We have not yet reached the stage where humans are excluded. Our role in the world has not been marginalized, and it will not be for a long time. Humans and machines can work together, each contributing their own strengths and compensating for the other's weaknesses. Machines are imperfect and make mistakes, just as we do. We should pay attention to how much information we hand over and how much agency we give up. After all, algorithms are now an inherent part of our lives, and they will not disappear any time soon.


Source: Netease Intelligence. Responsible editor: Ding Guangsheng_NT1941