With AlphaGo, driverless cars, and machine translation making headlines, the term artificial intelligence, which has existed for more than 60 years, seems to have become a buzzword overnight. At the same time, machine learning, deep learning, and neural networks are invoked everywhere in technology and business circles. The fact is that, amid all this noise and enthusiasm, most people still know very little about the field.
Terry Sejnowski is surely one of the people qualified to talk about the ongoing AI revolution.
Sejnowski was one of the few researchers who challenged the mainstream approach to AI in the 1980s. This group argued that AI implementations inspired by the biology of the brain, known variously as neural networks, connectionism, and parallel distributed processing, would ultimately solve the problems plaguing logic-based AI research, and they proposed mathematical models that could learn skills from data. It was this small band of researchers who proved that brain-based computing was feasible, laying the foundation for the development of deep learning.
On the occasion of the publication of Deep Learning: the Core Driving Force of the Intelligent Age, the American technology site The Verge interviewed Terrence Sejnowski and discussed with him the differences between artificial intelligence, neural networks, deep learning, and machine learning; why deep learning has suddenly become ubiquitous; and what it can and cannot do.
The following is the full interview:
Figure: book cover (CITIC Publishing Group, February 2019)
Q: First of all, I would like to ask about definitions. What are the differences between the terms artificial intelligence, neural network, deep learning, and machine learning?
Terrence: Artificial intelligence dates back to 1956 in the United States, when engineers decided to try to write a computer program that emulated intelligence.
Within artificial intelligence, a new field grew up called machine learning. Instead of writing a step-by-step program to do something -- the traditional approach in AI -- you collect lots of data about whatever you're trying to understand. For example, imagine you are trying to recognize objects, so you collect a large number of images of them. Then, with machine learning, an automated process, you can analyze various features and determine that one object is a car and another is a stapler.
Machine learning is a very large field, and its history goes back further. Initially it was called pattern recognition; later the algorithms became broader and more mathematically sophisticated.
Within machine learning there are neural networks, which are inspired by the brain, and then deep learning. Deep learning algorithms have a specific architecture in which data flows through many layers of the network.
Basically, deep learning is part of machine learning, and machine learning is part of artificial intelligence.
Q: Is there anything that deep learning can do that other programs can't?
Terrence: Writing programs is very labor-intensive. In the past, computers were so slow and memory so expensive that people relied on logic, the working principle of the computer, to write programs, manipulating information through basic machine instructions. Computers were too slow, and computation was too expensive.
But now computing power is getting cheaper and labor is getting more expensive. Computation has become so cheap that, slowly, having the computer learn is becoming more effective than having a human write the program. At that point, deep learning starts to solve problems that nobody had ever been able to write a program for, in fields such as computer vision and translation.
Machine learning is computationally intensive, but you only have to write one program, and by giving it different data sets it can solve different problems. You don't need to be a domain expert. So for anything with large amounts of data, there are correspondingly many applications.
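The "one program, many data sets" point can be sketched with a toy classifier. This is an illustrative example, not anything from the book: a generic nearest-centroid learner (all names and data below are invented) is trained unchanged on two unrelated synthetic data sets.

```python
import numpy as np

def fit_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean vector per class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(0)

# Data set 1: two 2-D blobs (say, "cars" vs. "staplers")
X1 = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y1 = np.array([0] * 50 + [1] * 50)

# Data set 2: a completely different domain and dimensionality -- same code
X2 = np.vstack([rng.normal(-3, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y2 = np.array(["cat"] * 50 + ["dog"] * 50)

m1 = fit_centroids(X1, y1)   # no domain knowledge needed in either case
m2 = fit_centroids(X2, y2)
```

The learning routine never changes; only the data does, which is the contrast with hand-written, domain-specific programs drawn above.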
Q: Deep learning seems to be everywhere now. How did it become so dominant?
Terrence: I can pinpoint a particular moment in history: December 2012, at the NIPS conference, the largest AI conference. There, computer scientist Geoff Hinton and two of his graduate students showed that you could take a very large data set called ImageNet, with 10,000 categories and 10 million images, and use deep learning to cut the classification error by 20 percent.
Typically, the error on this data set drops by less than 1 percent in a year. In one year, 20 years of research were leapfrogged.
It really opened the floodgates.
Q: The inspiration for deep learning comes from the brain. So how do computer science and neuroscience work together?
Terrence: Deep learning is inspired by neuroscience. The most successful deep learning networks, convolutional neural networks (CNNs), were developed by Yann LeCun.
If you look at the architecture of a CNN, it's not just a lot of units; they are connected in a way that basically mirrors the brain. The best-studied part of the brain is the visual system, and basic research on the visual cortex shows that it contains simple and complex cells. If you look at the CNN architecture, you find the equivalents of simple cells and complex cells, and that comes directly from our understanding of the visual system.
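The simple-cell/complex-cell pairing can be sketched in a few lines. This is a hypothetical illustration, not LeCun's implementation: a hand-written convolution plays the role of simple cells (local, oriented feature detectors), and max pooling plays the role of complex cells (responses tolerant to small shifts in position).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: a simple-cell-like feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling: keeps the strongest local response, like a complex cell."""
    h, w = fmap.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = fmap[i * size:(i + 1) * size,
                             j * size:(j + 1) * size].max()
    return out

image = np.zeros((6, 6))
image[:, 3] = 1.0                      # a vertical edge in the input
vertical_edge = np.array([[1.0, -1.0],  # an oriented edge filter
                          [1.0, -1.0]])
pooled = max_pool(conv2d(image, vertical_edge))
```

The convolution responds only where the oriented edge appears; pooling then makes that response insensitive to its exact position, the same division of labor attributed to simple and complex cells above.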
Yann was not blindly trying to replicate the cortex. He tried many different variants, but the ones he converged on were the same ones that nature had converged on. This is an important observation: the convergence of nature and AI has a lot to teach us, and there is much more to explore.
Q: How much of our understanding of computer science depends on how well we understand the brain?
Terrence: Most of today's AI is based on what we knew about the brain in the 1960s. We know much more now, and more of that knowledge is being incorporated into the architectures.
AlphaGo, the program that defeated the Go champion, includes not only a model of the cortex but also a model of a part of the brain called the basal ganglia, which is important for making sequences of decisions to achieve a goal. There is an algorithm called temporal differences, developed by Richard Sutton in the 1980s, which, when combined with deep learning, makes possible very sophisticated play that humans have never seen before.
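Sutton's temporal-difference idea can be illustrated on a toy problem. This is a minimal sketch, not AlphaGo's algorithm: tabular TD(0) learns state values on a five-state random walk, where each update nudges a state's value toward the reward plus the value of the next state (all names and parameters here are invented for the example).

```python
import random

def td0_random_walk(episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) on a 5-state random walk. States 0 and 4 are terminal;
    reaching state 4 pays reward 1, reaching state 0 pays nothing."""
    rng = random.Random(seed)
    V = [0.0] * 5                       # value estimate per state
    for _ in range(episodes):
        s = 2                           # start in the middle
        while s not in (0, 4):
            s_next = s + rng.choice((-1, 1))
            r = 1.0 if s_next == 4 else 0.0
            # TD(0) update: move V(s) toward r + gamma * V(s_next)
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V

values = td0_random_walk()
# True values for states 1, 2, 3 are 0.25, 0.5, 0.75 (probability of
# eventually reaching state 4), so the estimates should land near those.
```

The same bootstrapping rule, with a deep network standing in for the value table, is what the combination of temporal differences and deep learning mentioned above refers to.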
As we come to understand more of the brain's structure, and as we learn how to integrate it into artificial systems, those systems will gain more and more capabilities beyond what we have now.
Q: Will artificial intelligence also affect neuroscience?
Terrence: They work in parallel. There has been tremendous progress in innovative neurotechnology, from recording one neuron at a time to recording thousands of neurons simultaneously across many parts of the brain, and this has opened up an entirely new world.
I said there is a convergence between AI and human intelligence. As we learn more about how the brain works, those insights will be reflected in AI. But at the same time, AI researchers have created a whole set of learning theories that can be used to understand the brain, letting us analyze thousands of neurons and how their activity arises. So there is a feedback loop between neuroscience and artificial intelligence, which I think is even more exciting and important.
Q: Your book discusses many different applications of deep learning, from self-driving cars to financial trading. Which particular area do you find most interesting?
Terrence: One application that completely shocked me is generative adversarial networks, or GANs. With a traditional neural network, you give an input and you get an output. A GAN can produce activity without an input -- it generates output on its own.
Q: Yes, I've heard about these in the context of networks creating fake videos. They really produce new things that look real, right?
Terrence: In a sense, they produce internal activity. It turns out that this is how the brain works, too. You can look somewhere and see something, then close your eyes and begin to imagine things that are not there. You have a visual imagination: when it's quiet around you, images come to mind. That's because your brain is generative. And now this new class of network can generate patterns that never existed before. So you can give it, for example, hundreds of images of cars, and it creates an internal structure that can generate new images of cars that never existed and that look exactly like cars.
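The "output without input" idea can be sketched with a deliberately tiny adversarial pair. This is an illustrative toy, not a real GAN: a two-parameter generator turns pure noise into samples, a logistic discriminator tries to tell them apart from data drawn from N(3, 1), and both are updated with hand-derived gradients (every name and constant below is invented for the sketch).

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

rng = random.Random(0)
a, b = 0.0, 0.5      # generator: g(z) = a + b*z, z ~ N(0, 1)
w, c = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    x_real = rng.gauss(3.0, 1.0)     # a sample of "real" data
    z = rng.gauss(0.0, 1.0)          # noise: the generator's only input
    x_fake = a + b * z

    # Discriminator ascent: push D(x_real) up and D(x_fake) down
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent (non-saturating loss): make D(x_fake) go up
    d_fake = sigmoid(w * x_fake + c)
    a += lr * (1 - d_fake) * w
    b += lr * (1 - d_fake) * w * z

# After training, noise alone yields samples whose mean has drifted
# toward the data mean of 3 -- generated output with no input.
samples = [a + b * rng.gauss(0.0, 1.0) for _ in range(200)]
```

Real GANs replace both linear maps with deep networks and learn to imitate far richer distributions, such as images of cars, but the adversarial tug-of-war is the same.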
Q: On the other hand, what ideas do you think might be over-hyped?
Terrence: No one can predict or imagine how the introduction of this new technology will affect the way things are organized in the future. Of course there is hype. We have not solved the really hard problems yet; we do not yet have general intelligence. Some people say robots will replace us soon, but in fact robots lag far behind AI, because replicating the body turns out to be more complex than replicating the brain.
Consider another technological advance: the laser. It was invented about 50 years ago, and at the time it occupied an entire room. It took 50 years of commercialization to go from filling a room to the laser pointer I use in my talks; the technology had to be pushed down to a size small enough to sell for five dollars. The same thing will happen with hyped technologies like self-driving cars. They are not expected to become ubiquitous next year, or in the next ten years. The process may take 50 years, but the point is that incremental advances will keep making them more flexible, safer, and more compatible with the way we organize our transportation networks. The mistake in the hype is that people set the wrong time scale: they expect too much to happen too soon, when in fact things happen only when the time is right.
Note: This article is compiled from the book Deep Learning: the Core Driving Force of the Intelligent Age and from interviews with Terrence Sejnowski by the technology sites The Verge and TechRepublic. The original address is as follows: