Artificial intelligence will transform every aspect of our civilization.
That's according to Jürgen Schmidhuber, whose deep learning artificial neural networks are widely used by the world’s most valuable public companies. But how do you create a curious robot? And what reassurances can be given to those who fear the rise of the machines?
Can you tell us what differentiates true artificial intelligence from what we can already see around us today?
True AI is not just about pattern recognition but about general problem solving through active interaction with an initially unknown world. True AI acts, perceives, acts, perceives – and gets a stream of input data which it shapes on its way to solving goals. Most commercial AI, however, is just about passive pattern recognition, e.g., better speech recognition on your smartphone, better gesture recognition, better prediction of the stock market, better prediction of what you want to do next given the data on your smartphone. But your smartphone does not have arms; it does not directly and actively shape the world. It can influence you by giving you advice, such as “now you are in a foreign city and there is a second-hand shop not far from here, which you should go to because there is a good deal for you.” But general robot control is more complicated than that, and still less developed than mere pattern recognition. It’s not going to stay like that, and in the next few decades we will see very sophisticated robots that will be able to solve all kinds of problems – including strawberry picking, which is harder than most people think.
How do you build an artificial agent that curiously and creatively explores a world which it does not initially understand?
We equip such an agent with two recurrent neural networks: one is the controller and the other is the world model. The controller sees the incoming stream of perceptions – video, audio and so on – and translates that into commands which make the robot move and shape the history of incoming inputs. The other one, the model of the world, learns to predict what happens if the agent does this and that. Over time, the prediction machine looks at all the data that’s coming in, all the actions that were executed, all the perceptions that came in, and tries to find regularities in these data.
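As a rough illustration of the act-perceive loop described above – not Schmidhuber's actual implementation, which uses recurrent neural networks – here is a minimal Python sketch in which simple count tables stand in for the two networks. The controller picks actions, the world model learns to predict what happens next, and both see the same history:

```python
import random

class WorldModel:
    """Stand-in for the predictive network: learns which observation
    tends to follow each (observation, action) pair."""
    def __init__(self):
        self.counts = {}  # (obs, action) -> {next_obs: count}

    def predict(self, obs, action):
        table = self.counts.get((obs, action), {})
        return max(table, key=table.get) if table else None

    def learn(self, obs, action, next_obs):
        table = self.counts.setdefault((obs, action), {})
        table[next_obs] = table.get(next_obs, 0) + 1

class Controller:
    """Stand-in for the controller network: here it just explores
    at random instead of maximising a learned reward."""
    def __init__(self, actions, seed=0):
        self.actions = actions
        self.rng = random.Random(seed)

    def act(self, obs):
        return self.rng.choice(self.actions)

def environment(obs, action):
    """Toy deterministic world: the action shifts the observation."""
    return (obs + action) % 5

model, controller = WorldModel(), Controller(actions=[-1, 1])
obs = 0
for _ in range(200):                    # act, perceive, act, perceive...
    action = controller.act(obs)
    next_obs = environment(obs, action)
    model.learn(obs, action, next_obs)  # the model finds the regularity
    obs = next_obs

print(model.predict(0, 1))  # the model has learned the world's rule
```

The point of the sketch is the separation of roles: the controller shapes the stream of incoming data through its actions, while the world model mines that stream for regularities.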
Watch Jürgen Schmidhuber's full keynote at the 2016 European Communication Summit
And regularity is important because…
Regularity detection means better prediction and better compressibility, because whatever you can predict well you don’t have to store. If you have a video of 100 falling apples, once you can predict these falling apples because you understand gravity, you can greatly compress the sequence of incoming data. And all of science is about that: finding simple rules behind the observations. Now, we can measure the depth of a neural network’s insight by looking at how many computational resources it required to encode the data before learning took place, and how many afterwards, once it discovered the regularity. The difference between before and after is a number, and that number is the fun the network has. It is a reward signal which goes to the controller, and the controller tries to maximise the expected reward until the end of its lifetime.
This means that it is trying to change its internal connections so that it becomes better at translating the incoming inputs into action sequences that lead to even more data that the model can learn something from. Now the controller is motivated to invent experiments that improve the world model, just like a scientist is motivated to come up with new experiments which improve his understanding of the world. Maybe he comes up with a new data sequence generated through an experiment and he hopes that there is a previously unpublished physical law detectable in there. And if it was a successful experiment, then he has a lot of joy, excitement and fun as this is happening. We can build artificial agents that do the same.
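The "fun" signal described here can be sketched numerically as the coding cost of the data before learning minus the cost after learning. In the hypothetical sketch below, two hand-written predictors stand in for the world model before and after it discovers the regularity in a repetitive, "falling apples" style sequence:

```python
import math

def coding_cost(sequence, predictor):
    """Bits needed to encode the sequence, charging -log2(p) bits
    for each symbol the predictor assigns probability p to
    (the Shannon code length)."""
    return sum(-math.log2(predictor(i, sequence))
               for i in range(len(sequence)))

# A highly regular sequence: 0, 1, 2, 3 repeating 25 times.
data = [0, 1, 2, 3] * 25

def naive(i, seq):
    # Before learning: all 4 symbols equally likely -> 2 bits each.
    return 0.25

def learned(i, seq):
    # After learning the rule "next = previous + 1 mod 4", every
    # symbol after the first is predicted with near certainty.
    if i == 0:
        return 0.25
    return 0.99 if seq[i] == (seq[i - 1] + 1) % 4 else 0.01 / 3

before = coding_cost(data, naive)    # 200 bits
after = coding_cost(data, learned)   # only a few bits
fun = before - after                 # intrinsic reward for the controller
```

The compression progress `fun` is large exactly when the model has just discovered a regularity, which is what makes it a useful curiosity reward: once the sequence is fully learned, re-watching it yields no further progress and no further reward.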
At the beginning of this year, Sunspring, a film whose screenplay was written by an artificial intelligence based on your LSTM networks, was released. We’ve also seen examples of AI-written journalism. Will those in the communications industry soon join the strawberry pickers in the ranks of those whose work has been displaced by artificial intelligence?
Back in the 1980s I said: It’s easy to predict which jobs are going to go, like taxi drivers, but it is very hard to predict the new jobs that are being created all the time. There is this idea of the playing man, homo ludens. Homo ludens invents new professions all the time and most of these are luxury professions. For example, although Usain Bolt is much slower than the fastest machine he still makes hundreds of millions by running against other humans. All these new types of interactions with other people on social networks, blogs, YouTube and so on, who could have predicted that 20 years ago? If you look at unemployment rates today they are pretty much the same as we had back then.
A popular fear of artificial intelligence is due to its great potential – many are alarmed at the prospect of opening a Pandora’s Box. How do you respond to people like Elon Musk, who have been vocal about the potential dangers of artificial intelligence?
Few people make AIs; many talk about them. It is true that recently many philosophers, entrepreneurs, physicists and other people who don’t know too much about AI have warned about its dangers. So have many science fiction authors. I often meet such people, and I tend to point out that there is no new level of self-destruction capability, because there is already much older technology that can destroy civilisation in two hours without any AI: H-bomb rockets.
People forget that since the end of the Cold War, although there has been a dramatic reduction in nuclear warheads, we still have more than 10,000 of them, and you need just a few hundred to devastate the biosphere and make civilization as we know it impossible. So 50-year-old technology in the form of H-bomb rockets is already maximally destructive, much more so than anything we can do today with conventional weapons equipped with AI.
But the idea of a vastly superior intelligence in a more powerful position than mankind, potentially taking over humanity, is surely cause for concern?
Will they really want to take over humanity? That is a debatable scenario inspired by Arnold Schwarzenegger movies, and it has little to do with reality. Who might want to take you over or enslave you? Only others like yourself, who have similar goals. That’s why humans usually quarrel with other humans but not so much with kangaroos. People are interested in other people who are similar to themselves, because they share goals and can either collaborate with them or compete with them, or sometimes both, as when one company competes against another – each of them being a collection of humans. The supersmart AIs of the future will not be so interested in humans, just like humans are not so interested in ants. The supersmart AIs of the future will mostly be interested in other supersmart AIs, simply because those will share similar goals in an environment that might be quite disconnected from what we have here in this little biosphere.
We are much smarter than ants, aren’t we? Only when they invade our houses do we take measures against them. But most of the ants in the world are happily living in the forest and we are glad they are doing that, and the weight of all ants is still comparable to the weight of all humans, because we don’t have too many goal conflicts with each other. That’s going to be the same thing with the supersmart robots.
You have been pioneering self-improving general problem solvers since 1987, and very deep learning neural networks since 1991, all the while in Europe. The recurrent neural networks developed by your research groups in Switzerland and in Munich were the first to win official international contests. Is there enough communication about Europe’s track record of innovation and artificial intelligence development?
Some journalists, when they research AI, go to Silicon Valley blogs that are written as if this stuff was invented in Silicon Valley. Of course much of it wasn’t. For example, much of deep learning was developed by Europeans, and it started 50 years ago, when a mathematician from Ukraine, Alexey Grigoryevich Ivakhnenko, built the first deep learning networks in the 1960s. He was the father of this field of deep learning. Then in 1970 a Finnish scientist, Seppo Linnainmaa, invented a widely used method called backpropagation, which is used to train deep networks today. At the beginning of the 1990s, my first student, Sepp Hochreiter, identified a problem with that method – it doesn’t really work well for deep neural networks – and we overcame that problem through various methods. Our techniques are now widely used by the world’s most valuable public companies, often not based here in Europe but in the United States, in China and in Southeast Asia.
What can communicators do to spread the word?
Do proper research; don’t just copy what you find on blogs. Look next door at what is happening there and how you can communicate and represent that in a good way. Well, that’s your job – you are the communication engineers.