Every day, we hear that artificial intelligence will solve all our problems—from self-driving cars to curing cancer. At the same time, some scientists and industry leaders believe that artificial intelligence poses an existential threat to humanity. So where does the truth lie, and what is hidden behind this term?
The term “artificial intelligence” was coined in the 1950s, and even then, debates about its meaning were already underway. Early text-editing systems were considered “intelligent.” This is when the joke emerged that artificial intelligence is what humans can do, but computers cannot—yet. In other words, AI was initially viewed as automating human cognitive activities.
In the 1980s, so-called expert systems became widely popular, and they significantly influenced the automation of business processes governed by strict rules. At first, armies of managers were responsible for applying business rules; later, those rules were written directly into the code of management programs. Under the influence of expert systems, the rules were separated from the code and gathered into tables, so in modern management systems the rules can be modified without reprogramming the system itself.
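To make the idea concrete, here is a minimal sketch in Python (with invented rule names and order fields, not any particular product) of how business rules can live in a data table while a generic engine applies them, so the rules can change without touching the engine's code:

```python
# Minimal sketch of the expert-system idea: rules are data, the engine is generic.
# All rule names and order fields below are hypothetical.

RULES = [
    {"if": lambda order: order["total"] > 1000,      "then": "require_manager_approval"},
    {"if": lambda order: order["customer"] == "vip", "then": "apply_priority_shipping"},
    {"if": lambda order: order["items"] == 0,        "then": "reject_order"},
]

def apply_rules(order):
    """Return the actions triggered by whichever rules match this order."""
    return [rule["then"] for rule in RULES if rule["if"](order)]

if __name__ == "__main__":
    order = {"total": 1500, "customer": "vip", "items": 3}
    print(apply_rules(order))  # ['require_manager_approval', 'apply_priority_shipping']
```

Changing the behaviour of such a system means editing the rule table, not rewriting the program.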
These examples illustrate that such systems for automating cognitive activities do not learn independently: all their knowledge, such as expert rules, must be developed and entered manually. Recently, the focus has shifted to so-called machine learning systems, which aim to replace the manual development of rules with automatic learning from examples. Until the late 1990s, machine translation systems operated on rules developed by dozens of linguists, and their success left much to be desired. With the rise of the internet, it became possible to gather vast collections of parallel texts in two languages, which led to the introduction of statistical translation models. The parameters of these models were optimized automatically from the parallel texts, without the need for linguistic rules, and a similar approach was applied to speech recognition. This shift produced a significant leap in translation quality as soon as the number of training examples reached tens of millions.
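The following toy sketch illustrates the statistical idea, under the loose assumption that co-occurrence counts over aligned sentences stand in for a real translation model (actual systems use far more sophisticated estimation): instead of hand-written linguistic rules, translation preferences are learned simply by counting which words appear together in parallel sentence pairs.

```python
# Toy illustration of learning from parallel texts instead of linguistic rules.
# The three-sentence "corpus" is invented; real systems need millions of pairs.
from collections import Counter, defaultdict

parallel = [
    ("das haus", "the house"),
    ("das buch", "the book"),
    ("ein buch", "a book"),
]

# Count how often each source word co-occurs with each target word
# within aligned sentence pairs.
cooc = defaultdict(Counter)
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def translation_probs(word):
    """Normalize the co-occurrence counts into crude translation probabilities."""
    counts = cooc[word]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(translation_probs("buch"))  # 'book' gets the highest probability
```

No linguist wrote a rule saying that "buch" means "book"; the preference emerges from the data, and it sharpens as more parallel sentences are added.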
Modern statistical machine learning systems based on deep neural networks have achieved impressive results in machine translation, speech recognition, and image analysis. This leads optimists to believe that cures for cancer and intelligent robots capable of discussing any topic are just around the corner. Pessimists warn of mass unemployment and even uncontrollable robots taking over the world. Both sides, however, are jumping far ahead into the realm of science fiction.
All current artificial intelligence systems are narrowly specialized. Over the years, many systems have been developed to automate various human cognitive activities, such as playing chess or recognizing handwritten words. However, even the most advanced chess program cannot answer where the current world champion, Magnus Carlsen, was born. It can only make chess moves—nothing else. We do not know how to create systems with general rather than specialized intelligence.
IBM tried to turn this into a marketing campaign based on the idea that if a computer can win at chess, it can do anything, like curing cancer. In reality, this is not the case. At the current stage of development, different AI methods can solve specific problems quite successfully, but a theory of general intelligence does not yet exist.
Statistical machine learning systems require massive amounts of labelled data—parallel texts or images with highlighted objects. Such data is only available in a few domains. A lack of sufficient training examples leads to a high number of errors.
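As a schematic illustration (the file names and sentences below are invented), "labelled data" simply means that every training item comes with a human-provided answer:

```python
# What supervised training data looks like in outline: each item is paired
# with a label that a person had to supply. Examples are hypothetical.
labelled_images = [
    {"file": "img_0001.jpg", "label": "cow"},
    {"file": "img_0002.jpg", "label": "sheep"},
    # ...repeated for hundreds of thousands of images
]

labelled_translations = [
    {"source": "Guten Morgen", "target": "Good morning"},
    {"source": "Wie geht es dir?", "target": "How are you?"},
    # ...millions of such sentence pairs for a usable translation model
]
```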
Our extensive knowledge and reasoning allow us to learn from a tiny number of examples. Psychological experiments show that a single picture of a wildebeest is enough for a person to recognize one later, even if they have never seen the animal before. The best neural networks, however, need thousands of images. Humans most likely do this faster and more accurately because they have rich conceptual representations of animals against which they can compare something new.
What do we need to build general intelligence? We don’t have a complete answer to this question yet. We can identify a few necessary components by drawing a parallel with human intelligence. It’s often said that artificial intelligence lacks common sense. But what is common sense? It’s our knowledge and the logic of applying it. A two-year-old child doesn’t need to touch a hot stove ten times to learn to fear it. They already have a model of hot objects and an understanding of what happens when they come into contact with them. Burning their hand once is enough for them to avoid such objects in the future.
How do we acquire knowledge? Very little of what we know comes from our own direct experience. From early childhood, we learn from those around us. Our knowledge is collective, and so is our intelligence: we constantly help and guide each other. None of this is present in current machine learning systems. We can't offer advice to a neural network, and it can't teach us anything. If one robot learns to recognize a sheep and another a cow, they can't help each other in any way.
Until we solve these problems, there's no need to fear terrifying, omnipresent robots. Instead, we should be wary of myths about artificial intelligence and of blindly following the instructions of far-from-perfect machines. However, the same can be said about humans.