Gödel Machines
This lecture is part of the collaboration between Serious Science and the Technology Contests Up Great READ//ABLE.
Gödel machines are self-referential universal problem solvers that make provably optimal self-improvements. In a certain sense, they formalize I. J. Good's informal remarks of 1965 on an intelligence explosion through self-improving superintelligence.
Gödel machines are inspired by Kurt Gödel's self-referential formulas. Maybe you know that in 1931, Gödel became the founder of theoretical computer science: he introduced the first universal coding language, which was based on the integers. This universal coding language allows for formalizing the operations of any formal axiomatic system, or of any digital computer, in axiomatic form through numbers; the storage is basically represented in the form of integers. Gödel used that language to represent both the data (his data were axioms and theorems) and the programs operating on the data, such as proof-generating sequences of instructions that manipulate the data. He then famously constructed formal statements that talk about the computation of other formal statements, so one statement talks about sequences of operations that generate new statements. He even constructed his famous self-referential statements, which state that they themselves are not provable by any computational theorem prover. That is how he identified the fundamental limits of mathematics, of theorem proving, of computing, and therefore also of artificial intelligence.
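To make the idea of coding with integers concrete, here is a minimal sketch in Python (my own illustration, not something from the lecture or from Gödel's paper): each symbol of a formula gets a positive integer code, and the whole sequence is packed into a single integer via prime-power exponents, so it can be recovered uniquely by factorization.

# Minimal sketch of a Goedel-style numbering (illustrative only):
# a sequence of positive symbol codes becomes one integer, prod_i p_i ** c_i,
# and can be recovered uniquely by dividing out successive primes.

def primes(n):
    """Return the first n prime numbers."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def goedel_number(symbol_codes):
    """Encode a sequence of positive integers as a single integer."""
    number = 1
    for p, c in zip(primes(len(symbol_codes)), symbol_codes):
        number *= p ** c
    return number

def decode(number, max_len=64):
    """Recover the original sequence from its Goedel number."""
    codes = []
    for p in primes(max_len):
        if number == 1:
            break
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        codes.append(exponent)
    return codes

# Example: the symbol sequence [3, 1, 4] becomes 2**3 * 3**1 * 5**4 = 15000.
assert decode(goedel_number([3, 1, 4])) == [3, 1, 4]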
So Gödel back then was the guy who showed the basic limits of AI. What he did had an enormous impact on the science and philosophy of the 20th century. Furthermore, much of early artificial intelligence, from the 1940s (Zuse and others) to the 1970s, was actually about theorem proving and about deduction in Gödel style through expert systems and logic programming.
A Gödel machine is a computer that rewrites any part of its own code as soon as it has found a proof that the rewrite is useful. A problem-dependent utility function, the properties of the hardware, and the entire initial code are all described by axioms encoded in an initial proof searcher, a piece of software that is itself part of the initial code of the Gödel machine and that can, in principle, also be rewritten. So what does this proof searcher do? It systematically and, in a certain sense, efficiently tests computable proof techniques. A proof technique is a program whose output is a proof: starting from the axioms, it generates lemmas and new theorems until, finally, it arrives at some theorem that you want to prove.
The Gödel machine generates such theorems until it finds a theorem that says: the rewrite that I am proposing here is indeed useful, because after the rewrite you will get more reward per time than before.
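Schematically, the loop just described might look as follows. This is only a rough sketch under strong simplifications; enumerate_proof_techniques, run_proof_technique, and is_target_theorem are hypothetical placeholders standing in for the real proof machinery, not parts of any actual Gödel machine implementation.

# Rough, purely illustrative sketch of the Goedel machine's top-level loop.
# The three helpers are hypothetical placeholders, NOT a real implementation.

def enumerate_proof_techniques():
    """Yield candidate proof techniques (programs whose output is a proof)."""
    raise NotImplementedError("placeholder for a systematic enumeration")

def run_proof_technique(technique, axioms):
    """Run one proof technique on the axioms; return the theorems it derives."""
    raise NotImplementedError("placeholder for executing a proof technique")

def is_target_theorem(theorem, own_code):
    """Does the theorem say that rewriting own_code is provably useful?"""
    raise NotImplementedError("placeholder for recognizing target theorems")

def goedel_machine_step(own_code, axioms):
    """Search for a provably useful self-rewrite; apply it as soon as it is found."""
    for technique in enumerate_proof_techniques():               # systematic enumeration
        for theorem in run_proof_technique(technique, axioms):   # axioms -> lemmas -> theorems
            # A target theorem states: "switching to proposed_rewrite yields more
            # expected reward over the remaining lifetime than keeping own_code."
            if is_target_theorem(theorem, own_code):
                return theorem.proposed_rewrite                  # execute the self-rewrite
    return own_code                                              # no useful rewrite proved yet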
So this initial software, which includes the proof searcher, keeps searching for theorems until it finds a provably useful, computable self-rewrite. I was able to show that such a self-rewrite must be globally optimal; that is, there are no local maxima, since the code first had to prove that it is not useful to continue the proof search for alternative, maybe even better, self-rewrites. Implicit in the proof is the statement that there are no alternative self-rewrites that are better than the one found so far. Unlike previous non-self-referential methods based on hardwired proof searchers, the Gödel machine not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the standard asymptotic optimality notation, the O notation, provided that the utility of such speed-ups is provable at all.
Now, one might ask: does the exacting business of formal proof search make sense at all in an uncertain real world like ours? The answer is yes, it does. All we need to do is insert into the original software of the Gödel machine, alongside the proof searcher, the standard axioms for representing uncertainty and for dealing with probabilistic settings and uncertain worlds. In fact, the original write-up of the Gödel machine already addressed this issue and was about expected rewards: you want to maximize the expected future reward within your limited lifetime. That is the objective function and the initial utility function, and since utility can be defined as an expected value, and the axioms provide everything you need to reason about expected values, we are fine.
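Schematically, and with the notation simplified here rather than taken verbatim from the original paper, such a utility function can be written as the expected sum of future rewards over the machine's remaining lifetime:

u(s, Env) = E_μ [ Σ_{τ = time(s)}^{T} r(τ) | s, Env ]

where s is the current state of the machine (including its code and its proof searcher), Env is the environment, r(τ) is the reward received at time τ, T is the end of the lifetime, and the expectation is taken with respect to the machine's probabilistic model μ of the uncertain world.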
Now, human machine-learning researchers also routinely prove theorems about expected rewards in stochastic worlds, and a machine equipped with a general theorem prover like the Gödel machine and the appropriate axioms can do the same. So the Gödel machine's proof searcher is trying to find a target theorem which says that a certain piece of code that rewrites the Gödel machine, including the proof searcher, is good, and this target theorem seems to refer only to the very first self-change, which may completely rewrite the proof-search subroutine that is part of the original software of the Gödel machine.
Now the question is: doesn't this make the proof of the global optimality theorem invalid? What prevents later self-changes from being destructive? This is fully taken care of. The proof of my global optimality theorem shows that the first self-change executed by the system will be carried out only if it is provably useful in the sense of the present utility function, that is, provably useful for all future self-changes that might happen through additional computation of the Gödel machine. These future self-changes are influenced, of course, by the present self-change, which sets the stage for them, but it is all taken care of; this is actually the main point of the whole self-referential setup.
The Gödel machine implements a meta-learning behaviour: it learns how to learn, in a certain, even mathematically optimal, sense. But what about a meta-meta level, and a meta-meta-meta level, and so on? The beautiful thing is that all the meta-levels automatically collapse into one single meta-level, if you will, because any proof of a target theorem automatically proves that the corresponding self-modification is a useful basis for all future self-modifications affected by the current one. All these worries are taken care of in a recursive fashion.
Is the Gödel machine computationally more powerful than a traditional computer such as a Turing machine? No, of course not.
Any traditional computer such as a Turing machine or a Post machine or any other reasonable computer can become a self-referential Gödel machine by just loading it with a particular form of machine-dependent software, software that is self-referential and has the potential to modify itself.
But Gödel machines cannot in any way overcome the fundamental limitations of computability and of theorem proving, which were first identified in 1931 by Kurt Gödel himself.