Mathematician Gareth Jones on Gödel’s incompleteness theorem, the halting problem and why the subsets of the natural numbers are uncountably infinite

I would like to tell you about undecidable problems. This is a topic in the theory of algorithms. In the early part of the 20th century mathematicians had a great sense of confidence that their subject could eventually answer any reasonable problem. This confidence was shattered in the 1930s by the announcement of several really surprising results. The first of these was Gödel’s incompleteness theorem. Kurt Gödel proved in 1931 that there are statements about the natural numbers, that is the positive integers, which are true but which can never be proved. It’s not that we don’t know a proof: he showed that we can never find one. And then a few years later Alonzo Church and Alan Turing independently showed that there are problems in mathematics which cannot be solved, in the sense that there is no algorithm to solve them. Again, it’s not that we haven’t tried hard enough to find an algorithm: there is no algorithm to solve these problems.

I’ll give you a few examples in a few minutes but let me be a bit more precise about these problems. These are what are called decision problems.

A decision problem consists of a set of objects called instances (these could be numbers or equations, anything you like) together with a property that these objects may or may not have. The decision problem asks, does this particular object have the property we’re interested in?

For example, the objects could be integers, the property could be divisibility by 10, and we ask whether this integer is divisible by 10 or not. There is an easy algorithm to solve that problem: you just look at the last decimal digit; if it’s 0, the integer is divisible by 10, and if it’s not 0, it’s not divisible by 10. So that’s an algorithm which solves this problem. We say that a problem is decidable if there is an algorithm to solve it, an algorithm being a finite process which can be followed by any human being, or any computer programmed to follow it, and which always gives the correct answer: it must, after a finite number of steps, produce the correct answer, yes or no.
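The last-digit check described above can be written out as a tiny decision procedure; the function name is my own, just for illustration.

```python
def divisible_by_10(n: int) -> bool:
    """Decide divisibility by 10 by inspecting the last decimal digit."""
    last_digit = str(abs(n))[-1]
    return last_digit == "0"

print(divisible_by_10(120))  # True
print(divisible_by_10(37))   # False
```

The point is that this procedure always halts and always answers correctly, which is exactly what makes the problem decidable.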

What I would like to do first is give you an intuitive argument as to why one might expect some problems to be undecidable. The reason is that there are infinitely many problems one can ask, and there are infinitely many algorithms one can describe, but the infinity of problems is bigger than the infinity of algorithms. So there simply aren’t enough algorithms to solve all the problems. The algorithms form a set that is countably infinite. That’s mathematical jargon for saying that you can form these algorithms into a list, with a first, a second and a third member and so on: every algorithm will eventually appear in this list. You can do this with the integers: for example, I can list all the integers. I can’t go ‘1, 2, 3’ and so on, because then the negative numbers and 0 are left out. But I can go 0, 1, -1, 2, -2 and so on, and every integer appears sooner or later in that list. In the same way, you can do this with the rational numbers. You cannot do it with the real numbers: there are too many of them to form them into a list.
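The interleaved listing of the integers described above, 0, 1, -1, 2, -2 and so on, can be sketched as a generator (a sketch of mine, not part of the talk):

```python
from itertools import islice

def list_integers():
    """Yield every integer in the order 0, 1, -1, 2, -2, 3, -3, ..."""
    yield 0
    k = 1
    while True:
        yield k
        yield -k
        k += 1

print(list(islice(list_integers(), 7)))  # [0, 1, -1, 2, -2, 3, -3]
```

Every integer appears after finitely many steps, which is precisely what it means for the integers to be countable.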

Suppose we could list all the subsets of the natural numbers as a first set, a second set, a third set and so on. Then consider the ‘diagonal’ set of all numbers n such that n does not belong to the n-th set in the list: it is a subset of the natural numbers, yet it differs from every set in the list, so it cannot appear anywhere in the list. We’ve contradicted ourselves, and the contradiction must come from assuming that we could list all the subsets of the natural numbers. So we cannot do this listing process: there are too many subsets of the natural numbers to form them into a list. They are uncountably infinite, and that means there are more of them than there are algorithms. We can form the algorithms into a list: we could just write them down in English and put them into dictionary order, order them lexicographically; there’s no problem about that. But we cannot list all the subsets of the natural numbers, and therefore we cannot list all the decision problems, because for every subset of the natural numbers there is a decision problem: is this integer in that particular set? So there are at least as many decision problems as there are subsets of the natural numbers.
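The diagonal construction behind this contradiction can be illustrated on a finite fragment of a supposed listing; the sets below are my own example, chosen only to show the mechanism.

```python
def diagonal_set(listed_sets):
    """Given sets S_0, S_1, S_2, ..., return D = {n : n not in S_n}.
    D disagrees with each S_n at position n, so D is not in the list."""
    return {n for n, s in enumerate(listed_sets) if n not in s}

# A finite fragment of a hypothetical listing of subsets of the naturals:
listing = [{0, 2, 4}, {1, 3}, {0, 1, 2}, set()]
d = diagonal_set(listing)
print(d)  # {3}

# D differs from the n-th listed set at the element n, for every n:
for n, s in enumerate(listing):
    assert (n in d) != (n in s)
```

For an infinite listing the same recipe produces a subset that the listing necessarily misses, which is the contradiction.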

So I’ve tried to prove to you that there must be undecidable problems, but you may be unhappy with that, because the argument does not produce a specific example of an undecidable problem. What Turing and Church did was produce concrete examples of such problems. I’ll very briefly describe Turing’s problem: the halting problem.

An instance of the halting problem consists of a pair consisting of an algorithm and an input to that algorithm. The halting problem asks, if you run this algorithm with this input, will it eventually halt or will it run forever?

Let me give an example. Suppose the algorithm starts with an integer, and the algorithm says ‘keep subtracting 1 until 0 appears and then halt’. Now, if I start with a positive integer (say, 3) and run the algorithm, we get 3, 2, 1, 0 and we halt. If I start with a negative integer (say, -1) and run the algorithm, I keep subtracting 1 and I get -1, -2, -3, and it just goes on forever: we wait for 0 to appear and it never does. So the algorithm does not halt with that particular input.
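The subtract-until-zero procedure can be written down directly; calling it with -1 really would loop forever, so the demonstration below only runs it on a positive input (the function name is mine).

```python
def subtract_until_zero(n: int):
    """Keep subtracting 1 until 0 appears, then halt.
    Halts for n >= 0; runs forever for negative n."""
    trace = []
    while True:
        trace.append(n)
        if n == 0:
            return trace  # 0 appeared: halt
        n -= 1

print(subtract_until_zero(3))  # [3, 2, 1, 0]
# subtract_until_zero(-1) would never return: -1, -2, -3, ...
```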

You could say we can solve this problem by taking the algorithm and the input we are given, running the algorithm with that input, and waiting to see what happens. If the algorithm halts, we will see it halt and we will get ‘yes’ as an answer; but if the algorithm does not halt, it will simply keep on running, we will be waiting for something to happen, and we will never get a negative answer. So simply running the algorithm is not adequate: it doesn’t always give us an answer. Turing was able to prove, by a rather ingenious argument by contradiction, that there can be no algorithm which solves the halting problem. There is no uniform way of determining whether a given algorithm with a given input will halt.
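Running the algorithm and watching is what logicians call a semi-decision procedure: it can confirm halting but can never confirm non-halting. A bounded simulation makes the asymmetry visible; the step budget and names here are my own illustration, not anything from the talk.

```python
def run_with_budget(step_fn, state, max_steps):
    """Simulate an algorithm one step at a time, for at most max_steps.
    step_fn returns the next state, or None to signal halting.
    Returns 'halts' if it halted within the budget, else 'unknown' --
    and 'unknown' never tells us the algorithm runs forever."""
    for _ in range(max_steps):
        state = step_fn(state)
        if state is None:
            return "halts"
    return "unknown"

# One step of the subtract-until-zero algorithm:
step = lambda n: None if n == 0 else n - 1

print(run_with_budget(step, 3, 100))   # 'halts'
print(run_with_budget(step, -1, 100))  # 'unknown' -- we still can't tell
```

No matter how large the budget, a result of 'unknown' leaves both possibilities open, which is exactly why watching the run is not a decision procedure.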

So the halting problem is unsolvable. I won’t describe Church’s undecidable problem: it’s algebraic and it requires quite a lot of symbols to state. But what these two examples of undecidable problems do is give rise to other undecidable problems in other areas of mathematics. Since the 1930s mathematicians have found examples of important problems in algebra, in topology, in analysis and so on which have been proved to be undecidable.

Here’s an example: Hilbert’s tenth problem, which asks for an algorithm to decide whether a given Diophantine equation, a polynomial equation, has a solution in integers. Let the equation be x^{2}+y^{2}=0. It’s easy to solve that: you can put x = y = 0, and that gives you a solution in which the variables are all integers. Now try x^{2}+y^{2}=3. You can solve it, but not with integers: there are no integers x and y such that x^{2}+y^{2}=3. Just try it.
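For this particular equation, "just try it" is a finite task: any integer solution of x^{2}+y^{2}=c must have |x| and |y| at most the square root of c, so an exhaustive search settles it. The sketch below is my own illustration.

```python
def integer_solutions_x2_plus_y2(c: int):
    """Find all integer pairs (x, y) with x^2 + y^2 == c.
    Any solution satisfies |x|, |y| <= sqrt(c), so the search is finite."""
    bound = int(c ** 0.5)
    return [(x, y)
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)
            if x * x + y * y == c]

print(integer_solutions_x2_plus_y2(0))  # [(0, 0)]
print(integer_solutions_x2_plus_y2(3))  # [] -- no integer solutions
```

Such a bound exists for this one equation; what Matiyasevich showed is that no single algorithm can decide the question for every Diophantine equation.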

That particular polynomial is easy to deal with, but in 1970 the Russian mathematician Yuri Matiyasevich proved that there can be no algorithm which decides this question for all Diophantine equations. He did it building on earlier work by Martin Davis, Hilary Putnam and Julia Robinson in the United States. It was a time when cooperation between Russia and the West was not very good, but nevertheless, in subjects like mathematics, it’s occasionally possible for people in different nations to talk to each other. What Yuri Matiyasevich did was use properties of the Fibonacci numbers to prove that there could be no algorithm to solve this problem. This is rather remarkable, because the Fibonacci numbers are named after Fibonacci, the 13th-century Italian mathematician Leonardo of Pisa. The Fibonacci numbers are normally named after him, but in fact they were discovered about five or six hundred years earlier by Indian mathematicians. It’s really rather remarkable that a very ancient mathematical structure, the Fibonacci sequence, was used to prove this very modern result that Hilbert’s tenth problem is undecidable.