**IMPOSSIBILITY: THE LIMITS OF SCIENCE**

**Professor John Barrow**

If you look at mythological accounts of the nature of the universe, then there is an answer for every question. Everything has a meaning, and nothing is left unexplained. Modern science is not really quite so ambitious. It focuses its attention upon soluble problems; it asks questions which, it is believed, can be answered and whose answers can be tested. We have also learnt that science is possible, and has the form that we know, only because there are limits to what is possible. There are constraints on what can happen in nature. There are conservation principles in nature. There appear to be so-called laws of nature. One way of viewing those is as constraints on what is possible in nature, and in this talk, I am going to talk about particular types of limits of science. The whole idea of limits is a very big topic. I am not going to talk about what we *should* do. I am going to talk about what we *can* do, and what we *cannot* do. There are different types of limit on what can be done and what can be known via the scientific process and in an exploration of nature. Some of those limits are rather unusual, and there are some general lessons about the nature of reality, I believe, to be learned from those limits.

The idea of learning more and more about the world is bound up with the idea of progress. Some of you will have heard of Moore’s Law. Gordon Moore was the co-founder of the Intel chip production company, and in 1965 he wrote an article in *Electronics*, a popular electronics magazine, where he made a prediction about the rate of progress in the micro-electronics industry. By progress, he meant the ability to fit more and more transistors on to a tiny area of silicon circuit board, and also the cost of so doing. The trend that he predicted, which has been maintained to pretty high accuracy ever since, has become known as Moore’s Law. He looked at the passage of time against the number of transistors that you can fit on a 3 square centimetre piece of silicon, and when Moore began thinking about the problem, back in 1965, that number was about 2,300. We are now up to half a billion or so. This is a measure of progress in micro-engineering: the ability to make thin sheets of silicon, just a few atoms thick, and the ability to make gates of tiny nanometre size. The lesson is that the number of transistors which can be fitted on the unit area doubles every 2 years, so 2 years was the doubling time of the measure of progress in the micro-electronics industry. Alternatively, you can say you could buy the same amount of computing power for half the price 2 years further on. This acted as a very important guide, as well as a stimulus, to the growth of the computing industry, because if you are building pieces of hardware and peripherals, it’s no good waiting till Mr Moore says, “Ah, we’ve now found a way to put 100 million transistors on my unit area,” and then starting to make your hardware devices to exploit that. You have got to have them in the pipeline, ready to go when the processing capability moves on. So that is a classic measure of progress.
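The arithmetic behind that doubling claim can be checked in a few lines. The 1965 and present-day transistor counts are the figures quoted above; the 40-year span is my own assumption about the date of the lecture, so the result is illustrative rather than exact:

```python
import math

# Figures quoted in the lecture: ~2,300 transistors in 1965,
# ~half a billion "now" (taken here, as an assumption, to be 2005).
t0, n0 = 1965, 2_300
t1, n1 = 2005, 500_000_000

# How many doublings take us from n0 to n1, and the implied doubling time.
doublings = math.log2(n1 / n0)
doubling_time = (t1 - t0) / doublings

print(f"{doublings:.1f} doublings -> doubling time ~{doubling_time:.2f} years")
```

The implied doubling time comes out at roughly 2.3 years, close to the 2-year figure the lecture quotes.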

One of the odd things about the concept of progress is that it is actually rather a modern idea. If you look back to medieval times and even before, you will find that people did not have this vision of some great golden age in the future that we were moving towards, where things would be better and more wonderful, and their daily life was not engaged with the notion of progress and industrial production in quite the same way. For them, the golden age was in the past, looking back to the era of the Greeks, when knowledge was created, and the works of Aristotle laid down our philosophical views and so forth. So the idea of progress is a relatively modern one. We see it particularly stimulated by events like the Industrial Revolution, and of course in modern times it dominates the way almost everybody seems to think about society and the progress of the world.

Progress in science has a rather different type of structure, and in modern times, this idea of looking at limits of science and things that are not possible springs from thinking about scientific theories and the development of science in a rather particular way. A theory is proposed, it gives rise to lots of predictions that are confirmed by experiment rather well, so it becomes very successful, and people use this theory more and more successfully, and so they start to think that perhaps this theory can tell you everything within its domain of applicability, that it has no limits. The theory might be quantum mechanics, it might be mathematics, it might be a certain type of computational procedure. Then what you find happens is that, as you explore this sophisticated theory more and more deeply and you understand it better, you discover that something very odd happens, that the theory predicts that in some sense it cannot predict. It predicts that it has limitations to what it can do even on its own terms. It is like somebody once said of something else: it contains within itself the seeds of its own destruction. This happens in several different types of scientific theory, and what it implies is that perhaps rather deep types of explanation about the world, deep types of logical structures, are self-limiting in some unusual way, and we are going to explore how that happens.

At the end of the 19th Century, just like at the end of the 20th Century, people started to think a lot about whether we have explained everything that we should explain or could explain, and what we ought to do next. It is a typical ‘end of the century’ mania. It is always interesting to look back to see what people thought in the past, and to see how right they were. After all, we talked about computing just now. The founder of IBM, Mr Watson, famously said at the end of the 1940s that he could not conceive of a world market for more than about six computers!

There was a physiologist towards the end of the 19th Century who gave a famous lecture on the future, the progress of science and the limits of science. As he was approaching the end of the 19th Century, he tried to pick on what he believed were insoluble problems. These might be problems which were practically insoluble for a very long time, or some others that he believed would never be solved. It is very interesting to see the problems that he picked on, in some ways rather far-sighted, because if you were making such a list today about key problems, challenging problems, most of them would still be there. The first of his seven problems was the problem of the origin of life. Remember, this is after Darwin’s Theory of Natural Selection had been proposed and talked about for a long time. So he believes that there is a potentially insoluble problem as to where and how life began: if it was a one-off event, requiring improbable conditions, this may not be something that we would ever be able to discover. He is also concerned about the origin of language: how did language and languages develop, and how did human linguistic capability develop along with living things? He is concerned also that we may never be able to discover the origin of human reason, the structure of the mind. He is worried also about how we can understand how living organisms evolved all the capabilities that they have: are they all just the results of natural selection or sexual selection, or do some of them arise for other reasons or by chance?

He wanted to know about the origin of the forces of nature and the ultimate nature of matter. He believed that this would prove to be an insoluble problem. His sixth problem, rather similar to the human reason idea, was the origin and nature of consciousness. Here, he thought this was a problem that was deeply insoluble in the sense that we were trying to study ourselves, we were trying to study the very facility that we had for studying the problem in the first place, so there was something of a dangerous loop. He was also rather insightful in picking what he called sensation, which today would be called the problem of qualia. So what is something like redness? Why does the mind recognise redness or hardness or something else, and other people recognise that quality as well so that we can talk about it and share it? Lastly, the problem that he thought was ultimately going to be totally insoluble was the problem of free will. So that was the perspective 125 years ago. In many ways, it is surprisingly up-to-date.

If we look at the issue of limits today, we probably would not go about it in the same way. We would not start listing problems that could not be solved. Instead, we are interested in types of limit that there could be on our capability either to do things or to know things. There are four types of limit that I will have something to say about. The first is just practical: what are the limits on our ability to do certain things, and these can be rather mundane. It can be simply cost, it can be the amount of time involved, and so on. There are also perhaps more sophisticated types of constraints that arise from the very structure of our thinking processes, so we can imagine that there is sometimes perhaps a mismatch between how a reality might be and the ability for our mind to understand and appreciate it. Most interesting of all, we will find that there may be intrinsic limits that are created basically by the nature of knowledge itself. Lastly, there are some rather particular limits, which I will call cosmological, that there are certain questions about the universe that I believe we will never be able to answer because of the nature of physics.

Well, let’s say a little about those practical limits for a while. The practical limits, to some extent, are fairly obvious. In particle physics, we have learnt that all the interesting hidden symmetries in the world, the true picture of what the laws of nature are like and their symmetries and how they are joined together, really becomes illuminated only if we go to very, very high energies. The structure of string theory, a part of which exists in theory but which is not confirmed by experiment, faces the challenge of being able to create experiments in high enough energy environments to ever see the consequences of that theory’s predictions. The universe is not made for our convenience. There is no reason why all the decisive theories of nature should be testable by experiments that we can actually do. So high energy physics always has the challenge and the worry that the most decisive effects which reveal the laws of nature may only be evidenced on very tiny scales of length or at enormously high temperatures and energies.

Staying with particle physics: if you are trying to pursue that line and produce accelerators with arbitrarily high energies, you also have financial cost to worry about. What is the limit, if you like, of the amount of expenditure you are willing to make on pursuing something like the Higgs boson or the structure of space-time?

More modern practical limits we have learnt about revolve around the nature of complexity and chaotic unpredictability. We have learnt that there are certain situations which are sufficiently sensitive to making little changes in their behaviour now so that if you try to come back and look at how they are behaving tomorrow, you will not successfully predict their future behaviour. This is not because they do not obey any laws or they are irrational or crazy. They are simply extremely sensitive to the state in which they start off, and unless you have perfect knowledge of that state, you will have ever-diminishing knowledge of its future behaviour. The weather is a classic example. We are not good at predicting the weather tomorrow, not because we do not understand the mathematical physics of weather – there is no unknown meteorological ingredient - but simply because weather is a complicated and sensitive evolving system. We do not know the state of the weather everywhere now with perfect accuracy, and so that small inaccuracy and uncertainty that exists in our knowledge between different weather stations amplifies and grows very considerably, so by tomorrow, our forecast may be largely a work of fiction. There are many very important areas of the world where the interaction between that situation and high complexity makes it very difficult for us to understand and predict the future. The future of climatic evolution, complex ecosystems, even on Earth, are classic and very important examples where we run into this difficult problem.

We have also learnt that the processing of information, computation if you like, has certain important constraints. It uses energy. It generates waste heat in the process. There is a limit to how large you can build a computer because of the amount of energy that you generate in the processing, and so computation itself turns out to have certain limitations.
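The lecture does not put a number on the waste-heat claim, but the standard way to quantify it (my addition; it is not named in the talk) is Landauer's bound: erasing one bit of information at temperature T dissipates at least kT ln 2 of energy.

```python
import math

# Landauer's bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln(2) of heat. This is the textbook lower limit on the energy
# cost of computation that the lecture alludes to.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # room temperature, kelvin

e_min = k_B * T * math.log(2)   # minimum energy per erased bit, joules
print(f"~{e_min:.2e} J per erased bit at {T:.0f} K")
```

The number is tiny per bit, but a computer erasing bits at a high enough rate must shed the corresponding heat, which is one source of the size limit mentioned above.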

This type of practical process you could attempt to classify and catalogue in some way that owes something to Friedrich Nietzsche. Nietzsche’s somewhat speculative philosophy of the past placed great emphasis on human ability to do things, to manipulate things, and he saw human progress as very much in our ability to manipulate nature in certain ways. I like to think of this in a number of different ways, and the motivation comes from astronomy, many years ago, in the late 1950s, when astronomers were first preoccupied with the issue of extraterrestrial intelligence – is there life on other worlds, why do we not see signals, what sort of signals should we be looking for?

One way to think about the problem that was suggested by a Russian astronomer called Kardashev was to think about the capability of different advanced civilisations, to measure their progress by their ability to manipulate their environment. If we think for the moment just on the very large scale, we know that we, for example, have the ability to manipulate bits of our planet. So we could extend a land mass somewhere, we can either mistakenly or purposefully change the nature of the atmosphere, we could change the topography, we make large buildings and bridges and so forth. So a Class 1 society is sufficiently advanced to change the nature of a planet, and that requires a certain amount of energy for manipulation. Why you are interested in that if you are looking for extraterrestrial intelligence is because that tells you roughly the order of magnitude of waste energy and heat that might be emitted by such a civilisation. So a Class 1 civilisation makes a certain amount of mess, if you like, and gives out a certain amount of radiation. If you have the weakest type of astronomical detector, this would be the first civilisation class that you might see.

The second class is much more advanced. This has the ability not just to manipulate its own planet, but to manipulate other bodies within its planetary system. It could perhaps deflect a small planet. It could change the atmosphere of and rejuvenate a dead planet, like Mars for example. We cannot quite do that, but we can certainly imagine how it could be done in the future. It requires much more energy, and so has a different level of visibility.

A Class 3 civilisation has the ability to change the nature of stars, so it can do things to its home star, like the Sun, which changes some of its characteristics. You can go on: an even more energetic and advanced civilisation could change the nature of the galaxy in which it lives, and you could imagine going on almost to what I call Class Omega, a type of civilisation that has the ability to somehow manipulate the nature of the expansion of the universe or something equally grand.

It is more interesting, I think, to do the opposite of what Kardashev did. The nature of progress we have seen in recent years is not, as Kardashev suggested, to manipulate bigger and bigger things; it is completely the opposite: it is to manipulate smaller and smaller things. So we can think of a measure of progress which looks at the nature of small-scale manipulators. Our predecessors, running around in the African Savannah, could manipulate things about the size of their own bodies. They could fell trees, they could move rocks, they could build shelters. Before you have sophisticated tools, the scale of things that you can move around is limited to things of about your own weight or the weight of several of your friends and yourself, and that requires a certain characteristic amount of energy.

As you move on to much more modern times, you can see we have gone through stages of being able to move very large amounts of material, to make buildings, to make bridges and so on but, more recently, we are seeing a move to smaller and smaller scales: the ability to carry out surgery, such as transplant surgery, on organs in our own bodies; and then, to go smaller still, to start to study and manipulate genes as part of our genetic makeup. In the world of the physicist, we can go smaller still. We can manipulate and start to do engineering on the scale of molecules and of single atoms. This so-called nano-technological revolution which we are just entering really shows you what progress perhaps will look like in the future, as people try to think of ways in which engineering might move to even smaller constituents of nature than atoms, perhaps to protons, perhaps to electrons, perhaps to other elementary particles. We can envisage that progress is miniaturisation, and very, very advanced civilisations may well be extraordinarily small. It really rather pays to become small: you generate very little pollution, you use very little energy, you produce very little waste heat. Ultimately, in this direction of going smaller and smaller, your ultimate capability is to try to engineer with the structure of space and time itself. In the last lecture, we saw something of what that might mean in the sense of time travel and exploiting the way in which the flow of time can be changed.

Cognitive limits on our capability are difficult to evaluate, but we can appreciate them in certain ways. We know that we are the outcome of an evolutionary process which has selected for certain attributes and abilities which may no longer be relevant in quite the same way that they were when they were first selected. Half a million years ago, for example, our predecessors might have required certain abilities for their survival and multiplication which are no longer quite so relevant today. You can think of how it might be that we have an appreciation for certain aesthetic qualities or for music, for example, which are just by-products of abilities that really did have a survival value long ago. So we like symmetry, whether we are physicists or wallpaper designers, but why did our propensity and liking for symmetry evolve? One possibility is that what we have at the beginning is really a sensitivity to recognise living things in a crowded field where there are both living and non-living things. Living things can be distinguished from non-living things in a very crude way, by the fact that living things are generally symmetrical from right to left, whereas non-living things are not and they need not be. So if you look in a crowded field and you see something that has left/right symmetry, it is odds on it is something that might want to eat you for lunch or which you could eat for lunch or which might be a potential mate. So having a little bit more of that ability to identify right/left symmetry than somebody else will give you an advantage, and that sensitivity may well propagate in the species. That is an example of how our very distant evolutionary history may bias the way we perceive the world and the things that we like so that we tend to do science and acquire knowledge in particular ways.

Our propensity for both language and mathematics is unusual. Human linguistic skills are extraordinary. You might meet able-bodied people who cannot do mathematics or cannot play a musical instrument, who cannot write, who cannot read, but you will go a very long way before you find someone who cannot speak any type of language, if they are able-bodied. What we do when we speak is extraordinarily sophisticated, far more complicated and far more difficult than any of those other skills that people claim to find so difficult, like playing the piano or solving differential equations. So we have certain very complex linguistic skills, and perhaps many of our other ways of analysing the world, doing mathematics and so forth, are piggy-backing on that language structure that we have within us.

We might worry that there are certain conceptual barriers that we have. We are used to thinking of the world in very particular ways: cause and effect, that everything has a cause, that the effects of causes just occur locally, and it is very difficult for physicists to think of the world other than in this rather traditional local interaction way.

The last way of looking at the problem here involves simply logic and conceptual barriers, and I want to give you a simple example: that it could be that the way the brain works encounters certain paradoxes which prevent it using logic in quite the way you might imagine. Suppose that we have a situation of voting, and I talk about voting because there is one interesting theory of the way the mind works by Marvin Minsky, which he calls the Society of Mind, where he looks upon the mind rather like a Parliament in a sense, that different parts vote to act and there is a winner and then the action takes place. In some parts of the space programme, you may be interested to know there is obviously very sophisticated computer control of a launch of something like the Space Shuttle, but there are several computers controlling the launch, and they vote at the end whether to launch or not to launch. You may have a 2:1 vote by the computers not to launch. They don’t necessarily all think the same.

Suppose that you have three people, I will call them Alex, Bill and Chris, and they have got to decide something – perhaps they are going to buy something together, it is a car perhaps, and there is an Audi, a BMW and a Cortina to decide between. They are pooling their money, so they decide they will vote on their preferences. Well, Alex prefers the Audi to the BMW to the Cortina. Bill prefers the BMW to the Cortina to the Audi. Chris prefers the Cortina to the Audi to the BMW. So if we then add up the votes, what happens? Alex prefers A to B, Bill prefers B to A, Chris prefers A to B, so A beats B by two votes to one. Alex prefers B to C, Bill prefers B to C, Chris prefers C to B, so B beats C by two votes to one. You would expect that must mean that A beats C. Not so: Alex prefers A to C, but Bill prefers C to A, and Chris prefers C to A, so C beats A by two votes to one. If you have these simple preference voting structures, you can produce this paradoxical outcome. So it could be that you had a computational structure, a mental structure, which ran into this type of paradox, and you would have to have a sort of a randomiser which broke the deadlock in some way.
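The pairwise count can be sketched directly from the three ballots in the example (names and car labels as in the lecture; the code itself is just an illustration):

```python
from itertools import combinations

# The three ballots from the lecture: each is a preference order,
# most preferred first (A = Audi, B = BMW, C = Cortina).
ballots = {
    "Alex":  ["A", "B", "C"],
    "Bill":  ["B", "C", "A"],
    "Chris": ["C", "A", "B"],
}

def majority_winner(x, y):
    """Return whichever of x, y a majority of the ballots prefers."""
    votes_x = sum(1 for order in ballots.values()
                  if order.index(x) < order.index(y))
    return x if votes_x * 2 > len(ballots) else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {majority_winner(x, y)}")
```

Running it shows A beating B, B beating C, and yet C beating A: the majority preference runs in a cycle, which is the paradox (known in social-choice theory as Condorcet's paradox).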

Well, so much for those practical and cognitive limits, I want to talk now about what I think are the most interesting aspects of this issue, and that is what I call intrinsic limits. These are limits to being able to compute, being able to decide the truth or falsity of statements, of being able practically to perform computations, and then the limits that are imposed by quantum uncertainty and causality.

I will start with the quantum uncertainty because most people have heard about this, and this is a good example of the pattern I mentioned at the beginning, where you have a very successful theory, that then it turns out is not able to predict something. It predicts that it cannot predict something. So it does not just fail to do it, because of incompetence or it has not been worked out fully enough, but it predicts quite definitely that something cannot be done. The interesting first example became known as the Uncertainty Principle of Werner Heisenberg. What Heisenberg’s Uncertainty Principle tells you is that if you make simultaneous measurements of the position and the motion of a particle, and the motion we will describe by the momentum and call it P, then the product of those uncertainties in position and momentum is non-zero. It is always bigger than some finite number. This tells you that you cannot know where something is and how it is moving with perfect accuracy, even if you have perfect instruments.
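Written symbolically, in its standard textbook form (the exact constant on the right-hand side varies with convention), the Uncertainty Principle says that the uncertainties in position and momentum obey

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where \(\hbar\) is Planck's constant divided by \(2\pi\). The right-hand side is the "finite number" referred to above: no measurement, however perfect, can make the product of the two uncertainties smaller than it.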

There is a conventional way of explaining this in a popular way, which is worth giving, and so I will give it. When you start to encounter things that are very, very small, what do you do when you measure where they are located? For example, when I look at this projector here, what I mean by seeing it is that some photons of light have bounced off it and entered my eye, and because the projector is a large object with a large mass, the impact and rebound of the photons into my eye has essentially no discernible effect upon the location or the movement of the projector. But if the thing I am looking at is made smaller and smaller and smaller, eventually the energy, the wave length of the photons that hit it, start to disrupt it by their rebound into my eye or my detector. So the very act of measuring the location of something moves it, and so it is not where you thought it was after the measuring interaction has taken place. You can think of this inequality as being a measure of that disruption of things by the measuring process. That gives you an idea as to why it is not surprising that there might be a limitation of this sort.

But in fact, that is really too simplistic a way of explaining the Heisenberg Uncertainty Principle, because it implies that the tiny atom or electron sitting there that you’re trying to observe really has a definite position and motion, but you somehow just can’t get at it because when you try to observe it, you move it, whereas in fact this Uncertainty Principle is telling you something much more unusual. What it is telling you is that the concepts of position and momentum, which are familiar to us in large scale everyday life, the extent to which those properties of what you are observing can co-exist when they are very small, is limited by this inequality. So there is a limit to the extent that we can simultaneously talk about the notion of a position and of a momentum of something that is very small that is given by this Uncertainty Principle. This is a good example of how a theory which does a lot of very successful predicting ultimately predicts its own limitations, the limitations of some of the concepts that it is using when you enter a particular domain.

If we now move to the ability to calculate and compute, it is useful to think of, for example, all the truths of mathematics: all the formulae, all the theorems that there could be. Some of them might be ones that you are using in physics or chemistry or biology, and others might be just ones that you like because you are a mathematician. Here, right in the background, if you like, is the whole of those mathematical truths. You can think of mathematics as all the possible patterns that could exist.

The whole world of mathematics is infinite in extent. The mathematics that we know about is finite. In the past, it was limited by our ability to calculate with pencil and paper. Even if you calculated your entire life, like some of these people who want to calculate by hand the decimal places of pi up to millions and millions of digits, there is a finite limit to the amount of that mine of mathematical truth, that infinite mine, that you could bring out. Even using the fastest computers – and nowadays in mathematics it is becoming commonplace to use computers for things much more sophisticated than simply adding up columns of numbers. There are computer programmes that carry out logical operations, that do algebra, that can do integration, that manipulate formulae which have hundreds of thousands of terms in them, which you would always make a mistake in moving around if you tried to do it by hand – you could never be sure it was correct. So there is a realm of what we might call practically computable truths, things that you could discover in that great sea of mathematics by using our fastest computers for very long periods of time.

Beyond that, there is another frontier, another boundary, and this contains what we might call all computable truths. Alan Turing predicted, and then proved rigorously, that there are mathematical statements which cannot be resolved – shown to be either true or false – by running any computer programme for a finite time, so these are called uncomputable problems. We know many of them. These problems have a complexity that cannot be got at just by doing the same thing over and over again with a little change, more and more rapidly, that at each step, you need a new, novel idea, you have to take a different type of step. The problems are by no means esoteric or unusual: being able to show, for example, as Roger Penrose did, that it is possible to tile an infinite plane with two jigsaw puzzle pieces, of different shapes, but only two pieces, to tile the whole of an infinite plane without periodically repeating the pattern. This is something that no computer could solve, so this is a problem which is uncomputable in character.

If we go beyond that, we start to enter a world of non-computable mathematical problems, and mathematicians are well able to prove with pencil and paper results about things that go on in that area of mathematics - they don’t do it using computers, they do it by using novel ideas – but in this realm, a very remarkable discovery was made by Kurt Gödel, whom we learnt a little about last week in a different context, that there are statements of mathematics which can be neither shown to be true nor false using the rules of mathematics. So there is something called mathematical truth which is much larger than the realm of decidable truth: things which we are able to prove true or give counter-examples and show they are false. We are going to have a look at some of these little realms of uncertainty in a bit more detail.

The first interesting thing about using computers to do things like mathematics is that they are not the panacea most people fondly imagine: computers do not enable you to calculate and do anything that you want to do. They very quickly run into problems of tractability, and the reason is because the world of mathematical problems, scientific problems, can be divided into two parts – the same two parts you met when you were at school – the easy problems and the hard problems! I remember there was a letter in *The Times* about a year ago: there had been an article complaining about some Maths A-level paper where there was a problem where people had claimed that the problem was ‘un-do-able’ and that this was unfair to the candidates. Someone had written in saying he could not understand what all the fuss was about, standards must be falling, because when he was at school, all the problems were un-do-able on the maths examination paper!

Easy problems have a rather particular definition. They are problems which, if they have, say, n parts, then as you add extra parts, the calculation that you need just grows in proportion to the number of parts. This is like doing your tax return. If you have 100 sources of income, then you will generally find that your tax return takes about 100 times longer to fill in than if you have one source of income, and if a 101st source of income suddenly appeared, the time that you will take to add that new source of income to your tax return will just grow in proportion to the number. So these problems are under control: as they grow in size, the computational time does not do anything very dramatic.

However, there is a different type of problem, characterised by the adjective 'hard', where each time you add an extra part, the calculation time doubles. This is very rapid growth, and very quickly, even with a relatively small number of parts, the computation time becomes unrealistically long. Such problems are called intractable. They are not necessarily very exotic. Here is a simple example.
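The difference between the two growth rates can be sketched numerically. This is a minimal illustration, not from the talk; the baseline of one step per part is an assumption made purely for comparison:

```python
# Compare "easy" (linear) growth with "hard" (doubling) growth.
# Assume an easy problem with n parts costs n steps, while a hard
# problem's cost doubles with each extra part, giving 2**n steps.

def easy_steps(n):
    """Cost grows in proportion to the number of parts."""
    return n

def hard_steps(n):
    """Cost doubles with every extra part."""
    return 2 ** n

for n in (10, 20, 40):
    print(n, easy_steps(n), hard_steps(n))
```

At 40 parts the doubling rule already demands over a trillion steps, while the linear rule needs just 40.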

When I was very young, I can remember there was a sort of Christmas present that I did not particularly want to receive. There was the one that you most did not want to receive, the so-called sensible present: things like new underwear and socks. And then there were strange little puzzles that ageing relatives no doubt played with when they were children and thought that you would still enjoy, and one of these was something called a monkey puzzle. You may remember these. It is a bit like a jigsaw puzzle, really a two-dimensional Rubik's cube. You have, say, nine pieces, and on each piece there are four parts of dismembered monkeys, shaded in different ways, and you have got to join up the tops of monkeys to the bottoms of monkeys of the same colour. Generally, you managed to do this and produced a set of correct joins. But suppose you were a computer which knew nothing about monkeys or monkey puzzles, only about matching, and you had to solve this problem by search: what sort of challenge faces you? You soon find out that this is one of these hard problems. You have nine cards to match up. After you put the first one down, you have eight choices for the next one, then seven, then six, and so on, so to start off with you have 9 times 8 times 7 times 6 times 5 times 4 times 3 times 2 times 1 orderings to consider. But each card you put down can also go in any of 4 orientations, so there is a further 4 to the power 9 choices to make. Altogether you have 9 factorial times 4 to the power 9, which is 95,126,814,720 possible arrangements. That's a lot.
Your fastest computer could cope with that, but you can kill the computer just by buying the deluxe model of the puzzle, which is 5 by 5. With 25 cards, you have 25 factorial times 4 to the power 25 arrangements to investigate, and that number is so large that if I had a computer that could check a million arrangements a second, it would take about 553 trillion trillion years to do the puzzle by this method. So you can see you do not have to be doing anything terribly sophisticated before 'brute force' computation, without some other type of thought, fails.
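The counting can be checked directly. This sketch reproduces the arithmetic above; the rate of a million checks per second is the one assumed in the talk:

```python
import math

# Brute-force search space of an n-piece monkey puzzle:
# n! orderings of the pieces times 4**n orientations.
def monkey_puzzle_moves(n):
    return math.factorial(n) * 4 ** n

print(f"{monkey_puzzle_moves(9):,}")   # 95,126,814,720 for the 3x3 puzzle

# The 5x5 deluxe version, checked at a million arrangements per second:
seconds = monkey_puzzle_moves(25) / 1_000_000
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e} years")            # of order 10**26 years
```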

There is a famous type of problem which mathematicians would love to be able to resolve: if someone gives you a problem like that, which is hard, is there some way in which you can turn it into an easy problem? Is there some clever calculation or procedure that turns a hard problem into an easy one? There is a beautiful theorem proved by Stephen Cook in 1971 which shows, roughly, that if you could do that for one such hard problem, you could do it for all of them, so if you find that procedure, tell me first! There is a million dollars waiting for you, one of the Clay mathematical prizes, but this would be the most commercially valuable discovery that could conceivably be made. It would also produce a dramatic security crisis in the world, because all the world's commercial, military and diplomatic codes are based upon coding principles which can be broken in principle, but only by solving a hard computational problem taking thousands of years. So if you had a way of making those problems easy, you could break any code in a short time.

The particular problem that mathematicians always use to highlight this issue is something called the travelling salesman problem. Imagine a salesman who has to visit lots of cities, making one visit to every city and one only, and who knows the distance or cost between any pair of cities. What route should he take so that he visits every city once and only once but minimises the total cost? It is not too mind-boggling when you have just half a dozen cities, but if the route contains hundreds of cities, or thousands of points to visit, it is not so obvious how you solve that on the back of an envelope. No one has ever found a way to turn this into an easy problem, and my bet is that in fact there is no way to do it. It is intrinsically hard.
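A brute-force solver makes the factorial blow-up visible. This is a sketch only: the four city names and the distance table are invented for illustration, and for n cities the search below checks all (n-1)! tours, which is exactly the hard-problem behaviour described above:

```python
import itertools

# Hypothetical distances between four cities, given as unordered pairs.
dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def d(x, y):
    """Look up the distance regardless of the order of the pair."""
    return dist.get((x, y)) or dist[(y, x)]

def shortest_tour(cities):
    """Try every ordering of the cities after the first: (n-1)! tours."""
    start, rest = cities[0], cities[1:]
    best = None
    for perm in itertools.permutations(rest):
        tour = (start,) + perm + (start,)
        cost = sum(d(a, b) for a, b in zip(tour, tour[1:]))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best

cost, tour = shortest_tour(["A", "B", "C", "D"])
print(cost, tour)   # the cheapest round trip for this table costs 23
```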

Until about a year ago, and still just for one computer, this was the biggest such problem that had been solved, and it is amazingly small. The largest travelling salesman problem solved by a single computer took 1.5 years of running time on one of the world's most powerful computers, just to figure out the best way to visit 3,038 sites. It is commercially very valuable. This is not about vacuum cleaner salesmen in the West Midlands. The 'route' is a piece of silicon which you want to wire in such a way that the wiring visits each terminal point once and only once, with no closed loops and no short circuits, and you want to instruct a robot to carry out this wiring as efficiently and quickly as possible, because if you are making millions upon millions of these chips and you can find a way to cut 5 or 10% off the robotic movement time, then you are making money.

One of the great problems in biochemistry also revolves around this issue: the problem of protein folding. It is a great mystery in biochemistry how it is that you start with a linear chain of amino acids which then folds and collapses to produce a three-dimensional molecular structure, whose exact geometry determines the function of the eventual molecule, what it does and what it is going to do. This folding process presumably finds the minimum energy state, the most stable configuration in which the molecule can lodge itself. If you are a mathematician and you look at this process, it is a travelling-salesman-type process: one of these computationally hard problems in which a very complicated structure has to figure out the lowest energy state in which to organise itself. Nature does this in a few seconds, all the time, whereas a mathematician would say that finding the minimum energy state should take a computer millions of years. So there is a big puzzle here: how is it done? Maybe the true minimum is not found, maybe a good enough low energy state is; or maybe the natural selection of bio-molecules actually selects for ones that are, for some reason, rather good at finding a low energy state. So here is an example, in a completely different area of science, where there seems to be a confrontation with this type of intractability in what nature is doing.

A typical example of this type of intractable problem is provided by processes that mathematicians call trapdoor functions or trapdoor operations. If you have had a lot to do with trapdoors, you will know their general characteristic: a trapdoor is something which is very easy to fall through in one direction and very difficult to climb back through in the other. There are mathematical processes like this. For example, it is very easy for a computer, in the sense introduced just now, to take two prime numbers with 50 digits in them and multiply them together. It is rather tedious, and the computer will take a second or so, but there is no problem in principle: that is the easy direction. But if you showed any computer on Earth a 100-digit number and told it that it was the product of two large prime numbers, it would be busy for many thousands of years trying to find those two factors. That is the principle on which all the world's codes are really based today: the encryption process is the easy process, but breaking the code corresponds to carrying out a hard process. It is not impossible in principle, it is just impossible in practice.
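The asymmetry can be sketched with small primes standing in for the 50-digit ones. The two primes below are illustrative choices, not from the talk, and real cryptographic primes have hundreds of digits:

```python
# A toy trapdoor: multiplying two primes is a single operation, but
# recovering them from the product by trial division takes time that
# grows with the size of the smaller factor.

p, q = 999_983, 1_000_003   # the primes straddling one million
n = p * q                   # the easy direction: one multiplication

def trial_divide(n):
    """The hard direction: search for a factor the slow way."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1             # n itself was prime

print(trial_divide(n))      # recovers the factors after ~10**6 trials
```

Doubling the number of digits in the primes squares the size of the search, which is why the hard direction becomes hopeless long before the easy direction does.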

So there are some of the limits of doing computations to learn about the world. The last thing I want to tell you about involves our friend Kurt Gödel again, whom we saw last week talking about the possibility of time travel in the universe. He is most famous for work he did in the 1930s which now goes by the name of Gödel's theorem. Gödel surprised everyone by going very much against the expectations of mathematicians at the time. David Hilbert had challenged mathematicians to find a procedure whereby, if you laid down a mathematical statement, then by following this procedure you could determine whether it was true or false; this was known as Hilbert's programme. Hilbert was busy showing that such a procedure was possible for logical systems much simpler than arithmetic, systems like geometry for example. If you have a statement of geometry, then essentially by using Euclid's rules you can show whether it is true or false, and so people were just waiting for this result to be extended to include arithmetic, which is a bigger logical structure. But Gödel surprised everyone as a young man by proving the opposite: there are statements of arithmetic which, although true, can be neither proved true nor proved false using the rules of arithmetic. He did that by making certain assumptions, and I will just mention a few of them. What he proved generally goes by the name of undecidability. You need a logical system, of which arithmetic is the paradigm, which is finitely specified, so that there is only a finite number of rules and axioms, and which is big enough and complicated enough to contain arithmetic. That is important: geometry is a simpler system than arithmetic, and the theorem therefore does not apply to it. The system must also be logically consistent, and all that means in practice is that you cannot prove that zero is equal to one, because if you could prove that, you could prove anything.
If these assumptions hold, then there are statements of arithmetic which can be neither proved true nor proved false.

I want to show you how this proof works. It was a new and unusual form of argument, and this is a snapshot cartoon version of it that just involves knowing how to multiply things together and about prime numbers. What Gödel used was a well-known feature of numbers: any number can be split up into a product of its prime factors in one and only one way. That factorisation is like a barcode, a unique identifier for the number. 54, for example, is 2 x 3 x 3 x 3, that is 2 x 3 cubed, and 9,000 is 2 cubed times 3 squared times 5 cubed. Any number you like can be expressed as a product of powers of the prime numbers (the next prime would be 7, and so on). He defined something that we now call the Gödel number: the Gödel number of 9,000 is just the read-off of those powers of the primes, so 323 is the Gödel number of 9,000. Take an even bigger number, 243 million: split it up into primes, finding all the factors of 2, then all the 3s, then all the 5s, and its Gödel number is 656. What Gödel was then able to do was set up a one-to-one correspondence between numbers, arithmetic, and statements about numbers. He says: let the number 6 correspond to a concept like 'zero' and 5 to 'equals', so the Gödel number 656 corresponds to the statement that zero equals zero, and in this way he is able to represent statements about numbers by numbers themselves. The end of his argument is to consider what is really just the statement: 'the theorem having Gödel number x is undecidable'. He works out the Gödel number of that statement, inserts it for the number x, and has thereby proved that there is an undecidable statement of arithmetic.
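The read-off of prime-power exponents can be sketched in code. This is a toy version of the numbering described above; the dictionary assigning 6 to 'zero' and 5 to 'equals' is the correspondence used in the talk:

```python
def is_prime(k):
    """Simple trial-division primality test."""
    if k < 2:
        return False
    f = 2
    while f * f <= k:
        if k % f == 0:
            return False
        f += 1
    return True

def godel_number(n):
    """Read off the exponents of 2, 3, 5, 7, ... in n's factorisation."""
    digits = []
    p = 2
    while n > 1:
        if is_prime(p):
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            digits.append(str(e))
        p += 1
    return "".join(digits)

print(godel_number(54))           # "13"  (54 = 2 * 3**3)
print(godel_number(9_000))        # "323" (9000 = 2**3 * 3**2 * 5**3)
print(godel_number(243_000_000))  # "656"

# The correspondence from the talk: 6 stands for "zero", 5 for "equals".
symbols = {"6": "zero", "5": "equals"}
print(" ".join(symbols[d] for d in godel_number(243_000_000)))
# prints: zero equals zero
```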

This is a very interesting and unusual argument, and you can find examples of undecidable statements. Suppose you have infinite strings of symbols, say digits, and you have a way of defining what you mean by such a sequence being random: you cannot find a formula or a rule, a rule of thumb, an abbreviation, which abbreviates the sequence. If it was all the even numbers, 2, 4, 6, 8, 10, and so on for ever, you could replace the sequence by just the statement that its terms are 2n. So a non-random sequence can be replaced by a statement shorter than the sequence itself. One of these undecidable statements is the claim that such a sequence is random: you can never prove it. If you are given a sequence, you can show it is non-random, because you might find an encapsulation, the rule which generated it, but you could never prove that it is random. There could always be an unknown, undecidable formula which, if you knew it, would compress and encapsulate the sequence.
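The compression idea can be made concrete. This is a minimal sketch: the particular rule string below is just one way of writing the abbreviation, and the point is only that it is shorter than the sequence it generates:

```python
# A non-random sequence admits a description shorter than itself.
# The first 50 even numbers, written out, are far longer than the
# rule that generates them.

sequence = ", ".join(str(2 * n) for n in range(1, 51))
rule = "2*n for n = 1, 2, 3, ..."

print(len(sequence), len(rule))   # the rule is much shorter
```

Proving the opposite, that no such shorter rule exists for a given sequence, is what the undecidability result says can never be done.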

There is a metaphorical way of looking at this theorem of Gödel's. You can think of three types of property that collections of things might have. A weak type of lack of knowledge about something is that it is unlistable. You might have a program or procedure which you hope will generate all the examples of a particular sort. An example of something listable would be the even numbers: your program could print them all out very easily. But it is possible to have collections of things which are unlistable, in the sense that you can have a program which prints out quantities that have the property, so that everything listed has it, but which is not able to list everything that does not have the property. A slightly trickier situation, about which you know less, is when there is no computational process which can tell, if you show the computer an example, whether it does or does not have the property. So the first notion is being able to list things, and the second is really being able to check them: if you show the computer an example, can it tell whether it is a prime number, or the product of two prime numbers, or not? There are things which are not listable, and there are things which are not computable in that sense either, but what Gödel is telling you is that there are things which are neither listable nor computable. You might say that they have merely prospective properties: there are qualities of things which are more complicated than mere listing or computing can capture. You might say that there are some qualities of things which cannot be captured just by computation, and a rather poetic way to say that is that, for example, no non-poetic account of reality could be complete.
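The distinction between listing and checking can be sketched for a simple set like the even numbers. This is a toy illustration only; Gödel's point is that some collections of truths admit neither kind of procedure:

```python
# "Listable": a procedure that eventually prints every member.
# "Checkable": a procedure that, shown any candidate, answers yes or no.
# For the even numbers we happily have both.

def list_evens():
    """Listable: a generator that eventually emits every even number."""
    n = 0
    while True:
        yield n
        n += 2

def is_even(k):
    """Checkable: decide membership directly for any candidate."""
    return k % 2 == 0

gen = list_evens()
first_five = [next(gen) for _ in range(5)]
print(first_five)        # [0, 2, 4, 6, 8]
print(is_even(101))      # False
```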

© Professor John Barrow, Gresham College, 26 January 2006