SCIENCE IN A COMPLEX WORLD:
WONDERS, PROSPECTS AND THREATS
Sir Martin Rees FRS
My Lords, Ladies and Gentlemen, let me start off by saying what a pleasure and privilege it is to give this lecture, especially in these marvellous surroundings in London’s Guildhall. I feel I’m in church, but I won’t be giving a sermon. But though it is not a sermon, I will start with a text, a quote, and my quote is this: ‘In the past century, there were more changes than in the previous thousand years. The new century will see changes that will dwarf those of the last.’ That was a familiar sentiment at the dawn of the New Millennium, but these words actually date from a hundred years earlier, and refer to the 19th and 20th centuries, not the 20th and 21st. They are from a lecture entitled ‘Discovery of the Future’ presented by H. G. Wells at the Royal Institution in 1902. The programme note called him H. G. Wells BSc. He was very proud then of his University of London degree, which he gained as an external student. Wells’ lecture was mainly in visionary mode. I quote again: ‘Humanity has come some way and the distance we have travelled gives us some insight of the way we have to go. All the past is but the beginning of a beginning, the dream before the awakening.’ His rather purple prose still resonates a hundred years later.
Certainly Wells was right in predicting that the 20th century would see more changes than the previous thousand years, in science and in technology. And indeed very little of our present-day world could have been predicted back in 1900. Many forecasts made in the early part of the century were highly misleading, because they were based on extrapolations of the past. For instance, I looked at a popular science magazine from the 1920s, which had an article depicting the future of aviation. Aeroplanes were shown with wings stacked one above the other. The thinking then was that if a bi-plane was better than a mono-plane, then to stack wings like Venetian blinds was better still! That’s the extrapolation, but that’s not the way things went. Ernest Rutherford, the greatest nuclear physicist of his time, said in 1937 that the idea of getting nuclear energy was moonshine. Thomas J. Watson, the founder of IBM, is said to have thought there would be a need for only five computers in the United States. Well, there are a hundred million, aren’t there, at least!
I found another rather surprising example of prediction: a book called The World in 2030. It was written in 1930 by Lord Birkenhead. He’s now remembered, if at all, as an insufferably arrogant Tory Lord Chancellor, and I was rather surprised to find him writing a book with this title. In fact, he had read quite a bit of the works of J. B. S. Haldane and Wells and others, and he envisaged in his book that women wouldn’t give birth naturally but they would rear embryos in flasks, and various other horrors that would happen in 2030. But he said that even in a hundred years, he could not imagine a woman in a position of political leadership! So, there again, limited vision.
Well, in view of these examples, I will not venture any specific predictions about the next century. I will be prudently unspecific. Because the biggest innovations are the least predictable; they are the ones that are qualitatively new and not just an extrapolation. That has always been the case. Francis Bacon 400 years ago addressed this question, and he quoted three things: gunpowder, silk and the mariner’s compass as things that couldn’t have been planned for, were quite different from anything conceived before, and of course that’s the situation now. Another great science fiction writer, Arthur C. Clarke, opined that any ultra-advanced technology was indistinguishable from magic. Mobile phones, GPS and the Internet would have seemed magic to H. G. Wells.
But Wells himself wasn’t an unalloyed optimist. In 1902 he highlighted the risk of global disaster. Let me quote again: ‘It is impossible to show why certain things should not utterly destroy and end the human story and make all our efforts vain…something from space, some pestilence, some great disease of the atmosphere, some trailing cometary poison, some great emanation of vapour from the interior of the Earth, or new animals to prey on us, or some drug or wrecking madness in the mind of man.’ That’s H. G. Wells in 1902. Were he writing today, Wells would have been elated by the amazing advances of science, but perhaps equally anxious, although in less flowery prose, about its downside.
But I’ll start off with some wonders within one scientific area, the area of space and the cosmos. I choose this area for three reasons: first, because it is my own specialist field; second, it’s surely one that would have fascinated Wells, just as it now fascinates a wide public, almost as much as dinosaurs do; and third, it’s a subject that’s relatively unthreatening.
An iconic picture from the 1960s was the first photograph of the whole Earth from space – this famous picture here. The fragile beauty of our home planet and its land, oceans and clouds contrasted starkly with the sterile moonscape on which the astronauts in the late 1960s left their footprints. Probes have now been sent to seven other planets of our solar system, beaming back pictures of varied and distinctive worlds. But that’s just within our solar system. Our cosmic perspective has vastly enlarged. Our Sun is just one of a hundred billion stars in our galaxy.
When we were young, we were all taught the layout of our own solar system – the sizes of the nine major planets, and how they move in orbit around the Sun. But twenty years from now, we’ll be able to tell our grandchildren far more interesting things on a starry night. Planets have now been found orbiting nearly a hundred other stars like the Sun, and that number is rapidly growing. We think that most stars have retinues of planets orbiting around them. Stars aren’t just twinkling points of light. We think of them now as the suns of other solar systems, and in 20 years, we will know the orbits of each star’s retinue of planets and the sizes and even some topographical details of the bigger ones. The night sky will become much more interesting.
The planets found so far orbiting other stars are roughly the size of Jupiter or Saturn, the giants of our solar system, but there is every reason to suspect that these are the largest planets in other multi-planet solar systems and the smaller planets are simply harder to find. Of greatest interest though will be possible twins of our Earth, planets the same size as ours orbiting other stars like our Sun.
Just today, there was a transit of Venus: the dark shadow of the planet moved across the face of the Sun. Suppose you’d been viewing from so far away that even the Sun looked like a point of light. You could still have detected that transit, because while it was going on, the Sun’s brightness would have dimmed by about one part in ten thousand. So an alien astronomer could have detected Venus moving across the face of the Sun, and within a few years, we will have detected Earth-like planets around many other stars by looking for those transits: by measuring the brightness of the star with great precision, so as to detect when it dims by about one part in ten thousand and then recovers as the planet moves away. This transit method, which we will have within a few years, reveals, as it were, the shadow of these planets around other stars. It will take 20 years before we have telescopes powerful enough actually to get pictures showing the light from them. What will they look like?
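The arithmetic behind that dimming figure is simple: seen from far away, a transiting planet blocks a fraction of the starlight equal to the ratio of its disc area to the star’s. Here is a minimal sketch of that estimate; it assumes a uniformly bright stellar disc (ignoring limb darkening), and the radii are standard textbook values:

```python
# Rough transit-depth estimate: the fractional dimming is (planet radius / star radius)^2.
# A sketch only: assumes a uniformly bright stellar disc and ignores limb darkening.

R_SUN_KM = 695_700.0  # solar radius in km

def transit_depth(planet_radius_km: float, star_radius_km: float = R_SUN_KM) -> float:
    """Fraction of starlight blocked while the planet crosses the star's disc."""
    return (planet_radius_km / star_radius_km) ** 2

# Venus and the Earth each dim a Sun-like star by roughly one part in ten thousand;
# a Jupiter-sized giant dims it by about one percent, which is one reason the
# giant planets were the first to be found.
for name, radius_km in [("Venus", 6_052.0), ("Earth", 6_371.0), ("Jupiter", 69_911.0)]:
    depth = transit_depth(radius_km)
    print(f"{name}: dims the star by about one part in {1 / depth:,.0f}")
```

A one-percent Jupiter dip is within reach of quite modest photometry; the one-in-ten-thousand dip of an Earth is why such exquisite precision, and eventually space-based telescopes, are needed.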
Well, let’s imagine we’ll be looking at our solar system from thirty light years away – that’s the distance of a nearby star. The Sun would look like an ordinary star, and Earth would be, in Carl Sagan’s nice phrase, ‘a pale blue dot’, seeming very close in the sky to its star, our Sun, which would outshine it by many billions. Detecting one of these planets is like detecting a firefly next to a searchlight. But if you can detect it just as a dot, and watch it carefully, you can learn quite a lot. Suppose our alien astronomers were watching the Earth. Then the shade of blue would be slightly different depending on whether the land mass of Eurasia or the Pacific Ocean was facing them, so by watching carefully, they could learn the length of the day, that there were continents, and something about the climate and seasons. By studying the light in detail, they could infer that the atmosphere had oxygen and ozone in it, and perhaps therefore a biosphere. By observing other Earths, we will be able to do this within twenty years.
Our home planet, the Earth, the third rock from the Sun, is very special, but among the zillions of planetary systems, there will surely be many planets like the Earth, on Earth-like orbits with temperatures such that water neither boils nor freezes, and the key question then is of course: would there be life on any of these planets? That’s a question, I’m afraid, that we still can’t answer at all. We’ve no idea whether life is a rare fluke or whether it is near-inevitable in the kind of primordial soup expected on a young planet. But it would plainly be a momentous event if we could find another biosphere, or even the simplest kind of life elsewhere.
People are looking for life on Mars. If you look at a picture of the Martian landscape, it’s clear there’s not much life there, but even finding freeze-dried bacteria would be a great achievement. There’s no evidence yet, but people are clearly looking for evidence of any life on Mars. It’s not an attractive landscape. There may be some relic oceans there, but no sort of prime beach-front property now!
Mars is the focus of attention, but later this year – in fact around Christmas – the European Space Agency’s Huygens probe, part of the cargo of NASA’s Cassini mission to Saturn, will parachute into the atmosphere of Titan, Saturn’s biggest moon, seeking anything that might be alive there.
This is a possible site for life, but the other place people will look one day is Europa, one of Jupiter’s moons. Europa is believed to be covered by an ocean, frozen at the top but perhaps liquid underneath, and within 10 or 15 years a probe may be sent to go beneath the ice and look in that ocean for anything with tentacles or fins, or anything simpler than that.
If life had emerged twice within our solar system – here on Earth and somewhere else – that would be very important, because it would imply that life is widespread in our galaxy: if it happened twice just within our one solar system, then it can’t be a rare fluke, and it should be present on millions of other planets. That’s why this search is so very important – to look for any evidence of even the simplest vestiges of life.
Even if there is life somewhere in our solar system, no one expects it to be advanced life. The question then is whether on some Earth-like planet far away, orbiting a distant star, there could be another biosphere harbouring creatures that could be deemed intelligent – intelligent aliens. Belief in a plurality of inhabited worlds is actually quite an old belief, and in earlier centuries, people were much more optimistic about it. Giordano Bruno, a Dominican friar, was burnt at the stake in the year 1600 in Rome for what were called obstinate and pertinacious heresies, and among his conjectures were that heavenly bodies could bear upon them creatures similar, or even superior, to those upon our human Earth. The British astronomer William Herschel thought the Moon and even the Sun were inhabited, and a hundred years ago, it was widely thought that Mars had life, even intelligent life, on it. In fact, a French foundation around 1900 offered a prize, called the Guzman Prize, for the first evidence of extraterrestrial intelligence, and in the rules for that prize, Mars was excluded because it was thought too easy to find intelligent life there!
So our perspective has changed, but we are as flummoxed and uncertain as Bruno was four hundred years ago about the likelihood of intelligent life. Despite this enduring ignorance, or maybe because of it, discussion on this subject is strongly polarised. Some people side with Bruno; others argue dogmatically that life must be such a rare phenomenon that we are alone. For myself, I think utter agnosticism is the most rational stance. We don’t know enough about life’s origins, still less about what natural selection can and cannot do to produce complex life from simple life, to say whether aliens are likely or whether the odds are a trillion trillion to one against, in which case there wouldn’t be any apart from on the Earth. And even if primitive life were common, the emergence of advanced life may not be.
Biologists talk about what would happen if you re-ran evolution on the Earth. Some say you’d end up with creatures like us. Some say you might end up with a planet just covered with ants, or something like that. We just don’t know. These are all biological questions, and biology is a harder subject than astronomy. Astronomy deals with things that are very big, but being big is not the same as being complicated, and the most complicated things in the universe are living things like us, which is the challenge of biology.
But searches for extraterrestrial intelligence, what’s called SETI, are surely a worthwhile gamble, even if one accepts that the odds against success are very heavy, because of the philosophical import of any detection. A manifestly artificial signal – an ultra-narrow-band radio transmission, or a message as boring as a set of prime numbers or the digits of pi – would convey the momentous message that intelligence, though not necessarily consciousness, had evolved elsewhere and wasn’t unique to the Earth, and that concepts of logic and physics weren’t peculiar to, as it were, the hardware in human skulls. In SETI searches, it makes sense to listen rather than transmit, because any two-way exchange, even with a nearby star, would take decades because of the speed of light. So there would be time to plan a measured response if we get a signal, but no scope for snappy repartee, as it were!
But suppose we don’t succeed? Then absence of evidence would not be evidence of absence. That’s because some brains out there may package reality in a way that we can’t conceive and couldn’t recognise. Others could be uncommunicative, living contemplative lives, perhaps deep under some planetary ocean, doing nothing to reveal their presence. There could be intelligent life out there even if we can’t detect it. On the one hand, perhaps there isn’t any. That would be, in a sense, disappointing: SETI searches would fail, we’d feel perhaps alone in the universe. On the other hand, there’s a compensation, because it would then boost our cosmic self-esteem. We’re here on this tiny Earth, but we could then justly say that this tiny Earth is the most important thing in the galaxy, the one place where things of fascinating complexity such as life exist.
A century from now, we may still be listening in vain for signals, but I think we’ll at least by then understand how life began here on Earth, and that will help us to gauge how likely it is that life, at least simple life, exists elsewhere.
Some people, of course, claim that we’ve been visited already by such life. I get lots of letters from such people, but my response to that is that if aliens really had the brain power and technology to come all the way to the Earth, they wouldn’t merely just spoil a few cornfields or content themselves with briefly abducting a few well-known cranks, which is all they seem to do.
Well, if we’ve had no visits, what about the chance of humans travelling far from Earth? This would of course have fascinated H. G. Wells too. It was in the 1960s that manned space flight went, as it were, from the cornflakes packet to reality. But since 1970, the glamour has faded. The Apollo programme to land an American on the Moon was an isolated episode, motivated primarily by the urge to beat the Russians, and it’s more than thirty years since the last lunar landing. Nobody under 35 can remember when people walked on the Moon. To young people, the Apollo programme is a remote historical episode. They know the Americans landed a man on the Moon, just as they know the Egyptians built the pyramids, but the motivations seem just as bizarre in the one case as in the other. The 1995 movie Apollo 13, which some people here may have seen, was a docu-drama starring Tom Hanks about the near-disaster that befell James Lovell and his fellow astronauts on a voyage round the Moon. This movie was for me, and I suspect for many others of my vintage, an evocative reminder of an episode we followed anxiously at the time. But to a young audience, the gadgetry and the right-stuff values portrayed in that movie seem as antiquated as the traditional Western.
There has been a big change in perceptions of manned space flight, and it’s understandable. In fact, when I’m asked about the case for sending people into space, my answer is always this: as a scientist and practical man, I am against it, but as a human being, I am in favour. What I mean by that is that practical activities in space – for communication, science, weather forecasting and navigation – are better and far more cheaply carried out by computers and robots. The practical case for manned space flight gets ever weaker with each advance in miniaturisation and robotics. The International Space Station is a huge turkey in the sky, neither practical nor inspiring, allowing a few astronauts to go round and round the Earth in circles, in greater comfort than the Russians in the Mir spacecraft, but at vastly greater expense. Despite all that, I am nonetheless an enthusiast for space exploration as a long-range adventure for at least a few humans. The next humans to walk on the Moon may be Chinese, because only China seems to have the resources, and the willingness, to undertake a risky Apollo-style programme.
I hope that Europeans and Americans will one day venture to the Moon and beyond, but I don’t think this will happen the way President Bush wants to do it, with a vast NASA programme. I think it will have to be in a very different style and with different motives. Costs must come down enormously, and there must be an overt acceptance that the enterprise is very dangerous, and a willingness to take risks. I think a role model for the future astronauts is not a civilian NASA employee, but someone more in the mould of Steve Fossett, the wealthy serial adventurer who, after several expensive failures, succeeded in a round-the-world balloon flight. He has a craving for dangerous challenges at sea and in the air. He is now planning a non-stop single-handed flight around the world, and also wants to beat yachting records. Were Fossett to come to a sad end, we would mourn a brave and resourceful man, but there would not be a national trauma like there is in the US when a Shuttle is lost. We know that Fossett willingly took the risks, and it was perhaps the way he wanted to go. Future expeditions to the Moon and beyond will, I think, only be politically and financially feasible if they are cut-price ventures, perhaps privately funded, spearheaded by individuals in that mould, who accept they may have one-way tickets. The first travellers to Mars, or the first long-term denizens of a lunar base, would confront very hostile environments. Nowhere in our solar system offers an environment even as clement as the Antarctic or the deep ocean. So it’s very important to realise that space doesn’t offer a solution to Earth’s problems. Not at all. We can’t all emigrate there. We have to face Earth’s problems here on Earth, and these are in some respects even more threatening.
I’ve talked about some wonders and some prospects, but now a word about the threats. The Chairman mentioned my book Our Final Century. I should say that I gave it the title Our Final Century?; the publishers removed the question mark, and the American publishers changed the title to Our Final Hour! Americans like instant gratification, and, I suspect, instant dis-gratification as well!
Until fifty years ago, the worst threats were the natural ones – floods, pestilences, earthquakes, asteroid impacts, and the like. But most of those aren’t getting any worse, and the environmental threats that worry us most are those that are aggravated by human activities – global warming, mass extinction, and the rest. I have no time to discuss these at all this evening, but I would like just to give a quote – a quote that offers a sober assessment – from someone whose views seldom resonate with scientists but who I think does put it rather well in this case. It’s a quote from Prince Charles. What Prince Charles said in a lecture at Cambridge a few years ago is this: ‘Scientists do not fully understand the consequences of our many-faceted assault on the interwoven fabric of atmosphere, water, land and life in all its diversity. In military affairs, policy has long been based on the dictum that we should be prepared for the worst case. Why should it be so different when the security is that of the planet and our long-term future?’ That’s a coded statement of the controversial maxim known as the Precautionary Principle. The seriousness of a threat is its consequence multiplied by its probability, and the threats where the Precautionary Principle matters are those where, even if the probability is low, one is anxious about the potential global consequences if we are unlucky.
So much for the natural environment; what about the rest of science and technology? Since the 1950s, the most serious threat has been manmade: the threat of all-out nuclear war, which would of course have killed a billion people had it happened. We were lucky, but that doesn’t mean we were exposed to a prudent risk. To give an analogy, suppose that someone offers you a game of Russian roulette, where there’s a bullet in one of the six chambers of the revolver, so you have a one in six chance of being killed, and suppose they say you get a hundred pounds if you survive. Now, the likely outcome, five times out of six, is that you would end up still alive and a hundred pounds better off, but you would be an idiot to take that bet. If it were five million pounds, then some people might. The fact that you came off okay doesn’t mean you were exposed to a prudent risk, and if you look back at the Cold War, I think that’s one thing that should worry us. Many people, like McNamara, reckon the probability of catastrophe was something like twenty percent. So we survived, but do we think it was worth that risk? That risk has gone away temporarily; it’s been in abeyance since the end of the Cold War, but nuclear weapons still exist, and there could be some new superpower stand-off later this century which might be handled less well. So that’s still a threat, and it sets a sort of baseline risk to civilisation for this coming century.
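The roulette analogy is really just an expected-value calculation, and a toy sketch makes it explicit. The one-in-six probability and the prizes come from the analogy itself; the monetary value placed on a life is a purely illustrative assumption:

```python
# Toy expected-value sketch of the Russian-roulette analogy.
# p_death and the prizes come from the analogy; the 'value_of_life'
# figure is purely illustrative.

def expected_gain(prize: float, value_of_life: float, p_death: float = 1 / 6) -> float:
    """Expected monetary outcome: win the prize if you survive, lose 'value_of_life' if not."""
    return (1 - p_death) * prize - p_death * value_of_life

VALUE_OF_LIFE = 1_000_000.0  # illustrative only

print(expected_gain(100.0, VALUE_OF_LIFE))        # deeply negative: an idiot's bet
print(expected_gain(5_000_000.0, VALUE_OF_LIFE))  # positive: why some might be tempted
```

On these illustrative numbers the hundred-pound bet has a large expected loss, while the five-million-pound prize turns the expectation positive; the point of the analogy is that the Cold War was such a bet, with civilisation itself in the losing chamber.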
But there will be other threats stemming from newer technologies. The tensions between benign spin-offs from new discoveries and the threats posed by the Promethean power that science gives us are sharpening. In this new century, not only is science advancing faster than ever, but it is bringing extra dimensions of change. Whatever else may have changed over the last few thousand years, one thing that hasn’t changed is human beings and human nature; but in this coming century we can’t assume that, because targeted drugs, genetic modification, and perhaps even silicon implants in our brains, may change human beings themselves – their minds and attitudes, as well as their physique. And that’s something qualitatively new in our history, which makes this coming century even less predictable than the 20th century was to Wells in 1900.
And when projecting a hundred years ahead, the lesson of the last century is that we should keep our minds open, or at least ajar, to things that seem on the fringes of science fiction. Some rather flaky futurologists, of the kind normally found in California, conjecture that technical change may actually accelerate towards a sort of cusp, what they call a singularity. They think that computers may become more intelligent than human beings, and then of course such a computer would be the last machine humans need ever make, because the machines would then take over and invent still more intelligent machines, and they would discover the new science, not us. That’s probably science fiction, but I think we can’t be absolutely sure, and I certainly won’t venture any predictions. But we can predict some trends extending at least to the year 2020 – sixteen years from now.
Miniaturisation, although already amazing, is far from its theoretical limits. Each tiny circuit element of a silicon chip contains billions of atoms. It’s exceedingly large and coarse compared to the smaller circuits that could in principle exist, and one long term hope is to assemble nanostructures and circuits, bottom-up, by sticking single atoms and molecules together. This is how living organisms grow and develop, and it’s how Nature’s computers are made. An insect’s brain has about the same processing power as a powerful present day computer, but it is made in a quite different way. And within twenty years, all humans will be bathed in a cyberspace which allows instant communication with each other, not just in speech and vision, but via elaborate virtual reality. I think these are realistic predictions.
These technical advances are exciting, and they are in many ways benign. They are benign not only because they are engines of economic growth, but because they reduce pressure on resources and on the environment rather than increasing it. Miniaturisation obviously does so; so does anything that removes the need for real travel. These technologies, and biotech, are good news in the sense that they reduce environmental pressures. But even these advances will have a dark side. They will pose ethical dilemmas. More than that, they will present some new threats.
Let me mention one of those threats, one from biotech, which already looms large. I want to quote something from a report of the American National Academy of Sciences. It says: ‘Just a few individuals with specialised skills could inexpensively and easily produce a panoply of lethal biological weapons that might seriously threaten the US population. The deciphering of the human genome sequence and the complete elucidation of numerous pathogen genomes allows science to be misused to create new agents of mass destruction.’ This is a fairly near-term prediction, and an organised network would not be required for this: just one scientifically trained fanatic, or a weirdo with the mindset of those who now design computer viruses, or of an arsonist. The impact of even a local incident, bio or cyber, would be hyped and globalised by the media, causing wide disruption, psychic and economic. Everyone would be mindful that the same thing could happen again, anywhere, anytime.
Catastrophes could arise simply from technical misadventure, triggering some unintended runaway. Some commentators on biotech, robotics and nanotech worry that when the genie is out of the bottle, the outcome may be impossible to control. Such people urge caution in, as it were, pushing the envelope of science in some areas, simply because of unease about where it might lead. In fact, in America, one vocal proponent of this line is a chap called Bill Joy, who was the Chief Scientist of Sun Microsystems. He wrote an article in, of all places, Wired magazine, which is a magazine full of articles about gizmos and technical wonders, and it was rather surprising that an article by one of the heroes of cyber technology should take this line, but his article, entitled ‘Why the future doesn’t need us’, attracted wide comment, and even the London Times likened it to the memorandum from Frisch and Peierls in 1940 which alerted governments to the possibility of atomic bombs. Bill Joy is worried about remote threats of physics-based technologies, about computers taking over, and things of that kind, which still seem rather like science fiction, but he thought we should already guard against such nightmares by putting the brakes on the science that opens these doors. He advocated what he called fine-grained relinquishment: give up the dangerous science and go ahead with the rest. But that doesn’t make sense, because of the obvious difficulty that most discoveries can be applied both for good and for ill.
The uses of academic research generally can’t be foreseen. The inventors of lasers didn’t foresee that an early application of their work would be to eye surgery, detached retinas and so on. The discovery of x-rays certainly wasn’t motivated by a search for ways to see through flesh; it was an accidental discovery by a physicist. We can’t reap the benefits of science without accepting the risks, and nobody would advocate a blanket prohibition on all risky experiments and innovations because that would surely paralyse science and deny us all its benefits. In the early days of steam, hundreds of people died horribly when poorly-designed boilers exploded, and most surgical procedures, even if now routine, were risky and often fatal when they were being pioneered.
So we must accept some risks. But we should – and here I support the Precautionary Principle – be circumspect about carrying out any experiments that generate conditions with no precedent in the natural world, that could release a dangerous pathogen, or that carry some small possibility of a really catastrophic downside. Our risk estimates are subjective and uncertain, and even a small probability of catastrophe is unacceptable if the consequences would be global; I think a precautionary attitude is then appropriate, and it leads to the need for some constraints on what we do.
Of course there’s already an acceptance of constraints, for ethical reasons, in some areas of science – research on embryos or dangerous pathogens and so forth – but one surely should worry about how effective such regulation can be. Science is an international enterprise, subject to strong commercial pressures, so my worry is that even if there were an international consensus that some kind of science or technology should be regulated, for ethical reasons or because of its dangers, such regulations could be as hard to enforce internationally as the drug laws have proved to be. So I think that’s a real downside.
But we can nonetheless try to minimise these risks by focusing on the sciences that pose the least risk and seem to hold out the greatest hope.
We can’t do everything in science, and there is an ever-widening gap between what can be done and what can be afforded. At the moment, effort in science is deployed sub-optimally. This seems to be so whether we judge in purely intellectual terms or take account of likely benefit to human welfare. Some subjects have had the inside track and gained disproportionate resources. Others – environmental research, renewable energy sources, biodiversity studies, and so forth – deserve more effort than they receive. Within medical research, the focus is mainly on cancer and cardiovascular studies, the ailments that loom largest in prosperous countries, rather than on the infectious diseases endemic in the tropics. Choices about research priorities and how science is applied aren’t just for the scientists to make. They should be debated far beyond the scientific community. That is why everyone needs at least some feel for science, and a realistic attitude to comparative risk; otherwise public debates won’t get beyond tabloid-style sloganising, and we have seen cases where that’s happened. But though the public has to formulate ethical guidelines, scientists do have special obligations.
The scientists who developed the first atomic bombs actually set a rather good example in the post-War era for researchers in any branch of science that has grave societal impact. These men were uprooted from placid academic laboratories and sent to Los Alamos. They were mindful that fate had plunged them into epochal events, and many returned to academic work after the War, but when they did, they sustained a lifelong concern for arms control. Some of this great generation are still with us - our own Sir Joseph Rotblat, the most inspirational among them. These people didn’t say they were just scientists and that the use made of their work was up to politicians. They felt they had a special responsibility. We feel there is something lacking in parents who don’t care what happens to their teenage children, even though it is generally beyond their control. Likewise, scientists should not be indifferent to the fruits of their research. They should plainly forgo experiments that are themselves risky or unethical. More than that, they should try to foster benign spin-offs but resist, so far as they can, dangerous or threatening applications. They should raise public consciousness of environmental hazards.
The insights of 21st century science will surpass those of the 20th century. Some people say all the great science has been done. That is anything but the case. As science’s frontiers advance, its periphery grows longer, and ever more new mysteries come into focus to challenge us further in all the sciences. The resultant technology will confront us with a diverse array of challenges: disruptive asymmetric threats to our security from individuals ever more empowered by bio and cyber technology, the risk of disastrous error as well as of terror, and an ever-widening range of ethical issues. Society insistently needs latter-day counterparts of Joseph Rotblat - not just campaigning physicists, but biologists, computer experts, and environmentalists as well. Academics and independent entrepreneurs, I think, have a special obligation here, because they have more freedom than commercial or government employees, and they are not subject to the same pressures.
I will end as I began, with a cosmic perspective. Let’s think about the enormous time spans of cosmic evolution. The stupendous time spans of the evolutionary past, over which Darwinian evolution occurred, are now part of common culture. It is four and a half billion years since the Earth formed. But astronomers are mindful that even longer time spans lie ahead. The Sun has been shining for four and a half billion years, but it will be another six billion years before the fuel runs out and it swells up, blowing off its outer layers, destroying the inner planets and vaporising any life that remains on Earth. So that’s six billion years from now.
To get some feeling for these time spans, let’s consider an analogy. Imagine a walk across America: you start in New York when the Sun was born, and pace yourself to end up in California when the Sun is about to die, ten or eleven billion years later. To make that walk, you would have to take one step every two thousand years. That’s a measure of the vastness of cosmic time. Moreover, recorded history would be just five or six steps, and - this is the important point - those five or six steps would come somewhere before the halfway stage in your walk, somewhere in Kansas perhaps. I don’t want to offend any Kansans, but that’s not the high point of the journey! So there’s a lot more future to come. There’s an unthinking tendency to imagine that humans will be around experiencing this final event, but any life and intelligence that exists when the Sun dies could be as different from us as we are from a bacterium. The unfolding of structure, intelligence and complexity in the cosmic perspective still has immensely far to go. The future is longer than the evolutionary past.
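The arithmetic behind this analogy can be checked in a few lines. The walk distance and step length below are illustrative assumptions, not figures given in the lecture:

```python
# Back-of-envelope check of the "walk across America" analogy.
# WALK_DISTANCE_M and STEP_LENGTH_M are assumed round numbers.

SUN_LIFETIME_YEARS = 10.5e9   # ~4.5 billion years past + ~6 billion to come
WALK_DISTANCE_M = 4.2e6       # roughly New York to California (~4,200 km)
STEP_LENGTH_M = 0.75          # a typical walking pace

steps = WALK_DISTANCE_M / STEP_LENGTH_M       # total steps on the walk
years_per_step = SUN_LIFETIME_YEARS / steps   # cosmic time elapsing per step

recorded_history_years = 10_000               # since the dawn of writing, roughly
history_steps = recorded_history_years / years_per_step

sun_age_fraction = 4.5e9 / SUN_LIFETIME_YEARS  # how far along the walk we are now

print(f"years per step: {years_per_step:,.0f}")       # close to "one step every two thousand years"
print(f"recorded history: {history_steps:.1f} steps")  # about five or six steps
print(f"present day: {sun_age_fraction:.0%} of the walk")  # before the halfway stage
```

With these assumed figures the numbers come out near the lecture’s: roughly two thousand years per step, recorded history spanning about five steps, and the present day falling a little before halfway.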
And now let me refer again to the picture of the Earth from space. This view has been familiar to us only for the last forty years, of course. But suppose that some aliens had been watching the Earth for its entire history, four and a half billion years. What would they have seen? Over nearly all that immense time, Earth’s appearance would have altered very gradually. The only abrupt worldwide changes were triggered by major asteroid impacts or volcanic super-eruptions, which turned Earth transiently grey. Apart from those brief traumas, nothing happened suddenly. The continental land masses drifted, the ice cover waxed and waned, at some times whitening the entire planet, and successions of new species emerged, evolved and became extinct. But in just a tiny sliver of the Earth’s history, the last one-millionth part - a few thousand years in the four and a half billion - the patterns of vegetation started to change much faster than before. They signalled the start of agriculture. Changes accelerated as human populations rose. Then other things happened, even more abruptly. Within fifty years, little more than one-hundredth of one-millionth of the Earth’s age, the amount of carbon dioxide in the atmosphere, which over most of Earth’s history had been gradually falling, began to rise enormously fast. The planet became an intense emitter of radio waves - the total output of all its TV, cell-phone and radar transmissions. And something else happened, unprecedented in Earth’s four and a half billion year history: some metallic objects, albeit very small ones, a few tons at most, escaped from the biosphere completely. Some were propelled into orbit around the Earth, some journeyed to the Moon and planets, and a few were even on paths that would take them into deep space.
A race of advanced aliens watching our solar system from afar could confidently predict Earth’s final doom in six billion years, when the Sun blows up, but could they have predicted this unprecedented spike of activity less than halfway through the Earth’s life, these human-induced alterations occupying overall less than a millionth of the elapsed lifetime and seemingly occurring with runaway speed? And if they continued their vigil, what might these hypothetical aliens witness in the next hundred years? Will a final spasm be followed by silence, or will the planet itself stabilise? And will some of the metallic objects launched from Earth spawn new oases of life elsewhere, perhaps even creating an expanding green sphere that eventually pervades the entire galaxy?
The answer will depend on what happens this century, on whether our civilisation controls its advances or succumbs to threats. This thought should give us even stronger motives to cherish this pale blue dot in the cosmos and not foreclose life’s future, a future that’s human and perhaps even post-human. What happens in this uniquely decisive century - just two centimetres on my imaginary transcontinental walk - will resonate into the far future and far beyond the Earth.
© Sir Martin Rees, 8 June 2004