Does society risk my life through safety? The perils of too much risk-aversion

We would all like a riskless life, right? The Precautionary Principle embodies the idea that if the consequences of an action are unknown, but are judged to have potential for major or irreversible negative consequences, then it is better for society to avoid that action. Yet society's adherence to the Precautionary Principle and knee-jerk responses to avoid future disasters can worsen our risks as we all fall prey to the Illusion of Control.



Professor Michael Mainelli


Good evening Ladies and Gentlemen.  It’s my privilege to welcome you to Gresham College tonight.  I’m pleased to find so many of you are risk-seeking enough to take a chance on tonight’s lecture.  Or should I ask if you are just risk-averse to tonight’s television schedule?


As you know, it wouldn’t be a Commerce lecture without a commercial.  So I’m glad to announce that the next Commerce lecture will continue our theme of better choice next autumn, that’s Wednesday, 12 September, here at Barnard’s Inn Hall at 18:00.

An aside to Securities and Investment Institute, Association of Chartered Certified Accountants and other Continuing Professional Development attendees, please be sure to see Geoff or Dawn at the end of the lecture to record your CPD points or obtain a Certificate of Attendance from Gresham College.

Well, as we say in Commerce – “To Business”.

We would all like a riskless life, right? Yet society’s adherence to the Precautionary Principle and knee-jerk responses to avoid future disasters can worsen our risks.  Tonight’s theme could probably be parodied as “you can’t have everything in moderation because then you have nothing in proportion.” 


Universally Risk-Averse


In fact, a riskless society is definitely not in moderation.  One intriguing science fiction story that examines a riskless society is Ringworld by Larry Niven [Niven, 1970].  Given today’s controversies, this novel is interesting in being one of the first to identify that growing civilisations struggle with global warming.  This sci-fi tale may not be great literature, but the book is packed with ideas. As always in sci-fi there are some aliens, called Pierson’s Puppeteers.  Being descended from herbivorous herd animals, Puppeteer morality is based on cowardice: the ruling class is known as they-who-lead-from-behind, and the supreme leader is called the Hindmost.  The Puppeteers are so risk-averse that they have moved their world to avoid the possibility of their sun expanding; they’ve developed a spaceship that in the event of danger, such as winding up inside a star, will hold its occupant in stasis until the star dies around the spaceship.  The plot unfolds as the Puppeteers discover that a circular ribbon world over 200 light years away is going to collide with their worlds in the distant future.  Scared into doing something, they send an emissary to assemble an investigatory mission.  The emissary chooses two humans from an overpopulated world.  Niven postulates that luck might be genetic.  As the human female is the luckiest person in the world, the sixth generation of birth lottery winners, the Puppeteer chooses her as a mascot, or good-luck-charm, for the trip.  The book makes two points relevant to tonight’s essay.  Firstly, through fear, the Puppeteers have developed very advanced technology, which is a bit of a paradox given the risks.  Secondly, the book postulates that anyone convinced enough to do something about solving global risks might just be insane.


Now a Gresham College audience is rational.  We’d like to start with the facts.  1,700 people died in the UK today.  This US chart documents the well-known top killers, heart disease (1 in 5) and cancer (1 in 7), followed by stroke (1 in 24) and motor accidents (1 in 84); but how many people expect to see suicide (1 in 119) in fifth place, three times more likely than firearm assault (1 in 314)?  The UK numbers are more striking: because firearm assault and other assaults are less common than in the USA (the USA’s homicide rate is over three times the UK’s, 5.87 to 1.75), people are at greater risk from themselves than from being murdered by others, with 8.5 suicides per 100,000 per annum compared with 1.75 homicides.  Globally, 14.5 suicides per 100,000 compare with 8.8 homicides per 100,000.  If anything, suicide may be under-reported for a number of religious, social and insurance reasons, while homicide is widely reckoned to be one of the more reliable global comparative crime statistics.  Paul Judge highlighted another mortality oddity: despite our exaggerated concern for the very young, children up to the age of 14 have a very small chance of dying from external causes.  We should more usefully worry about 14 to 20 year olds and those over 75.

From a quick perusal of statistics, one might conclude that public monies would be better spent keeping us away from sharp objects when we are on our own, or away from motor vehicles, than on air safety or rail safety or floods and earthquakes.  Yet, when 97% of the causes of death are illness or disease, and 3% are accidental causes, shouldn’t we be putting almost all public monies into health care and medical research?  This leads us quite rapidly to a basic problem with risk: what are we measuring?  Are we trying to measure a steady-state level of risk, i.e. a natural murder rate or disease rate which is unchangeable, or a current number whose target we shall lower?  Given the wide discrepancies in international murder statistics and health, unless you believe that certain national characteristics are inherently violence-prone and unchangeable, many countries could do markedly better.  Even Britain, say when compared with Japan.


Risk and public policy is an important topic, yet few of us spend our time perusing mortality statistics.  Our society is driven by public perceptions of risk.  “Studies into voluntary and involuntary risks indicate that people are willing to accept the risks associated with activities that they choose to participate in compared with those that they have no control over.” [Wint, 2006, page 17]  9/11 and 7/7 changed our lives, from increasing bureaucracy when opening a bank account to more inconvenient travel to entering a country.  Overseas travellers to the USA have fallen by 17% since 2000 [The Economist, “Visa Policy – Keeping Out More Than Terrorists”, 8 February 2007].  Do you think that world peace is more threatened by terrorism or by larger numbers of people now less familiar with today’s superpower?  Our diets have been changed by BSE, by avian flu, by Sudan 1, by acrylamide, by genetically modified organisms, by pesticides and other food scares.  Our children’s lives have been changed by the 1993 Lyme Bay kayaking disaster, by increasing awareness over paedophilia, by MMR vaccine controversy, by dangerous dogs legislation, by concerns over obesity and school food.  Our work has been changed by health & safety regulations, smoking bans and rail crashes.  Our play has changed – you can’t race bathtubs any longer; volunteers cannot work on cathedral steeples.  Our science has been changed by scares about genetic tampering, stem cell research and nanoparticles or nano-goo.


Public Risk Theatre


Perceptions of risk lead to public responses.  We often laugh at the unintended consequences that result.  For instance, the enforcement of compulsory helmets for cyclists in Australia led to more years of life lost to poor health than lives saved from cycle accidents, because people reduced their cycling [Adams, 1995, pages 144-147]; or banning beef-on-the-bone in the UK in 1997, which avoided a risk somewhat more remote than getting hit on the head by a meteorite.  Perhaps we need those crash helmets for a different reason.  We laugh at anti-suffocation notices on plastic bags for children not old enough to read; or whinge that supermarket plastic bags are punctured for safety, and thus unusable for wet rubbish; or snigger at labels warning that coffee is hot, following successful lawsuits pointing out that the warning was lacking; or bemoan bans on the Royal British Legion selling poppies with pins to remind us of the horrors of war.  Observers of these crazy situations often invoke O’Toole’s Corollary of Finagle’s Law: “The perversity of the Universe tends towards a maximum.”

In fact, there is a fairly standard theatre of public risk.  First, we become aware of a novel risk or new way of looking at an old risk.  If it is interesting enough, a public debate begins.  The media find some new ‘talking-head’ experts to take sides and all then inflate every hard-won nugget of possible information into an entire series of spam headlines and articles.  If the media are successful, they generate so much hue and cry, “something must be done!”, that politicians leap in to gain face-time.  The media, and we, measure success in how much senior ministerial time is spent talking about the problem.  Of course, politicians wouldn’t be politicians if they didn’t believe that government should do something, despite any evidence to the contrary.  Frustration levels rise as everyone realises that politicians also subscribe to the belief that “It doesn’t matter what you do, it only matters what you say you’ve done and what you say you’re going to do.”  Naturally, in the face of public demand, the Government “has to do something”.  The Government comes up with some quick fixes and hastily implements them in a hurried display of action, simultaneously reinforcing the idea that managing this risk is a governmental problem, while the media gloat that “it was us what dun it”.  Meanwhile, the quick fixes, normally regulations or taxes or legislation, have unintended consequences.  The chattering classes tut that “it was another botched government intervention by the nanny state”, while civil libertarians complain that it should have been better thought through, and businesses groan about the costs.  Frustration levels rise again when everyone realises that little works, and what little does work costs a lot.  Some people throw up their hands and disengage.  If frustration is very high, we generate new risk perceptions leading to another cycle and new regulations.  
The usual course of events is so well-known that the Better Regulation Commission summarised it in the report from which this diagram is taken [Better Regulation Commission, 2006, pages 7, 9].


The Better Regulation Commission’s recommendations are rational: provide better information and training on risk; set up a ‘gate-keeping’ panel to limit government action, which they called the “Fast Assessment of Regulatory Options” Panel; assess costs and benefits before acting; seek non-governmental ways of reducing risk; prune regulation regularly; set clear responsibilities and measure progress.  Their primary recommendation was [2006, page 38]:

“In its policies, regulations, announcements, correspondence, targets, performance agreements and actions, the government should:

a)       emphasise the importance of resilience, self-reliance, freedom, innovation and a spirit of adventure in today’s society;

b)      leave the responsibility for managing risk with those best placed to manage it and embark on state regulation only where it represents the optimum solution for managing risk;

c)       re-examine areas where the state has assumed more responsibility for people’s lives than is healthy or desired; and

d)      separate fact from emotion and emphasise the need to balance necessary levels of protection with preserving reasonable levels of risk.”

This is a noble and worthy recommendation, but not easy.  For the most part, public sector risk initiatives are qualitative and paper-intensive.  Government wants civil servants to be “risk aware”, but is slow to hand out the financial information needed to manage risk.  It’s a bit like asking staff to be “cost aware” but not giving them a basic financial system that tells them actual costs.  The recommendation is particularly difficult because risk is always about perception, not fact.  Objective odds don’t matter.


Risk = Perception, Perception = Risk


The Chinese recognised that “crisis = danger plus opportunity” (wēijī = wēixiǎn + jīhuì).  Similarly, the literature about risk discriminates among ‘risk’ as a chance or probability, ‘hazard’ as a danger or a dangerous object or condition, and ‘threat’ as an indication of an object or condition that could influence the level of risk.  For instance, the risk of theft might have a weighted probability of £30,000; the hazard might be an unlocked door; the threat might be organised crime.  A fairly traditional definition is that “risk = severity times likelihood”.  ‘Likelihood’ is always a future, perceived likelihood; thus risk is always a subjective perception.  Niels Bohr (1885-1962), the physicist, in a slightly different context, pointed out a crucial truth: “An independent reality in the ordinary physical sense can neither be ascribed to the phenomenon nor to the agencies of observation.”  Change the perception and you change the risk.  It’s all about people.
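The severity-times-likelihood definition can be made concrete with a minimal sketch.  It reuses the theft example above; the severity and likelihood figures are assumed illustrations, chosen only so that their product matches the £30,000 weighted probability in the text:

```python
# Sketch of "risk = severity x likelihood", using the theft example from the
# lecture. The severity and likelihood below are illustrative assumptions
# picked so that their product equals the GBP 30,000 weighted probability.

def expected_loss(severity: float, likelihood: float) -> float:
    """Traditional risk figure: severity of the outcome times its perceived likelihood."""
    return severity * likelihood

theft_severity = 200_000   # assumed loss if the theft occurs (GBP)
theft_likelihood = 0.15    # assumed perceived chance of the theft occurring

risk = expected_loss(theft_severity, theft_likelihood)
print(f"Weighted risk of theft: GBP {risk:,.0f}")  # GBP 30,000
```

Note that the point of the lecture survives the arithmetic: the likelihood input is a perception, so two observers with different perceptions compute two different "risks" from the same hazard.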


‘Risk’ is not so much a research topic in its own right as a thread of an approach running through economics, finance, systems theory, sociology, organisational theory, control theory and decision theory, on up to public policy.  Risk as a thread is distinguished from, but related to, risk management as a research topic.  Bold claims are made on the cultural impact of risk-based thinking.  In his broad survey of the history of risk, Bernstein [1996, page 1] opens with the statement: “The revolutionary idea that defines the boundary between modern times and the past is the mastery of risk: the notion that the future is more than a whim of the gods and that men and women are not passive before nature.”  Douglas and Wildavsky see risk “as a joint product of knowledge about the future and consent about the most desired prospects” [Douglas and Wildavsky, 1982, page 5].  This view implies conscious knowledge, so they also highlight “hidden risks” which are either unconscious, unseen through cultural filters, unknown or not widely known [Douglas and Wildavsky, 1982, pp. 16-28].  One fundamental point is that you don’t have a risk if you don’t know about it.

Ulrich Beck’s influential work, Risk Society [Beck, 1986], analyses our post-industrial, post-modern society in terms of risk transfer, rather than wealth distribution.  Like Giddens and others, he encourages us to think about reflexive modernity, the feed-back and feed-forward effects of risk in the entirety of our social systems.  He highlights our difficulties in interpreting science for social purposes, the intractability of incalculability, the multiplicity of risk definitions and our struggles with causality and legality.  Perhaps his most telling conclusion is that “in definitions of risks the sciences’ monopoly on rationality is broken.” [Beck, 1992, page 29]  Beck echoes Frank Knight’s economic assessment: “Profit arises out of the inherent, absolute unpredictability of things, out of the sheer, brute fact that the results of human activity cannot be anticipated and then only in so far as even a probability calculation in regard to them is impossible and meaningless.” [Knight, 2002, page 311]  The inherent potential for political conflict, particularly where low objective risks meet high subjective perceptions of threat, is widely observed [e.g. Louisot, 2004, page 46].  The post-modern conclusions about the political implications of risk-focus are strikingly similar in many cases.  For example, Sir John Krebs initiated a workshop at the Royal Society in 2005 that set out principles for risk assessment – stakeholder consultation, an iterative process, transparency about uncertainties, broad engagement and better communication.


Of course, Donald Rumsfeld made this sentiment famous:

“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know.  We also know there are known unknowns; that is to say we know there are some things we do not know.  But there are also unknown unknowns - the ones we don’t know we don’t know.”

Douglas and Wildavsky’s overriding theme is that risk is selected through social organizations and that the management of risk is undertaken by organizations.  So, perhaps the media shouldn’t be parodied as Chicken Littles, but applauded for helping us to start to recognise risks.  The next point to make is that we often don’t know the likelihoods or the severities to calculate risks from hazards.  Just to take one case, the odds of asteroids hitting the earth and the damage they can do are well-studied, but still inexact.  Risk and uncertainty (risk where the probabilities of occurrence are poorly known or unknown) play a major part in determining courses of action:

“There are three fundamental causes of difficulty in making a selection between alternative courses of action, which we define as uncertainties in knowledge of the external environment (UE), uncertainties as to future intentions in related fields of choice (UR) and uncertainties as to appropriate value judgements (UV).  A perception of uncertainties of the first kind can lead to demands for further gathering and interpreting of information about the present and future state of the community or its physical setting (sometimes expressed in the form of demands for “more research”); while a perception of uncertainties of the second kind can lead to demands for a widening of the field of decision (often expressed as a demand for “more coordination”); and a perception of uncertainties of the third kind can lead to demands for “more policy guidance”.  These all represent demands for a change of some kind in the context of decision for the situation now being considered.”[Friend and Jessop, “The Nature of Planning” in Open Systems Group, 1981, pages 239 - 240]

Douglas also explores the bounds of institutional thinking and relates social behaviour to risk: “one strand is cognitive: the individual demand for order and coherence and control of uncertainty.  The other strand is transactional: the individual utility maximizing activity described in a cost-benefit calculus” [Douglas, 1986, page 19].  At an economic level “it has to be recognized that business activity entails risks.  Then the question for economic policy becomes how to persuade risk-averse citizens to assume necessary risks.” [Douglas, 1985, pages 43-44]

“The culturally learned intuitions which guide our judgment for any of our fields of competence teach us enough probabilistic principles but they are heavily culture-bound.  We are all lost when we venture beyond the scope of our culturally given intuitions.” [Douglas, 1985, page 33]  Risk is intimately linked with reward: “The emphasis on ability to cope goes along with a different set of assumptions about risk management.  Instead of assuming general risk reduction as the ideal, Clark assumes that effective hazard management must seek to increase ability to tolerate error and so to improve ability to take productive risks.” [Clark W, 1977, “Managing the Unknown: An Ecological View of Risk Assessment” in Kates]  It is a tempting synthesis to say that cultures are defined by their distinguishing risk/reward profiles.  These risk/reward profiles may be subconscious to the point that they are automatic assumptions:

“The culture of any group of people is that set of beliefs, customs, practices and ways of thinking that they have come to share with each other through being and working together.  It is a set of assumptions people simply accept without question as they interact with each other.”[Stacey, Ralph D., 1993, page 41]

Hofstede sounds a cautionary note on cultural differences leading to unsound assumptions.  He maintains that attempting to apply organisational theories developed in one country may be unsound in another country, in particular American theories that have a cultural bias such that “the extreme position of the United States on the Individualism scale leads to other potential conflicts between the U.S. way of thinking about organizations and the values dominant in other parts of the world.” [Hofstede, 1980, page 61]


Basic cost/benefit analysis highlights cultural bias.  Many people object to cost/benefit analysis because it implicitly puts a price on human life.  Yet every safety department has to allocate resources based on costs and benefits, so every quantitative cost/benefit study on safety runs the risk of the media publishing the number that has to be there for the analysis to work.  The Economist reported that in 1988 the Department of Transport rebased its financial value of a life to £750,000 for a road accident and £2M for a rail accident [The Economist, “The Cost Of Living”, 16 March 1996, page 29].  Your death is a greater loss to society when you take the train.  Moreover, the Department chose the road death value after examining a range of numbers.  “Ever cost-conscious, it chose the lowest.”
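The Department of Transport figures imply a simple screening rule: a safety scheme passes when its cost per statistical life saved falls below the assigned value.  A minimal sketch using the 1988 values cited above; the candidate scheme costs and lives-saved estimates are invented purely for illustration:

```python
# Cost/benefit screen for safety schemes, using the 1988 Department of
# Transport values of a statistical life quoted in the lecture.
# The candidate scheme below is an invented illustration.

VALUE_OF_LIFE = {"road": 750_000, "rail": 2_000_000}  # GBP, per the lecture

def worthwhile(mode: str, cost: float, expected_lives_saved: float) -> bool:
    """A scheme passes if its cost per statistical life saved is below the assigned value."""
    return cost / expected_lives_saved <= VALUE_OF_LIFE[mode]

# The same GBP 10m scheme, expected to save 10 statistical lives, fails on
# the road but passes on rail, purely because of the differing valuations.
print(worthwhile("road", 10_000_000, 10))  # False: GBP 1m per life > GBP 750k
print(worthwhile("rail", 10_000_000, 10))  # True:  GBP 1m per life < GBP 2m
```

The sketch makes the lecture's complaint mechanical: identical schemes are accepted or rejected depending only on which valuation table the analyst is handed.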

Before you laugh too much, think of all of the dinner party talk about being “unfairly caught” by police or speed cameras above some speed limit that could have been lower.  55% of drivers admit to exceeding the speed limit a little every day.  70% to 85% of drivers admit to speeding, distinguishing between ‘ordinary, safe speeding drivers’ and ‘dangerous speeding drivers’.  Driving at 40mph in a 30mph zone was seen in one study as more acceptable than dropping litter.  Look at the implicit cost/benefit calculations.  In a speech last year, “The Regulation of Risk – Setting the Boundaries”, Rick Haythornthwaite, Chairman of the Better Regulation Commission, pointed out a recent road safety advertisement of a young girl lying crumpled on the road and the concluding message that a child hit by a car travelling at 40 mph has an 80% chance of being killed while a child hit by a car travelling at 30 mph has an 80% chance of surviving.  He observed:

“It represents some sort of balance – moderated by public acceptability - between the costs to the economy of further slowing down travel times and the benefits of saving the lives of more children.  The advertisement implies that 20 percent of children struck by a car travelling quite legally at the urban speed limit are likely to be killed.  In effect, it is saying that this is the optimal level of protection that child pedestrians should enjoy and that any lower speed limit would be subject to the law of diminishing returns.  This is a particularly telling – if inadvertent - example of the trade-offs and judgments that our policy makers are obliged to make in regulating to control risk.” [Haythornthwaite speech, 2006]
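The advertisement's figures can be turned into a rough expected-fatality comparison.  Treating the 80% figures as point probabilities is of course a simplification of the underlying speed-fatality curves, and the per-100-collisions framing is mine, not the advertisement's:

```python
# Expected child fatalities per 100 collisions, using the road-safety
# advertisement figures quoted in the lecture: a child struck at 40 mph has
# an 80% chance of being killed; at 30 mph, an 80% chance of surviving.

P_KILLED = {30: 0.20, 40: 0.80}  # probability of death, by impact speed (mph)

def expected_deaths(speed_mph: int, collisions: int) -> float:
    """Expected fatalities given a number of collisions at a fixed impact speed."""
    return P_KILLED[speed_mph] * collisions

for speed in (30, 40):
    print(f"{speed} mph: {expected_deaths(speed, 100):.0f} expected deaths per 100 collisions")
# 20 at 30 mph versus 80 at 40 mph: the extra 10 mph quadruples the toll.
```

The factor-of-four gap is exactly the trade-off Haythornthwaite describes: the 30 mph limit implicitly accepts the 20% figure as the "optimal" level of protection.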

Of course this can get interestingly controversial, rapidly.  Who determines where the 30 mph speed limit applies?  Where exactly do child pedestrians become more or less valuable, at the edge of the neighbourhood, beside a motorway?  On the continent our 30 mph speed limit would be either 50 kph or 30 kph.  Are children in the 50 kph zone less valuable than British children, or more valuable in the 30 kph zone?  In the USA, the comparable 30 mph zone would be 25 mph.  Are American children more valuable than British children overall?  You change the cost/benefit-of-a-life calculation every time you race to make some appointment in your automobile.  God’s not in heaven, he or she is behind the wheel, or in the traffic safety department.  Nessus, the Puppeteer in Ringworld, says, “the majority is always sane”; thus a courageous Puppeteer isn’t merely regarded as insane, but is insane.  When it comes to speeding, the majority are God.  God normally says a statistical life is worth somewhere between £1 million and £5 million, unless you’re in the developing world.


Global Risks, Global Solutions


“We are running out of oil; we’re extracting annually more than we discover …”

“Asteroids are going to crash into the earth and destroy all mammalian life …”

“Avian flu is going to kill tens of millions …”

“Global warming is going to destroy civilisation …”

Which of these risks belong at the top of the world’s concerns?  Despite possible catastrophe, are they so improbable or our resources so puny that we should not waste resources trying to manage them?  If we agree to manage them, how do we do so jointly across diverse cultures?  Or should we reject collaboration and legitimacy for raw power?  We have reacted forcefully to some global risks, including the rash of business scandals that began with Enron, terrorist attacks, floods and natural disasters.  Other risks we have done little about, such as asteroids.  Still others, such as poverty and disease, pose a more tangible threat to society, but we are often unaware of the scale of our response: how much is it, and is it too high or too low?

A World Economic Forum Report, “Global Risks 2006”, foresaw risks that would remain headline risks, such as terrorism, oil-price spike, fiscal crisis, influenza pandemic, HIV/AIDS, TB, malaria and earthquake.  The report anticipated that a number of other risks might move up the global agenda, including climate change, liability regimes, counterfeits and electro-magnetic fields.  Global Risks 2006 classified numerous risks as outliers, for example, space weather and biodiversity.  The report continued on to develop four key risk scenarios – “Oil Price Shock: Price Spike Above US$80-$100 barrel”, “Influenza Pandemic”, “Terrorism” and “Climate Change”.  The Report also highlighted the conflation of risks: increases in certain risks raise the likelihood or impact of other risks.

Yet, it can be too convenient to have Global Risks studies.  They imply that somebody somewhere is at least thinking about these big risks; these known unknowns are somebody else’s problem.  Adrian Berendt got me thinking about the moral hazard of some of these risks in banking operations.  He stated, “If there is a catastrophe, our customers will understand that we can’t service them, whereas, if we’ve had a computer glitch, they’ll go to our competitors.”  So we have another conundrum – we are negligent about not addressing risks that are local, but not negligent about risks that knock out the world.  Not protecting our building against flooding is negligent, while not helping to prevent London flooding isn’t.  Local efforts are displaced by apparent action, however feeble, from a central or higher authority.


Clear, Simple, and Wrong?


One German concept now widely promoted as a means of dealing with risk is the Vorsorgeprinzip, or Precautionary Principle.  The Precautionary Principle is a moral or political principle most often applied in the context of the environment or human health, where the consequences of acting on complex systems may be unpredictable.  The Precautionary Principle embodies the idea that if the consequences of an action are unknown, but are judged to have potential for major or irreversible negative consequences, then it is better for society to avoid that action.  In the absence of a scientific consensus that harm would not ensue, the burden of proof falls on those who would advocate taking the action.  John Stuart Mill highlighted the importance of prevention in ethics, “the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others.” 

The Precautionary Principle echoes a medical principle found in Hippocrates, though not part of the Hippocratic Oath (“have two special objects in view with regard to disease, namely, to do good or to do no harm”, “Epidemics”, Book I, Section XI), now more widely known and clearly stated in Latin as “primum, non nocere”, “first, do no harm” (often incorrectly attributed to Galen).  The Precautionary Principle applies to institutions and institutional decision-making processes as well as individuals – “better safe than sorry”.  Speaking about science, Sir Martin Rees said at Gresham College, “The threats where the Precautionary Principle is important are those where, even if the probability is low, one is anxious about the potential global consequences if we are unlucky.” [“Science In A Complex World: Wonders, Prospects And Threats”, 8 June 2004]

The Principle creates problems in both assessing harm and proving harm will not emerge.  At one extreme, the Precautionary Principle excludes cost/benefit analysis.  A deeper problem is discerning action and inaction.  Is banning mercury in thermometers, freon in refrigeration, or even carbon dioxide exhaust from automobile engines and power plants, removing a risk or taking an action?  So one clarification includes “... a willingness to take action in advance of scientific proof [or] evidence of the need for the proposed action on the grounds that further delay will prove ultimately most costly to society and nature, and, in the longer term, selfish and unfair to future generations.” []  The Precautionary Principle raises awareness of ethical responsibilities towards maintaining the integrity of natural systems, and the fallibility of human understanding.  The Precautionary Principle, in subtly-different guises, is included in the United Nations’ 1982 World Charter for Nature, the 1987 Montreal Protocol, the 1992 Rio Declaration on Environment & Development and the Maastricht Treaty.

In economics, the Precautionary Principle affects rational decision-making.  The irreversibility of possible future consequences presents a quasi-option situation such that a “risk-neutral” society should favour decisions that allow for more flexibility. Gollier et al conclude that “more scientific uncertainty as to the distribution of a future risk, i.e. a larger variability of beliefs, should induce Society to take stronger prevention measures today.” [Gollier et al, 2000]
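The quasi-option argument can be sketched with a toy two-period choice; all the payoffs and probabilities below are invented for illustration.  An irreversible action yields a sure benefit now, but waiting one period reveals whether it causes serious harm, and a flexible society can then still act if the harm does not materialise:

```python
# Toy quasi-option calculation: acting irreversibly now versus waiting one
# period for information. All figures are invented illustrations.

p_harm = 0.3          # assumed probability the action proves harmful
benefit = 100.0       # benefit of the action, whichever period it is taken
harm = -500.0         # additional loss if the action turns out to be harmful

# Act now: collect the benefit, but bear the harm if it materialises.
act_now = benefit + p_harm * harm

# Wait: next period the truth is known, so act only in the safe state.
wait = (1 - p_harm) * benefit

# The difference is the (quasi-)option value of keeping options open.
option_value = wait - act_now
print(f"Act now: {act_now:.0f}, wait: {wait:.0f}, option value of waiting: {option_value:.0f}")
```

Under these made-up numbers the irreversible choice has negative expected value while waiting is positive, which is the Gollier et al point in miniature: greater uncertainty about future harm raises the value of flexible, preventative positions today.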

In deciding how to apply the Principle, analysts may use a cost-benefit analysis that factors in both the opportunity cost of not acting and the option value of waiting for further information before acting.  One of the difficulties of the application of the Principle in modern policy-making is that there is often an irreducible conflict between different interests, so the debate is always political, not scientific.  Van Asselt and Vos [van Asselt and Vos, 2006] highlight their paradox of uncertainty and risk: “on the one hand, it is recognised that science cannot provide decisive evidence on uncertain risks, while on the other hand lawyers and policy makers appeal to science for some kind of certainty.”  Crichton notes in his novel State of Fear [Crichton, 2005, page 678] that “The ‘precautionary principle’, properly applied, forbids the precautionary principle.  It is self-contradictory.  The precautionary principle therefore cannot be spoken of in terms that are too harsh.”  Stier [Stier, 2005] elaborates, “An unstated corollary is ‘Precaution should be taken regardless of the risk of any precautionary action.’  That is, trying too hard to err on the safe side can lead to doing something less safe.”  Crichton cautions that we need to be cautious about being cautious: “The current near-hysterical preoccupation with safety is at best a waste of resources and a crimp on the human spirit, and at worst, an invitation to totalitarianism.”


A humorous take on all this states that the risk/reward calculation for engineers looks something like this:

RISK: Public humiliation and the death of thousands of innocent people.

REWARD: A certificate of appreciation in a handsome plastic frame.

Being practical people, engineers evaluate this balance of risks and rewards and decide that risk is not a good thing. The best way to avoid risk is by advising that any activity is technically impossible for reasons that are far too complicated to explain.  If that approach is not sufficient to halt a project, then the engineer will fall back to a second line of defence: “It’s technically possible but it will cost too much.” []


The Precautionary Principle also highlights ethical problems in inter-generational risk transfer.  The 2006 Stern Review, The Economics of Climate Change, bolstered environmentalists’ economic foundations with cost/benefit analysis.  Critics have particular concerns about the discount rates used to analyse long-term issues such as climate change.  Stern found it necessary both to explain the ethical ramifications of discount rates when assessing climate change economics and to apply a discount rate significantly below those found in typical financial analyses.  The importance of discount rates warranted a technical annex – “Ethical Frameworks and Intertemporal Equity”.  In the annex, Stern links the principle of ‘protection from harm’ to the ‘polluter pays’ principle.  He invokes an ethical imperative that “future generations should have a right to a standard of living no lower than the current one” and elaborates a concept of ‘stewardship’ as a special form of sustainability – particular aspects of the world should be passed on in a state at least as good as that inherited from the previous generation.  Critics, such as William Nordhaus, Professor of Economics at Yale, dispute changing today’s discount rate for any specific risk.  If the discount rate can be altered by politics for one risk, then it can be altered for any, and all discount rate analysis becomes political grandstanding.  Others argue that future generations will, under discount rate assumptions, be richer than us and can pay for more in their future. 
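The sensitivity that so exercises Stern’s critics is easy to demonstrate. In the sketch below, the rates are illustrative rather than Stern’s own figures: roughly 1.4% stands in for an ethics-driven Stern-style rate and 6% for a conventional financial rate. Over a century, the present value of the same £100 of climate damage swings by two orders of magnitude:

```python
def present_value(amount: float, rate: float, years: int) -> float:
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

# £100 of climate damage occurring 100 years from now
stern_style = present_value(100, 0.014, 100)    # low, ethics-driven rate
conventional = present_value(100, 0.06, 100)    # typical financial rate

print(f"at 1.4%: £{stern_style:.2f}, at 6%: £{conventional:.2f}")
```

At the low rate, the damage is worth about £25 today; at the conventional rate, under 30 pence. The entire policy conclusion rides on a single parameter, which is precisely Nordhaus’s worry about politicising it.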

Haab and Whitehead’s economic-environmental blog posits a reductio ad absurdum argument about discount rates in general, “an extra glass of wine for Alexander the Great matters more than all today’s capital stock”.  But the deeper point is that if discount rate analysis applies to inter-generational transfers, the same arguments apply to pensions, public infrastructure, taxation, bio-diversity or cultural heritage. The Precautionary Principle directs us simultaneously to undertake strong preventative measures and to keep our options open.


Illusion Of Control


In many decisions about risk, we are victims of the Illusion of Control – the tendency for human beings to believe they can control or influence outcomes that they demonstrably have no influence over.  In 1975, Ellen Langer showed that people often behave as if chance events are under personal control.  Langer’s experiments demonstrated the prevalence of the Illusion and that people were more likely to believe they had control where there were “skill cues” such as choice, competition, familiarity and involvement.  The classic example is in craps where it has been shown that people rolling dice tend to throw harder for high numbers and softer for low numbers.  Some people argue that the Illusion of Control helps people succeed by increasing motivation and persistence.  Others point out that undeservedly high estimates of self-efficacy can be maladaptive, for instance encouraging people to pursue failing courses of action. 

One study of traders working in investment banks [Fenton-O’Creevy et al, 2003] found that traders who were prone to a high Illusion of Control had significantly worse performance on analysis, risk management and contribution to desk profits, thus they also earned significantly less.  The Illusion of Control may promote striving towards goals, but it is not conducive to sound decision-making.  The Illusion also predisposes people to take greater objective risks.  But Bandura recommends that: “In activities where the margins of error are narrow and missteps can produce costly or injurious consequences, personal well-being is best served by highly accurate efficacy appraisal.” [Bandura, 1997, page 71]


The Risk Industry


We have seen the emergence of a ‘risk industry’.  Academics, think-tanks, consultancies and government departments all thrive at the lush watering hole of risk.  Take any recognised scare, give it a new twist and, hey presto, the research and policy money arrives immediately.  In post-industrial, post-modern society, people create posts based on helping other people deal with fear.  Advocacy groups, activists, lobbyists and NGOs provide socially-useful leisure.  Giddens points out that “Paradoxically, scaremongering may be necessary to reduce risks we face - yet if it is successful, it appears as just that, scaremongering.” [Giddens, Reith Lectures 1999, “Runaway World: Risk”]  Michael Crichton lampooned the risk industry in his novel State of Fear.  In that book, an environmental activist precipitates environmental disasters in order to keep up public interest in environmental catastrophe, and his own funding.  Though his global warming example was heavily criticised, Crichton’s key theme was the manipulation of fear by media, universities and activists to keep themselves in business.  We are like deer frozen in the headlamps of fear.  The more we demand that others manage our risks properly, the more we limit our own due diligence, because we believe someone will help us out in need.

Interestingly, we have come full circle in the “risk is perception and the perception is the risk” cycle.  The media is obsessed with “water-cooler stories”. “The water-cooler in America is the coffee machine in colder climates—the place where office workers stand around and gossip. Water-cooler stories must have recognisable characters and a developing drama.” [The Economist, “Stop Press”, 2 July 1998]  Today, we have reached the point where computer model predictions are the news – the perception of risk is the news itself.


One of the dangers, as Crichton points out, is that this encourages ‘Consensus Science’.  Because we’ve moved beyond “publish or perish” to “citation or obituary”, scientists are increasingly seeking to popularise their work; they seek attention.  If anything, while reversing the old bias towards the existing order, we’re in danger of replacing it with a bias in favour of novelty.  Further, and confusingly, a bias in favour of consensus over fact is emerging.  To paraphrase, “knowledge is power, power corrupts, thus knowledge corrupts”.  For example, current climate change debate scores points based on the number of scientists who believe in climate change, not just on the facts scientists have unearthed, or on weaknesses in contrary theories of global warming.  The fundamental problem is that science is about seeking knowledge, not popularity.  In true science, not consensus science, one correct scientist with a provable theory trumps all gainsayers.

Copernicus and Galileo had severe problems with the establishment, but not so much with their fellow natural philosophers.  Today’s paradox is to find ways to develop novel science without contesting established scientific orthodoxy.  If you fail to be novel without overturning the apple cart, you fail to get research grants within an increasingly publicly-funded, and perception-driven, research environment.  Further, new or scarier interpretations or models become water cooler news themselves, increasing the perceptions of risk, if not the facts. 


Risk-Averse, Risk-Seeking or Risk-Managed?


For many people, the Nanny State, class action suits and a culture where “somebody ought to pay – and it isn’t me” imply that we as a society are risk-averse.  Simultaneously, society may well be risk-seeking.  We are not risk-averse in our growing passions for gap years in remote backwaters, for gambling, or for high debt and good living.  While we ridicule Carol Midgley’s ‘cotton-wool kids’, Sandra Wint, writing for the Royal Society of Arts’ Risk Commission, inconveniently points out, “Statistics of injuries and fatalities resulting from accidents in some extreme sports indicate that in spite of the possibility of injury or death people are participating in increasing numbers.” [Wint, 2006, page 21]  Our risk-averse cycle clashes with our risk-seeking cycle over issues of safety and freedom.


I would love a calculus of risk combined with “conservation of risk” equations such that the total amount of risk-selection in society had to balance over the population and over time.  Sadly, that’s not the case.  People want the up-side of reward for the risks they believe they face, but quite naturally expect to be able to slough off the negative impacts if they materialise.  We have a spiral of lower and lower responsibility for adverse events.  We have an easy-debt society where open markets make it simple to take loans, so we have a spiral of higher and higher debt.  But all debt involves responsibility.  Yet lenders are securitising debt like mad so that they won’t be left holding the hot potato of default.

We gab about how the growth in capital markets, new technology and new risk products increases our already excellent placement of risk with those who can best bear and gain from it under a post-modern, hyper-safe financial system.  But commercial difficulties arise from excessive risk-seeking, for example one-way mortgage renewals, more bankruptcies with less stigma, and easier technology access to finance, as well as the headline cases of credit cards for pets.  We load up on housing price risk and credit default swaps in the short-term, but still feel behind the Joneses in the long-term.  Our diminished responsibility makes us ripe for more bubbles, along with deeper pain and loss for those few who care.  In the long-term, our risk-seeking society clashes with our need for hard savings and investment returns, affecting our long-term losses and security.


Prospecting For Risk


So, how might societal risk-aversion hurt you and me?  There is the obvious case that our own risk-aversion can hurt us when we refuse to invest in risky projects and thus achieve lower long-term returns to pay our pensions.  Less obvious is that inter-generational transfers can backfire – our pensions may not be secure with our children, while their environmental future might not be safe with us.  At a simpler level, let’s look at how individual risk affects overall supply and demand for risk.  Imagine a person whose risks range from neutral (0.00) to death (-100) or ecstasy (+100).  Most people are willing to trade a little potential ecstasy for a bit of safety.  This leads to ‘lopping off’ risk on the left-hand side of the diagram.  But the problem is that no risk is ‘lopped off’ without a cost.  The cost moves the likely outcome to the right.  Thus the person, and society as a whole, can be protected against severe outcomes, but wind up feeling behind on average.
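The ‘lopping off’ argument can be made concrete with a small simulation. In the sketch below, the outcome distribution, the floor and the insurer’s loading are all assumptions chosen for illustration: outcomes on the lecture’s -100 to +100 scale are floored at -30 in exchange for a premium, so severe outcomes vanish, but the insurer’s margin pushes the average outcome down:

```python
import random

random.seed(42)

# Simulated life outcomes on the -100 (death) to +100 (ecstasy) scale
outcomes = [max(-100.0, min(100.0, random.gauss(5, 30))) for _ in range(100_000)]

FLOOR = -30.0    # severe outcomes 'lopped off' the left-hand side
LOADING = 2.0    # insurer's margin over the actuarially fair premium (assumed)

# Fair premium: expected payout needed to lift outcomes up to the floor
fair_premium = sum(max(0.0, FLOOR - x) for x in outcomes) / len(outcomes)
premium = fair_premium + LOADING

# Protected life: worst case limited to the floor, but everyone pays the premium
protected = [max(x, FLOOR) - premium for x in outcomes]

mean_raw = sum(outcomes) / len(outcomes)
mean_protected = sum(protected) / len(protected)
```

The worst case improves from around -100 to the floor minus the premium, yet the mean outcome falls by exactly the loading: protection against severe outcomes, paid for by feeling slightly behind on average.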


This leads to a further problem of framing.  It’s rare that I ask you to remember a previous lecture, but I must go back to a lecture last year, “Why Do People Play The Lottery? Make Up Your Mind!”, where we covered Prospect Theory.  In that lecture I emphasised the importance of framing.  People who feel that they are ahead take risk-averse decisions, while people who are behind make risk-seeking decisions.  I used the framework to explain why people might play the lottery – they take silly risks because they feel behind.

We can start to unify risk-averse and risk-seeking behaviour by recourse to Prospect Theory.  Because society takes on more and more of the responsibility for adverse risk, the chances of seriously adverse outcomes are greatly reduced.  But the average outcome for individuals is worse, because of the cost of risk-avoidance.  Thus, the average person feels that they are behind. 


According to Prospect Theory, when people feel that they are behind, they become risk-seeking.  On the other hand, when contemplating holes in the safety nets of society, the average person feels ahead.  He or she supports sealing those holes so others can’t fall through.  He or she is risk-averse.  The problem is the cost.  As society becomes more risk-averse on fundamentals, it fuels its risk-seeking individuals.  People increasingly play lottery-type games, pursue high-risk advancement strategies and chase long-shot careers.  As Bob Giffords pointed out to me, “In a world dominated by risk management, it is rational for me to gamble, because the marginal cost for me of managing the risk is greater than the marginal benefit of it making a difference.”  We scream that “somebody ought to do something”, while meanwhile taking risks to get ahead in the hope that society covers our backsides.
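This risk-seeking in the loss domain drops straight out of Prospect Theory’s value function. The sketch below uses Kahneman and Tversky’s published parameter estimates (α = β = 0.88, λ = 2.25) to compare a sure loss with a 50/50 gamble of the same expected value:

```python
# Kahneman & Tversky's value function: concave for gains, convex and
# steeper for losses (losses loom larger than gains).
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of a gain or loss of x, relative to a reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

# A person who feels behind faces: a sure loss of 50, or a 50/50 gamble
# between losing 100 and losing nothing. Same expected monetary value.
sure_loss = value(-50)
gamble = 0.5 * value(-100) + 0.5 * value(0)

print(f"sure loss: {sure_loss:.1f}, gamble: {gamble:.1f}")
```

The sure loss values about -70 while the gamble values only about -65, so the person who feels behind prefers the gamble: exactly the lottery-playing, long-shot behaviour described above.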


So we can fall back on a classic supply and demand curve contrasting price and quantity.  Our price for risk is increasing as we become risk-averse, while our quantity of risk is increasing due to media supply. 


Our public risk aversion and our private risk propensity collide.  We wind up with higher cost and more risks.  Last year, at the Sir Thomas Gresham Lecture in the Docklands, Werner Seifert warned of the dangers of focusing unduly on risk reduction in financial markets:

“Capital markets need political support, yet their very functioning undermines the support.  As a result, markets are fragile institutions, charting a narrow path between overwhelming government interference, and too little government support. The greatest danger for a democratic capital market is not that it will lapse into socialism, but that it will revert to the relationship system, suppressing competition under the excuse of reducing risk. When Friedrich von Hayek described how capitalism helps to safeguard freedom, he emphasised that markets produce spontaneous order.  They solve the planning and coordination problem by delegating decisions.”


Regulating Risk Regulation


Rick Haythornthwaite also made a number of telling points in his speech about more than just advertising:

•        “the best way to stop bad regulation - and moles - is to dig down to the root of the problem”;

•        “the Prime Minister ... argued that we are in danger of developing a wholly disproportionate attitude to the risks we should expect to run as part of daily life”;

•        “a regulatory response by the state to every risk in society erodes personal responsibility”;

•        “capitalism is a system that functions on trust.”

Power [2004] argues that by turning all governance into risk management we emphasise risk rather than reward.  “Individuals, organisations and societies have no choice but to organise in the face of uncertainty, to act ‘as if’ they know the risks they face.” [Power, 2004, page 59]  All enterprise incurs risk, but enterprise is driven by reward, not risk.  If we focus excessively on risk, we never innovate.  If we over-penalise failure, we get the shameful avoidance of failure rather than the courageous search for truth, in what Power calls the “small print” society.  Along the way, we destroy personal responsibility.

It is worth noting that government, as well as capitalism, is a significant part of the problem of trust.  Perversely, over-regulation may strike one as the least bad mechanism for enforcement if he or she doesn’t pick up the direct costs.  Consider the case where I don’t trust the legal system for enforcement.  For example, I run a restaurant and want to take a competing but not-up-to-standard fellow restaurant to court for breaking food safety laws.  I know I’ll fail in court because the legal system is grossly inefficient and expensive, thus paying taxes for a lot of restaurant inspectors may be a more expensive but acceptable substitute.  Another example of futility: can I successfully sue a solicitor or barrister for encouraging me to spend money and time on a poor case?  Lots of government policy is based on the false assumption that people trust government over other organisations.  I’m not sure I trust somebody who removes nearly half of everything I gain from me and my family and friends for their “higher purposes”, but I can see how government officials might believe I ‘trust’ them if they look at opinion surveys of faith in inspectors contrasted with people’s trust in the legal system.

People talk of over-protection and under-protection.  Yet I’d like to live a bit of this life I’m being so protective about.  But what is an optimal level of protection?  How can it be achieved?  From the outset, and I think few would disagree, it is neither possible nor desirable to eliminate all risk from our lives.  The law of diminishing returns suggests that the smaller and more remote a risk, the greater is the cost of eradicating it.  Regulation is not a free good, although governments often behave as though it is. 

Boyfield notes [Boyfield, 2006, page 7], “So far, regulation has certainly proved one of the boom industries of the twenty-first century.  Yet regulation is not a free good: it has significant costs and implications that are often hidden from the immediate view of both customers and the suppliers of the goods and services they buy.”  Regulators recognise that there is a point beyond which further risk reduction is impractical or unacceptable.  But how do they determine where that point lies?   Whenever I hear “not a free good” I think of markets.  Whenever I hear optimal and adjusting (e.g. perceptions within society changing speed limit acceptability) I think of pricing. 

Regulators struggle with a crude mechanism.  Things are either allowed, or they’re not.  It seems to me that much considered regulation gropes towards developing various settings on enforcement, and also on interpretation.  Suspension of activity is a two-state model, on or off, leading to a lot of gaming, e.g. people within an industry on one side using the threat of suspending all activity, and the threat of the threat, etc., for advantage.  Thus, why two levels?  Why not three, four...?  That brings you back to markets.  We have numerous good examples of risk being managed adjustably through markets, e.g. the price of your fire insurance is infinitely adjustable.  We require fire insurance certificates and let the market do the rest. 

There is too little understanding – and therefore too little faith – amongst civil servants and the public as to the extent to which you can ‘trust’ alternatives to regulation.  The UK government has successfully experimented with a few market-based approaches to risk, for instance Pool Re for terrorism reinsurance or the Clinical Negligence Scheme for Trusts in the NHS.  In both cases, complex variables are reduced to insurance-style charges that managers can understand and incorporate into financial decision-making frameworks.  Encouragingly, the public sector is exploring other combinations of markets, risk management and indemnification/insurance mechanisms to handle a number of public asset issues from fisheries to health care to polluted land.

Trust is increased by putting something on the line - perhaps only by ‘putting something on the line’.  We trust judges, in the UK, because they have a lot of social standing to lose if they fail to perform against expectations.  We can sort of trust builders if we make them post retention bonds.  We all understand deposits under forfeit.  I applaud more creative uses of quasi-insurance, as that means somebody is ‘putting something on the line’.  We should think about the possibility of restructuring regulatory markets so that regulators have more on the line than political risk.  Perhaps all auditors should ‘have something on the line’ when they award a certificate, with indemnity insurance to back up what they certify?

Of course, faced with these problems of societal trust, novel markets to manage risk may arise from within capitalism.  A number of observers, such as Robert J Shiller at Yale and yours truly, have looked at the rise of the online gambling markets with interest.  Faced with government gridlock on risks, we may find that people want to take risk management into their own hands through gambling-like markets.  Why not offer other people hedged bets on your risks - your car, your unemployment, your home value, your health?  We have micro-credit banks, so why not micro-risk firms?  There are already some experiments in direct credit lending by individuals to individuals through firms such as Zopa or Prosper.  Micro-risk markets are a natural next step, and an early indicator may be a new firm called HedgeStreet that provides an electronic marketplace for online investors to trade innovative financial instruments directly with each other.



“Of course I don’t believe in it!  But I understand that it brings you luck whether you believe in it or not.”

Niels Bohr (1885 - 1962) Danish physicist, when asked why he had a horseshoe on his wall; attributed.

We’ve had a fun tour of the wild and dangerous women of risk – Miss Information, Miss Calculation, Miss Placement and Miss Adventure – but we end up with Miss Trust.  So where do we go?  Adams believes that a world with no risk would have no uncertainty, but equally no freedom or individuality, and would result in no progress [Adams, 1995, page 19].  Adams defends “Bad Luck” [Adams, 2003], claiming that a society which can’t accept that accidents happen is destined to be governed by a culture of blame.  I sometimes wish our public debate were as much about putting risk back into society, regulation and business as it seems to be about destroying it.  In conclusion, risk is about perception, and bad perceptions imperil us.


Thank you.


Further Discussion

1.        Is it inevitable that debates about science, safety and risk are political?

2.        How might we encourage more informed debate without being submerged by the detail?

3.        What potential do micro-risk markets have for you?


Further Reading

1.        ADAMS, John, “In Defence of Bad Luck”, Spiked, 22 December 2003.

2.        ADAMS, John, Risk, UCL Press, 1995.

3.        BANDURA, A, Self-Efficacy: The Exercise Of Control, W H Freeman and Company, 1997.

4.        BECK, Ulrich, Risk Society: Towards A New Modernity, Sage Publications, 1992 (originally published as Risikogesellschaft: Auf dem Weg in eine Andere Moderne, Suhrkamp Verlag, 1986).

5.        Better Regulation Commission, “Risk, Responsibility and Regulation - Whose Risk Is It Anyway?”, 2006 & UK government response

6.        BOYFIELD, Keith, “Editorial: Better Regulation Without The State”, Economic Affairs, Institute of Economic Affairs, Volume 26, Number 2, June 2006, pages 2-8

7.        CRICHTON, Michael, State of Fear, Harper Collins, 2005.

8.        DOUGLAS, Mary, How Institutions Think, Routledge & Kegan Paul, 1986.

9.        DOUGLAS, Mary, Risk Acceptability According to the Social Sciences, Routledge and Kegan Paul, 1985.

10.     DOUGLAS, Mary and WILDAVSKY, Aaron, Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers, University of California Press, 1982 (1983 ed).

11.     FENTON-O’CREEVY, Mark, NICHOLSON, Nigel, SOANE, Emma and WILLMAN, Paul, “Trading On Illusions: Unrealistic Perceptions Of Control And Trading Performance”, Journal of Occupational and Organisational Psychology, Volume 76, Number 1, March 2003, pages 53-68.

12.     GOLLIER, Christian, JULLIEN, Bruno and TREICH, Nicolas, “Scientific Progress and Irreversibility: An Economic Interpretation of the ‘Precautionary Principle’”, Journal of Public Economics, Volume 75, Number 2, 2000, pages 229-253.

13.     HOFSTEDE, G., “Motivation, Leadership and Organization: Do American Theories Apply Abroad?”, Organizational Dynamics, Summer 1980, 42-63.

14.     HOFSTEDE, G., Culture’s Consequences, Sage Publications, 1980.

15.     JANEWAY, William H, “Risk Versus Uncertainty: Frank Knight’s ‘Brute’ Facts of Economic Life”, The Privatization of Risk, 7 June 2006.

16.     JUDGE, Sir Paul, “Risk and Enterprise” (The Inaugural Lecture of the 252nd Session of the Royal Society for the Encouragement of Arts, Manufactures & Commerce), 26 September 2005.

17.     KNIGHT, Frank H, Risk, Uncertainty and Profit, Beard Books, 2002 (first published 1921).

18.     LOUISOT, Jean-Paul, “Managing Intangible Asset Risks: Reputation and Strategic Redeployment Planning”, Risk Management, Volume 6, Number 3, 2004, pages 35-50.

19.     MAINELLI, Michael and DIBB, Sam, “Betting on the Future: Online Gambling Goes Mainstream Financial”, Centre for the Study of Financial Innovation, Number 68 December 2004, ISBN: 0-9545208-5-8, 34 pages

20.     MILL, John Stuart, On Liberty, 1859.

21.     MOODY-STUART, Sir Mark, “The Politics of Risk” (RSA Institute of Management Annual Lecture), 13 May 2002

22.     NIVEN, Larry, Ringworld, Del Rey, 1970.

23.     OPEN SYSTEMS GROUP, Systems Behaviour, Harper and Row, 3rd edn, 1981.

24.     POWER, Michael, “The Risk Management Of Everything: Rethinking The Politics Of Uncertainty”, Demos, 2004

25.     The Royal Society, “Social Science Insights For Risk Assessment” (findings of a workshop held by the Royal Society and the Food Standards Agency on 30 September 2005).

26.     SEIFERT, Werner, “Productivity and Capital Markets: Globalization Meets Parish-Pump Politics”, Sir Thomas Gresham Lecture in the Docklands, Gresham College, 21 September 2006

27.     STACEY, Ralph D, Strategic Management and Organisational Dynamics, Pitman Publishing, 1993.

28.     STERN, Nicholas, The Economics Of Climate Change, HM Treasury, October 2006

29.     STIER, Jeff, “Going In Circles, Precautionary Style”, American Council on Science and Health, 14 February 2005

30.     VAN ASSELT, Marjolein and VOS, Ellen, “The Precautionary Principle and the Uncertainty Paradox”, Journal of Risk Research, Volume 9, Number 4, June 2006, pages 313-336.

31.     WINT, Sandra M E, “An Overview of Risk”, The Royal Society for the Encouragement of Arts, Manufactures and Commerce – Risk Commission, 2004/2006(update)

32.     World Economic Forum, “Global Risks 2006”




My thanks for help with this lecture go to Lord Jamie Lindsay, Bob Giffords, Mark Schaffer and Robert Muetzelfeldt for sparking some thoughts, Liz Bailey for newspaper clippings, Adrian Berendt for getting me to think about proportionality, and some great clients at PricewaterhouseCoopers - Sophie von der Brelie-Labmin, Christopher Michaelson and Alison Thomas – for encouraging me to dig deeper.



©Professor Michael Mainelli, Gresham College, 19 March 2007
