Trust, Trustworthiness and Audit
Baroness Onora O'Neill CBE
You can wonder why a philosopher got into this topic, but I actually think it is a pretty interesting area. I said at the breakfast table this morning that I was speaking here today and that the question to be addressed was 'what makes a good auditor?', and I was met with a barrage of flippant suggestions: a manager's answer - a good auditor comes promptly, signs anything, and goes away; a shareholder's answer - a good auditor is fast, cheap, and never fails to detect fraud, except perhaps when it's profitable; a professional answer - a good auditor comments only where required to, and is non-committal about everything else, however important it may be. And here am I, trying to say something less flippant about auditing. I shall have at the centre of my remarks, as an example, financial audit, because probably most of us find that crossing our paths more frequently than the other sorts. But I think the remarks are generalisable to the other varieties of auditing.
A common answer to the question 'What makes a good audit?', or a good auditor, might be that we do all this for the sake of trust: a good auditor is one who secures or increases trust, and we want, or in many circumstances require, audit because we want and need trust. Well, I think we need to show how and why audit may be linked to trust, and whether and when it should be linked; it is not automatically so linked. There has been a lot of audit over the past few years which has now generated a lot of mistrust. As I see it, audit should be linked to trust only because, if it is well done, it can support and reveal trustworthiness and untrustworthiness, and it can thereby give reasons for giving or refusing our trust. Trust cannot be obtained by direct methods because it is given or withheld by the other party in a transaction. Trust is only valuable to those who place it if it is well-placed, although ill-placed trust can of course be quite valuable to those in the business of duping others. As we now realise, misplaced trust has been given on a colossal scale that has had grave systemic effects. It has proved damaging for millions of lives, businesses and families. Forms of audit that encourage trust in the untrustworthy are, it seems to me, uncontroversially, not worth having.
So if we are to understand what audit is for and what makes a good auditor, I think we need to think as much about trustworthiness as about trust, and yet - and I regard this as a major curiosity - most current public debate says an enormous amount about trust, which is, in many respects, not linked to a debate about trustworthiness. This has damaging effects, and I am first going to comment on some of the limitations I see of those debates that talk endlessly about trust, and then on some of the difficulties of judging trustworthiness, and finally, on the part that systems of accountability, among them audit, could play in supporting trustworthiness and thereby indirectly assisting the intelligent placing and refusal of trust.
Let me look at the current debates for a moment. Some of the questions we ask about trust in public life are just empirical questions and some, perhaps the more interesting ones, are more practical. It is very easy, given contemporary debate, to be mesmerised by the more striking answers to very simple empirical questions about trust, and to overlook the practical questions, which certainly matter more, especially in institutional life. So I am ultimately going to concentrate on the practical rather than the empirical, but I will first begin with some brief comments on the two central empirical questions people ask. They are, simply: who is trusted; and who is trustworthy?
Generally speaking, there is more discussion of the first: who is trusted? Pollsters, as we all know, ask who is trusted and particularly how far certain institutions, office holders, and professionals, politicians, journalists, managers, bankers, are trusted, and in particular, how far they are trusted to tell the truth. These polls have become part of institutional and professional life in contemporary societies. Their findings are very well publicised and very widely savoured.
In the UK for example, in 2007, an Ipsos MORI poll on trust in the professions found that 90% of respondents said they generally trusted doctors to tell the truth but only 18% said that they trusted journalists to do so. Other office holders got intermediate scores. The politicians were close to the journalists, but not as bad. The professors were close to the doctors, but not quite as good. I am not sure whether pollsters have looked at how much auditors are trusted. If it has not happened yet, it is sure to happen; it just needs somebody with a moderate amount of money to commission such a poll.
The pollsters are asking empirical questions about attitudes; attitudes generally towards typical holders of professional and institutional roles and only occasionally about attitudes to individuals. The questions about typical office holders and professionals seek an undifferentiated response - you have to say that you trust journalists at level three, five or whatever - but respondents are of course likely to have very varied experiences of any given type of institutional office holder. They will trust some people more than others, and in different ways. The attitudes that people claim to hold about types of office holder or types of institution are not the same as the attitudes they hold towards, or the judgements they make about, particular office holders or institutions of those types. I might have little trust in bank managers, but high trust in a particular bank manager whom I judge reliable or effective. You may claim to have high trust in doctors or teachers, but mistrust one whom you know has provided sub-standard care or teaching. This differentiation is entirely reasonable. Judgements about individual office holders and professionals do not need to be the same as undifferentiated attitudes towards the types of office holder and professional, and any penetrating explanation of trust would need to investigate the sources of that differentiation.
A typical reason is that someone knows or believes something about a particular case that differentiates it from others of the same type, so it is unlikely that a focus on attitudes to types of cases, which brackets all differentiation, is going to reveal very much about the matters that are most relevant to the differentiated, intelligent placing of trust. Attitudes look at uniform responses; judgements differentiate cases.
Of course, judgements will often be poor. They may fail to discriminate or take account of evidence. Attitudes are not exactly poor, but, because they are allowed to float free of evidence, they are not invalidated when they fail to track it. That is why attitudes are generally pretty easy to investigate, but the ease has a cost: neither the polls nor the more deliberative empirical investigations of attitudes give us useful evidence about trustworthiness. People often hold suspicious attitudes to the trustworthy and, as we know, credulous attitudes to the untrustworthy, both to their own detriment. We have only to think of the huge amount of misplaced trust in banks, not only by ignorant individuals but by supposedly canny professionals who entrusted billions, for example, to the aptly named Mr Madoff, who then made off with the money. Or the misplaced mistrust in the MMR vaccine, which has led some parents, particularly in Germany and in the UK, to risk measles for their own and others' children, in the mistaken belief that having the vaccination is more risky than having the disease. So, since misplaced trust and misplaced mistrust are very common, I think we need a quite different sort of inquiry if we are to address the practical questions about trust.
Let us now turn to the second empirical question: who is trustworthy? If we could readily lay our hands on robust empirical evidence showing who is trustworthy in which respects, this would settle whom it would be intelligent to trust and mistrust, in which respects, but unfortunately, this empirical question is far harder to answer than an empirical question about attitudes of trust and mistrust, for a reason that goes very deep: the untrustworthy have to try to mask their untrustworthiness. It is not an optional matter. Untrustworthy claims or commitments that are readily detectable inevitably fail.
It is easy to assume that we know much more about trustworthiness and untrustworthiness than seems to be the case. Commentators often make very confident claims about trustworthiness and untrustworthiness, which I think are typically based on pretty inadequate evidence. Auditors are probably very well-placed to offer examples of such credulity. However, in public debate, the examples of untrustworthiness most often cited are used in lazy ways. Commentators will point to compelling examples of untrustworthy action by office holders or professionals - the auditors of Enron, the social workers of Haringey, for example - and they suggest that all professionals of those types are similarly untrustworthy. That is a typical newspaper ploy: they highlight untrustworthiness. But it is noticeable that these commentators do not often point to compelling examples of trustworthiness. I suspect this is not because they think there are not any, but because they tacitly assume that trustworthiness is normal, unremarkable and not worth recording, let alone applauding. Untrustworthiness always gets the limelight and attention. Just as good news is no news, trustworthy performance is nothing to write home about. Maybe we should make more of it.
In any case, examples of trustworthy or untrustworthy action by particular agents could not, in principle, demonstrate whether those individuals are generally trustworthy or untrustworthy, or whether others holding similar positions were generally trustworthy or untrustworthy, let alone whether untrustworthiness, in those or related matters, is growing or shrinking. I think it is obvious that we simply know much less about trustworthiness than is often assumed.
So, the reality is pretty unfortunate. General claims about attitudes of trust to types of office holder or institution are easy to establish, but claims about levels of trustworthiness are rather hard to establish; yet it is the latter, and not the former, that we need for practical purposes. It is no good trying to infer levels of trustworthiness from levels of trust, though, particularly in marketing, you often see people trying to do this: consider the type of claim that goes something like, 50,000 consumers of these biscuits cannot be wrong. Well, to put it bluntly, they jolly well can!
Trusting is not like following or ignoring fashion, where looking at what others are wearing gives us the relevant evidence if we want to dress fashionably, or for that matter unfashionably. Looking at pollsters' evidence about others' trust or mistrust does not show where it is sensible to place or refuse trust. Their trust and their mistrust may be pretty poorly placed. When we place or refuse trust, we need some evidence for judging trustworthiness, and cannot replace it by evidence of third parties' attitudes of trust and mistrust. Our practical aim is, after all, to place trust intelligently, to place it well, not to place it, lemming-like, where others place it.
Unfortunately, our behavioural economists report a lot of evidence of herding and imitation in economic behaviour, despite a very widespread awareness that copying others may be counter-productive, drive up the price you do not want to go up, and may fuel not only booms but busts. So I think we have a problem down that road, but I won't explore it.
From a practical point of view, I think what matters are not typical attitudes to others or to institutions, but, more narrowly, placing trust in the trustworthy, refusing it to the untrustworthy. Judging how to do this is often hard, especially where the evidence is either meagre or hyper-complex or misleading - and we have all three cases. It is going to be particularly hard if we set the practical aim very high and claim that we want some judgement of the overall trustworthiness of types of office holder or institution. Luckily, I do not think we usually need that. We generally need to judge the trustworthiness of particular office holders or particular institutions, and usually, we need to judge more specifically whether certain claims made by them or certain commitments made by them are trustworthy.
To take it down to basics with an ordinary-life example, we may need to judge whether A's claim about an accident in which his car was damaged is true, and whether B's promise to pay for the damage to A's car is reliable. That is the sort of concrete situation people have to deal with.
In judging whether to trust others' claims or commitments, we typically need, and can often find, at least some relevant evidence of their honesty, reliability, competence and motivation. But of course, these sorts of cases, where I am judging whether the newsagent will be likely to have a copy of my morning paper, or whether I can get across the lights without somebody bumping into me, are the easy ones. We manage in daily life, but it is much harder in public and professional life, because we are dealing with others whom we do not know, with institutions and practices that are opaque to us, and with expertise that is arcane. It is hard to make judgements of trustworthiness because of this complexity, and it is not feasible to rely on repeated interaction, or interaction in very similar situations, as we do every day. So in complex societies, we need to find or construct indirect ways of judging honesty, reliability, competence and motivation, as the basis for judging trustworthiness and then, finally, placing and refusing trust intelligently.
It is often possible. Intelligent shoppers may not know much about computers, but if they can judge the product's specification, the brand name, the guarantee, they may feel confident enough to put their money into the purchase. Intelligent savers may refuse to place much trust in retail banks, if they make too many mistakes, market incomprehensible savings products, and cannot answer sensible questions. I will say nothing about investment banks.
These indirect methods of judging whom to trust, in which respects, will not be infallible, but they are not stupid, and they are of course what we have to rely on. Audit, we may note, is a special case of this approach. Systems of accountability seek indirect evidence about the performance of those who are held to account. They provide second-order information about the adequacy or inadequacy of performance of first-order tasks or obligations. This evidence has two uses: it can be used by those who hold to account - managers, boards, shareholders - to improve and maintain levels of performance; and it can be used by the wider public, who need to judge trustworthiness. Herein lie some problems.
At their best, approaches to accountability give us useful indirect evidence for judging trustworthiness in institutional life. Systems of accountability come in many kinds - I am sure more than 57 varieties. A short list would include: principles for individual conduct in public life, like the Nolan Principles; rules on registering interests, on declaring conflicts of interest; requirements for professional qualifications and CPD; professional codes; setting targets and measuring performance against them; regulatory supervision; requirements for various types of transparency; requirements of consultation; and of course requirements for institutional reporting and audit. How useful are these forms of accountability as evidence of trustworthiness, how helpful? What makes some better than others?
I think accountability works - and I have to say I think there are quite a lot of examples where there is an elaborate system and it does not work - in two main ways. Most obviously, systems of accountability create incentives for trustworthy action by identifying and sanctioning untrustworthy action. The managers of institutions know that if their auditors qualify the accounts, unwanted consequences will follow, or if they do poorly in an environmental audit, there will be costs and perhaps fines. Second, systems of accountability can provide evidence that allows the less expert to place and refuse trust with greater discrimination. Without a clean audit, investors may not be willing to entrust their savings to a company. Without an adequate Ofsted report, parents may find that they mistrust certain schools, and schools will have difficulty filling their places.
This dual use of systems of accountability is quite pervasive in institutional and professional life, and I think it throws up two practical questions. First, does a given approach to accountability in fact incentivise or secure more trustworthy performance? Second, does it in fact help individuals to place and refuse trust intelligently? Unfortunately, I believe some systems of accountability fail to incentivise trustworthy performance, some do not help the intelligent placing of trust, and some fail in both respects.
Because it is rather hard to judge levels of trustworthiness, it can of course be hard to tell whether various approaches to holding professionals and institutions to account in fact reduce untrustworthiness. Some rather unintelligent, but quite popular, approaches to accountability probably do not reduce, and at their worst, perhaps increase untrustworthiness, most obviously because they may create perverse incentives. Others may require office holders and professionals to follow procedures that obstruct adequate performance of their primary tasks. Consider the report that was released this morning, on child protection services. Other approaches to accountability probably are helpful in increasing trustworthiness. I do not detect much of a move to abolish, as opposed to reform, financial audit, or to remove requirements to declare conflicts of interest, though there is very understandable scepticism about specific ways in which versions of these forms of accountability can work. I know that the accountancy profession and others have had deep discussions about conflict of interest and audit, in the wake of Enron. I believe that empirical research on the effectiveness or ineffectiveness of actual or proposed systems of accountability in securing trustworthy performance would be far more useful to us than surveys of reported levels of public trust, however obtained. It is of course also far more difficult to do that research. There is no snapshot way of showing how effectively a specific approach to accountability contributes to trustworthy performance, and there are lots of cautionary tales about ways in which unintelligent and hyper-complex systems of accountability damage trustworthy performance. 
If you want an example, consider Warwick Mansell's book 'Education by Numbers' on the ways in which the education of children has been damaged by holding schools, children and teachers to account with an assessment system that creates such powerful incentives that it distorts the very education it purports to measure.
So unfortunately, I think we have a lot of current approaches that do much less for trustworthiness than intended. That is bad enough, but unfortunately, many current systems of accountability, or approaches to accountability, do even less for trust than for trustworthiness. The information they provide, even if it could be useful to those who are being held to account, may fail to offer a user-friendly basis for others to judge whether to place or refuse trust. Institutional reports and accounts are often, we should confess, turgid and complex reading. Simplified selections from the information they assemble, intended to enable the wider public to work things out - for example, school league tables, or ratings of public services - are of course far more comprehensible, but only at the cost of making assumptions that dramatically limit their trustworthiness.
For example, ranking schools by counting average exam passes per pupil above a certain level - A grades, or grades A to C - assumes that exam marks in all subjects are equivalent. This goes well beyond the evidence and is widely doubted. Similarly, tick-box approaches to measuring performance often detract from, rather than ensure, competent, let alone good, performance. People are watching the boxes and not the patient.
I believe that one of the most startling facts that we can find, if we just look across the literature in social science and its reflection in the media, is that we have spent twenty or thirty years trying to raise standards of accountability in order to improve trustworthiness and thereby trust, and the result has been declining trust. If one saw that sort of relationship elsewhere, one would assume we had got the wrong remedy. But that is what has happened: across the very period in which accountability has been strengthened, the polls - and it is not just in the UK - generally report declining rather than rising public trust in those who supposedly are so rigorously being held to account. This tendency is so striking that I am astonished that there is so little systematic investigation of what has happened. A common response to the phenomenon remains to recommend yet a larger dose of accountability. I think we need to ask what has gone wrong. Why have these increasingly elaborate systems of accountability failed to give us an adequate indirect basis for placing and refusing trust?
I take it that, at least in some cases, there is something of a straightforward answer: there has been malpractice; there has been corruption; there has been incompetence. But in other cases, the problem has not been failure to secure trustworthiness, or not visibly so, but reliance on systems of accountability that, even if they improve trustworthiness, do not support intelligent placing and refusal of trust. We can only place trust intelligently if we can judge the evidence about the honesty, reliability, competence and motivation of those in whom we place it. Systems of accountability that support trustworthiness, but fail to provide that useable evidence, may undermine rather than improve trust.
That failure to provide useful assessable evidence to non-experts is very common now in professional and institutional life. The most obvious examples probably are not in the field of financial audit, where the information is indeed often intended mainly for those with some degree of professional competency, but elsewhere, patently false assumptions are constantly made about people's capacity to handle arcane and difficult information and to use it as a basis for placing or refusing trust.
Let me give you a medical example, simply because I have done some work on conceptions of informed consent, by which doctors and medical researchers are held to account. Doctors and medical researchers inflict very complex paperwork on patients - they are required to do so - and on research subjects. These include so-called informed consent forms that are very often incomprehensible to those who are supposedly giving their informed consent. Now of course, they may very well get signatures on these forms, and you probably all have seen, in the American medical sitcoms, the situation where one doctor yells across the emergency room, 'Doctor, have you consented her?' But this already gives the game away. Everybody knows it is not the patient who is doing the consenting, because the patient cannot understand all this stuff! They may get the signature and that will preserve them from liability, but they may be much less trusted than if they had offered a simplified, but honest, account of what is going to be done or could be done, which will not, however, meet the standards now demanded in what are so inaccurately called informed consent procedures. These doctors and researchers are, in all likelihood, trustworthy, but the systems of accountability to which they are required to work require them to pretend to communicate complex matters likely to be incomprehensible to patients. That does not increase trust.
Politicians who point to rising exam scores as evidence of rising school standards are not likely to be trusted by those who cannot judge the arcane processes by which those exam marks were determined, but can judge that the exams are easier than they would expect and that the pupils, or their new employees, have learned less than they would expect.
I know I need not say it: those who securitised risky assets in ways that obscured risk were trading on the fact that their products would be unintelligible to many others. The irony, of course, is that they eventually proved unintelligible even to those within the banking system.
So where could we move? I think we could have more intelligent accountability that would incentivise and support trustworthy action and provide the non-expert with intelligible evidence. This would provide a much more useful basis for the intelligent placing and refusal of trust. In short, I do not believe that these hyper-complex systems of accountability that create perverse incentives and distract those who have to perform the primary tasks from those tasks have to be what we settle for.
I am not going into the ways in which it could be improved. It would be different in different domains, but I think that we have to admit that systems of accountability that provide either poor or unintelligible evidence of trustworthiness do not earn their keep, and that really matters because these are rather expensive beasts. They impose high burdens not only on those who are held to account, but on the wider public to whom they offer an inadequate basis for judging where to place or refuse trust. But if systems of accountability, audit among them, are to be better, we need to be very clear about the purposes for which we are introducing them and the standards that they need to meet. I do not think that we can get away from the fact that there are two purposes: incentivising the institutions and professionals; and informing the wider public. That said, you cannot just separate those two purposes into separate streams, and say audit for the professionals, PR for the punters. The devil is always in the detail.
I hope that I have offered reasons for thinking that the empirical study of trust is largely a distraction, that the empirical study of trustworthiness is much harder than people think, that the practical task of holding to account is much more heterogeneous than sometimes is supposed, and that evidence useful for holding to account may not be the same evidence as is useful for placing and refusing trust.
© Professor Baroness Onora O'Neill, 2009