The Right Stuff: How Do We Make Moral Choices?


In this lecture, I will present and review developments in moral psychology over the last two decades. Specifically, I will explore the neuroscience of moral decision-making, and the implications of this research for ethical issues such as moral responsibility. I will particularly focus on the capacity to make unwise decisions that are unpopular with others; and the question of whether values can be taken out of the research into moral choice-making. I will conclude by asking further questions about the implications of this work for training in medical ethics. 


8 February 2017

The Right Stuff:
How Do We Make Moral Choices?

Professor Gwen Adshead


In these talks, the “Right Stuff” refers to the capacity to make choices that we and others think of as 'good'; and in particular, how this might apply to decision making in medical practice. I want to focus on moral decision making in medical practice because the issues of health and the loss of health affect us all, across our life span. In the UK we find ourselves particularly perplexed right now about what constitutes good medical care, and how it can and should be best provided. Medical knowledge changes swiftly, and technological change makes new and expensive investigations and treatments possible that were only theoretical a few years before. Lifespans are now extended, but not necessarily quality of life; and the debates about end-of-life decisions show us how much the notion of a ‘good life’ is bound up with health and the absence of disease, illness and suffering.

So in this first talk, I want to discuss moral decision making in general, with particular reference to medical practice. I am going to be drawing on a vast, historical literature in moral philosophy; and a slightly more recent body of work about ethics in medical care, going back over the last 50 years. I am also going to refer to some studies from the last twenty years which look at what our brains do when we make moral decisions. I’m going to end with some general conclusions about moral decision making in medicine, and what it says about moral decision making generally; and pose some further questions, which I will address in the next two lectures.

I want to acknowledge here the contributions of several important people who have helped me think about these lectures; my colleague Professor Deborah Bowman MBE, who is professor of medical ethics at St George’s hospital; Dr Brian Robinson, psychiatrist and teacher of medical ethics; Professors Bill Fulford and Nigel Eastman and Mr Dan Ferris, student of ethics and theology at Heythrop College.

Moral decision making in medicine

The practice of medicine is relational as well as technical: it involves a relationship between a person who is seeking help, and who may be vulnerable, and a person who has skills and knowledge that can help. Relationships that involve disparities of power, knowledge and vulnerability are relationships that require some degree of external oversight and regulation; or at least a third-party view that can reflect on conflicts of interest and limit the potential for exploitation. Traditionally in medicine this oversight has taken the form of codes of ethics; starting with the Hippocratic Corpus and similar documents, right up to the present day. Doctors get ethical advice about their practice from regulatory bodies like the General Medical Council and the Royal Colleges that define the standards of good medical practice; and doctors who fail to meet those standards may be found unfit to practice.

But what do we mean by a ‘moral’ decision in medicine? Conventionally, we are distinguishing what is clinically and technically possible from whether it is ‘right’ to do at all. If a person’s heart stops, then we can resuscitate them: but should we do so?

To answer that question, we do not expect to rely solely on numerical data and we do not anticipate getting an obvious and single answer. Instead we anticipate that there may be more than one answer to the question; and those answers may conflict with each other. We will want to get clinical information about the situation: why did the heart stop? Will restarting the heart make things better or worse for that person in medical terms? We will also want to know what the ill person thinks about the situation: did they anticipate this? Do they want to be resuscitated? And if we don’t know these things, we will then want to ask some questions about how best to make a complex decision without the voice of the person concerned.

Moral reasoning differs from those types of reasoning that are purely computational, logical or algorithmic. To answer ethical questions, we engage in a reflection and discussion process: we begin a discourse that uses the words ‘ought’ and ‘should’, as opposed to ‘can’ and ‘must’. If the patient’s heart has stopped because they are losing blood, then a doctor may say, ‘We must give the patient more blood or his heart will stop; and we can do so because the blood is here and we know it will work’. However, that statement does not answer the question ‘Should we resuscitate the patient if their heart stops?’ The doctor’s statement about what can be done is not irrelevant; but it is only a part of the reasoning process involved in deciding whether it is right to resuscitate. For example, if the patient had left instructions that they did not want to be resuscitated if their heart stopped during surgery; and there was good evidence that this was a true reflection of their view; then the facts of successful resuscitation practice would be irrelevant to what the doctors should do.

What I am distinguishing here are facts and values: a distinction developed by David Hume in the eighteenth century. Hume says that it is a fallacy to think that because things are a certain way (facts), then they ought to be that way (values). We cannot derive values from facts; but we do evaluate facts and make moral judgements about them, and this reasoning and reflection process is crucial to medical ethical decision making. My colleague Professor Bill Fulford (1989) has discussed in detail how values and different value perspectives are key to understanding moral dilemmas in medicine, and why people feel so differently about these types of choices. He argues that we evaluate facts in different ways, depending on our different personal and social perspectives; and these differences of perspective lead to clashes of value systems. His work has been particularly significant in my own field of psychiatry, where doctors are empowered by the state to deprive people of liberty and autonomy (which is of value to them) based on the ‘facts’ of perceived risk and medical diagnosis. Fulford says that facts can be interpreted through a lens of different value perspectives; and this process of interpretation needs to be understood and explored if we want to make the best ethical decisions that we can.

That last sentence brings us back again to an important issue in moral reasoning generally; which is how we think about words like ‘good’ or ‘right’ or ‘best’, in relation to a human decision. It is not a question of whether we want doctors to make ethical decisions on a daily basis; that is going to happen and is a fact about the world of medical practice. What we want is for them to make ‘good’ ethical decisions, or the ‘best possible’: we want to know that doctors have engaged in the type of thinking that takes account of values and personal lived experience. One of the commonest criticisms of doctors is still that they do not listen to the lived experience of ‘the patient’, or let the patient’s ‘voice’ be present or important. This criticism is acknowledged and respected, and medical practice is encouraged to be more patient centred than it was 50 years ago; this process is helped by doctors acknowledging that they will inevitably be patients at some point in their lives, and that medical qualification does not provide immunity from pain and suffering.

Nevertheless, there are still concerns about unethical practice in medicine; and occasions when doctors do not make the best ethical decisions; and even make decisions and take actions that are deemed to be ‘wrong’ and ‘bad’.  A few years ago, a medical team described how they resuscitated a woman whose heart had stopped, despite knowing that she did not want to be resuscitated. They described how they felt they had done the right thing at the time; but they could see that their decision brought about bad consequences for the woman, as well as disrespecting her wishes. Although very difficult to do, I think it is helpful if doctors can discuss their ‘bad’ ethical decisions in public because it allows a learning process to take place; just as happens after other types of serious incident or accident. At present, doctors who do ‘bad’ things are treated as ‘offenders’; and any exploration of what happened takes place in a process that is secret and seen as shameful. The issue of shame reflects an important aspect of medical ethics, which is the ‘good name’ of the profession and I will return to this later.

What is a ‘good’ decision in medical ethics?

For centuries, it was assumed that a ‘good’ decision ethically in medicine was the same as a good clinical decision i.e. if the doctor did what was medically indicated to benefit the patient, then this was the ethically ‘right’ thing to do. Although sometimes crudely summarised as ‘doctor knows best’, this approach to ethical dilemmas in medicine is (arguably) less about the doctor’s status, and more about the tensions between facts and values alluded to earlier. Medicine as a science utilises a method of study that focuses on consequences of actions, on causes and effects in nature. These facts about how bodies heal, or how a particular drug affects the body, are sometimes confused with medicine’s ethical imperative to bring about good consequences for the patient, or at least reduce harmful consequences. The facts of medicine can easily be elided with the values of medicine; or perhaps it would be better to say that we only have concerns when there is a kind of ‘friction’ between the facts and values.

Modern medical ethics developed out of an examination of medical authority after the Second World War; partly in response to the Nuremberg medical trials of doctors who had used medicine to torment and kill citizens; but also in sympathy with the general increase of attention to the human rights of ordinary people which had been denied or minimised: people of colour, women and people made vulnerable by illness. Legal cases reflected this change: in one famous case (Murray v McMurchy), while operating on a woman for another purpose, a surgeon tied her fallopian tubes without her consent because he foresaw that if she became pregnant, then this would be clinically dangerous for her; and it would also be dangerous for her to have two surgical procedures. She sued in negligence and won: it was not disputed that the surgeon was factually right, in clinical terms; but he had not considered that the patient’s own view of herself and her body were essential to the decision-making process. He had focussed on facts, and his technical view of the facts, and presumably his evaluation of the facts; but he seems to have assigned no value to the patient’s view, even though it was her body that was being operated on. It is hard not to think that sexist stereotypes were operative here; the case is an interesting reminder of the degree of social control that is often exerted over sexual reproduction by men whose experience is arguably markedly different from that of women.

Theories of medical ethics: consequences and principles

Ethical reasoning in medicine has drawn on a range of theories in moral philosophy. There is obviously a close relationship between medical ethics and the utilitarianism of Jeremy Bentham and JS Mill; namely that the doctor should act in such a way as to bring about the best medical consequences for the greatest number of people, or act in such a way as to minimise harmful consequences for the greatest number of people. This is sometimes summarised as the principle of beneficence and the maximisation of welfare for the patient; and the principle of non-maleficence and the minimisation and avoidance of doing harm (Beauchamp & Childress, 1979). Although it may seem unarguable that doctors should always do that which maximises their patient’s welfare, it is not always clear how the assessment of welfare is to be done, and from whose perspective (as with the case above). A sole focus on medical consequences leaves out any discussion or analysis of the intentions, hopes and motives of both doctor and patient. A well-rehearsed criticism of the utilitarian approach in medicine especially is that it does not help doctors and patients with how to weigh different consequences and over what time scale; nor what to do when doctors, patients and patients’ carers weigh anticipated consequences very differently. Raymond Tallis, a physician specialising in the care of older people, writes movingly of how he finds it hard to be accused of cruelty and ageism when he does not support treatments and interventions that will prolong an aged person’s life for a short time, but cause them more suffering during that time before their inevitable death (Tallis, 2004).

In 1979, Thomas Beauchamp and James Childress proposed a highly influential model of medical ethics which has become a basic starting point for discussing and teaching health care ethics. They proposed a set of principles that would address both consequences and duties in medicine. Doctors should respect the principles of beneficence and non-maleficence; but they should also have respect for the patient’s views and choices about their condition and treatment, and respect their autonomy over decisions that affect them directly. Doctors should also have respect for a principle of justice in health care; where justice implies fairness of access to treatment.

This model is known as the Four Principles approach, and is now often used as the basis of training in health care ethics for most groups of health care professionals. Like any philosophical model, it has strengths and weaknesses, but arguably its greatest value is that it has enabled the study of health care ethics to become active and important in the training and development of doctors. Doctors used to learn about ethical reasoning by watching their trainers and seniors in an apprentice model that was purely clinical; Beauchamp and Childress gave them a structure for thinking about their ethical decisions which was based on arguments from moral philosophy not clinical medicine.

So a ‘good’ ethical decision in medicine could be said to be one that takes account of the clinical consequences for the patient and embodies a duty to respect the views of the patient and the justice of the process. Gillon (1989) has argued that doctors also need to consider the scope of their duties to patients i.e. where the limit of their duties might be. This is an important issue when we come to consider those dilemmas that involve a conflict of interest between a duty to the patient and a duty to third parties and the wider society: most commonly in relation to personal information. I will return to this in my next lecture.

Autonomy and choice

Another interesting feature of the Four Principles model is that although in theory all principles have equal moral weight, in fact respect for patient autonomy is implicitly weighted more in moral terms; or at least primus inter pares (Gillon 2003). Such an analysis is consistent with legal and political arguments about the supremacy of human rights and dignity, and developments in the law on consent and personal ownership of identity. The problem with such an approach is that many medical conditions impair the capacity to be autonomous, even if only temporarily; giving rise to considerable debate as to how to make good quality ethical decisions in cases where people cannot express autonomous views. In many cases, it will be possible to wait until the patient has regained the capacity to be autonomous; in other cases, the patient may have left advance instructions as to how to be treated, or there are substitute decision makers (usually family members) who can make a choice for the patient. There is well articulated guidance for doctors about such cases, both from a legal and practical point of view: for example, the English Mental Capacity Act is a measure that allows for treatment of medical conditions in those cases where patients lack capacity.

The problem of lack of capacity deepens in relation to those conditions and situations where people have long term problems with autonomy: either because they are developing autonomy (children and young people); they have lost autonomy through physical and mental injury (the elderly and disabled); or where their autonomy fluctuates, due to psychological distress (that occurs in a wide variety of mental disorders). This last group also includes people whose distress occurs in the context of relationships; and raises another consideration here about the nature of autonomy itself. Autonomy is sometimes seen as a type of cognitive skill and ability that one either has or doesn’t have, like the ability to read. But several voices have argued that autonomy is not like this; that it is an expression of identity and self-experience that is organic and relational in nature. On this view, a person’s capacity to make important ethical decisions (such as having a termination of pregnancy or refusing treatment) changes naturally with time, within a spectrum of care-giving and care-eliciting relationships, and degrees of vulnerability and neediness. For example, parents raising children help them to be more autonomous by providing them with a network of secure relationships that allows autonomy to emerge over time (Sutton 1999). Autonomy to make important decisions reflects personal identity and values, not just an ability to understand or take in information (Tan et al 2006; 2009). For those people who live in relationships of long term dependency on others, the autonomy of the patient is located in the relationships with those who care for them, and facilitated by those carers: what has been called ‘interstitial autonomy’ (Agich 1992).

Further, it might be argued that any state of being ill or distressed entails a type of vulnerability which is an aspect of the patient’s autonomy with which the doctor must engage. The ‘good’ doctor does not wait for the patient to regain autonomy, or work with a substitute decision maker; she works with the compromised autonomy of the patient, as a type of reflective bedrock for ethical decision making. Vulnerability and neediness are not indicators of low status or even disability; they are aspects of a person’s identity that are part of the human transactions that are essential to social life.

Choices and decisions

I want now to go somewhat deeper into an analysis of what it is to make a moral decision at all. I do not propose to engage in meta-ethical analysis of what I mean by ‘good’; but rather what we mean by a ‘decision’ or ‘choice’ of one moral action over another. I also do not propose to review here all the arguments about the philosophy of action, or the vexed question of whether persons can choose to do some action that they also claim not to intend. This latter question is however important in medicine, and I will touch on it briefly later.

A moral decision is surely a complex decision; and like many medical treatment decisions, a moral decision involves facts and values. One view of the capacity to make any complex decision is that it involves a process of taking in information and believing it, weighing up the perceived risks and benefits, and evaluating advantages and disadvantages; a process which is then followed by a binary choice that selects the outcome which is most beneficial in terms of life advantage. No doubt some decisions can be made this way; but what such an account seems to leave out is any discussion of the feelings that are involved in such a decision, or of the way the decision maker’s subjective experience shapes her reflection on what is important to her. What it also leaves out is an analysis of how the potential costs of any decision might influence the decision-making process: for example, a decision to refuse treatment that prolongs life might need a different and more nuanced decision process to that needed for a less risky decision.

Atul Gawande (2014) describes the complexity of treatment decisions in people with conditions that were going to end their lives; and the importance of thinking about what individual people value in their lives as a whole in making these decisions. He argues that doctors have been poor at making these kinds of discussion possible because of the emotional discomfort that they entail. We might infer from this that emotional discomfort is often an important part of the moral decision making process; and the more complex the moral decision, the more emotional discomfort there will be. The idea of coolly weighing up alternatives seems implausible in relation to decisions like ‘Shall I keep this pregnancy?’ or ‘Shall I refuse this treatment that is keeping me alive?’.

There is evidence to support a more complex and emotional account of moral decision making from several sources. An early study by Carol Gilligan (1977) explored how women approached the decision to have an abortion. When making their decision, they reflected on their moral identity over time; and the kind of person they wanted to be, both now and in the future. They also considered the impact of their decision on the people they were closest to: family, friends, and partners. Gilligan suggests that these women located their autonomy to make a complex moral decision within a narrative of who and what they valued as people. This ‘ethic of care’ complemented the type of rights based argument that asserted a woman’s right to choose what happens to her body.

Another study (Tan et al 2006; 2009) explored the capacity of young women to refuse treatment for an eating disorder. Tan’s group found that these young women could take in information about the consequences of their decisions and appeared to be able to weigh it up; their capacity to make such a high-risk decision was not obviously cognitively impaired. But what they also found was that there was a profound difference between the way the clinicians saw the problem, and the way the young women saw the problem. The clinicians saw the young women as ‘having’ a disorder that was threatening their lives, whereas the young women described experiencing the eating disorder as part of their identity, and thus to give it up was to give up a part of themselves. Their capacity to make an autonomous decision about life saving treatment was tied up with their identity and personal values, not just an analysis of consequences. A study of people who repeatedly self-harmed produced comparable findings (Gutridge, 2012): in this study, the participants also expressed real ambivalence about their decisions, and they owned the complexity of it in ways which were unsettling. Clarity and simplicity were not obvious features of the decision making process; nor was the process experienced as a binary choice.

Finally, there is evidence from developmental psychology that people develop their moral identities within a narrative of their values and feelings about the sort of person they want to be (Tappan & Brown 1989; Day & Tappan 1996). McAdams (2015) describes how moral emotions are incorporated into personality in a story of the choices we make, especially in young adulthood. This is especially important when we think about the training of young doctors, and their early experiences of moral dilemmas in medicine. He cites the work of Jonathan Haidt (2012) who has been influential in helping us understand how strong emotions influence the moral positions we take, especially in relationships that involve loyalty, care, trust and fair dealing.

The brain and moral decision making

Improved techniques for brain scanning have led to great interest in what happens in the brain when people make moral decisions. A recent review (Boccia et al 2016) suggests that different areas of the brain are involved, depending on the perspective taken i.e. first person perspective decisions involve different parts of the brain to third person perspective decisions. Areas of the brain that are known to be active in emotional experience and regulation are also activated in moral decision making (Greene et al., 2001) and the experience of moral emotions (e.g. Moll, Oliveira-Sousa et al., 2002). Not only are these processes and experiences complex, they involve different neural pathways and networks between different parts of the brain: including the orbito-frontal cortex, medial pre-frontal cortex and amygdala. Disruptions of different processes may lead to variations in moral reasoning, and altered experience of moral decision making.

There is little doubt that most people know the difference between right and wrong. However, it appears that some people seem not to have the feeling of what is right and wrong. This “moral feeling” (Greene & Haidt, 2002; Moll et al., 2005), based on the functioning of the moral neural circuit, is thought to translate the cognitive recognition that an act is immoral into inhibition of that action. Work by Antonio Damasio suggests that good quality moral decision making involves a type of rapid, unconscious, intuitive process, which is distinct from information processing; and if this is absent (for example, after some types of brain damage), then people will struggle to make moral decisions at all (Damasio 1995, 2000; Anderson et al 1999).

The Doctrine of Double Effect and the trolley problem

I want to return to discuss the Doctrine of Double Effect using what is commonly referred to as “The trolley problem” (Edmonds 2014). This is a thought experiment first described by Philippa Foot in 1967, and involves a scenario in which a tram (a ‘trolley’ in the USA) is heading towards a line of track on which five people are trapped. You can hit a lever that will switch the tram’s course onto a line of track where only one person is trapped. Essentially the question facing the decision maker is whether it is right to prevent the deaths of five people, even if that means bringing about the death of one.

A simple utilitarian calculus (if there is such a thing) would suggest that it is right to save five lives if possible; even if it means bringing about the death of one. This is the option that most ordinary people choose, when asked. Edmonds says that these people are invoking the Doctrine of Double Effect by which they assert that they do not intend to kill the one person, but that single death is an inevitable by-product of their intention to save five people.

The Doctrine of Double Effect was first expounded by St Thomas Aquinas, and has been especially influential in medicine because many interventions in medicine are risky to the subject. The most well-known example of the doctrine of double effect occurs in palliative care, where people in the last stages of life are often given high doses of pain relieving drugs. These drugs shorten life (often by depressing respiratory function), but doctors who prescribe them argue that they do not intend to shorten or end life, only to relieve severe and intense pain. Other common examples in medicine involve the side effects of drugs such as chemotherapy for cancer; where harmful effects are not intended, but are an ‘inevitable’ consequence of the intention to benefit the patient.

The trolley problem has been given several different variants to explore different moral responses. In one variant, you can stop the tram from killing five people by physically pushing one person in front of it, and thus bringing the tram to a stop (the unfortunate sacrifice is often described as fat, but since the thought experiment is based on the assumption that your action is successful in saving the five others, the victim’s size is probably irrelevant). When people are asked about this variant, many express reluctance to physically push the man in question; fewer people are willing to do this than were willing to pull the lever, even though the intended outcome is the same (five lives saved). What this result implies is that people feel differently about physically harming someone directly, even when doing so would bring about ‘good’ consequences.
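The asymmetry can be made concrete with a toy sketch (a hypothetical outcome-counting rule for illustration, not a model proposed in the literature): a purely utilitarian tally of lives saved against lives lost returns exactly the same verdict for the lever variant and the push variant, which is why the divergence in people’s actual responses is so interesting.

```python
def outcome_verdict(lives_saved: int, lives_lost: int) -> bool:
    """Toy outcome-counting rule: act if and only if the action
    saves more lives than it costs."""
    return lives_saved > lives_lost

# Both trolley variants present identical outcomes to this rule,
# so it cannot distinguish pulling the lever from pushing the man.
lever_variant = outcome_verdict(lives_saved=5, lives_lost=1)
push_variant = outcome_verdict(lives_saved=5, lives_lost=1)
print(lever_variant, push_variant)  # True True
```

The point of the sketch is negative: whatever explains the asymmetry in people’s responses to the two variants, it is not captured by counting outcomes alone.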

Intentions and emotions

The distinction between pulling a lever and a physical push has an emotional effect that means something to the decision makers, even if it is hard to articulate. One possible explanation for the distinction people make between pulling a lever and pushing a person may be to do with the sense of intention or agency that has to be owned. In both cases, the Doctrine of Double Effect is invoked; I intend to save five people, I don’t intend to kill one person, but sadly that happens because of my primary intention to save. But when the saving of five people entails physically pushing an innocent person in harm’s way, it seems that the Doctrine of Double Effect cannot allay anxiety about doing harm. It seems difficult to claim that you do not intend to kill a man when you push him in front of a tram; criminal jurisprudence would find you guilty on the basis of the anticipated consequences alone.

Another possibility is that people feel a sense of injustice on behalf of the single man, and an awareness that if one of us can be sacrificed willy-nilly for a good cause, then any of us could be sacrificed without consent: which seems unjust and cruel. It may be of interest that people who score highly on a measure of psychopathy are more likely than low scorers to endorse more utilitarian responses (Bartels & Pizarro, 2011); which suggests that a lack of anxiety about hurting others allows for easier focus on simple utilitarian calculus. Yet another possibility is that people do not like to think of themselves as causing direct harm to others, even if they accept that they did so; in a recent book about the life of Rudolf Hoess, who was the commandant at Auschwitz, he is quoted as saying of himself that he was not a murderer, he was “just in charge of an extermination camp”.

No doctor would accept that taking a single life is justifiable even if five lives could be saved:  and doctors have been and will be prosecuted where there is a suspicion that they have intentionally ended life, even where there is prior consent and family support for this.  Edmonds (2014) describes a tragic case where a young man was brain dead, and his organs were to be used to save several people’s lives when life was extinct. A doctor was accused of giving a drug to bring about the young man’s death so the organs could be used; although he was acquitted of this charge. In fact, the young man eventually died but his organs were never used. One can only imagine the different emotional responses to this series of events, depending on whether you were a relative of the dying man, or a relative of those whose life might be saved by his death.

Conclusions: what makes a ‘good’ doctor

The doctor is empowered to do harm to the patient in pursuit of doing good; and there is a social acceptance that treatment may entail a deliberately imposed suffering that is not the primary intention of the doctor. This acceptance entails a type of trust in the medical identity that is reflected in public polls; doctors are still the most trusted professional group. The trust that makes these interactions possible assumes that doctors will not be the kind of people who exploit vulnerability and exercise influence for bad purpose. There is a question here about how society expects doctors not just to be good technically but to be good personally.

There are other accounts of ethical reasoning that may be helpful when thinking about doctors as ‘good’ people. Virtue ethics draws on the work of Aristotle, which suggests that we become good people by practising actions that develop ‘good’ character (e.g. Radden & Sadler, 2010). Michael Sandel (2010) has argued that moral decision makers also need an ethical reasoning process that pays attention to justice and to the ways that people weigh the value of their decisions. He argues that impartiality is not always the keystone of justice; rather, justice processes need to pay attention to what people value from their perspective as actors.

Medicine needs a way of thinking about ethics that addresses different moral values and intuitions. What remains unclear is how we train doctors to be good people, not just to do good work and make good choices. There remains a question about whether it is just and fair to expect a group of people who are chosen for cognitive intelligence and skill in passing exams to become morally superior individuals. It is often said that doctors are held to a higher moral standard than other people; but how are they trained to that higher moral standard? After the Shipman Inquiry, it was recommended that doctors undergo revalidation every five years to ensure good practice; but evaluation of the revalidation process does not seem to indicate that it is effective. Doctors still do 'bad' things; even when they are good people in other ways and technically good at what they do.

© Professor Gwen Adshead, 2017


Agich, G. (2003). Dependence and autonomy in old age: an ethical framework for long-term care. Cambridge University Press.

Anderson, S. W., Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1999). Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nature neuroscience, 2(11), 1032-1037.

Bartels, D. M., & Pizarro, D. A. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121(1), 154-161.

Beauchamp, T. L., & Childress, J. F. (1979). Principles of biomedical ethics. Oxford, Oxford University Press. 7th Edition 2012.

Boccia, M., Dacquino, C., Piccardi, L., Cordellieri, P., Guariglia, C., Ferlazzo, F., ... & Giannini, A. M. (2016). Neural foundation of human moral reasoning: an ALE meta-analysis about the role of personal perspective. Brain imaging and behavior, 1-15.

Damasio, A. R. (1995). Descartes' error: emotion, reason and the human brain. London, Picador.

Damasio, A. R. (2000). The feeling of what happens. London, Heinemann.

Day, J. M., & Tappan, M. B. (1996). The narrative approach to moral development: From the epistemic subject to dialogical selves. Human development, 39(2), 67-82.

Edmonds, D. (2014). Would you kill the fat man? Princeton, Princeton University Press.

Fulford, K. W. M. (1989). Moral theory and medical practice (pp. 101-12). Cambridge: Cambridge University Press.

Gawande, A. (2014). Being mortal: medicine and what matters in the end. Macmillan.

General Medical Council: Good medical practice. London, GMC.

Gilligan, C. (1977). In a different voice. Harvard Educational Review, 47(3), 365-378.

Gillon, R. (1994). Medical ethics: four principles plus attention to scope. British Medical Journal, 309(6948), 184.

Gillon, R. (2003). Ethics needs principles—four can encompass the rest—and respect for autonomy should be “first among equals”. Journal of medical ethics, 29(5), 307-312.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105-2108.

Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Science, 6, 517-523.

Gutridge, K. (2012). Assisted self-harm in mental health care facilities: an ethically acceptable approach? (Doctoral dissertation, University of Bristol).

Haidt, J. (2012). The righteous mind. Why good people are divided by politics and religion. London, Allen Lane.

McAdams, D. P. (2015). The art and science of personality development. New York, Guilford Press. p. 206.

Moll, J., de Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., & Pessoa, L. (2002). The neural correlates of moral sensitivity: a functional magnetic resonance imaging investigation of basic and moral emotions. The journal of neuroscience, 22(7), 2730-2736.

Radden, J., & Sadler, J. (2010). The virtuous psychiatrist: character ethics in psychiatric practice. Oxford, Oxford University Press.

Sandel, M. J. (2010). Justice: What's the right thing to do? London, Macmillan.

Sutton, A. (1997). Authority, autonomy, responsibility and authorisation: with specific reference to adolescent mental health practice. Journal of medical ethics, 23(1), 26-31.

Tallis, R. C. (2005). Hippocratic oaths: medicine and its discontents. Clinical Medicine, 5(2), 186-186.

Tappan, M., & Brown, L. M. (1989). Stories told and lessons learned: Toward a narrative approach to moral development and moral education. Harvard Educational Review, 59(2), 182-206.

Tan, J. O., Hope, T., Stewart, A., & Fitzpatrick, R. (2006). Competence to make treatment decisions in anorexia nervosa: thinking processes and values. Philosophy, psychiatry, & psychology: PPP, 13(4), 267.

Tan, J. O., Stewart, A., & Hope, T. (2009). Decision-making as a broader concept. Philosophy, Psychiatry, & Psychology, 16(4), 345-349.

Thomson, J. A. K. (trans.) (1955). Aristotle: The Nicomachean Ethics. London, Penguin.

Legal Cases

Murray vs McMurchy [1949] 2 DLR 442
