22 April 2009
A lot of rumours were circulating a couple of months ago about the new government counter-terrorism policy, Contest 2. The Guardian rushed to press with a leak about what it would contain, telling us:
'Contest 2 would widen the definition of extremists to those who hold views that clash with what the government defines as shared British values. Those who advocate the wider definition say hardline Islamist interpretation of the Qur'an leads to views that are the root cause of the terrorism threat Britain faces. But opponents say the strategy would brand the vast majority of British Muslims as extremists and alienate them even further.
'The Guardian has also learned of a separate secret Whitehall counterterrorism report advocating widening the definition of who is considered extremist. Not all in Whitehall agree with the proposals and one official source said plans to widen the definition were "incendiary" and could alienate Muslims, whose support in the counterterrorism effort is needed. There were also fears it could aid the far right.
'Contest 2 is still being finalised by officials and ministers. Those considered extreme would not be targeted by the criminal law, but would be sidelined and denied public funds.' [Guardian 17 Feb 2009]
As it happened, when Contest 2 was released on 24 March it didn't say that. Contest 2 was still being written, or more likely rewritten, well into March; whether the Guardian or its source got it wrong, or whether Contest 2 was revised away from that direction, most of us will never know.
But it is as neat an example as I could have wished for to illustrate conflicting views over my particular topic. How much diversity in beliefs should we encourage, or at least tolerate, in our society? Should the government work to eliminate 'extreme' beliefs as the real source of violence and social division? Or is that a dangerous step towards Soviet-style mind control, with a state-approved set of beliefs on homosexuality (one of the items said to be on the government list of criteria that defines 'extreme' Islamic beliefs)?
For years Western thinking on democracy has been dominated by the thought of John Rawls, and in Continental Europe by Jürgen Habermas. In different variants they have shared the view that a healthy, well-functioning democracy strives to reach a consensus; decisions are made by a process of deliberation undertaken by free and equal citizens. Note that this is not simply 'aggregative'. They don't just claim that a majority will arise who will win the vote. They think that you can get a consensus on the right answer, even on moral issues. Provided the process is right - that people set aside emotions and personal interests and operate with impartiality, equality and an absence of coercion - the workings of public deliberative reason mean that at the end of the process everyone will agree.
This sounds fine to many philosophers and political theorists. Psychologists, when left to their own devices, come up with some rather different perspectives. There is an interesting history of psychological studies that run in parallel to Rawlsian thinking, in that they could be described as studies in how people reach a consensus. That is not, in fact, how the psychologists conceptualise what they are doing: they think they are studying conformity. And much of it is very eyebrow-raising. Muzafer Sherif in 1935 studied Americans, who he felt were inclined to conform because their culture of democracy emphasized mutually shared agreements. His research confirmed the propensity of people to conform to the opinions of others even when those contradicted their own perceptions (and, in fact, the truth). Solomon Asch in 1955 was sceptical and set out to prove that Americans were more independent than that. But what he found was that, when put in a group of people who disagreed with them, most people did in fact conform to the group even when it contradicted their own - accurate - perceptions, 70% of the time.
So Rawls and philosophers of similar views think there are grounds for believing that reaching a consensus is a rational process, and is done through reason. But even with the data that can be hardest to deny - what our senses are telling us - what numerous experiments find is that people are inclined to go along with a perceived consensus even when they perceive it to be wrong. Not merely in terms of sense data, in fact. In a famous set of experiments, American social psychologist Stanley Milgram found that 65% of his experimental group (psychologically healthy members of the public) were willing to give people electric shocks up to the highest level, 450 volts (fake, but which they believed to be real).[i] This experiment has been repeated various times with broadly the same results, from 61-66%, although it depends which variables you introduce. Charles Sheridan and Richard King constructed an alternative experiment in which a puppy was given real shocks; the compliance level reached 75%, with 100% of the women giving the shocks despite their emotional distress, some actually weeping as they did so.[ii]
So you could say that the Rawls vision requires that the people engaging must not only be free and equal, but also reasonable. And it entails that consent, consensus, and so on are the result of rational process - not social pressure, even if no actual coercive pressure exists (as in the Milgram and Sheridan & King studies, where no one was coerced to do the worst).
It also demands that people leave outside of consideration a number of factors that could be described as 'personal' or 'subjective'. This may include religious beliefs, or cultural practices, or other sorts of values. It is not value-free, in that it is a process guided by liberal political values; but those are deemed to be the values around which there is a social consensus.
Of late other theorists of democracy have challenged this view, in particular those who advocate what they call an 'agonist' model - from the Greek 'agon', or struggle. They reject the idea that it is possible to achieve that kind of consensus - not just aggregative, but normative, moral. Chantal Mouffe, a leading exponent, argues that a consensus only comes about as a result of 'hegemony' - an exercise of power, an exercise in stabilizing power - and can only happen on the basis of exclusion of some sort. People, in fact, don't ultimately agree on many important and substantive issues. What is excluded are not just groups who don't agree, but many of the very factors that make us human. She is sceptical that reason or rationality can deliver this sort of result - the deep differences we observe between people are accompanied by passions which can't be resolved by this cool rationalist deliberation. 'The prime task of democratic politics is not to eliminate the passions' but 'to mobilize these passions towards the promotion of democratic designs'[iii]. And accepting other people's positions as legitimate doesn't occur as the result of a rational argument, but rather through a 'conversion', an empathetic appropriation of how they see things.
So what if we explore an idea rather different to the classical Rawlsian position - that a diversity of ideas, beliefs, and opinions is actually desirable in a pluralist democracy. That rather than striving for a consensus reached through rational deliberation, even disagreement on fundamental issues might, in fact, be a good thing. Maybe it makes a society more resilient to have a variety of ideas, opinions, even ideologies on offer. There are many areas of life where diversity and variety are healthy - in a diet, in an ecosystem. So when the need arises, that view is already available.
It's also possible that diversity enhances general creativity. The more that a wide range of views are allowed, you could argue, the more people will be encouraged to come forward and participate. The expectation that a consensus must be reached might actually discourage people from participating, if they feel that their views are not those of the dominant group, and they fear being put in a difficult position of capitulation and compromise of their essential beliefs or opinions. In this way, encouraging diversity of debate might even be a good strategic move, to encourage civic engagement and social inclusion and involve as wide a range of citizens as possible - above all those who are usually marginalised.
Another reason a pluralism of ideas, beliefs and ideologies might be healthy is that it allows the flourishing or defeat of 'bad ideas' through mutual challenge. Instead of banning 'bad ideas', it lets them fight it out, and may the best thought win.
In this way, a greater bandwidth of diversity keeps discussion going at the margins, where it arguably needs to be. Arguments over comparatively small political differences - 'centre-left or centre-right' - may be important in some contexts, but for wider society we need the vigorous debate precisely where the ideas are most dangerous and challenging, if you believe that it is through debate, critical thinking and discussion that 'dangerous' ideas or beliefs are best challenged.
How much diversity you 'allow', or is desirable, in a society can only be decided in context, with culture, history and tradition contributing to the expectations of a population. But clearly there must be dangers in allowing the maximum bandwidth of ideas. Where do you draw the line? How much liberty do you allow for those who advocate taking violent action? How do you control the dangerous notions? Do you ban them, prevent them, educate them away, or just allow them to be attacked and defeated in the marketplace of ideas?
These questions do presuppose that it is the holding of dangerous ideas that is the cause of the problems. We often casually fall into speaking as if ideas had causal force on the individual - as if the idea or ideology makes you do something or think something. It is worth opening that assumption to challenge.
There have been two particular important contributions from psychology that we can use to illuminate this discussion. One operates at the level of group psychology, and one on the individual level. Both of them have uncovered the huge impact of the situation, the social context, on how people act in what we might think are 'moral' matters - which many people think of as a matter of character, personal morality or virtue.
Ervin Staub began his research as a social psychologist in studies of altruism, but then turned to analysing the roots of genocide and mass violence.[iv] He analyses what conditions are present in a society leading up to such a horrific situation - 'cultural and social patterns predisposing to violence, historical circumstances resulting in persistent life problems, and needs and modes of adaptation arising from the interaction of these influences'; 'cultural stereotyping and devaluation, societal self-concept, moral exclusion, the need for connection, authority orientation, personal and group goals, 'better world' ideologies, justification, and moral equilibrium'.
On a more individual level, psychologist Philip Zimbardo ran the famous 'Stanford Prison Experiment' in 1971.[v] He recently became involved in the Abu Ghraib case, which has only strengthened his conclusion: that it is not simply the case that individuals behave in a moral way because they are independently moral, strong, and virtuous. Other factors determine moral behaviour. While most people are good most of the time, it is easy to corrupt people into behaving in ways they and others would find unbelievable. It is terrifyingly easy to reset someone else's moral compass.
'Most institutions in any society that is invested in an individualistic orientation hold up the person as sinner, culpable, afflicted, insane, or irrational. Programs of change follow a medical model of dealing only at the individual level of rehabilitation, therapy, re-education and medical treatments, or punishment and execution. All such programs are doomed to fail if the main causal agent is the situation or system and not just the person.'[vi]
Zimbardo's 'systems' can and do use ideologies as corrupting tools for resetting the moral compass. But the social factors are clearly at work and may be even more powerful.
What we can learn from Staub and Zimbardo's different but complementary studies is that other factors create the conditions and situations in which 'dangerous' ideologies can take root. We may be missing some important truths if we think that ideologies cause the problems. Rather, it might be the case that ideas that normally would be rejected as unacceptable, outlandish, over the top, or extreme, suddenly become more plausible to people under personal and social stresses.
One thing we might take away from this is that the 'danger' of dangerous ideas largely only constitutes a threat when a number of other serious factors, major social stresses, are present. (An analogy would be the way that bacteria are present in our bodies all the time, but for certain illnesses we only succumb when our immune system is weakened by other factors.)
In which case, for the sake of human rights and civil liberties perhaps we can afford to loosen up on policing ideas if we are attending to the social needs, the fault-lines that exist in society, the disadvantages and the sources of grievance.
How much diversity a society can safely tolerate or support, therefore, isn't only down to its cultural and historical factors and expectations. It is also a question of how much we can safely manage - and perhaps proactively manage, if enough of us are at work minimising the ill-effects of diversity, building bridges, and improving the capacity of our citizens to understand, tolerate and process different ideas; but also strengthening the resilience of economic institutions, the success of education and health systems, and so on. So this analysis suggests that if there is enough positive care around the stresses on a society, it can absorb challenging ideas without as much danger of group violence.
But maybe we can go further than this. Perhaps there is a paradox about the dangers of dangerous ideas, the extremes, and consensus. We often intuitively feel that the consensus is likely to be safer, and the exceptions will be marginal, kooky, or extreme beliefs, ideas, or indeed people. But the consensus of a society under the major stressful conditions described by Staub is one of the preconditions for genocide; individuals and communities can fall prey to an ideology more readily if it is a matter of consensus. People will abandon their own strongly, emotionally held moral feelings to conform with an authority in an emergency, as did the women weeping as they administered painful electric shocks to puppies, or people obviously distressed nevertheless administering further shocks to an individual in the next room they believed they had already electrocuted to unconsciousness. A society which strongly emphasises the need to agree with the majority has created some of the conditions under which - in extreme situations - it is more susceptible to murderously dangerous ideas. Whereas a society which is more willing to listen to a range of ideas, to allow nonconformism and a free debate even of core values, might be inoculating itself against a susceptibility to dangerous indoctrination.
This would suggest there is a real importance to having even contrarian, subversive ideas in the atmosphere for the overall health of society. For it's the contrarians who may, in the end, help to point the way against a destructive consensus should one ever arise.
But ideally we could turn this potential for conformity and group violence in a pro-social direction. Perhaps we should pay less attention to the question of 'bad apples' - and instead ensure that we act swiftly against 'bad barrels'. Meanwhile, the construction of good barrels and fostering good apples should receive more attention than it does.
Some research we conducted at The Lokahi Foundation (the 'What Works' project) interviewed seventy-five people from ethnic and religious minorities to learn what factors play a part in their successful participation in British life. We found that contribution is key to people becoming positively engaged in a community. Whether it's shaping 'Britishness' or shared values, a society you have helped to build is something you have a stake in protecting. That might suggest that allowing people to participate has protective, prosocial effects. Even if you don't like some ideas, the fact they have been brought into the tent and into challenging conversation means that they may move in a more prosocial direction.
© Professor Gwen Griffith-Dickson, 2009
[i] Milgram, Stanley. (1974), Obedience to Authority: An Experimental View. HarperCollins. See also Blass, Thomas. (2004), The Man Who Shocked the World: The Life and Legacy of Stanley Milgram.
[ii] Sheridan, C.L. and King, R.G. (1972) 'Obedience to authority with an authentic victim', Proceedings of the 80th Annual Convention of the American Psychological Association 7: 165-6.
[iii] Mouffe, C. (1999) 'Deliberative Democracy or Agonistic Pluralism', Social Research 66(3): 755-6.
[iv] Staub, E. (1992), The Roots of Evil. Cambridge University Press.
[v] Only comparatively recently discussed in detail in his book, The Lucifer Effect (2008), Rider.
[vi] Foreword to latest edition, viii.